Description
First, I develop two frameworks that borrow methodologically from cognitive science to identify deviations from the expected behavior of machine learning systems. Second, I forge a connection between a classical approach to building computational models of human cognition (hierarchical modeling) and a recent technique for small-sample learning in machine learning (meta-learning). I use this connection to develop algorithmic improvements to machine learning systems on established benchmarks as well as in new settings that highlight how far these systems fall short of human standards. Finally, I argue that machine learning should borrow methodologically from cognitive science, as both fields are now tasked with studying opaque learning and decision-making systems. I use this perspective to construct a computational model of machine learning systems that allows us to formalize and test hypotheses about how these systems operate.
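As an illustration of the second contribution, the sketch below is a minimal, hypothetical example (in Python with NumPy; the function names and task setup are my own for illustration, not taken from the thesis) of the kind of connection between hierarchical modeling and meta-learning described above: a MAML-style meta-learner maintains a shared initialization that plays the role of the top level of a hierarchical model, while each task's few gradient steps of adaptation act like task-specific parameters regularized toward that shared level.

```python
# Hypothetical sketch: first-order MAML-style meta-learning on 1-D linear
# regression tasks, read as a two-level hierarchy. The shared initialization
# `theta` is the population-level parameter; each task's adapted parameter
# `phi` stays close to `theta` because it takes only a few gradient steps.
import numpy as np

rng = np.random.default_rng(0)

def task_loss_grad(theta, x, y):
    """Squared-error loss and its gradient for the model y ~ theta * x."""
    resid = theta * x - y
    return np.mean(resid ** 2), np.mean(2 * resid * x)

def inner_adapt(theta, x, y, lr=0.1, steps=3):
    """Task-specific adaptation: a few gradient steps from the shared init."""
    phi = theta
    for _ in range(steps):
        _, g = task_loss_grad(phi, x, y)
        phi -= lr * g
    return phi

theta = 0.0      # shared (meta-level) parameter
meta_lr = 0.05
for it in range(200):
    # Sample a task: its true slope is drawn from a population distribution,
    # which is the "hierarchy" the shared initialization implicitly captures.
    slope = rng.normal(loc=2.0, scale=0.5)
    x = rng.normal(size=5)                     # small support set
    y = slope * x + 0.1 * rng.normal(size=5)
    phi = inner_adapt(theta, x, y)
    # First-order meta-update (second-order terms ignored): nudge the shared
    # initialization using the adapted parameter's gradient on held-out data.
    x_q = rng.normal(size=5)                   # query set for the same task
    y_q = slope * x_q + 0.1 * rng.normal(size=5)
    _, g_q = task_loss_grad(phi, x_q, y_q)
    theta -= meta_lr * g_q

print(f"learned shared initialization: {theta:.2f} (population mean slope is 2.0)")
```

In this reading, the shared initialization summarizes what the tasks have in common, much as a hierarchical model's top-level parameters do, which is one way such a connection can motivate algorithmic improvements to small-sample learning.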