Note: This list is not yet complete.
This is a list of papers that might be useful if you're trying to create an artificial agent that can learn a language or a concept in the same way humans do. Before you start working with RNNs, LSTMs, and other complicated deep-learning architectures, I think it is important to understand the basics of how humans learn a concept or a language, as well as some basic information about causal inference. That means reading a few papers written by psychologists and cognitive scientists.
I have divided this list into the following sections:
- A history of language acquisition and Bickerton's Bioprogram
- A Comparison of Inference Learning and Classification Learning
- Correlation versus prediction in children’s word learning
- Probabilistic models of cognition
- Generalization, Similarity and Bayesian Inference
- Word Learning as Bayesian Inference
- Building Machines That Learn and Think Like People
- Human-level concept learning through probabilistic program induction
- Introduction to causal inference