AI ↔ Cognition
What language models reveal about human reasoning and development.
Large language models double as computational subjects of study. This theme captures how BabyLM-style training, cognitive test batteries, and psycholinguistic analyses expose where models follow, or diverge from, human developmental trajectories.
Featured Work
- Teaching Language Models to Grow Up - Developmentally staged corpora and cognitive test batteries.
- How Well Do Deep Learning Models Capture Human Concepts? - Typicality effects and concept representations.
Highlights
- Developmental Alignment
  - Age-restricted corpora (BabyLM challenge) for staged learning
  - Curriculum experiments that mimic child-directed input
- Psych-Inspired Evaluation Batteries
  - Garden-path recovery, numerical magnitude, typicality gradients
  - Linking hypotheses that map behavioral signatures to model internals
- Theory ↔ Practice Loop
  - Insights that inform safer training objectives
  - Cognitive science collaborations that keep the benchmarks grounded
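As a flavor of what a typicality-gradient probe looks like: the classic prediction is that more typical category members ("a robin is a bird") should be less surprising to a model than atypical ones ("a penguin is a bird"). A minimal sketch, using illustrative made-up surprisal values in place of real model outputs:

```python
# Hypothetical data: human typicality ratings (1-7 scale) paired with a
# model's surprisal (in nats) for "A <member> is a bird." Values are
# illustrative, not measured.
items = [
    # (member, human_typicality, model_surprisal)
    ("robin",   6.9, 2.1),
    ("sparrow", 6.4, 2.4),
    ("owl",     5.2, 3.0),
    ("penguin", 3.1, 4.2),
    ("ostrich", 2.8, 4.6),
]

def spearman_rho(xs, ys):
    """Spearman rank correlation (assumes no ties, fine for toy data)."""
    def ranks(vals):
        order = sorted(range(len(vals)), key=lambda i: vals[i])
        r = [0] * len(vals)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

typicality = [t for _, t, _ in items]
surprisal = [s for _, _, s in items]

# A graded typicality effect predicts a negative correlation:
# the more typical the member, the lower the model's surprisal.
rho = spearman_rho(typicality, surprisal)
print(f"Spearman rho (typicality vs. surprisal): {rho:.2f}")
```

In a real battery, the surprisal column would come from a language model's token log-probabilities over the same sentence frames used in the human norming study.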
Get Involved
- Submit to the next BabyLM workshop
- Run your model through the cognitive battery and share results
- Co-design new linking hypotheses bridging psych data and LLM latents
Reach out if you want to pair up; this line of work thrives on interdisciplinary partnerships.