
The Human Core of Large Language Models
Why larger LMs actually mimic human cognition better than we thought
This research challenges previous claims that larger language models are less cognitively plausible, revealing that internal representations in these models actually align well with human sentence processing.
- Internal layers of larger LMs correlate strongly with human reading behavior, such as word-by-word reading times
- Earlier conclusions were limited because analyses considered only the final model layer
- Analysis of next-word probabilities computed from internal layers reveals human-like processing (see the sketch after this list)
- Findings reconcile the seemingly contradictory observations about model size and human alignment
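To make the idea of "next-word probabilities from internal layers" concrete, here is a minimal sketch of one common way to do this, the so-called logit lens: project an intermediate hidden state through the model's final layer norm and unembedding matrix to get a next-word distribution, then compute per-token surprisal. It assumes a GPT-2-style model from Hugging Face transformers; the model name, layer choice, and example sentence are illustrative, and the paper's exact procedure may differ.

```python
# Minimal "logit lens" sketch: read next-word probabilities out of an
# internal layer of a GPT-2-style model and compute per-token surprisal.
# Assumes Hugging Face transformers + PyTorch; model and layer are illustrative.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2", output_hidden_states=True)
model.eval()

sentence = "The old man the boats."
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

layer = 6  # an internal layer (0 = embeddings, 12 = final for GPT-2 small)
hidden = outputs.hidden_states[layer]  # shape: (1, seq_len, d_model)

# Project the intermediate hidden states through the final layer norm and
# the unembedding matrix to obtain a next-word distribution at each position.
logits = model.lm_head(model.transformer.ln_f(hidden))
log_probs = torch.log_softmax(logits, dim=-1)

# Surprisal of each actually observed next token: -log P(token_t | context).
ids = inputs["input_ids"][0]
surprisal = -log_probs[0, :-1].gather(1, ids[1:, None]).squeeze(1)

for tok, s in zip(tokenizer.convert_ids_to_tokens(ids[1:]), surprisal):
    print(f"{tok:>12s}  {s.item():.2f} nats")
```

In studies of this kind, per-token surprisal values like these are typically regressed against human reading-time measures; the claim summarized above is that surprisal taken from internal layers of larger models tracks those measures better than surprisal from the final layer alone.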
This work provides critical evidence for linguists and cognitive scientists that modern large language models may indeed process language in ways that parallel human cognitive mechanisms, suggesting promising directions for both NLP advancement and cognitive modeling.