
Hallucinations: Bridging Human and AI Cognition
What machine 'hallucinations' teach us about human cognition
This theoretical exploration compares the erroneous outputs of AI systems with false perceptions in human cognition, revealing striking parallels in their underlying predictive-processing mechanisms.
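In the predictive-processing literature, perception is commonly formalized as Bayesian inference; the following is that standard formulation (an illustrative aside, not an equation taken from this research). The percept is the world state that best reconciles sensory evidence with prior expectations:

```latex
% Standard Bayesian-brain formulation of predictive processing (illustrative):
% \hat{s} is the perceived state, o the sensory observation, p(s) the prior.
\[
  \hat{s} \;=\; \arg\max_{s}\; p(s \mid o)
        \;=\; \arg\max_{s}\; p(o \mid s)\, p(s)
\]
% When the likelihood p(o | s) is flat (weak or ambiguous input), the prior
% p(s) dominates and the system "perceives" its expectation -- a hallucination.
```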
- Both humans and LLMs engage in predictive processes that can generate outputs disconnected from reality
- Human brains actively fill information gaps under uncertainty, much as LLMs generate completions for incomplete prompts (see the sketch after this list)
- Error-correction mechanisms exist in both systems but operate differently: brains continually revise predictions against sensory feedback, while LLMs have no comparable online check
- Understanding these parallels offers insights for developing more reliable AI systems
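To make the completion-as-gap-filling parallel concrete, here is a minimal, self-contained sketch (illustrative code, not from the research; the toy vocabulary and the random `LOGITS` matrix are assumptions standing in for trained weights). It shows that an autoregressive sampler always normalizes its logits into a proper probability distribution and always commits to a continuation, so it produces fluent output even when the context gives it no real evidence:

```python
# Toy illustration (not the paper's method): an autoregressive sampler must
# always emit *some* next token, so under uninformative context it still
# produces a fluent, confident completion -- the structural analogue of a
# hallucination.
import numpy as np

rng = np.random.default_rng(0)

VOCAB = ["the", "cat", "sat", "on", "mat", "moon", "<eos>"]

# Hypothetical "learned" logits: rows = current token, cols = next token.
# These values are random placeholders, not trained weights.
LOGITS = rng.normal(size=(len(VOCAB), len(VOCAB)))

def next_token_distribution(token_id: int) -> np.ndarray:
    """Softmax over next-token logits -- always a proper distribution,
    even when the model has no real evidence for any continuation."""
    z = LOGITS[token_id]
    p = np.exp(z - z.max())
    return p / p.sum()

def complete(prompt: list[str], max_new: int = 5) -> list[str]:
    """Greedy gap-filling: the decoder never abstains; it always commits
    to the most probable continuation, plausible or not."""
    tokens = list(prompt)
    for _ in range(max_new):
        dist = next_token_distribution(VOCAB.index(tokens[-1]))
        tokens.append(VOCAB[int(dist.argmax())])
        if tokens[-1] == "<eos>":
            break
    return tokens

print(complete(["the", "cat"]))  # fluent output regardless of grounding
```

The point of the toy is structural: nothing in the decoding loop distinguishes a well-grounded continuation from a confabulated one, which mirrors the gap-filling behavior the list above attributes to both brains and LLMs.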
For biology and neuroscience, this research offers a comparative framework for understanding human cognition through the lens of current AI limitations, potentially informing both improved neural models and more human-like artificial intelligence.
Source paper: I Think, Therefore I Hallucinate: Minds, Machines, and the Art of Being Wrong