
BrainWavLM: Teaching AI to Think Like Humans
Fine-tuning speech models with actual brain response data
This research breaks new ground by fine-tuning speech models with human brain activity data, improving predictions of brain responses across the cortex.
- Uses Low-Rank Adaptation (LoRA) to efficiently fine-tune the WavLM speech model
- Achieves significant prediction improvements in auditory processing regions
- Shows successful transfer learning between different brain regions
- Demonstrates better generalization across subjects than traditional approaches
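The LoRA technique mentioned above freezes the pretrained weights and learns only a low-rank update ΔW = BA for selected layers. The sketch below illustrates the core idea on a single linear layer in NumPy; the dimensions, names, and scaling are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, rank = 16, 8, 2

# Frozen pretrained weight (stand-in for one WavLM linear layer).
W = rng.standard_normal((d_out, d_in))

# LoRA factors: B starts at zero, so the adapted layer initially
# reproduces the frozen model exactly. Only A and B are trained.
A = rng.standard_normal((rank, d_in)) * 0.01
B = np.zeros((d_out, rank))

def adapted_forward(x, alpha=1.0):
    # y = W x + alpha * B A x  (the low-rank update is added on top)
    return W @ x + alpha * (B @ (A @ x))

x = rng.standard_normal(d_in)
# With B = 0 the adapted output equals the frozen model's output.
assert np.allclose(adapted_forward(x), W @ x)

# Trainable parameters: rank * (d_in + d_out) instead of d_in * d_out.
print(rank * (d_in + d_out), "trainable vs", d_in * d_out, "full")
```

Because the trainable parameter count scales with the rank rather than the full weight matrix, fine-tuning on limited brain-response data becomes far cheaper and less prone to overfitting.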
For medicine and neuroscience, this approach creates more accurate models of how the brain processes language, potentially enabling better diagnostic tools for language disorders and new insights into human cognition.
BrainWavLM: Fine-tuning Speech Representations with Brain Responses to Language