
Teaching Machines to Understand Human Subjectivity
How language models can predict and model diverse human opinions
This research explores how large language models can be trained to understand subjective human judgments in cognitive appraisal tasks, a capability that matters for human-centered AI applications.
- Developed novel benchmarks to evaluate language models' ability to predict human agreement levels in subjective tasks
- Demonstrated that models can be fine-tuned to better capture the distribution of human opinions rather than just majority views (see the sketch after this list)
- Revealed that specialized instruction tuning significantly improves models' ability to predict the subjectivity of human responses
- Established a foundation for more nuanced, human-centered AI that respects diverse perspectives
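
To make the distribution-versus-majority distinction concrete, here is a minimal sketch, not the paper's actual code, of fine-tuning a classifier against the full distribution of annotator labels instead of the majority vote. The backbone model, example text, and annotator counts are hypothetical placeholders chosen only for illustration.

```python
# Minimal sketch (assumed setup, not the paper's method): train against the
# human label distribution (soft labels) rather than the majority-vote label.
import torch
import torch.nn.functional as F
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "roberta-base"  # assumed backbone; the paper may use a different model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=3)

# Hypothetical example: 5 annotators rated one item on a 3-point scale.
text = "The feedback on my essay felt unfair."
annotator_counts = torch.tensor([1.0, 3.0, 1.0])         # votes per label
soft_target = annotator_counts / annotator_counts.sum()  # human label distribution
hard_target = annotator_counts.argmax()                  # majority-vote label

inputs = tokenizer(text, return_tensors="pt")
logits = model(**inputs).logits

# Majority-vote training collapses disagreement into a single class.
hard_loss = F.cross_entropy(logits, hard_target.unsqueeze(0))

# Distribution matching keeps the disagreement signal: KL divergence between
# the human label distribution and the model's predicted distribution.
log_probs = F.log_softmax(logits, dim=-1)
soft_loss = F.kl_div(log_probs, soft_target.unsqueeze(0), reduction="batchmean")

soft_loss.backward()  # optimize soft_loss instead of hard_loss to model subjectivity
```

In this setup the soft-label loss rewards the model for reproducing how split the annotators were, whereas the hard-label loss treats every item as if annotators were unanimous.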
For education, this research opens pathways to develop AI systems that better understand student perspectives, recognize subjective elements in learning assessments, and provide more personalized educational experiences that acknowledge different viewpoints.
Modeling Subjectivity in Cognitive Appraisal with Language Models