
Harnessing LLMs to Combat Climate Misinformation
Using AI with human oversight to detect false climate claims
This research demonstrates how Large Language Models (LLMs) can be evaluated and aligned, under expert human oversight, to identify climate misinformation effectively.
- Compares proprietary vs. open-source LLMs on climate misinformation classification tasks
- Evaluates LLMs against human expert annotations on social media content
- Explores how AI can be part of the solution rather than contributing to misinformation
- Provides a framework for strengthening LLM governance through human expertise
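Evaluating LLM classifications against human expert annotations (the second point above) typically reduces to measuring inter-annotator agreement. A minimal sketch using Cohen's kappa, with hypothetical labels standing in for the study's data:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two annotators' label lists."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    # Observed agreement: fraction of items both annotators labeled the same.
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected agreement by chance, from each annotator's label frequencies.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical annotations: 1 = misinformation, 0 = accurate claim
expert_labels = [1, 0, 1, 1, 0, 0, 1, 0]
llm_labels    = [1, 0, 1, 0, 0, 0, 1, 1]
print(cohens_kappa(expert_labels, llm_labels))  # → 0.5
```

Kappa near 1 indicates the LLM closely tracks expert judgment; values near 0 mean agreement is no better than chance, signaling the model needs further alignment before deployment.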
For security professionals, this research offers practical insight into building robust systems that pair AI capabilities with human oversight to detect and mitigate harmful content, and into establishing accountability in AI governance.