Harnessing LLMs to Combat Climate Misinformation

Using AI with human oversight to detect false climate claims

This research demonstrates how large language models (LLMs) can be evaluated and aligned, with expert human oversight, to identify climate misinformation effectively.

  • Compares proprietary vs. open-source LLMs on climate misinformation classification tasks
  • Evaluates LLMs against human expert annotations on social media content
  • Explores how AI can be part of the solution rather than contributing to misinformation
  • Provides framework for enhancing LLM governance through human expertise
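Evaluating an LLM against expert annotations, as described above, typically comes down to measuring agreement between the model's labels and the experts' labels. A minimal sketch of that comparison follows, using hypothetical labels and a hand-rolled Cohen's kappa (a standard chance-corrected agreement statistic); the data and function names are illustrative, not taken from the paper.

```python
from collections import Counter

def cohen_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two annotators' label lists."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    # Observed agreement: fraction of items where both annotators agree.
    p_observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected agreement under chance, from each annotator's label frequencies.
    counts_a, counts_b = Counter(labels_a), Counter(labels_b)
    p_expected = sum(
        (counts_a[c] / n) * (counts_b[c] / n)
        for c in set(labels_a) | set(labels_b)
    )
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical example: expert vs. LLM labels on six climate claims.
expert = ["misinfo", "accurate", "misinfo", "accurate", "misinfo", "misinfo"]
model  = ["misinfo", "accurate", "accurate", "accurate", "misinfo", "misinfo"]

print(round(cohen_kappa(expert, model), 4))  # → 0.6667
```

In practice, the human-oversight loop would route low-agreement or low-confidence cases back to expert reviewers rather than trusting the model's label outright.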

For security professionals, this research offers practical insight into building robust systems that pair AI capabilities with human oversight to detect and mitigate harmful content, while establishing accountability in AI governance.

Enhancing LLMs for Governance with Human Oversight: Evaluating and Aligning LLMs on Expert Classification of Climate Misinformation for Detecting False or Misleading Claims about Climate Change
