Explainable AI for Fact-Checking

Bridging the gap between AI systems and human fact-checkers

Research exploring how automated fact-checking systems can provide transparent explanations that align with human fact-checkers' needs and workflows.

  • Fact-checkers require justifiable evidence and traceable reasoning to trust and effectively use AI systems
  • Current explainable AI approaches often fail to meet the specific requirements of professional fact-checkers
  • Research identifies the need for explanations that provide source provenance, reasoning transparency, and verification pathways (see the sketch after this list)

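As a concrete illustration of those three requirements, the minimal sketch below shows one hypothetical way an explanation payload could be structured so that a fact-checker can trace a verdict back to its sources, follow the system's reasoning, and re-verify the result. The class and field names are illustrative assumptions, not taken from the paper.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Evidence:
    """A single piece of evidence with provenance metadata."""
    source_url: str   # where the evidence was retrieved from
    snippet: str      # quoted passage supporting or refuting the claim
    retrieved_at: str # ISO timestamp, so recency can be audited

@dataclass
class Explanation:
    """Explanation payload covering provenance, reasoning, and verification."""
    claim: str
    verdict: str  # e.g. "supported", "refuted", "not enough info"
    evidence: List[Evidence] = field(default_factory=list)         # source provenance
    reasoning_steps: List[str] = field(default_factory=list)       # reasoning transparency
    verification_pathway: List[str] = field(default_factory=list)  # how a human can re-check

# Hypothetical example of a fact-checker-facing explanation for one claim
explanation = Explanation(
    claim="Example claim stating a public figure was 8% in 2023.",
    verdict="refuted",
    evidence=[Evidence(
        source_url="https://example.org/report",
        snippet="The official figure for 2023 was 4.2%.",
        retrieved_at="2024-05-01T12:00:00Z",
    )],
    reasoning_steps=[
        "The claim states the figure was 8%.",
        "The cited official report gives 4.2% for the same period.",
    ],
    verification_pathway=[
        "Open the source URL and locate the 2023 figure.",
        "Compare the reported value against the claimed value.",
    ],
)
```

Structuring the output this way keeps every verdict auditable: nothing is asserted without a source the fact-checker can open and a step-by-step path they can retrace.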
This research matters for security because it addresses how to build trust in automated systems designed to combat misinformation, ensuring human experts retain oversight while still benefiting from AI assistance.

Show Me the Work: Fact-Checkers' Requirements for Explainable Automated Fact-Checking
