Combat Fake News with LLMs

Using the Defense Among Competing Wisdom framework to explain detection results

This research introduces a novel explainable fake news detection framework that leverages LLMs to evaluate competing perspectives rather than relying on majority opinions.

  • Proposes the Defense Among Competing Wisdom (DACW) framework that generates multiple stances on news veracity
  • Implements a three-step process: stance generation, defense evaluation, and final verdict determination
  • Achieves superior performance over baseline methods while providing transparent justifications for detection results
  • Demonstrates how LLMs can be used to create security tools with built-in explainability
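The three-step process above can be sketched as a simple pipeline: the model first argues both sides, then weighs the competing defenses to reach a verdict. This is a minimal illustration, not the paper's implementation; the `llm` stub and all function names are hypothetical placeholders for a real LLM client.

```python
# Hypothetical sketch of a DACW-style pipeline: stance generation,
# defense evaluation, and final verdict. All names are illustrative.

def llm(prompt: str) -> str:
    """Stub standing in for a real LLM API call; replace in practice."""
    if "argue it is REAL" in prompt:
        return "The claim cites an official source and matches other coverage."
    if "argue it is FAKE" in prompt:
        return "The claim names no source and uses sensational wording."
    if "Which defense is stronger" in prompt:
        return "REAL"
    return ""

def generate_stances(news: str) -> dict:
    """Step 1: generate competing stances on the item's veracity."""
    return {
        "real": llm(f"News: {news}\nBriefly argue it is REAL."),
        "fake": llm(f"News: {news}\nBriefly argue it is FAKE."),
    }

def evaluate_defenses(news: str, stances: dict) -> str:
    """Steps 2-3: weigh both defenses and return a verdict."""
    prompt = (
        f"News: {news}\n"
        f"Defense for REAL: {stances['real']}\n"
        f"Defense for FAKE: {stances['fake']}\n"
        "Which defense is stronger? Answer REAL or FAKE."
    )
    return llm(prompt).strip()

def detect(news: str) -> dict:
    """Run the full pipeline; the stances double as the explanation."""
    stances = generate_stances(news)
    return {"verdict": evaluate_defenses(news, stances),
            "explanation": stances}

result = detect("City council approves new transit budget, officials say.")
print(result["verdict"])
```

Because the verdict is derived from explicit competing defenses rather than a single opaque score, the retained stances serve directly as the human-readable justification.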

This research addresses critical security concerns by helping users understand why content is classified as misinformation, potentially reducing the spread and impact of fake news in information ecosystems.

Explainable Fake News Detection With Large Language Model via Defense Among Competing Wisdom
