
Combating Misinformation with AI
Comparing LLM-based strategies for detecting digital falsehoods
This research evaluates how effectively Large Language Models (LLMs) such as GPT-4 and LLaMA 2 detect misinformation, comparing three approaches:
- Text-based detection: Evaluates whether LLMs can identify false claims from text alone
- Multimodal analysis: Examines how LLMs perform when processing both text and visual content
- Agentic approaches: Explores how LLM-powered agents can verify information through active inquiry
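To make the text-based approach concrete, the sketch below frames a claim as a binary fact-check prompt and inspects the model's verdict. The `query_llm` parameter is a hypothetical placeholder for any chat-completion client (GPT-4, LLaMA 2, or similar); it is stubbed here so the example runs without network access.

```python
def build_prompt(claim: str) -> str:
    """Frame the claim as a binary fact-check task for the model."""
    return (
        "You are a fact-checking assistant. Classify the claim below as "
        "TRUE or FALSE and give a one-sentence justification.\n\n"
        f"Claim: {claim}\nAnswer:"
    )

def detect_misinformation(claim: str, query_llm) -> bool:
    """Return True if the model labels the claim FALSE (i.e. misinformation)."""
    response = query_llm(build_prompt(claim))
    return response.strip().upper().startswith("FALSE")

# Stubbed model response for demonstration; a real deployment would route
# build_prompt's output through an actual LLM API.
def fake_llm(prompt: str) -> str:
    return "FALSE - the claim contradicts established evidence."

flagged = detect_misinformation("Vaccines contain tracking microchips.", fake_llm)
print(flagged)  # True: the claim is flagged as misinformation
```

Multimodal and agentic variants would extend this pattern by attaching image inputs or letting the model issue retrieval queries before answering, respectively.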
Security implications: Because misinformation threatens social cohesion and institutional trust, this research offers practical guidance on deploying AI systems as defensive tools in the digital information ecosystem.