Combating LLM Hallucinations

Fine-Grained Detection of AI-Generated Misinformation

This research introduces a model-aware approach for pinpointing the specific text spans in which large language models hallucinate, covering 14 languages.

  • Develops specialized techniques for detecting and highlighting the exact segments of hallucinated content (see the sketch after this list)
  • Provides a nuanced understanding of how different LLMs produce various types of hallucinations
  • Creates a multilingual framework applicable across diverse language contexts
  • Contributes to establishing reliability benchmarks for LLM outputs
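
As a rough illustration of what span-level detection involves, the sketch below frames it as token classification with a multilingual encoder and converts per-token predictions into character-offset spans. The model name (xlm-roberta-base), the two-label scheme, and the threshold are illustrative assumptions, not the HausaNLP system; in practice the classification head would be fine-tuned on span-annotated hallucination data.

```python
# A minimal, hypothetical sketch of span-level hallucination detection as token
# classification. Model, label scheme, and threshold are assumptions for
# illustration; this is not the HausaNLP SemEval-2025 system.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

MODEL_NAME = "xlm-roberta-base"  # assumed multilingual encoder
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
# num_labels=2: label 1 means "token lies inside a hallucinated span".
# The head is randomly initialized here; it would be fine-tuned on
# span-annotated hallucination data before real use.
model = AutoModelForTokenClassification.from_pretrained(MODEL_NAME, num_labels=2)
model.eval()

def predict_hallucinated_spans(text: str, threshold: float = 0.5):
    """Return character-offset (start, end) spans flagged as hallucinated."""
    enc = tokenizer(text, return_offsets_mapping=True,
                    return_tensors="pt", truncation=True)
    offsets = enc.pop("offset_mapping")[0].tolist()   # model() does not take offsets
    with torch.no_grad():
        logits = model(**enc).logits[0]               # (seq_len, num_labels)
    probs = logits.softmax(dim=-1)[:, 1].tolist()     # P(hallucinated) per token

    spans, current = [], None
    for (start, end), p in zip(offsets, probs):
        if start == end:                              # special tokens have empty offsets
            continue
        if p >= threshold:
            if current is None:
                current = [start, end]                # open a new span
            else:
                current[1] = end                      # extend the open span
        elif current is not None:
            spans.append(tuple(current))
            current = None
    if current is not None:
        spans.append(tuple(current))
    return spans                                      # e.g. [(34, 38)] -> text[34:38]

print(predict_hallucinated_spans("The Eiffel Tower was completed in 1921."))
```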

Security Impact: By precisely localizing AI hallucinations at the span level, this research helps protect against misinformation, strengthens trust in AI systems, and enables better filtering of unreliable AI-generated content, all of which is critical for enterprise deployment of LLM technologies.

HausaNLP at SemEval-2025 Task 3: Towards a Fine-Grained Model-Aware Hallucination Detection
