Combating LLM Hallucinations

A multilingual approach to detecting fabricated information in AI outputs

HalluSearch is a robust pipeline that combines retrieval-augmented verification with factual splitting to detect and localize hallucinations in LLM outputs across 14 languages.

  • Leverages search-enhanced RAG techniques to verify factual accuracy
  • Performs fine-grained analysis to precisely locate fabricated text spans (sketched in code after this list)
  • Demonstrates competitive performance in hallucination detection tasks
  • Addresses critical security concerns related to AI-generated misinformation
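To make the pipeline concrete, here is a minimal sketch of the stages the list above describes: factual splitting, search-based evidence retrieval, and span-level verification. All function names and the token-overlap heuristic are illustrative assumptions, not the authors' implementation; the retrieval step is stubbed where a real system would call a search API or document index.

```python
import re
from dataclasses import dataclass


@dataclass
class Claim:
    text: str
    start: int  # character offset of the claim in the original LLM output
    end: int


def split_into_claims(output: str) -> list[Claim]:
    """Factual splitting: naive sentence-level segmentation that keeps offsets.
    (A real system would split into finer-grained atomic facts.)"""
    claims = []
    for m in re.finditer(r"[^.!?]+[.!?]?", output):
        if m.group().strip():
            claims.append(Claim(m.group().strip(), m.start(), m.end()))
    return claims


def retrieve_evidence(claim: str) -> list[str]:
    """Stub for the search-enhanced retrieval step; a real pipeline would
    query a web-search engine or index here and return evidence passages."""
    return []


def is_supported(claim: str, evidence: list[str]) -> bool:
    """Toy verification via token overlap; a real system would use an
    entailment model or an LLM judge over the retrieved passages."""
    claim_tokens = set(claim.lower().split())
    return any(
        len(claim_tokens & set(passage.lower().split())) >= 0.6 * len(claim_tokens)
        for passage in evidence
    )


def detect_hallucinated_spans(output: str) -> list[tuple[int, int]]:
    """Flag (start, end) character spans whose claims lack supporting evidence."""
    return [
        (c.start, c.end)
        for c in split_into_claims(output)
        if not is_supported(c.text, retrieve_evidence(c.text))
    ]


if __name__ == "__main__":
    # With the stubbed retriever every claim is flagged; plug in real
    # search results to verify claims against evidence.
    answer = "The Eiffel Tower is in Paris. It was completed in 1850."
    for start, end in detect_hallucinated_spans(answer):
        print(f"Unverified span [{start}:{end}]: {answer[start:end]!r}")
```

Carrying character offsets through the splitting step is what allows the pipeline to emit span-level labels rather than a single answer-level verdict, matching the fine-grained localization the task calls for.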

This research is vital for securing AI systems by ensuring information integrity and preventing the spread of unreliable content in multilingual contexts.

HalluSearch at SemEval-2025 Task 3: A Search-Enhanced RAG Pipeline for Hallucination Detection
