Detecting Hallucinations in AI Radiology

Fine-grained approach for safer AI-generated medical reports

ReXTrust is a novel framework that pinpoints false statements in AI-generated radiology reports by analyzing sequences of hidden states from the underlying vision-language model.
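
In practice, those hidden states can be read directly from the transformer that generates the report. Below is a minimal sketch of the extraction step, using a Hugging Face causal LM as a stand-in for the actual vision-language model; the model name and report text are illustrative assumptions, not ReXTrust's setup:

```python
# Sketch: extracting per-token hidden states from a report-generation model.
# "gpt2" is a placeholder decoder, not the VLM used by ReXTrust.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2", output_hidden_states=True)

report = "There is a small left pleural effusion."  # one generated finding
inputs = tokenizer(report, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# outputs.hidden_states is a tuple of (num_layers + 1) tensors, each shaped
# (batch, seq_len, hidden_dim); a detector can consume the token sequence
# from one or more of these layers.
last_layer = outputs.hidden_states[-1]
print(last_layer.shape)  # (1, seq_len, hidden_dim)
```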

Key Innovations:

  • Produces finding-level hallucination risk scores for precise error detection
  • Leverages sequences of hidden states from vision-language models (a scoring sketch follows this list)
  • Evaluated on the MIMIC-CXR dataset with promising results
  • Addresses critical patient safety concerns in medical AI applications
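
To make finding-level scoring concrete, here is a minimal sketch of one plausible design: attention-pool the hidden states of a finding's tokens, then map the pooled vector to a risk score. All names here (FindingScorer, the dimensions, the random input) are illustrative assumptions, not the published ReXTrust architecture:

```python
# Sketch of a finding-level hallucination scorer over hidden states.
import torch
import torch.nn as nn

class FindingScorer(nn.Module):
    """Maps the hidden-state sequence of one finding to a risk score in [0, 1]."""

    def __init__(self, hidden_dim: int = 768):
        super().__init__()
        self.attn = nn.Linear(hidden_dim, 1)  # learns which tokens matter
        self.head = nn.Linear(hidden_dim, 1)  # produces the risk logit

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # hidden_states: (num_tokens, hidden_dim) for one finding's tokens
        weights = torch.softmax(self.attn(hidden_states), dim=0)  # (T, 1)
        pooled = (weights * hidden_states).sum(dim=0)             # (hidden_dim,)
        return torch.sigmoid(self.head(pooled))                   # scalar risk

# Usage: score each finding's token span independently.
scorer = FindingScorer()
finding_states = torch.randn(12, 768)  # stand-in for real decoder states
print(f"Hallucination risk: {scorer(finding_states).item():.3f}")
```

Scoring each finding separately is what makes the approach fine-grained: a single hallucinated statement can be flagged on its own, rather than discarding an otherwise accurate report.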

Why This Matters: As AI-generated reporting sees wider adoption in radiology, robust hallucination detection is essential for maintaining diagnostic accuracy and patient safety. ReXTrust offers a targeted approach to identifying potentially harmful errors before they influence clinical decisions.

ReXTrust: A Model for Fine-Grained Hallucination Detection in AI-Generated Radiology Reports
