Combating Hallucinations in LLMs

Delta: A Novel Contrastive Decoding Approach

Delta is an inference-time method that reduces hallucinations in large language models without requiring retraining or additional data.

  • Works by randomly masking parts of the input text
  • Employs contrastive decoding, comparing outputs for the original and masked prompts to suppress fabricated content (see the sketch after this list)
  • Improves reliability without sacrificing model performance
  • Particularly valuable for high-stakes domains

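The sketch below illustrates the idea under stated assumptions: it contrasts next-token logits from the full prompt against logits from a randomly degraded copy of that prompt, down-weighting tokens the model predicts even without the supporting context. The model name, the random token-dropping mask, and the alpha and mask_ratio values are illustrative choices, not the authors' exact configuration.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # stand-in model; the method is applied at inference time only
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

def mask_prompt(input_ids: torch.Tensor, mask_ratio: float = 0.15) -> torch.Tensor:
    # Randomly drop a fraction of prompt tokens (assumed masking strategy).
    keep = torch.rand(input_ids.shape[-1]) >= mask_ratio
    keep[0] = True  # keep at least the first token so the prompt is never empty
    return input_ids[:, keep]

@torch.no_grad()
def delta_generate(prompt: str, max_new_tokens: int = 40, alpha: float = 0.5) -> str:
    full_ids = tokenizer(prompt, return_tensors="pt").input_ids
    masked_ids = mask_prompt(full_ids)
    for _ in range(max_new_tokens):
        logits_full = model(full_ids).logits[:, -1, :]
        logits_masked = model(masked_ids).logits[:, -1, :]
        # Contrast the two distributions: amplify what the full prompt supports
        # and penalize what the degraded prompt still predicts on its own.
        contrastive = (1 + alpha) * logits_full - alpha * logits_masked
        next_id = contrastive.argmax(dim=-1, keepdim=True)
        full_ids = torch.cat([full_ids, next_id], dim=-1)
        masked_ids = torch.cat([masked_ids, next_id], dim=-1)
    return tokenizer.decode(full_ids[0], skip_special_tokens=True)

print(delta_generate("The capital of Australia is"))

Because the adjustment happens purely at decoding time, no weights are updated and no extra training data is needed, only a second forward pass per generated token.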
For healthcare applications, Delta significantly reduces the risk of generating factually incorrect medical information, making LLMs safer and more trustworthy for clinical decision support, patient education, and medical documentation.

Delta -- Contrastive Decoding Mitigates Text Hallucinations in Large Language Models
