
Fighting Hallucinations in Large Language Models
Delta: A Novel Contrastive Decoding Method Reduces False Outputs
Delta is an inference-time technique that significantly reduces hallucinations in LLMs without requiring model retraining or additional data.
- Works by randomly masking parts of the input and contrasting the resulting output distribution with the one from the original, unmasked input (see the sketch after this list)
- Improves reliability by down-weighting tokens the model would produce even without the masked context, which are more likely to be fabricated
- Operates during inference, making it compatible with existing LLM deployments
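The bullets above describe a contrast between masked and unmasked inputs at each decoding step. Below is a minimal sketch of one such step, assuming a Hugging Face-style causal LM; the combination rule, the choice of mask token, and the names delta_step, mask_ratio, and alpha are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def delta_step(model, input_ids, mask_token_id, mask_ratio=0.15, alpha=0.5):
    """Pick the next token by contrasting full vs. randomly masked input."""
    # Next-token logits conditioned on the full, unmasked input.
    full_logits = model(input_ids).logits[:, -1, :]

    # Masked copy: replace a random subset of input tokens with a mask token.
    keep = torch.rand(input_ids.shape, device=input_ids.device) >= mask_ratio
    masked_ids = torch.where(
        keep, input_ids, torch.full_like(input_ids, mask_token_id)
    )
    masked_logits = model(masked_ids).logits[:, -1, :]

    # Contrastive combination: boost tokens grounded in the masked-out
    # context, penalize tokens predicted even without it.
    log_p_full = F.log_softmax(full_logits, dim=-1)
    log_p_masked = F.log_softmax(masked_logits, dim=-1)
    contrast = (1 + alpha) * log_p_full - alpha * log_p_masked
    return contrast.argmax(dim=-1)  # greedy choice; sampling also works
```

In an autoregressive loop, the chosen token would be appended to input_ids and the step repeated; since both passes reuse the same frozen model, the main cost is one extra forward pass per generated token.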
Medical Impact: Delta addresses a critical challenge for healthcare applications, where factually incorrect AI outputs can lead to harmful clinical decisions; by reducing such errors, it makes LLMs more trustworthy in sensitive medical contexts.
Paper: Delta -- Contrastive Decoding Mitigates Text Hallucinations in Large Language Models