Fighting Hallucinations with Highlighted References

A new technique for fact-grounded LLM responses

Highlighted Chain-of-Thought (HoT) prompts LLMs to mark which parts of their responses are grounded in the input text, reducing hallucinations and improving verifiability.

  • Uses XML tags to explicitly link claimed facts to source information (see the example after this list)
  • Enhances transparency by distinguishing statements grounded in the input from ungrounded ones
  • Improves user ability to verify LLM outputs without specialized tools
  • Particularly valuable for security-critical applications where factual accuracy is essential

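For example, a HoT exchange might look like the following (the question and answer text are invented for illustration; only the <factN> tagging convention comes from the technique itself):

    Re-tagged question:
      The report states that <fact1>revenue rose 12% in Q3</fact1> and
      that <fact2>headcount stayed flat</fact2>. Did the company grow?

    Tagged answer:
      Yes: growth came from revenue, since <fact1>revenue rose 12% in
      Q3</fact1>, while <fact2>headcount stayed flat</fact2>.

Each tag in the answer points back to the identically numbered span in the input, so a reader can check any claim by eye.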
From a security perspective, HoT addresses a fundamental weakness in LLM deployments: because every claim is linked back to a source span, outputs carry a built-in verification mechanism, reducing the risk that users act on hallucinated information.
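Because the tags are plain XML, that check can also be automated. The sketch below assumes the <factN> convention shown above; the verify_answer helper and its regex are illustrative assumptions, not part of HoT itself:

    import re

    # Matches numbered fact tags such as <fact1>...</fact1>; the \1
    # backreference ensures the opening and closing numbers agree.
    TAG_RE = re.compile(r"<fact(\d+)>(.*?)</fact\1>", re.DOTALL)

    def extract_tagged_facts(text):
        """Map each fact-tag number to the span it encloses."""
        return {int(num): span.strip() for num, span in TAG_RE.findall(text)}

    def verify_answer(tagged_question, tagged_answer):
        """Return tag numbers cited in the answer that have no matching
        tag in the re-tagged question: candidate hallucinations."""
        grounded = extract_tagged_facts(tagged_question)
        cited = extract_tagged_facts(tagged_answer)
        return [num for num in cited if num not in grounded]

    # Hand-written example output (hypothetical, not a real model run):
    question = "The report states that <fact1>revenue rose 12% in Q3</fact1>."
    answer = "Yes: <fact1>revenue rose 12% in Q3</fact1>, so growth continued."
    print(verify_answer(question, answer))  # [] -> every cited tag is grounded

Any tag number returned points at a claim the model asserted without a matching source span, which is exactly the failure mode a reviewer wants surfaced.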

HoT: Highlighted Chain of Thought for Referencing Supporting Facts from Inputs
