Combating LLM Overreliance

How explanations, sources, and inconsistencies influence user trust

This research identifies effective strategies to foster appropriate reliance on large language models by examining how users evaluate LLM-generated content.

  • Explanations matter: supporting explanations strongly shape how much users trust a response and how carefully they evaluate it
  • Source citations increase scrutiny and help users better assess information quality
  • Inconsistencies serve as valuable signals that prompt users to question potentially incorrect responses
  • Combining these elements creates a more transparent AI experience that balances trust with healthy skepticism

For security teams, this research provides practical design guidelines to reduce risks of user overreliance on potentially incorrect AI outputs in high-stakes environments.

Fostering Appropriate Reliance on Large Language Models: The Role of Explanations, Sources, and Inconsistencies