
Smart Deferral Systems in Healthcare AI
Enhancing trustworthiness through guided human-AI collaboration
This research introduces a guided deferral system that combines human expertise with LLM capabilities to mitigate hallucination risks and privacy concerns in healthcare.
- Proposes a two-stage framework that automatically identifies when the AI should defer to human judgment (see the sketch after this list)
- Achieves 91.9% accuracy on medical disorder classification while maintaining high efficiency
- Demonstrates significant improvement in trustworthiness compared to standalone AI systems
- Provides an open-source, transparent alternative to proprietary LLM implementations
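
As a rough illustration of how such a two-stage deferral decision might be wired up, the Python sketch below runs an LLM classification stage and then defers to a clinician whenever the model's confidence falls below a threshold, passing along the model's suggestion as guidance. This is a minimal sketch under stated assumptions: the `llm_classify` stub, the `Decision` fields, and the 0.8 threshold are illustrative placeholders, not the paper's actual implementation.

```python
from dataclasses import dataclass


@dataclass
class Decision:
    label: str          # predicted disorder class
    confidence: float   # model's estimated probability for that label
    rationale: str      # generated explanation shown to the clinician on deferral


def llm_classify(case_note: str) -> Decision:
    """Stand-in for stage 1 (LLM classification).

    In practice this would call an open-source LLM; here it returns a
    fixed dummy result so the example runs end to end.
    """
    return Decision(
        label="example-disorder",
        confidence=0.62,
        rationale="key findings in the note are consistent with this class",
    )


def guided_deferral(case_note: str, threshold: float = 0.8) -> str:
    """Stage 2: accept the AI decision when confident, otherwise defer
    to a human reviewer together with the model's guidance."""
    decision = llm_classify(case_note)
    if decision.confidence >= threshold:
        return f"AUTO: {decision.label}"
    # Deferral path: the clinician sees the suggested label and rationale.
    return (
        f"DEFER to clinician (suggested: {decision.label}; "
        f"rationale: {decision.rationale})"
    )


if __name__ == "__main__":
    print(guided_deferral("Patient presents with ..."))
    # With the dummy confidence of 0.62, this prints the deferral message.
```

The design choice illustrated here is that deferral is not a silent hand-off: the human reviewer still receives the model's suggested label and rationale, which is what makes the deferral "guided" rather than a plain rejection.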
By balancing performance with reliability and regulatory compliance, this approach addresses critical barriers to AI adoption in healthcare and makes such tools more practical for clinical settings where errors carry serious consequences.
Trustworthy and Practical AI for Healthcare: A Guided Deferral System with Large Language Models