
Controlling Clinical Text Generation
Enhancing LLM accuracy while reducing clinician oversight
This research demonstrates how to condition Large Language Models (LLMs) so that clinicians keep control over generated medical documentation, while achieving state-of-the-art results.
- Uses automated dataset augmentation with LLMs serving as human proxies (a rough sketch follows this list)
- Achieves new state-of-the-art results on the BioNLP ACL'24 Discharge Me! Shared Task
- Employs a simpler methodology while maintaining performance
- Reduces hallucinations and factual inconsistencies common in clinical text generation
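The augmentation idea in the first bullet can be sketched roughly as follows: an LLM is prompted to stand in for a clinician, producing a short control instruction for each existing reference note, so that a generator can later be trained to follow such instructions. This is a minimal illustration under assumptions, not the authors' pipeline; the `call_llm` helper, the prompt wording, and the field names are hypothetical placeholders.

```python
import json

def call_llm(prompt: str) -> str:
    """Placeholder for a chat-completion call to your LLM provider.
    Returns a canned instruction here so the sketch runs as-is."""
    return "Keep the summary under five sentences and list follow-up appointments first."

# Hypothetical prompt asking the LLM to act as a clinician "proxy".
PROXY_PROMPT = (
    "You are acting as a clinician reviewing a draft discharge summary.\n"
    "Write one short instruction describing how the draft should be adjusted "
    "(e.g. tone, length, which findings to emphasise).\n\n"
    "Draft:\n{draft}\n\nInstruction:"
)

def augment_with_proxy_instructions(records):
    """Attach an LLM-generated 'clinician instruction' to each record,
    turning a plain (source, target) corpus into controllable triples."""
    augmented = []
    for rec in records:
        instruction = call_llm(PROXY_PROMPT.format(draft=rec["target_text"]))
        augmented.append({
            "instruction": instruction.strip(),  # synthetic user-control signal
            "source_text": rec["source_text"],   # e.g. the patient's record
            "target_text": rec["target_text"],   # the reference discharge section
        })
    return augmented

if __name__ == "__main__":
    toy = [{"source_text": "ED note ...", "target_text": "Discharge instructions ..."}]
    print(json.dumps(augment_with_proxy_instructions(toy), indent=2))
```

In this sketch the synthetic instructions become an extra conditioning input at training time, which is one plausible way to read "LLMs serving as human proxies".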
Why it matters: Medical documentation requires extreme accuracy, but current LLMs need significant human oversight. This approach could dramatically reduce clinician workload while maintaining quality, potentially enabling wider adoption of AI assistance in clinical settings.
Towards Conditioning Clinical Text Generation for User Control