
Smarter LLMs Through Reranking
Using communication theory to reduce hallucinations and improve output quality
This research introduces a communication-theoretic framework for improving LLM outputs: generate multiple candidate responses and use a reranker to select the best one.
- Conceptualizes LLM generation as a noisy communication channel that can be improved through redundancy
- Establishes mathematical reranking laws that predict performance gains as a function of the number of generated candidates
- Demonstrates significant improvements in factuality and answer quality across multiple tasks
- Validates the approach on medical data translation with TowerInstruct-13B
Medical Impact: The reranking approach shows particular promise for medical applications by reducing hallucinations and improving factual accuracy in specialized domains where precision is critical for patient safety.
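The core mechanism, sample several candidates and keep the one a reranker scores highest, can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the length-based scorer is a hypothetical stand-in for a real quality or factuality model, and `success_probability` only captures the simplest intuition behind reranking laws (independent candidates, a fixed per-candidate success rate `p`).

```python
def rerank(candidates, score):
    """Best-of-N selection: return the candidate the reranker scores highest."""
    return max(candidates, key=score)

def success_probability(p, n):
    """Chance that at least one of n independent candidates is acceptable:
    1 - (1 - p)^n. Gains grow quickly with n, then saturate."""
    return 1.0 - (1.0 - p) ** n

# Hypothetical scorer: prefer shorter candidates (a stand-in for a
# learned reranker that would score factuality or translation quality).
candidates = ["a long rambling answer", "a concise answer", "ok"]
best = rerank(candidates, score=lambda c: -len(c))
print(best)                         # -> ok
print(success_probability(0.6, 5))  # ~0.99: five tries nearly guarantee one good output
```

Even this toy version shows why generating more candidates helps: each extra sample multiplies the residual failure probability by (1 - p), so quality improves exponentially in N until the reranker's own accuracy becomes the bottleneck.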
Reranking Laws for Language Generation: A Communication-Theoretic Perspective