KEDiT: Making LLMs Better at Dialogue with Knowledge

Efficient fine-tuning for knowledge-grounded conversations in specialized domains

KEDiT offers an efficient method to enhance large language models with external knowledge for more accurate and informed dialogue generation.

  • Compresses retrieved knowledge into a small set of learnable parameters using an information bottleneck approach (see the sketch after this list)
  • Retains the information essential to the response while filtering out irrelevant content
  • Outperforms competitive baselines on knowledge-grounded dialogue benchmarks
  • Provides a scalable solution that requires fewer computational resources than full fine-tuning
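
To make the compression bullet concrete, below is a minimal PyTorch sketch of one way such a bottleneck can be implemented: a small set of learnable query vectors cross-attends over the embeddings of retrieved text, so everything the dialogue model later conditions on must pass through a fixed number of slots that can be trained to keep essential information and drop the rest. The class name, dimensions, and slot count are illustrative assumptions, not KEDiT's actual architecture or hyperparameters.

```python
import torch
import torch.nn as nn
from typing import Optional


class KnowledgeBottleneck(nn.Module):
    """Toy bottleneck: compress a long sequence of retrieved-knowledge
    embeddings into a fixed, small set of vectors via learnable queries
    and cross-attention. Illustrative only, not KEDiT's exact design."""

    def __init__(self, hidden_dim: int = 768, num_slots: int = 32, num_heads: int = 8):
        super().__init__()
        # A small number of learnable "slot" queries forms the bottleneck:
        # anything the dialogue model later reads must fit into these slots.
        self.queries = nn.Parameter(torch.randn(num_slots, hidden_dim) * 0.02)
        self.cross_attn = nn.MultiheadAttention(hidden_dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(hidden_dim)

    def forward(
        self,
        knowledge_embeds: torch.Tensor,               # (batch, seq_len, hidden_dim)
        padding_mask: Optional[torch.Tensor] = None,  # (batch, seq_len), True = ignore
    ) -> torch.Tensor:
        batch_size = knowledge_embeds.size(0)
        queries = self.queries.unsqueeze(0).expand(batch_size, -1, -1)
        compressed, _ = self.cross_attn(
            queries, knowledge_embeds, knowledge_embeds,
            key_padding_mask=padding_mask,
        )
        # (batch, num_slots, hidden_dim): compact summary to condition generation on.
        return self.norm(compressed)


if __name__ == "__main__":
    # Squeeze 512 knowledge-token embeddings (e.g. an encoded PubMed passage)
    # into 32 slot vectors per example.
    bottleneck = KnowledgeBottleneck(hidden_dim=768, num_slots=32)
    knowledge = torch.randn(2, 512, 768)
    slots = bottleneck(knowledge)
    print(slots.shape)  # torch.Size([2, 32, 768])
```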

Medical Impact: KEDiT shows particular promise for medical applications, enabling more accurate responses in clinical conversations by incorporating up-to-date medical knowledge from sources such as PubMed that is not present in the LLM's original training data.

Paper: Efficient Tuning of Large Language Models for Knowledge-Grounded Dialogue Generation
