Controlling Conversations with LLMs

Zero-Shot Dialog Planning for Safer AI Interactions

This research introduces a novel approach to goal-directed dialog management with large language models that requires no task-specific training data.

  • Enables LLMs to plan toward dialog goals rather than responding to each turn in isolation
  • Reduces hallucination risks in sensitive domains like medicine and law
  • Provides controllable conversation steering while maintaining natural interactions
  • Demonstrates potential for zero-shot learning in complex dialog scenarios
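The first bullet, planning toward a dialog goal rather than answering each turn in isolation, can be sketched as a prompt-construction loop. This is a minimal illustration, not the paper's method: the `DialogPlanner` class, its field names, and the prompt layout are all hypothetical, and a real system would pass the prompt to an actual LLM.

```python
from dataclasses import dataclass, field

@dataclass
class DialogPlanner:
    """Hypothetical sketch of goal-directed prompting: every prompt
    carries the dialog goal and the running history, so the model can
    steer the conversation toward the goal instead of reacting to each
    user turn in isolation."""
    goal: str
    history: list = field(default_factory=list)

    def build_prompt(self, user_turn: str) -> str:
        # Record the user turn, then assemble goal + full history.
        self.history.append(("user", user_turn))
        lines = [f"Dialog goal: {self.goal}"]
        lines += [f"{role}: {text}" for role, text in self.history]
        lines.append("assistant:")  # where the LLM's reply would go
        return "\n".join(lines)

    def record_reply(self, reply: str) -> None:
        self.history.append(("assistant", reply))

planner = DialogPlanner(
    goal="collect the patient's symptoms, then recommend seeing a doctor"
)
prompt = planner.build_prompt("I have a headache.")
```

Because the goal is restated on every turn, the model's zero-shot instruction-following is what keeps the conversation on track; no fine-tuning data is needed.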

For medical applications, this approach helps ensure that patients receive factually correct information and appropriate guidance, reducing the risks associated with hallucinated medical advice from AI systems.

Towards Zero-Shot, Controllable Dialog Planning with LLMs
