
Making Robots Smarter and Safer
Aligning AI Uncertainty with Task Ambiguity
This research introduces Introspective Planning, a technique that helps robots driven by large language models interpret ambiguous instructions and avoid unsafe actions.
- Addresses LLM hallucination problems that lead to unsafe robot actions
- Introduces a novel calibration approach that aligns the model's uncertainty with the task's inherent ambiguity (see the sketch after this list)
- Creates a benchmark for evaluating safe mobile manipulation
- Demonstrates significant improvements in both compliance and safety metrics
This advance matters for safety: by aligning the model's confidence with the task's actual ambiguity, the method keeps robots from confidently executing potentially harmful actions and helps AI systems operate reliably in safety-critical scenarios.
Introspective Planning: Aligning Robots' Uncertainty with Inherent Task Ambiguity