Smarter Robot Planning Through Language

Using LLMs to Transform Vague Instructions into Clear Task Plans

This research introduces a feedback-driven framework that enables robots to understand and plan tasks from natural human instructions.

  • Leverages Large Language Models (LLMs) to generate structured plans from ambiguous human instructions
  • Implements a feedback loop that refines task plans through human-robot dialogue (see the sketch after this list)
  • Demonstrates improved planning efficiency in collaborative scenarios compared to traditional methods
  • Offers a generalizable approach that works across diverse engineering applications

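To make the feedback-loop idea concrete, here is a minimal, hypothetical Python sketch of how an LLM-driven planner might alternate between plan generation and clarification dialogue. The `llm` and `ask_human` callables, the JSON step schema, and the clarification convention are illustrative assumptions, not the authors' actual implementation.

```python
# Hypothetical sketch of a feedback-driven task planning loop.
# `llm` is any callable mapping a prompt string to a text response;
# the plan schema and clarification protocol are assumptions for illustration.
import json
from typing import Callable, Dict, List


def plan_with_feedback(
    instruction: str,
    llm: Callable[[str], str],
    ask_human: Callable[[str], str],
    max_rounds: int = 3,
) -> List[Dict]:
    """Iteratively refine an LLM-generated task plan via human dialogue."""
    prompt = (
        "Decompose the instruction into an ordered JSON list of steps, "
        'each {"action": ..., "object": ...}. If anything is ambiguous, '
        'reply instead with {"question": "<clarifying question>"}.\n'
        f"Instruction: {instruction}"
    )
    for _ in range(max_rounds):
        reply = json.loads(llm(prompt))
        if isinstance(reply, dict) and "question" in reply:
            # Plan is under-specified: route the question back to the human
            answer = ask_human(reply["question"])
            prompt += f"\nClarification: {answer}"
            continue
        return reply  # a structured, executable step list
    raise RuntimeError("Instruction remained ambiguous after max_rounds")
```

In this sketch, the human's answers are simply appended to the prompt so the next generation round is conditioned on the clarified context; the actual framework may use a richer dialogue state or plan-repair mechanism.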
For manufacturing and engineering teams, this approach points to robots that can adapt more fluidly to human collaborators, understand context, and negotiate task uncertainty, ultimately enabling more flexible automation systems.

From Vague Instructions to Task Plans: A Feedback-Driven HRC Task Planning Framework based on LLMs
