SafePlan: Making LLM-Powered Robots Safer

A formal logic framework to prevent unsafe robot actions

SafePlan introduces a safety framework that uses formal logic and chain-of-thought reasoning to prevent Large Language Model-driven robots from executing harmful commands.

  • Implements a three-stage safety pipeline that examines commands for malicious intent
  • Uses formal verification techniques to validate robot actions before execution
  • Employs chain-of-thought reasoning to enhance transparency and safety justifications
  • Blocks harmful commands while still permitting legitimate operations
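The staged screening described above can be sketched as a short pipeline. This is an illustrative sketch only: the stage names, the keyword-based intent filter, and the whitelist "verification" step are placeholder assumptions standing in for SafePlan's actual formal-logic checks, which the summary does not detail.

```python
# Hypothetical sketch of a three-stage command-safety pipeline.
# Stage names, filters, and action whitelist are illustrative assumptions,
# not SafePlan's real implementation.
from dataclasses import dataclass


@dataclass
class Verdict:
    allowed: bool
    stage: str       # which stage produced the decision
    rationale: str   # chain-of-thought style justification


# Stage 1: screen the raw command for malicious intent (toy keyword filter).
HARMFUL_MARKERS = {"harm", "attack", "destroy"}


def screen_intent(command: str):
    for marker in HARMFUL_MARKERS:
        if marker in command.lower():
            return Verdict(False, "intent",
                           f"command mentions '{marker}', which signals harm")
    return None


# Stage 2: validate each planned action against a set of verified-safe
# action predicates (a stand-in for real formal verification).
SAFE_ACTIONS = {"pick", "place", "move_to", "open_gripper", "close_gripper"}


def verify_plan(plan):
    for action in plan:
        name = action.split("(")[0]
        if name not in SAFE_ACTIONS:
            return Verdict(False, "verification",
                           f"action '{name}' is not a verified-safe predicate")
    return None


# Stage 3: emit a chain-of-thought justification for an approved plan,
# making the safety decision transparent.
def justify(command: str, plan) -> Verdict:
    steps = "; ".join(plan)
    return Verdict(True, "justification",
                   f"command '{command}' maps to verified-safe steps: {steps}")


def safety_pipeline(command: str, plan) -> Verdict:
    # The first stage that objects short-circuits the pipeline.
    return screen_intent(command) or verify_plan(plan) or justify(command, plan)
```

A benign command such as `safety_pipeline("tidy the table", ["pick(cup)", "place(cup, shelf)"])` passes all three stages, while a command containing a harmful marker or an unverified action is rejected at the stage that catches it, with the rationale recorded for transparency.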

This research addresses critical security concerns as LLMs become increasingly integrated with physical robotic systems, providing a practical approach to preventing harm from malicious prompts or unsafe plans.

SafePlan: Leveraging Formal Logic and Chain-of-Thought Reasoning for Enhanced Safety in LLM-based Robotic Task Planning
