
Gaze-Driven AI Assistants
LLM-Enhanced Robots That Read Your Intentions
MindEye-OmniAssist combines gaze tracking with large language models to create more intuitive assistive robots that understand user intentions without explicit commands.
- Enables robots to understand complex intentions beyond basic grasping
- Uses gaze data + LLM reasoning to predict what users actually want (a sketch of this pipeline appears at the end of this post)
- Performs multi-step tasks based on understood intentions
- Demonstrates significant improvements in task completion rates
This matters for assistive support systems because it changes how people with mobility limitations interact with the technology: help becomes more intuitive, and the cognitive load on users is reduced because they no longer have to spell out every command.
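For readers curious what a gaze-plus-LLM intention pipeline can look like, here is a minimal, hypothetical sketch (not the MindEye-OmniAssist implementation): gaze fixations on detected objects are turned into a prompt, an LLM is asked to infer the user's intention and a step-by-step plan, and the plan is parsed for the robot to execute. All names (`GazeFixation`, `build_intent_prompt`, `query_llm`) are illustrative, and the LLM call is stubbed so the example runs on its own.

```python
# Minimal sketch of a gaze + LLM intention pipeline.
# This is NOT the MindEye-OmniAssist implementation; all names are hypothetical
# and the LLM call is a stub so the example is self-contained and runnable.

import json
from dataclasses import dataclass


@dataclass
class GazeFixation:
    """A single gaze fixation mapped to an object detected in the scene."""
    object_label: str  # e.g. "water bottle", the object the user looked at
    duration_ms: int   # how long the gaze dwelled on that object


def build_intent_prompt(fixations: list[GazeFixation], scene_objects: list[str]) -> str:
    """Turn gaze evidence plus scene context into a prompt asking the LLM
    to infer the user's intention and a step-by-step robot plan."""
    gaze_lines = "\n".join(
        f"- looked at {f.object_label} for {f.duration_ms} ms" for f in fixations
    )
    return (
        "You control an assistive robot arm. Objects in the scene: "
        + ", ".join(scene_objects) + ".\n"
        "The user's recent gaze behaviour:\n" + gaze_lines + "\n"
        "Infer the most likely intention and reply as JSON with keys "
        '"intention" (string) and "steps" (list of robot actions).'
    )


def query_llm(prompt: str) -> str:
    """Placeholder for a real LLM call; returns a canned response so the
    sketch runs end to end without external dependencies."""
    return json.dumps({
        "intention": "the user wants a drink of water",
        "steps": [
            "grasp water bottle",
            "unscrew cap",
            "pour water into cup",
            "hand cup to user",
        ],
    })


if __name__ == "__main__":
    fixations = [
        GazeFixation("water bottle", 1200),
        GazeFixation("cup", 800),
    ]
    prompt = build_intent_prompt(fixations, ["water bottle", "cup", "book"])
    plan = json.loads(query_llm(prompt))
    print("Inferred intention:", plan["intention"])
    for i, step in enumerate(plan["steps"], 1):
        print(f"  step {i}: {step}")
```

In a real system, `query_llm` would call an actual model, and the returned steps would be validated against the robot's capabilities before execution.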