Bridging the Human-Robot Gap for Elderly Care

Multimodal Fusion of Voice and Gestures via LLMs

This research introduces a natural interaction framework combining voice commands with pointing gestures to help elderly users communicate with service robots more intuitively.

  • Eliminates the need to learn complex command syntax or sign language
  • Integrates visual cues with spoken instructions for better intent understanding
  • Leverages Large Language Models to process multimodal inputs (see the sketch after this list)
  • Creates more accessible robot interfaces for aging populations
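
A minimal sketch of how such fusion could look in practice: a speech transcript and a detected pointing target are merged into one LLM prompt, so deictic references like "that" or "there" are grounded in the pointed-at object. The function names, prompt template, and FETCH action format below are illustrative assumptions, not the paper's actual pipeline, and the LLM call is stubbed so the example runs standalone.

# Sketch of voice + pointing-gesture fusion into a single LLM query.
# All names (fuse_inputs, call_llm) and the prompt wording are illustrative
# assumptions, not the authors' implementation.

from dataclasses import dataclass

@dataclass
class PointingGesture:
    target_label: str    # object the user points at, e.g. from an object detector
    position_xyz: tuple  # 3D position of the target in the robot's frame

def fuse_inputs(transcript: str, gesture: PointingGesture) -> str:
    """Combine the spoken command and the deictic cue into one LLM prompt,
    so words like 'that' or 'there' resolve to the pointed-at object."""
    return (
        "You control a service robot assisting an elderly user.\n"
        f'Spoken command: "{transcript}"\n'
        f"The user is pointing at: {gesture.target_label} "
        f"located at {gesture.position_xyz}.\n"
        "Reply with a single robot action, e.g. FETCH(object, x, y, z)."
    )

def call_llm(prompt: str) -> str:
    # Placeholder for any LLM backend (local model or API); returns a canned
    # action here so the sketch runs without external services.
    return "FETCH(water bottle, 1.2, 0.4, 0.8)"

if __name__ == "__main__":
    gesture = PointingGesture("water bottle", (1.2, 0.4, 0.8))
    prompt = fuse_inputs("Could you bring me that, please?", gesture)
    print(call_llm(prompt))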

Why it matters: As populations age globally, this work addresses a critical need for supportive care technologies that accommodate the physical and cognitive limitations of elderly users, enabling more independent living through intuitive robot assistance.

Natural Multimodal Fusion-Based Human-Robot Interaction: Application With Voice and Deictic Posture via Large Language Model
