
Gesture-Enhanced Speech Recognition
Bridging communication gaps for patients with language disorders
This research introduces a multimodal approach that combines gestures with speech to improve automatic speech recognition for individuals with language disorders.
- Developed a zero-shot framework that integrates gestures with speech input
- Enables more accurate interpretation of speech from patients with language impairments
- Improves communication accessibility for individuals who rely on non-verbal cues
- Addresses a critical gap in current voice-assisted technologies
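One way such gesture-speech integration can work is late fusion: rescoring candidate transcripts from a speech recognizer with gesture evidence mapped into a shared gesture-text embedding space (the "zero-shot" part, since no paired gesture-speech training is needed). The sketch below is purely illustrative; the function names, toy embeddings, and interpolation weight are assumptions, not the framework's actual implementation.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def rescore(hypotheses, asr_scores, text_embs, gesture_emb, alpha=0.5):
    """Hypothetical late fusion of ASR scores with gesture evidence.

    hypotheses  : candidate transcripts from the speech recognizer
    asr_scores  : their log-probabilities
    text_embs   : zero-shot text embeddings for each hypothesis
    gesture_emb : the observed gesture, embedded in the same space
    alpha       : interpolation weight between the two evidence sources
    """
    fused = [
        alpha * s + (1 - alpha) * cosine(e, gesture_emb)
        for s, e in zip(asr_scores, text_embs)
    ]
    best = int(np.argmax(fused))
    return hypotheses[best], fused

# Toy example: impaired speech makes "water" and "later" a near-tie
# for the recognizer, but a drinking gesture disambiguates the intent.
hyps = ["water", "later"]
scores = [-1.05, -1.00]                        # ASR slightly prefers "later"
text_embs = [np.array([1.0, 0.1]), np.array([0.1, 1.0])]
gesture = np.array([0.9, 0.2])                 # close to the "water" embedding
best, fused = rescore(hyps, scores, text_embs, gesture)
print(best)  # → water
```

The design point this illustrates is that gesture acts as a complementary evidence channel: it never replaces the acoustic model, it only reweights its hypotheses, so degraded speech alone no longer determines the output.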
By providing more inclusive communication tools for patients with aphasia and other language disorders, this work has direct medical implications, with the potential to improve both quality of life and healthcare outcomes.
Gesture-Aware Zero-Shot Speech Recognition for Patients with Language Disorders