
Empowering the Visually Impaired with AI Vision
How Large Multimodal Models Transform Daily Life for People with Visual Impairments
This research examines how Large Multimodal Models (LMMs) are reshaping assistive technology by generating natural-language descriptions of a user's surroundings and delivering them as audible feedback.
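To make the pipeline concrete, here is a minimal sketch of the camera-to-speech loop such tools implement: a camera frame goes to a vision-capable LMM, and the returned description is read aloud. This is an illustration, not the studied systems' actual implementation; it assumes an OpenAI-style vision API and the pyttsx3 text-to-speech library, and the model name, prompt, and file name are all placeholders.

```python
import base64

from openai import OpenAI  # assumed choice; any vision-capable LMM API would work
import pyttsx3             # assumed offline text-to-speech engine


def describe_scene(image_path: str) -> str:
    """Send a camera frame to an LMM and return a plain-language description."""
    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("utf-8")

    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Briefly describe this scene for a blind user, "
                         "noting obstacles, people, and any visible text."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content


def speak(text: str) -> None:
    """Read the description aloud as audible feedback."""
    engine = pyttsx3.init()
    engine.say(text)
    engine.runAndWait()


if __name__ == "__main__":
    # Placeholder input: in a deployed tool this would be a live camera frame.
    speak(describe_scene("camera_frame.jpg"))
```

Real assistive tools wrap this loop with continuous capture, latency handling, and safety guardrails, which is precisely where the capability and limitation questions below arise.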
- Investigates real-world applications beyond basic usability, focusing on both capabilities and limitations
- Explores how LMM-based tools function in personal and social contexts
- Identifies design implications for future improvements in assistive technology
- Demonstrates the significant impact these tools have on daily task management and independence
This research matters because it moves beyond theoretical applications to understand how AI vision technology actually transforms the lived experiences of visually impaired users, opening opportunities for greater autonomy and accessibility in everyday situations.