MLLMs for Parent-Child Communication

Leveraging AI to understand joint attention in early language development

This research evaluates how Multimodal Large Language Models (MLLMs) can detect and analyze joint attention, a critical element of early speech-language development, in parent-child interactions.

  • Analyzed 26 parent-child interaction videos with expert annotations
  • Assessed MLLMs' ability to distinguish between strong and poor joint attention
  • Established a framework for AI-assisted assessment of developmental communication patterns
  • Created potential for earlier intervention in developmental communication disorders

This work has significant clinical implications: it could enable broader screening for communication development issues, support speech-language pathologists with AI-enhanced tools, and improve early intervention for children with developmental challenges.

Towards Multimodal Large-Language Models for Parent-Child Interaction: A Focus on Joint Attention
