
Predicting Human Behavior with AI
Using Multimodal LLMs for Context-Aware Human Behavior Prediction
This research explores how Multimodal Large Language Models (MLLMs) can predict human behavior in shared spaces, enabling safer human-robot interaction across a variety of environments.
- Integrates visual and contextual information to predict human actions (see the sketch after this list)
- Evaluates system performance across different environments and activity types
- Identifies key challenges in applying MLLMs to real-world prediction scenarios
- Provides insights for improving human behavior prediction accuracy
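To make the first point concrete, below is a minimal sketch of how a camera frame and a textual scene description might be combined in a single query to a multimodal LLM. It assumes an OpenAI-style chat completions API via the official Python SDK; the model name, prompts, and the `predict_next_action` helper are illustrative assumptions, not the actual system described in this research.

```python
import base64
from openai import OpenAI  # assumes the OpenAI Python SDK is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def predict_next_action(image_path: str, scene_context: str) -> str:
    """Hypothetical helper: ask a multimodal LLM for a short prediction of a
    person's next action, given a camera frame and scene context as text."""
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("utf-8")

    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice, not the paper's setup
        messages=[
            {
                "role": "system",
                "content": (
                    "You predict the most likely next action of the person "
                    "in the image. Answer with one short phrase."
                ),
            },
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": f"Scene context: {scene_context}"},
                    {
                        "type": "image_url",
                        "image_url": {
                            "url": f"data:image/jpeg;base64,{image_b64}"
                        },
                    },
                ],
            },
        ],
        temperature=0.0,  # deterministic output is easier for a planner to consume
    )
    return response.choices[0].message.content


# Example: a robot in a shared kitchen anticipating a nearby person's movement.
# print(predict_next_action("frame_0042.jpg", "shared kitchen, person near the stove"))
```

The key design point this illustrates is that the visual frame and the contextual description travel in one prompt, so the model can condition its prediction on both modalities at once rather than fusing separate unimodal predictions afterward.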
From a safety perspective, this research enables robots to anticipate human actions and adapt their behavior accordingly, reducing risks in shared spaces and informing safety protocols for autonomous systems.