
LanHAR: Bridging the Gap in Activity Recognition
Using Language Models to Interpret Sensor Data Across Diverse Environments
LanHAR leverages Large Language Models to transform inertial sensor data into semantic interpretations, addressing cross-dataset generalization challenges in Human Activity Recognition.
- Converts numerical sensor readings into language descriptions that LLMs can process (see the sketch after this list)
- Addresses distribution gaps caused by variations in activity patterns, devices, and sensor placements
- Creates a unified framework that maintains performance across different environments
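The first bullet above hinges on turning raw inertial readings into text. The following is a minimal sketch of that idea, not LanHAR's actual pipeline: it assumes windowed 3-axis accelerometer and gyroscope input and summarizes it with simple statistics before phrasing the result as a sentence. Names such as describe_imu_window and the choice of features are illustrative assumptions.

```python
import numpy as np

def describe_imu_window(accel: np.ndarray, gyro: np.ndarray,
                        sample_rate_hz: float = 50.0) -> str:
    """Summarize a window of accelerometer/gyroscope samples (shape [T, 3]) as text."""
    duration_s = accel.shape[0] / sample_rate_hz

    # Simple magnitude and per-axis statistics stand in for richer feature extraction.
    accel_mag = np.linalg.norm(accel, axis=1)
    gyro_mag = np.linalg.norm(gyro, axis=1)

    return (
        f"Over a {duration_s:.1f}-second window sampled at {sample_rate_hz:.0f} Hz, "
        f"the acceleration magnitude averaged {accel_mag.mean():.2f} m/s^2 "
        f"(std {accel_mag.std():.2f}), and the angular velocity magnitude "
        f"averaged {gyro_mag.mean():.2f} rad/s (std {gyro_mag.std():.2f}). "
        f"Per-axis acceleration means were x={accel[:, 0].mean():.2f}, "
        f"y={accel[:, 1].mean():.2f}, z={accel[:, 2].mean():.2f} m/s^2."
    )

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Synthetic 2-second window (100 samples at 50 Hz) of 3-axis IMU data.
    accel = rng.normal(loc=[0.0, 0.0, 9.8], scale=1.5, size=(100, 3))
    gyro = rng.normal(loc=0.0, scale=0.5, size=(100, 3))
    print(describe_imu_window(accel, gyro))
```

A description like this can then be handed to an LLM for semantic interpretation, which is the kind of text-based bridge across devices and placements the bullets describe.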
In healthcare applications, this technology enables more reliable patient monitoring, consistent rehabilitation tracking, and accurate health assessments regardless of device type or placement, which is critical for remote patient care and mobility analysis.