
LLMs at the Edge: XR Device Performance Analysis
Benchmarking 17 language models across leading XR platforms
This study evaluates the on-device execution performance of large language models (LLMs) across four major XR platforms, offering practical guidance for developers and engineers building immersive AI applications.
- Four devices were benchmarked: Magic Leap 2, Meta Quest 3, Vivo X100s Pro, and Apple Vision Pro
- 17 different LLMs were tested for processing speed, memory usage, and battery efficiency
- Performance varied significantly across device-model combinations
- Results enable informed decision-making for XR-LLM implementation
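To make the metrics above concrete, here is a minimal sketch of how processing speed (tokens per second) might be measured for a local model. This is an illustrative harness, not the paper's actual tooling; `generate` and `fake_generate` are hypothetical stand-ins for a real on-device runtime such as llama.cpp.

```python
import time

def benchmark_tokens_per_second(generate, prompt, n_runs=3):
    """Time a text-generation callable and report mean tokens/sec.

    `generate` is any function returning a list of generated tokens;
    a real harness would wrap an on-device LLM runtime instead.
    """
    rates = []
    for _ in range(n_runs):
        start = time.perf_counter()
        tokens = generate(prompt)
        elapsed = time.perf_counter() - start
        rates.append(len(tokens) / elapsed)
    return sum(rates) / len(rates)

# Hypothetical stub standing in for an on-device LLM.
def fake_generate(prompt):
    return [f"tok{i}" for i in range(32)]

rate = benchmark_tokens_per_second(fake_generate, "Hello, XR!")
print(f"{rate:.1f} tokens/sec")
```

A full benchmark would repeat this per device-model pair while also sampling memory usage and battery drain, the other two metrics listed above.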
This research is particularly valuable for engineering teams developing next-generation XR applications that require local AI processing without cloud dependencies. It helps bridge the gap between theoretical model capabilities and practical on-device implementation.
Paper: LoXR: Performance Evaluation of Locally Executing LLMs on XR Devices