
LLMs in the Driver's Seat
Enhancing autonomous vehicles with on-board language models
This research introduces a hybrid architecture that combines traditional control systems with locally deployed LLMs to handle edge-case driving scenarios that purely data-driven approaches struggle with.
- Integrates a Model Predictive Controller (MPC) with on-board LLMs to mimic human intuition in unexpected driving situations (see the first sketch after this list)
- Employs Retrieval-Augmented Generation (RAG) to provide relevant context from driving manuals
- Implements model compression techniques (LoRA fine-tuning and quantization) for efficient deployment on vehicle hardware (see the second sketch after this list)
- Demonstrates performance improvements in edge-case scenarios without sacrificing real-time capabilities
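As a rough illustration of how the hybrid control loop and the RAG step could fit together, the Python sketch below keeps the MPC in charge of nominal driving and consults the on-board LLM only when a scene is flagged as an edge case. The manual snippets, the `is_edge_case` check, and the `mpc`/`llm` interfaces are assumptions made for this sketch, not interfaces taken from the paper.

```python
import numpy as np
from sentence_transformers import SentenceTransformer

# Illustrative retrieval over a tiny driving-manual corpus (assumed content).
_embedder = SentenceTransformer("all-MiniLM-L6-v2")
MANUAL_SNIPPETS = [
    "If an obstacle blocks the lane and oncoming traffic is clear, pass slowly with ample clearance.",
    "When a road worker gives hand signals, reduce speed and follow their directions.",
]
_snippet_vecs = _embedder.encode(MANUAL_SNIPPETS, normalize_embeddings=True)

def retrieve_manual_context(scene_description: str, k: int = 1) -> list[str]:
    """Return the k manual snippets most similar to the scene description."""
    query_vec = _embedder.encode([scene_description], normalize_embeddings=True)
    scores = (_snippet_vecs @ query_vec.T).ravel()
    top = np.argsort(-scores)[:k]
    return [MANUAL_SNIPPETS[i] for i in top]

def drive_step(state, scene_description, mpc, llm, is_edge_case):
    """One control step: the MPC handles nominal driving; the LLM is consulted
    only when the scene is flagged as an edge case (all interfaces assumed)."""
    if is_edge_case(scene_description):
        context = retrieve_manual_context(scene_description)
        guidance = llm.generate(
            f"Context from driving manual: {context}\n"
            f"Scene: {scene_description}\n"
            "Suggest a high-level maneuver."
        )
        mpc.set_high_level_goal(guidance)  # hypothetical hook on the controller
    return mpc.compute_control(state)
```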
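For the compression bullet, the following hedged sketch shows one common way to combine 4-bit quantization with LoRA adapters, using the Hugging Face transformers, bitsandbytes, and peft libraries as stand-ins; the base model name and hyperparameters are illustrative choices, not values reported in the paper.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

BASE_MODEL = "meta-llama/Llama-2-7b-hf"  # illustrative base model, not the paper's

# Load the base model in 4-bit precision so it fits on embedded vehicle hardware.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)
tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
model = AutoModelForCausalLM.from_pretrained(
    BASE_MODEL,
    quantization_config=bnb_config,
    device_map="auto",
)

# Attach small LoRA adapters so only a fraction of the weights are trained
# on driving-specific data; the ranks and target modules are assumptions.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of total weights
```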
This engineering work offers a practical path to more robust autonomous driving systems by complementing data-driven methods with knowledge-driven approaches, potentially addressing key safety and reliability challenges in the industry.
Enhancing Autonomous Driving Systems with On-Board Deployed Large Language Models