Next-Gen Autonomous Driving Perception

Integrating Deep Learning & Multimodal LLMs for Enhanced Road Safety

This research advances autonomous vehicle intelligence by combining deep learning with Multimodal Large Language Models (MLLMs) to create more robust road perception systems.

  • Achieved 99.8% accuracy in traffic sign recognition using a ResNet-50 architecture (see the sketch after this list)
  • Developed an integrated framework combining specialized models for comprehensive road awareness
  • Implemented robust lane detection for complex driving environments
  • Demonstrated how multimodal approaches improve autonomous navigation safety
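
The summary above does not include implementation detail, so the following is only a minimal sketch of how a ResNet-50 classifier could be fine-tuned for traffic sign recognition. It assumes PyTorch/torchvision and a GTSRB-style 43-class label set; the dataset, class count, and hyperparameters are illustrative assumptions, not details taken from the work.

    import torch
    import torch.nn as nn
    from torchvision import models

    # Assumed setup: fine-tune an ImageNet-pretrained ResNet-50 for traffic sign
    # classification (e.g. the 43 classes of GTSRB). These choices are
    # illustrative, not the paper's actual configuration.
    NUM_CLASSES = 43

    model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
    model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # replace the ImageNet head

    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
        """Run one optimization step on a batch of traffic sign crops."""
        model.train()
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
        return loss.item()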

This engineering breakthrough has significant implications for AV development, addressing critical safety challenges in real-world driving conditions while establishing a foundation for more reliable autonomous transportation systems.
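
As a rough illustration of the integrated framework idea, the sketch below shows one way the outputs of specialized perception models could be summarized into a text prompt for a multimodal LLM that reasons about the scene. The Detection structure, the prompt wording, and the downstream hand-off to the MLLM are hypothetical, not the interface actually used in this work.

    from dataclasses import dataclass

    @dataclass
    class Detection:
        label: str          # e.g. "stop sign" or "solid lane line, right"
        confidence: float   # detector/classifier score in [0, 1]

    def build_scene_prompt(detections: list[Detection]) -> str:
        """Turn perception outputs into a textual scene description for an MLLM."""
        lines = [f"- {d.label} (confidence {d.confidence:.2f})" for d in detections]
        return (
            "You are assisting an autonomous vehicle.\n"
            "Current perception outputs:\n"
            + "\n".join(lines)
            + "\nSuggest a safe driving action and explain the reasoning."
        )

    if __name__ == "__main__":
        scene = [Detection("stop sign", 0.998), Detection("solid lane line, right", 0.91)]
        print(build_scene_prompt(scene))  # this prompt would then be passed to the MLLM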

Advancing Autonomous Vehicle Intelligence: Deep Learning and Multimodal LLM for Traffic Sign Recognition and Robust Lane Detection
