
HybridGen: Smarter Robots Through Imitation
VLM-Guided Planning for Scalable Robotic Learning
HybridGen is a framework that combines Vision-Language Models (VLMs) with hybrid planning to automatically generate demonstration data for robotic manipulation tasks.
- Creates large-scale, diverse demonstration data for improved robotic generalization
- Uses a two-stage pipeline: a VLM first parses expert demonstrations into task structure, then object-centric pose transformations adapt the demonstrated motions to new scenes
- Enables complex manipulations that were previously challenging to automate
- Directly applicable to industrial settings where robots must learn varied tasks
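The object-centric pose transformation in the second stage can be sketched in a minimal planar form. This is an illustrative example only, not the paper's implementation: the SE(2) simplification, function names, and waypoint representation are assumptions.

```python
import numpy as np

def se2_matrix(x, y, theta):
    """Homogeneous 2-D pose (planar SE(2)) as a 3x3 matrix."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0,  0, 1]])

def retarget_waypoints(demo_obj_pose, new_obj_pose, demo_waypoints):
    """Object-centric retargeting: express each demonstrated end-effector
    waypoint in the demo object's frame, then map it into the new
    object's frame. All arguments are 3x3 SE(2) matrices."""
    demo_obj_inv = np.linalg.inv(demo_obj_pose)
    return [new_obj_pose @ demo_obj_inv @ w for w in demo_waypoints]

# One expert grasp waypoint, recorded with the object at the origin.
demo_obj = se2_matrix(0.0, 0.0, 0.0)
grasp = se2_matrix(0.05, 0.0, 0.0)   # 5 cm in front of the object

# Generate a new demonstration with the object moved and rotated.
new_obj = se2_matrix(0.30, 0.10, np.pi / 2)
(new_grasp,) = retarget_waypoints(demo_obj, new_obj, [grasp])
```

Because the waypoint is stored relative to the object, a single expert demonstration can be replayed for many object placements, which is the basic mechanism that lets this kind of pipeline scale data generation.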
This research addresses a critical bottleneck in robotic learning by automating the data generation process, potentially accelerating deployment of adaptable robots in manufacturing environments.
HybridGen: VLM-Guided Hybrid Planning for Scalable Data Generation of Imitation Learning