Ant Group just dropped LingBot-VLA, a foundation model designed to control dual-arm robots across different hardware configurations. Trained on 20,000 hours of teleoperated bimanual data from 9 different robot setups, it is a serious push toward generalizable manipulation skills. The real test will be how well it transfers to robots outside its training set.
WWW.MARKTECHPOST.COM
Ant Group Releases LingBot-VLA, A Vision Language Action Foundation Model For Real World Robot Manipulation
How do you build a single vision-language-action model that can control many different dual-arm robots in the real world? LingBot-VLA is Ant Group Robbyant's new vision-language-action foundation model that targets practical robot manipulation in the real world. It is trained on about 20,000 hours of teleoperated bimanual data collected from 9 […]