Coopernaut: End-to-End Driving with Cooperative Perception for Networked Vehicles.
Jiaxun Cui, Hang Qiu, Dian Chen, Peter Stone, and Yuke Zhu.
In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), June 2022.
Project website: https://ut-austin-rpl.github.io/Coopernaut/
Optical sensors and learning algorithms for autonomous vehicles have dramatically advanced in the past few years. Nonetheless, the reliability of today's autonomous vehicles is hindered by the limited line-of-sight sensing capability and the brittleness of data-driven methods in handling extreme situations. With recent developments in telecommunication technology, cooperative perception with vehicle-to-vehicle communications has become a promising paradigm to enhance autonomous driving in dangerous or emergency situations. We introduce COOPERNAUT, an end-to-end learning model that uses cross-vehicle perception for vision-based cooperative driving. Our model encodes LiDAR information into compact point-based representations that can be transmitted as messages between vehicles via realistic wireless channels. To evaluate our model, we develop AUTOCASTSIM, a network-augmented driving simulation framework with example accident-prone scenarios. Our experiments on AUTOCASTSIM suggest that our cooperative perception driving models achieve a 40% improvement in average success rate over egocentric driving models in these challenging driving situations, with a 5x smaller bandwidth requirement than the prior work V2VNet.
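Below is a minimal, hypothetical sketch of the cooperative-perception idea the abstract describes: each vehicle compresses its LiDAR point cloud into a small set of keypoints with learned features, shares that compact message, and the ego vehicle fuses received messages with its own before predicting controls. All module names, dimensions, and the simple PointNet-style encoder with max-pool fusion are illustrative assumptions for exposition, not the paper's actual COOPERNAUT architecture or released code.

    # Hypothetical sketch of cooperative driving via compact point messages.
    # Names, sizes, and fusion scheme are illustrative, not the paper's code.
    import torch
    import torch.nn as nn

    class PointEncoder(nn.Module):
        """Encodes a LiDAR cloud (N, 3) into K keypoints with features,
        forming a compact message suitable for V2V transmission."""
        def __init__(self, num_keypoints=128, feat_dim=64):
            super().__init__()
            self.num_keypoints = num_keypoints
            self.mlp = nn.Sequential(
                nn.Linear(3, 64), nn.ReLU(),
                nn.Linear(64, feat_dim),
            )

        def forward(self, points):  # points: (N, 3)
            # Random downsampling stands in for farthest-point sampling.
            idx = torch.randperm(points.shape[0])[: self.num_keypoints]
            keypoints = points[idx]            # (K, 3) positions
            feats = self.mlp(keypoints)        # (K, F) per-point features
            return keypoints, feats            # the "message"

    class CooperativeDriver(nn.Module):
        """Fuses the ego message with messages received from nearby
        vehicles (assumed already transformed into the ego frame)
        and predicts a control command."""
        def __init__(self, feat_dim=64):
            super().__init__()
            self.encoder = PointEncoder(feat_dim=feat_dim)
            self.head = nn.Sequential(
                nn.Linear(feat_dim, 128), nn.ReLU(),
                nn.Linear(128, 3),  # e.g. throttle, brake, steer
            )

        def forward(self, ego_points, received_messages):
            _, feats = self.encoder(ego_points)
            # Keypoint positions are unused in this toy max-pool fusion;
            # a fuller model would condition on them (e.g. via attention).
            all_feats = [feats] + [f for (_, f) in received_messages]
            fused = torch.cat(all_feats, dim=0)  # stack all keypoints
            pooled = fused.max(dim=0).values     # (F,) global descriptor
            return self.head(pooled)             # (3,) control command

    # Usage: one ego cloud plus one message from another vehicle.
    ego = torch.randn(2048, 3)
    other_kp, other_feats = PointEncoder()(torch.randn(2048, 3))
    controls = CooperativeDriver()(ego, [(other_kp, other_feats)])
    print(controls.shape)  # torch.Size([3])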
@InProceedings{CVPR22-cui,
  author    = {Jiaxun Cui and Hang Qiu and Dian Chen and Peter Stone and Yuke Zhu},
  title     = {Coopernaut: End-to-End Driving with Cooperative Perception for Networked Vehicles},
  booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  location  = {New Orleans, LA, USA},
  month     = {June},
  year      = {2022},
  abstract  = {Optical sensors and learning algorithms for autonomous vehicles have dramatically advanced in the past few years. Nonetheless, the reliability of today's autonomous vehicles is hindered by the limited line-of-sight sensing capability and the brittleness of data-driven methods in handling extreme situations. With recent developments in telecommunication technology, cooperative perception with vehicle-to-vehicle communications has become a promising paradigm to enhance autonomous driving in dangerous or emergency situations. We introduce COOPERNAUT, an end-to-end learning model that uses cross-vehicle perception for vision-based cooperative driving. Our model encodes LiDAR information into compact point-based representations that can be transmitted as messages between vehicles via realistic wireless channels. To evaluate our model, we develop AUTOCASTSIM, a network-augmented driving simulation framework with example accident-prone scenarios. Our experiments on AUTOCASTSIM suggest that our cooperative perception driving models achieve a 40% improvement in average success rate over egocentric driving models in these challenging driving situations, with a 5x smaller bandwidth requirement than the prior work V2VNet.},
  wwwnote   = {<a href="https://ut-austin-rpl.github.io/Coopernaut/">Project website</a>},
}