From left, U of T researchers Wenjie Luo, Associate Professor Raquel Urtasun, and Bin Yang at Uber’s Advanced Technologies Group (ATG) Toronto (photo by Ryan Perez)
A self-driving vehicle has to detect objects, track them over time, and predict where they will be in the future in order to plan a safe manoeuvre. These tasks are typically trained independently of one another, so an error in any single task can cascade through the pipeline with potentially dangerous consequences.
Researchers at the University of Toronto’s department of computer science and Uber’s Advanced Technologies Group (ATG) in Toronto have developed an algorithm that jointly reasons about all these tasks – the first to bring them all together. Importantly, their solution takes as little as 30 milliseconds per frame.
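The core idea is that one shared network consumes a short history of sensor frames and feeds several lightweight output heads at once. The toy sketch below illustrates this structure only; all names, shapes, and operations are illustrative stand-ins, not the authors' actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

T_PAST, H, W = 5, 32, 32   # past frames, bird's-eye-view grid size (illustrative)
N_FUTURE = 3               # future timesteps to forecast (illustrative)

def shared_backbone(frames):
    """Collapse the temporal stack into one feature map (stand-in for a CNN)."""
    w = rng.standard_normal((T_PAST, 1))
    return np.tensordot(frames, w, axes=([0], [0]))[..., 0]  # (H, W)

def detection_head(feat):
    """Per-cell objectness score in [0, 1]."""
    return 1.0 / (1.0 + np.exp(-feat))

def forecast_head(feat):
    """Per-cell (dx, dy) displacement for each future timestep."""
    w = rng.standard_normal((N_FUTURE, 2))
    return feat[..., None, None] * w  # (H, W, N_FUTURE, 2)

frames = rng.standard_normal((T_PAST, H, W))
feat = shared_backbone(frames)   # one pass over shared features...
scores = detection_head(feat)    # ...feeds every task head, so detection,
future = forecast_head(feat)     # tracking and forecasting stay consistent

print(scores.shape, future.shape)  # (32, 32) (32, 32, 3, 2)
```

Because every head reads the same features, the tasks are optimized together rather than as separate systems, which is what lets a single forward pass cover all of them within the reported per-frame budget.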
Luo and Bin Yang, a PhD student in computer science, along with their graduate supervisor, Raquel Urtasun, an associate professor of computer science and head of Uber ATG Toronto, presented their paper, Fast and Furious: Real Time End-to-End 3D Detection, Tracking and Motion Forecasting with a Single Convolutional Net, at last week’s Computer Vision and Pattern Recognition (CVPR) conference in Salt Lake City, the premier annual computer vision event.