Toyota, MIT release open data to fast-track autonomous driving study
How can self-driving vehicles become more aware of their environment? Can computers learn from past experience and use it to make more intuitive, spontaneous decisions, the way humans do?
These are some of the questions that Toyota's Collaborative Safety Research Center (CSRC) and the Massachusetts Institute of Technology (MIT) AgeLab at the MIT Center for Transportation & Logistics want to answer with an innovative open dataset called DriveSeg.
DriveSeg is free, and it advances autonomous driving research toward more human-like perception. It treats the environment as a complex, continuous flow of visual information, much as humans process a scene, rather than as isolated snapshots used to identify objects on the road.
Generally, a typical self-driving dataset uses "bounding boxes" that capture single, well-defined, uniform shapes to identify vehicles and other objects on the road (such as other traffic and pedestrians). This can be limiting, since perception is bounded by what those boxes capture.
DriveSeg instead uses continuous driving-scene segmentation for a more holistic view of the entire road. While it labels the same common objects found inside a "bounding box", it does so with pixel-level representations, allowing a broader view that recognizes less uniform, less defined shapes, such as a long field or a road work scenario.
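The difference between the two labeling styles can be sketched in a few lines. This is an illustrative toy example, not the DriveSeg data format: it builds a hypothetical pixel-level label mask for an irregularly shaped object, derives the tightest bounding box around it, and shows that the box necessarily sweeps in background pixels that the mask excludes.

```python
# Illustrative sketch (not the DriveSeg API): contrasts a bounding-box
# label with a pixel-level segmentation mask for the same object.
import numpy as np

# Hypothetical 8x8 frame; label 1 marks "vehicle" pixels, 0 is background.
mask = np.zeros((8, 8), dtype=int)
mask[2:6, 1:4] = 1   # a roughly rectangular object...
mask[5, 3] = 0       # ...with one irregular corner

# A bounding box is the tightest rectangle enclosing all labeled pixels.
ys, xs = np.nonzero(mask)
box = (int(ys.min()), int(xs.min()), int(ys.max()), int(xs.max()))

# Pixels inside the box vs pixels actually belonging to the object:
box_area = (box[2] - box[0] + 1) * (box[3] - box[1] + 1)
object_pixels = int(mask.sum())
print(box, box_area, object_pixels)  # (2, 1, 5, 3) 12 11
```

The box over-counts by one background pixel here; for long, thin, or irregular shapes like a field edge or a road work zone, the gap between box area and true object pixels grows much larger, which is the limitation pixel-level labels avoid.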
“Predictive power is an important part of human intelligence,” says Rini Sherony, Toyota Collaborative Safety Research Center’s Senior Principal Engineer.
“Whenever we drive, we are always tracking the movements of the environment around us to identify potential risks and make safer decisions. By sharing this dataset, we hope to accelerate research into autonomous driving systems and advanced safety features that are more attuned to the complexity of the environment around them.”
DriveSeg can be used by researchers and the academic community to advance research in autonomous vehicles. Video-based driving-scene perception gives researchers a richer flow of data, with access to patterns as they play out over time rather than in isolated frames.
DriveSeg's data is made up of two parts. DriveSeg (manual) is 2 minutes and 47 seconds of high-resolution video of a daytime drive around Cambridge, Massachusetts.
DriveSeg (Semi-auto) is drawn from MIT Advanced Vehicle Technology (AVT) Consortium data. It was created to study a wide range of real-world driving scenarios and to assess the potential of training vehicle perception systems on pixel labels produced by AI-based labeling systems.
More information about DriveSeg is available online.
Photos from Toyota