Autonomous vehicles are getting much better at sensing their surroundings, but they still generally need a direct line of sight to a hazard before they can react to it. Now, a Stanford team is developing a laser-based system that may allow driverless cars to see around blind corners, and come to a stop before a child runs out or another car speeds by.
In order to image objects beyond direct line-of-sight, the technique involves bouncing laser pulses around a corner, off the desired object and back. A highly sensitive sensor captures the returning photons of light, an algorithm analyzes them and the end result is a fuzzy snapshot of something lurking just out of sight.
As high-tech as it sounds, this isn't the first time scientists have successfully pulled off this kind of light show. A team from MIT was experimenting with a similar system in 2012, and in 2014 European and Canadian researchers were able to recreate "light echoes" of hidden objects.
The Stanford scientists say their new contribution is mostly on the mathematical side. Since light is scattered by the objects it hits, photons can bounce back to the sensor from almost anywhere, resulting in a lot of "noise" that can wash out the desired target. To help cut through it, the team developed an advanced algorithm that can calculate the paths taken by the captured photons, and use that to reconstruct the object.
"A substantial challenge in non-line-of-sight imaging is figuring out an efficient way to recover the 3D structure of the hidden object from the noisy measurements," says David Lindell, co-author of a paper describing the technique. "I think the big impact of this method is how computationally efficient it is."
The researchers say their algorithm can analyze the photon data in less than a second, and is efficient enough to run on a regular laptop. The problem in the way of practicality at the moment is the initial scans – in order to generate enough data about a hidden object, the system needs to fire off many pulses of laser light in a process that can take up to an hour. As it stands, that's not going to be much use for providing forewarning of a child about to run around a corner in front of a car.
The other major issue is ambient light. Under carefully controlled lab conditions the system works fine, but take it outside into the bright light of day and the sensors get a little overwhelmed. That said, in outdoor tests the researchers found that the technology was able to clearly pick up highly reflective objects, such as high-visibility clothing, road signs and markers.
In the future, the researchers plan to speed up the scans, improve the system's ability to work in daylight, and extend it to detect moving objects.
The research was published in the journal Nature. The team describes the project in the video below.
December 06, 2017
128-laser LiDAR sensor significantly sharpens autonomous cars' vision
An optical image of the resolution by the Velodyne LiDAR VLS-128 in operation
(Credit: Velodyne LiDAR)
In what promises to be a big step forward in 3D vision systems for autonomous vehicles, Velodyne has announced a new 128-channel LiDAR sensor that boasts the longest range and highest resolution on the market.

LiDAR sensors are used to provide real-time 3D mapping and object detection in many autonomous driving systems. Velodyne LiDAR developed and patented the world's first 3D real-time rotating LiDAR sensor for advanced automotive safety applications in 2005. Its first application was during the Defense Advanced Research Projects Agency (DARPA) Grand Challenge that year, in which autonomous vehicles competed to complete a course through a mock urban environment.

Since then, Velodyne's increasingly capable sensors have been installed in thousands of vehicles around the world, and currently provide core technology for several autonomous vehicle development programs.
The new flagship VLS-128 model is claimed to have 10 times the resolving power of the company's previous benchmark model, the HDL-64. As well as doubling the channel count, channel density has been tripled and the zoom resolution doubled, enabling it to detect objects more clearly and identify them more accurately. The resulting range is 300 meters (984 ft), and the high-resolution data gathered enables it to directly detect objects without additional sensor fusion, reducing computational complexity.

Despite the increase in resolving power, the VLS-128 is around one third the weight of the HDL-64, and it features auto-alignment technology that will be progressively installed in Velodyne's other LiDAR offerings.

Velodyne claims that as well as its capabilities in low-speed urban environments, the VLS-128 will help autonomous vehicles to function at highway speeds, where it's "designed to solve for all corner cases needed for full autonomy."
"We think the biggest unsolved problem for autonomous driving at highway speeds is avoiding road debris," says company founder and CEO, David Hall. "That's tough, because you have to see way out ahead. The self-driving car needs to change lanes, if possible, and do so safely. On top of that, most road debris is shredded truck tire – all black material on a dark surface. Especially at night, that type of object recognition is challenging, even for the LiDAR sensors we've previously built. The autonomous car needs to see further out, with denser point clouds and higher laser repetitions."Velodyne says it will begin shipping the VLS-128 by year's end.
Source: Velodyne, NewAtlas
