11 Apr 2016
MIT system built largely from off-the-shelf parts is designed to work efficiently in daylight.
Lidar systems, employing infra-red light to sense a vehicle's surroundings in an analogous way to the use of radio signals in radar, have been incorporated into a number of high-profile programs, including the ongoing development of a self-driving car by the secretive Google R&D unit known only as X.
But lidar is not cheap, and the kind of systems appearing on these vehicles can have five-figure price tags.
A cheaper optical distance-sensing technology could be a significant step forward, and a team at MIT has developed just such a platform, using off-the-shelf components and exploiting the processing power now available within most smartphones.
Even if not suitable for the highest specifications of driverless vehicles, it could still help encourage the development of smaller autonomous systems and drones, and open up some new applications for cheaper rangefinding technology as well.
Dealing with daylight
"Our motivation was the current need for a low cost sensor able to work outdoors in ambient light," commented Jason Gao of the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL).
"The problem with any laser illumination in a bright sunny environment is distinguishing it from the background radiation. Our answer is essentially a combination of modulating the laser output, and computational analysis of the multiple resulting images."
At heart the optics involved were fairly straightforward; the basic operation is essentially active triangulation, emitting laser light in a single horizontal plane - which is nothing new. But MIT's focus was on leveraging the computational capabilities of modern phones so that the team could employ laser components of relatively low cost and modest specifications.
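The active triangulation at the heart of the system can be sketched in a few lines: the laser and camera sit a known distance apart, so the pixel offset at which the reflected line appears on the sensor encodes the range. The baseline, focal length, and offset values below are illustrative assumptions, not the MIT prototype's actual parameters.

```python
def triangulate_range(pixel_offset, baseline_m=0.05, focal_length_px=1000.0):
    """Estimate distance via active triangulation.

    pixel_offset:    displacement (in pixels) of the detected laser return
                     from the camera's optical axis; nearer objects produce
                     larger offsets.
    baseline_m:      laser-to-camera separation (assumed 5 cm here).
    focal_length_px: camera focal length expressed in pixels (assumed).
    """
    if pixel_offset <= 0:
        raise ValueError("laser return must be offset from the optical axis")
    # Similar triangles: range = baseline * focal length / disparity.
    return baseline_m * focal_length_px / pixel_offset

# With these assumed parameters, a return 25 pixels off-axis sits at
# 0.05 * 1000 / 25 = 2.0 m.
print(triangulate_range(25))
```

Note the inverse relationship: because range scales as 1/offset, depth resolution degrades with distance, which is one reason low-cost triangulation systems suit short-range use.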
High-end rangefinders use extremely short and high-energy light pulses to avoid the detector being overwhelmed by ambient light or confused by other light sources, but this was not a realistic option for the MIT smartphone-based design philosophy.
Instead, the team conceived a system which carries out multiple measurements timed to the emission of lower energy pulses. As described by the project's technical paper, the unit captures four frames of video to make a range determination: two are used to record the reflected laser signal, and the other pair to assess the ambient light in the scene.
"In principle a straightforward subtraction would then isolate the returning laser light, but in fact there are some potential problems with that operation," Gao said.
"Other moving objects in the scene, and objects either flashing or just appearing to be flashing, might accidentally be detected as laser illumination too. So computational algorithms carry out some complicated interpretations of the images, to figure out where the laser light actually is and reject the false positives."
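The four-frame scheme described above can be illustrated with a simple sketch: subtract each ambient frame from its paired laser-on frame, then require the residual to appear in both laser frames before accepting a pixel as laser light. This persistence check is one plausible way to reject flickering or moving sources of the kind Gao mentions; the threshold and frame data here are assumptions for illustration, not the project's actual algorithm.

```python
import numpy as np

def detect_laser(laser1, ambient1, laser2, ambient2, threshold=30):
    """Return a boolean mask of pixels judged to be laser illumination.

    laser1/laser2:     frames captured with the laser pulsed on.
    ambient1/ambient2: paired frames with the laser off.
    threshold:         minimum brightness excess (assumed value).
    """
    # Cast up from uint8 so the subtraction cannot wrap around.
    diff1 = laser1.astype(np.int16) - ambient1.astype(np.int16)
    diff2 = laser2.astype(np.int16) - ambient2.astype(np.int16)
    # A genuine laser return persists across both laser-on frames;
    # a flashing light or a fast-moving bright object usually does not.
    return (diff1 > threshold) & (diff2 > threshold)
```

In practice the real system must also cope with scene motion between frames, which is why the paper's algorithms are considerably more involved than this two-frame consistency test.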
This in turn requires adequate processor power, and although the team was able to tweak the phone's on-board routines to get the most out of its GPU, the handset's inherent computational capabilities did represent a restriction on the achievable performance - at least in the prototype.
"There are a few potential advances we are working on, and faster processing is one," commented Gao. "Our captured image resolution of 640x480 pixels is down-sampled well below the image sensor's full resolution, partly due to processing bottlenecks. Since the image processing routines are highly parallelized, we expect that a faster smartphone processor could make a significant improvement."
Devices for smarter cities
Other potential improvements include a change to the shutter mechanism employed in the smartphone camera itself. The prototype's handset uses a "rolling shutter", in which the sensor's rows of pixels are exposed sequentially rather than all at once - a rapid sweep, but one that never exposes every pixel simultaneously.
"Our current pulse duration is matched to the duration of the rolling shutter exposure of 33.3 ms," commented Gao. "Mobile devices are now coming out with alternatives known as global shutters, in which light collection starts and stops at the same time for all pixels. Using a global shutter would allow us to shorten the laser pulse to the duration of an individual pixel exposure."
This in turn would allow the laser power to be boosted and increase the effective range of operation, or alternatively mean the frame rate of the system could be raised.
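The trade-off Gao describes can be put in rough numbers: if the energy delivered per pulse is held fixed (for example by eye-safety limits), then shrinking the pulse from the full 33.3 ms rolling-shutter exposure to a much shorter global-shutter pixel exposure allows the peak power to rise in proportion. The 1 ms global-shutter exposure below is an assumed figure for illustration only.

```python
def peak_power_gain(rolling_exposure_s=0.0333, global_exposure_s=0.001):
    """Ratio of allowable peak laser power for a short global-shutter
    pulse versus a pulse stretched across the full rolling-shutter
    exposure, holding the energy per pulse constant."""
    # energy = power * time, so at fixed energy: power ratio = time ratio.
    return rolling_exposure_s / global_exposure_s

# With an assumed 1 ms exposure, peak power could rise roughly 33-fold.
print(peak_power_gain())
```

A brighter return against the same ambient background directly improves the signal-to-noise ratio, which is how a shorter pulse translates into either longer range or a higher frame rate.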
Driverless cars have grabbed the headlines, but the MIT team foresees its rangefinder being used in a broader class of less expensive self-driving vehicles, or potentially being installed onto such items as golf carts or wheelchairs to make them autonomous.
"We are really targeting a growing class of smaller autonomous vehicles that are starting to be envisaged for many applications, such as city maintenance robots, package delivery drones, or small personal transportation," noted Gao. "These require much lower rangefinder system costs than a lidar unit, and an ability to work in challenging environments."
This fits with the broader program pursued by the MIT project group under Li-Shiuan Peh, which envisages a device-centric approach to smarter cities, rather than one centered on vehicles or infrastructure.
"We have kept the optics very simple and cheap," said Gao. "The infra-red cut-off filter was taken out and we added a bandpass filter, but the unit is at heart a cheap $10 laser module. The principle is that you wouldn't need to upgrade a vehicle to acquire the new rangefinder functionality - the unit would be cheap enough that you could just dock the phone onto the existing vehicle instead."
About the Author
Tim Hayes is a contributor to Optics.org.