Research & Development

MIT team makes low-cost 3-D imaging '1,000 times better'

08 Dec 2015

Algorithms exploiting light’s polarization significantly boost resolution of cheap commercial depth sensors.

Researchers at MIT have shown that by exploiting the polarization of light they can increase the resolution of conventional 3-D imaging devices by as much as 1,000 times. They say the technique could lead to high-quality 3-D cameras integrated into cellphones, and perhaps to the ability to photograph an object and then use a 3-D printer to produce a replica. It could also aid the development of driverless cars.

“Today, 3-D cameras can be miniaturized to fit on cellphones,” said Achuta Kadambi, a PhD student in the MIT Media Lab and one of the system’s developers. “But this involves making compromises to the 3-D sensing, leading to very coarse recovery of observed geometry. That is a natural application for polarization, because you can still use a low-quality sensor, and adding a polarizing filter gives a result whose quality is better than that of many machine-shop laser scanners.”

The researchers describe the new system, which they call Polarized 3D, in a paper they are presenting at the International Conference on Computer Vision in Santiago, Chile, from December 11 to 18. Kadambi is lead author, supported by Ramesh Raskar and Boxin Shi (both MIT) and Vage Taamazyan from the Skolkovo Institute of Science and Technology in Russia, which MIT helped found in 2011.

Polarization affects the way in which light bounces off physical objects. If light strikes an object squarely, much of it is absorbed, but whatever reflects back has the same mix of polarizations as the incoming light. At wider angles of reflection, however, light within a certain range of polarizations is more likely to be reflected. The polarization of reflected light therefore carries information about the geometry of the objects it has struck.
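To make that relationship concrete: for specular reflection of unpolarized light, the degree of polarization follows directly from the Fresnel reflection coefficients, vanishing at normal incidence and peaking at Brewster’s angle. The short Python sketch below is illustrative only, not taken from the MIT paper, and assumes a hypothetical refractive index of 1.5:

```python
import numpy as np

def fresnel_dop(theta, n=1.5):
    """Degree of polarization of specularly reflected, initially
    unpolarized light, from the Fresnel reflectances (illustrative;
    n is an assumed refractive index, theta is in radians)."""
    theta = np.asarray(theta, dtype=float)
    cos_t = np.sqrt(1.0 - (np.sin(theta) / n) ** 2)   # Snell's law
    rs = ((np.cos(theta) - n * cos_t) / (np.cos(theta) + n * cos_t)) ** 2
    rp = ((cos_t - n * np.cos(theta)) / (cos_t + n * np.cos(theta))) ** 2
    return (rs - rp) / (rs + rp)

print(fresnel_dop(0.0))               # 0.0: head-on reflection stays unpolarized
print(fresnel_dop(np.arctan(1.5)))    # 1.0: fully polarized at Brewster's angle
```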

The MIT team says that although this relationship has been known for centuries, it has been difficult to make use of it, because of a fundamental ambiguity about polarized light: light with a particular polarization is indistinguishable from light with the opposite polarization.
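The ambiguity follows from the measurement model itself: the radiance seen through a linear polarizer varies as cos 2(a − φ) in the polarizer angle a, so an azimuth φ and the opposite azimuth φ + 180° are indistinguishable. A minimal numerical check (the function name is ours, not the paper’s):

```python
import numpy as np

def through_polarizer(a, i_max, i_min, phi):
    """Radiance seen through a linear polarizer at angle a, for light
    with azimuth phi; the cos(2*...) term is what hides the sign of phi."""
    return 0.5 * (i_max + i_min) + 0.5 * (i_max - i_min) * np.cos(2.0 * (a - phi))

angles = np.linspace(0.0, np.pi, 7)
phi = np.deg2rad(30.0)
same = np.allclose(through_polarizer(angles, 1.0, 0.2, phi),
                   through_polarizer(angles, 1.0, 0.2, phi + np.pi))
print(same)   # True: the two opposite azimuths produce identical images
```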

Mid-quality object scan by laser.

Polarization plus depth sensing

To resolve this ambiguity, the Media Lab researchers use coarse depth estimates provided by some other method, such as the time a light signal takes to reflect off an object and return to its source. Even with this added information, calculating surface orientation from measurements of polarized light is complicated, but it can be done in real time by a graphics processing unit (GPU), the type of special-purpose chip found in most video game consoles.
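One way to picture that disambiguation, pixel by pixel: each polarization measurement leaves two candidate surface normals (the azimuth is known only modulo 180°, with the zenith angle coming from the degree of polarization), and the coarse depth estimate votes between them. The sketch below is a deliberately simplified per-pixel stand-in for the paper’s more global optimization, with hypothetical function names:

```python
import numpy as np

def candidate_normals(phi, theta):
    """The two unit normals consistent with one polarization measurement:
    azimuth phi is known only modulo pi; zenith theta comes from the DoP."""
    return [np.array([np.sin(theta) * np.cos(p),
                      np.sin(theta) * np.sin(p),
                      np.cos(theta)]) for p in (phi, phi + np.pi)]

def disambiguate(phi, theta, coarse_normal):
    """Keep the candidate that agrees best with the normal implied by
    the coarse depth map (a per-pixel toy, not the paper's solver)."""
    return max(candidate_normals(phi, theta),
               key=lambda nrm: float(np.dot(nrm, coarse_normal)))

n_hat = disambiguate(np.deg2rad(30), np.deg2rad(40),
                     coarse_normal=np.array([0.3, 0.2, 0.9]))
```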

The researchers’ experimental setup consisted of a Microsoft Kinect — which gauges distance using reflection time — with an ordinary polarizing photographic lens placed in front of its camera. In each experiment, the researchers took three photos of an object, rotating the polarizing filter each time, and their algorithms compared the light intensities of the resulting images.
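With three (or more) images at known polarizer angles, the intensity at each pixel can be fit to the sinusoidal model I = c0 + c1·cos 2a + c2·sin 2a, recovering the maximum and minimum intensities and the polarization azimuth. A minimal least-squares sketch of that step, under our assumed model rather than the authors’ released code:

```python
import numpy as np

def fit_polarization(pol_angles, intensities):
    """Recover I_max, I_min and azimuth phi (modulo pi) from >= 3
    measurements through a polarizer at known angles (radians)."""
    a = np.asarray(pol_angles, dtype=float)
    A = np.stack([np.ones_like(a), np.cos(2 * a), np.sin(2 * a)], axis=1)
    c0, c1, c2 = np.linalg.lstsq(A, np.asarray(intensities, float), rcond=None)[0]
    amp = np.hypot(c1, c2)
    return c0 + amp, c0 - amp, 0.5 * np.arctan2(c2, c1)

i_max, i_min, phi = fit_polarization(np.deg2rad([0, 45, 90]), [0.9, 0.6, 0.3])
dop = (i_max - i_min) / (i_max + i_min)   # degree of polarization
```

The degree of polarization then constrains how steeply the surface is tilted, along the lines of the Fresnel relationship sketched earlier.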

On its own, at a distance of several meters, the Kinect can resolve physical features as small as a centimeter or so across. But with the addition of the polarization information, the researchers’ system could resolve features in the range of tens of micrometers, or one-thousandth the size. For comparison, the researchers also imaged several of their test objects with a high-precision laser scanner, which requires that the object be inserted into the scanner bed. Polarized 3D still offered the higher resolution.

The MIT paper also offers the “tantalizing prospect” that polarization systems could aid the development of self-driving cars. The team comments, “Today’s experimental self-driving cars are, in fact, highly reliable under normal illumination conditions, but their vision algorithms go haywire in rain, snow, or fog. That’s because water particles in the air scatter light in unpredictable ways, making it much harder to interpret.”

Constructive interference

The researchers show that in certain simple test cases — which have challenged conventional computer vision algorithms — their system can exploit information contained in interfering waves of light to handle scattering. Kadambi commented, “Mitigating scattering in controlled scenes is a small step. But that’s something that I think will be a cool open problem.”

“The work fuses two 3-D sensing principles, each having pros and cons,” commented Yoav Schechner, an associate professor of electrical engineering at Technion — Israel Institute of Technology. “One principle provides the range for each scene pixel: This is the state of the art of most 3-D imaging systems. The second principle does not provide range. On the other hand, it derives the object slope, locally. In other words, per scene pixel, it tells how flat or oblique the object is.”

“The work uses each principle to solve problems associated with the other principle. Because this approach practically overcomes ambiguities in polarization-based shape sensing, it can lead to wider adoption of polarization in the toolkit of machine-vision engineers.”
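A toy one-dimensional fusion makes Schechner’s point concrete: slopes pin down fine detail but not absolute range, coarse depth pins down range but not detail, and a least-squares blend recovers both. This is purely illustrative; the paper’s actual fusion is considerably more sophisticated:

```python
import numpy as np

def fuse_depth_and_slope(coarse_depth, slopes, lam=0.1):
    """Find z minimizing ||dz/dx - slopes||^2 + lam * ||z - coarse_depth||^2.
    The slope term supplies fine detail; the coarse term fixes absolute scale."""
    n = len(coarse_depth)
    D = np.zeros((n - 1, n))                 # forward-difference operator
    idx = np.arange(n - 1)
    D[idx, idx], D[idx, idx + 1] = -1.0, 1.0
    A = np.vstack([D, np.sqrt(lam) * np.eye(n)])
    b = np.concatenate([slopes, np.sqrt(lam) * np.asarray(coarse_depth, float)])
    return np.linalg.lstsq(A, b, rcond=None)[0]
```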

About the Author

Matthew Peach is a contributing editor to optics.org.
