15 Sep 2020
Stanford project based on confocal diffuse tomography could also assist autonomous driving.
Remote sensing, robotic vision and autonomous vehicles all have to contend with objects partially or completely obscured by fog, rain or dust in the atmosphere, and would benefit from methods to successfully image through those obstacles.
A project at Stanford University has now demonstrated a technique combining single-photon avalanche diodes, ultra-fast pulsed lasers, and a new computational algorithm to capture 3D shapes through scattering media, and has reported the results in Nature Communications.
"A lot of imaging techniques make images look a little bit better, a little bit less noisy, but this is really something where we make the invisible visible," said Gordon Wetzstein of Stanford's Computational Imaging Group. "This is really pushing the frontier of what may be possible with any kind of sensing system."
The technique is based on diffuse optical tomography (DOT), a method that reconstructs objects within thick scattering media by modeling the diffusion of light from illumination sources to detectors placed around the scattering volume. DOT is attractive but is often limited to 2D reconstruction, or requires computationally expensive mathematical modeling to yield results.
Stanford has developed confocal diffuse tomography (CDT), a variant technique intended to surpass DOT in imaging of complex macroscopic regimes by first modeling and then inverting the scattering of photons that travel through a thick diffuser, propagate through free space to a hidden object, and scatter back again through the diffuser.
By explicitly modeling and then inverting the scattering process, CDT can incorporate scattered photons into the final reconstruction, enabling imaging in scenarios where analyzing only the directly returned ballistic photons is too inefficient to be effective, according to the team.
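The "model, then invert" idea can be illustrated with a toy 1D example. The sketch below is not the paper's algorithm: it simply assumes scattering acts like convolution with a Gaussian blur kernel, applies that forward model to a hypothetical hidden albedo profile, and then inverts it with a regularized Wiener filter. All names and parameter values are illustrative assumptions.

```python
import numpy as np

# Toy "forward model, then invert" demonstration (illustrative only,
# not the Stanford CDT algorithm).
rng = np.random.default_rng(0)

n = 256
hidden = np.zeros(n)
hidden[100:110] = 1.0                     # hypothetical hidden object (albedo profile)

t = np.arange(n) - n // 2
kernel = np.exp(-t**2 / (2 * 8.0**2))     # assumed Gaussian scattering kernel
kernel /= kernel.sum()

# Forward model: scattering ~ circular convolution, plus detector noise.
K = np.fft.fft(np.fft.ifftshift(kernel))  # zero-phase kernel spectrum
measured = np.real(np.fft.ifft(np.fft.fft(hidden) * K))
measured += 0.01 * rng.standard_normal(n)

# Inverse: Wiener deconvolution; eps regularizes against noise blow-up.
eps = 1e-3
recovered = np.real(np.fft.ifft(np.fft.fft(measured) * np.conj(K)
                                / (np.abs(K)**2 + eps)))

print(int(np.argmax(recovered)))          # peak lands back inside the object
```

Because the blur kernel is modeled explicitly, the blurred energy is folded back into the estimate rather than discarded, which is the same reason CDT can exploit scattered (non-ballistic) photons.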
"Our insight is that a hardware design patterned after confocal scanning systems combining emerging single-photon-sensitive, picosecond-accurate detectors, and newly developed signal processing transforms, allows for an efficient approximate solution to this challenging inverse problem," the team wrote in its published paper. "The approach operates with low computational complexity at relatively long range for large, meter-sized imaging volumes."
Useful for large-scale applications
In trials, the project applied its platform to the imaging of an object hidden behind a 60 x 60 x 2.5 centimeter polyurethane scattering medium, using a 532-nanometer laser delivering 35-picosecond pulses.
The pulsed laser shared an optical path with a single-photon avalanche diode (SPAD) detector, effectively creating a confocal acquisition procedure, while two scanning mirrors scanned the laser and SPAD across the scattering medium.
Photons returning from the hidden object were captured by the SPAD over time, with earlier-arriving photons being gated out. The project's bespoke computational method then carried out an inversion procedure to recover the hidden object's albedo from the returning light.
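The gating step described above can be sketched in a few lines. This is a minimal, hypothetical illustration (the function name, bin width, and histogram values are assumptions, not the project's code): early histogram bins, dominated by light bouncing straight off the scattering medium, are zeroed, and the remaining peak is converted to a round-trip depth.

```python
import numpy as np

C = 3e8        # speed of light, m/s
BIN = 35e-12   # assumed SPAD timing-bin width, seconds

def depth_from_histogram(hist, gate_start_bin):
    """Zero out bins before the gate, then convert the peak bin to depth."""
    gated = hist.copy()
    gated[:gate_start_bin] = 0          # discard early diffuser returns
    peak = int(np.argmax(gated))
    return peak * BIN * C / 2           # round trip, so divide by two

# Synthetic histogram: a strong early return from the scattering medium
# followed by a weaker, later return from the hidden object.
hist = np.zeros(2000)
hist[40] = 500                          # diffuser-surface return (gated out)
hist[1200] = 60                         # hidden-object return

print(depth_from_histogram(hist, gate_start_bin=200))
```

In the real system this per-pixel timing data feeds the full inversion rather than a simple peak pick, but the gate itself works on the same principle.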
The platform successfully reconstructed CDT images of a hidden reflective mannequin, letter-shaped objects, and a group of traffic cones. Depending on the brightness of the hidden objects, the scanning operation took between one minute and one hour, but the algorithm reconstructed the obscured scene in real time.
Next steps for the project include investigating other types of scattering geometries, such as objects embedded in densely scattering material, analogous to seeing an object surrounded by fog.
"We were interested in being able to image through scattering media without relying on ballistic photons, and to collect all the photons that have been scattered to reconstruct the image," said David Lindell of Stanford. "This makes our system especially useful for large-scale applications, where there would be very few ballistic photons."