21 Jan 2020
Technique from Stanford University, Princeton University, Southern Methodist University and Rice University tackles the problem of non-line-of-sight imaging.
Techniques involving time-of-flight measurements as a means of retracing a signal's journey have been studied for some time, joined more recently by methods that use the inherent spatial correlations encoded in scattered light to reconstruct an image. These approaches struggle in photon-starved scenarios, however, partly because the noise characteristics of those correlations are poorly understood.
A project at Stanford University, Princeton University, Southern Methodist University and Rice University has now made a significant step forward, by developing a better noise model for recovering occluded objects from indirect reflections and training a deep neural network to crunch the numbers. The work was published in Optica.
"Compared to other approaches, our non-line-of-sight imaging system provides uniquely high resolutions and imaging speeds," said team leader Christopher Metzler. "These attributes enable applications that wouldn't otherwise be possible, such as reading the license plate of a hidden car as it is driving or reading a badge worn by someone walking on the other side of a corner."
According to the project team, the imaging system bounces a continuous-wave laser at an angle off a visible wall, so that it illuminates an object out of direct view around the corner. The speckle pattern created when the reflected light scatters back onto another section of the same wall is then recorded.
By using two or more measurements of this returned pattern, an estimate of the autocorrelation of the object's albedo is first computed, and the object's shape is then recovered by solving a phase retrieval (PR) problem using deep learning.
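The article does not spell out the reconstruction step, but the classical baseline that the paper's learned approach replaces is Fienup-style error-reduction phase retrieval: by the Wiener-Khinchin theorem, the Fourier transform of the object's autocorrelation gives the object's Fourier magnitude, and iterating between that magnitude constraint and a nonnegativity constraint recovers the shape. A minimal NumPy sketch of that baseline (function name and iteration count are illustrative, not from the paper):

```python
import numpy as np

def phase_retrieval(autocorr, n_iters=200, seed=0):
    """Recover an object from its autocorrelation via error reduction.

    This is the classical Gerchberg-Saxton/Fienup baseline; the paper's
    method replaces this iterative step with a trained deep network.
    """
    # Wiener-Khinchin: F{autocorr} = |F{object}|^2, so the object's
    # Fourier magnitude is the square root of the autocorrelation's spectrum.
    magnitude = np.sqrt(np.abs(np.fft.fft2(autocorr)))

    rng = np.random.default_rng(seed)
    x = rng.random(autocorr.shape)  # random nonnegative initialization
    for _ in range(n_iters):
        X = np.fft.fft2(x)
        # Fourier-domain projection: keep the phase, impose the magnitude.
        X = magnitude * np.exp(1j * np.angle(X))
        x = np.real(np.fft.ifft2(X))
        # Object-domain projection: an albedo is nonnegative.
        x = np.clip(x, 0, None)
    return x
```

Note that phase retrieval from an autocorrelation is only defined up to translation and flips, which is one reason the paper introduces translation-invariant loss functions for the learned version.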
Medical imaging and navigation
The creation of an accurate noise model for the initial NLoS correlography operation is key; deep-learning approaches have traditionally relied on large training data sets and loss functions that do not translate easily to NLoS scenarios.
The project's new and improved noise model was based on using results from spectral density estimation to analyze the distribution of the noise, and "several new, translation-invariant loss functions for learning-based PR," according to its paper.
"Our deep learning algorithm is far more robust to noise and thus can operate with much shorter exposure times," said Prasanna Rangarajan from Southern Methodist University. "By accurately characterizing the noise, we were able to synthesize data to train the algorithm to solve the reconstruction problem using deep learning without having to capture costly experimental training data."
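The recipe Rangarajan describes, generating synthetic training pairs whose noise statistics match the physical measurement, can be sketched as follows. The specific noise model below (signal-dependent Gaussian perturbation of the autocorrelation estimate) is an illustrative assumption; the paper derives its actual model from spectral density estimation, and the function and parameter names here are hypothetical:

```python
import numpy as np

def synthesize_training_pair(shape=(32, 32), noise_scale=0.05, rng=None):
    """Generate one (noisy autocorrelation, clean object) training pair.

    Illustrative stand-in for the paper's pipeline: random simple objects
    stand in for hidden scenes, and a signal-dependent Gaussian term stands
    in for the paper's spectral-density-based noise model.
    """
    rng = rng or np.random.default_rng()
    # Random sparse "object": a few bright rectangles on a dark field.
    obj = np.zeros(shape)
    for _ in range(rng.integers(1, 4)):
        r = rng.integers(0, shape[0] - 6)
        c = rng.integers(0, shape[1] - 6)
        obj[r:r + rng.integers(2, 6), c:c + rng.integers(2, 6)] = rng.random()
    # Clean autocorrelation via the Wiener-Khinchin theorem.
    ac = np.real(np.fft.ifft2(np.abs(np.fft.fft2(obj)) ** 2))
    # Signal-dependent noise: variance grows with the autocorrelation value.
    noisy_ac = ac + rng.normal(scale=noise_scale * (np.abs(ac) + 1e-3))
    return noisy_ac, obj
```

A network trained on such pairs learns the inverse mapping from noisy autocorrelation estimates back to object shapes, which is what lets the team avoid capturing costly experimental training data.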
Once trained, the system was tested by imaging 1-centimeter numbers and letters located roughly 0.5 meters away from a 532-nanometer laser source, but obscured around a corner by the visible wall.
Results showed that the shape of the objects could be successfully reconstructed at this distance using just two eighth-of-a-second exposures captured by a conventional CMOS detector. An imaging resolution of 300 microns was calculated.
Correlation techniques such as this are inherently suited to imaging small isolated objects, since larger ones fall outside the range of the "memory effect" upon which the correlation between the speckle pattern and the albedo depends. The system is also at present unable to localize the position of objects within the hidden space.
But applications for improved NLoS imaging could be widespread, according to the project, and these findings may point to a route toward real-time, high-resolution NLoS techniques for larger objects.
"Non-line-of-sight imaging has important applications in medical imaging, navigation, robotics and defense," commented Felix Heide from Princeton University. "Our work takes a step toward enabling its use in a variety of such applications."