UCLA 3D camera mimics a fly's multiview vision

16 Aug 2022

Computational image processing reveals obscured objects and could assist automotive and medical applications.

Optics designers have taken inspiration from the natural world on several previous occasions, exploiting some of the novel approaches to light processing that have evolved in living creatures.

Examples have included nanostructures based on the eyes of moths that can improve the anti-reflective properties of optical coatings, and a bioinspired camera interlacing nanoscale structures and photodetectors to spot cancerous cells.

A team from UCLA and China's Zhejiang Laboratory has now developed a new class of "bionic" 3D camera whose operation is inspired jointly by the multiview vision of flies and the natural sonar sensing of bats. The results were published in Nature Communications.

Termed compact light field photography, or CLIP, the computational framework designed by the project could decipher the size and shape of objects hidden around corners or behind other items, drawing on the principles behind the optical and acoustic sensing of the two animals.

"While the idea itself has been tried before, seeing across a range of distances and around occlusions has been a major hurdle," said Liang Gao from the UCLA Samueli School of Engineering.

"To address that, we developed a novel computational imaging framework, which for the first time enables the acquisition of a wide and deep panoramic view with simple optics and a small array of sensors."

The approach behind CLIP involves recording only a few measurements from each sub-aperture in a bioinspired lens array, and then reconstructing the scene computationally. This reflects the fact that, for most applications, a recording of the entire high-dimensional light field is not the ultimate goal, the team commented in its paper, but an intermediate step prior to other image processing operations such as digital refocusing or extending the depth of field.

"The CLIP framework encompasses and goes far beyond previous endeavors to efficiently sample light fields, and is essentially an efficient dimensionality reduction in the optical domain, allowing high-dimensional information to be acquired with sensors of lower dimensionality, such as the ubiquitous 1D detectors which are still the dominant sensor format for imaging at the ultrafast time scale or the infrared spectral regime," noted the team's paper.

CLIP tackles inherent lidar tradeoffs

This approach can help to image partially hidden scenes: if an object is obscured from some views of a CLIP-enabled array but remains partially visible to others, nonlocal and complementary information from those other views can enable compressive retrieval of the complete object.
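
A toy sketch of that complementarity, under assumed conditions that are not the paper's (four views, each with a different quarter of the object occluded, generic coded measurements): stacking the views' equations lets a joint solve recover pixels that no single view fully records.

```python
# Toy complementary-view sketch (assumed model, not the paper's algorithm).
import numpy as np

rng = np.random.default_rng(1)
n = 120                              # pixels of the (vectorized) object
x_true = rng.uniform(0.0, 1.0, n)

blocks, ys = [], []
for k in range(4):
    visible = np.ones(n, dtype=bool)
    visible[k * n // 4:(k + 1) * n // 4] = False   # this view's occluded quarter
    A_k = rng.standard_normal((60, n)) * visible   # coded sensing, hidden pixels zeroed
    blocks.append(A_k)
    ys.append(A_k @ x_true)

A = np.vstack(blocks)                # 240 equations for 120 unknowns
y = np.concatenate(ys)
# Every pixel is seen by three of the four views, so A has full column rank
# and the joint least-squares solve recovers the whole object.
x_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
print("max reconstruction error:", np.abs(x_hat - x_true).max())
```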

The UCLA project believes that CLIP could be beneficial when combined with lidar platforms, offering a solution to the inherent tradeoff between sensing range and light throughput in flash lidar imaging, in which the full field of view is illuminated by a single laser pulse.
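
A back-of-the-envelope sketch of why that tradeoff arises, under a simple assumed model rather than any figures from the project (one pulse shared across all pixels, a Lambertian target, inverse-square falloff of the return):

```python
# Flash-lidar photon budget under a first-order assumed model; all parameter
# values are illustrative.
import math

def photons_per_pixel(pulse_photons, n_pixels, range_m,
                      reflectivity=0.2, aperture_m2=1e-4, efficiency=0.5):
    """Expected detected photons per pixel for a single flash-lidar pulse."""
    # Each pixel gets pulse_photons / n_pixels of the illumination; a
    # Lambertian surface returns reflectivity/pi per steradian, and the
    # receiver aperture subtends aperture_m2 / range_m**2 steradians.
    return (efficiency * (pulse_photons / n_pixels)
            * reflectivity / math.pi * aperture_m2 / range_m**2)

# Doubling the range quarters the per-pixel return, and quadrupling the
# pixel count quarters it too -- the range/throughput tradeoff in a nutshell.
print(photons_per_pixel(1e12, 64 * 64, 10.0))   # ~7.8 photons
print(photons_per_pixel(1e12, 64 * 64, 20.0))   # ~1.9 photons
```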

As a proof of concept, the team used an array of seven lidar cameras with CLIP. The array takes a lower-resolution image of a scene, processes what the individual cameras see, then reconstructs the combined scene as a high-resolution 3D image, according to UCLA. The researchers also demonstrated that the camera system could image a complex 3D scene containing several objects set at different distances.

"Combined with lidar, the system is able to achieve a bat echolocation effect so one can sense a hidden object by how long it takes for light to bounce back to the camera," said UCLA.
