3D imaging with LEDs and smartphone points to simpler machine vision

20 Jan 2021

University of Strathclyde project creates images without complex synchronizing of camera and lighting.

An approach to 3D imaging employing conventional LED room lighting and a smartphone could lead to more straightforward visualization methods for industrial or security applications.

The new method has been developed by a team at the University of Strathclyde Institute of Photonics and Bristol Robotics Laboratory, in work supported by the UK EPSRC QuantIC research program. It represents an enhancement of photometric stereo (PS) imaging, in which a 3D image is built up from one fixed camera perspective and different illumination directions.

According to the project, the complexity of most PS methods prevents them from being deployed in existing building infrastructure, owing to the need for PS-specific illumination in indoor or outdoor lighting installations, and for the cabling required to synchronize several luminaires with each other and with the camera.

Tackling those hurdles might allow PS imaging to provide an attractive route to 3D imaging in industrial settings for process control and robot navigation, in public spaces for security and surveillance applications, and for structural monitoring.

PS imaging traditionally requires four light sources deployed symmetrically around the viewing axis of a camera, according to the project team. The new work set out to reconstruct 3D images of objects illuminated from above but imaged from the side, allowing overhead room lighting to be used as the illumination sources.
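
The core of conventional photometric stereo is a per-pixel linear solve: under a Lambertian reflectance assumption, each measured intensity is the dot product of a known light direction with the albedo-scaled surface normal. The sketch below is a minimal illustration of that textbook step, not the Strathclyde team's code; the function name and array shapes are assumptions.

```python
import numpy as np

def photometric_stereo(images, light_dirs):
    """Classic Lambertian photometric stereo: estimate per-pixel
    surface normals and albedo from k images of a static scene,
    each lit from a different known direction.

    images:     (k, h, w) grayscale image stack
    light_dirs: (k, 3) unit vectors pointing toward each light
    """
    k, h, w = images.shape
    I = images.reshape(k, -1)                   # (k, h*w) intensities
    # Solve light_dirs @ g = I in the least-squares sense,
    # where g = albedo * normal at every pixel.
    g, *_ = np.linalg.lstsq(light_dirs, I, rcond=None)
    albedo = np.linalg.norm(g, axis=0)          # per-pixel reflectance
    normals = g / np.maximum(albedo, 1e-8)      # unit surface normals
    return normals.reshape(3, h, w), albedo.reshape(h, w)
```

The recovered normal field is then integrated to yield the 3D surface; with fewer than three light directions the per-pixel system is underdetermined, which is why classical setups use at least three, and commonly four, sources.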

Published in Optics Express, the new method is said to be compatible with standard lighting infrastructure, and can be operated without the need for temporal synchronization of the light sources and camera.

"We wanted to make photometric stereo imaging more easily deployable by removing the link between the light sources and the camera," said Emma Le Francois of the University of Strathclyde. "To our knowledge, we are the first to demonstrate a top-down illumination system with a side image acquisition where the modulation of the light is self-synchronized with the camera."

The project aimed to develop algorithms that modulate each illuminating LED in a distinct way, with the modulation effectively acting as an illumination fingerprint that allows the camera to determine which LED generated which image data in the subsequent 3D reconstruction.
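
The paper details the team's actual modulation scheme; purely as an illustration of the fingerprint idea, one common approach is to drive each LED at its own modulation frequency and separate the contributions per pixel in the frequency domain. Everything below, from the function name to the parameters, is a hypothetical sketch rather than the published method.

```python
import numpy as np

def separate_led_contributions(frames, fps, led_freqs):
    """Recover one quasi-static image per LED from a video in which
    each LED flickers at its own known frequency (its 'fingerprint').

    frames:    (t, h, w) video stack
    fps:       camera frame rate in Hz
    led_freqs: list of per-LED modulation frequencies in Hz
    """
    t = frames.shape[0]
    spectrum = np.fft.rfft(frames, axis=0)      # per-pixel spectrum over time
    bins = np.fft.rfftfreq(t, d=1.0 / fps)      # frequency of each FFT bin
    images = []
    for f in led_freqs:
        k = np.argmin(np.abs(bins - f))         # bin nearest this LED's tone
        images.append(np.abs(spectrum[k]))      # that LED's image of the scene
    return np.stack(images)                     # (n_leds, h, w)
```

A scheme like this only works if each LED frequency sits below half the frame rate (the Nyquist limit) and the tones are spaced widely enough to land in distinct FFT bins.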

The new modulation approach also carries its own clock signal, passively detected by the camera and allowing image acquisition to be self-synchronized with the LEDs. Imaging data was recorded using a hand-held mobile phone camera, and subsequently processed into 3D images using the team's algorithms.
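
The paper describes how the clock is embedded; as a loose sketch of the self-synchronization idea only, a camera can estimate the timing of a known pilot tone from the overall brightness of the video, lock-in style, with no wired trigger. The function and parameter names here are illustrative assumptions.

```python
import numpy as np

def recover_clock_phase(frames, fps, clock_freq):
    """Estimate the phase of a pilot clock tone carried by the lighting,
    so LED demodulation can be aligned to the camera passively.

    frames:     (t, h, w) video stack
    fps:        camera frame rate in Hz
    clock_freq: known frequency of the embedded clock tone in Hz
    """
    trace = frames.mean(axis=(1, 2))            # global brightness over time
    t = np.arange(len(trace)) / fps
    ref = np.exp(-2j * np.pi * clock_freq * t)  # complex reference tone
    return np.angle(np.sum(trace * ref))        # lock-in style phase estimate
```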

More complete sense of the environment

Trials imaging both simple geometric shapes and a 130-millimeter-tall 3D-printed model of a monkey's head allowed the new method to be tested on objects whose true dimensions were known. Results showed a reconstruction error of 2.69 millimeters for an object imaged at a distance of 42 centimeters. Video of a moving object at 25 frames per second was also achieved.

"These efforts make PS imaging synchronization-free, reduce flicker, and demonstrate compatibility with ceiling lighting and both wall-mounted and mobile cameras," noted the project in its published paper. "In this scenario, PS imaging would coexist with light fidelity (LiFi) networks or visible light positioning (VLP), potentially using the same LED luminaires for all of these functions as well as general lighting."

This potential for LED luminaires to carry out multiple functions besides illumination has been the topic of earlier research by the same team. In 2018 it described, in an IEEE publication, a "lighting as a service" approach in which LEDs could simultaneously provide illumination for imaging and visible light communications, potentially allowing automated monitoring of the activity within an area through 3D imaging while providing a wireless connection to robotic agents in the same space.

At present the new overhead-LED system requires a few minutes to reconstruct an image, so the researchers are now working to decrease the computational time by incorporating a deep-learning neural network that would learn to reconstruct the shape of the object from the raw image data.
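
The article does not specify the network, so the following is only a toy sketch of the general idea: a small convolutional network that maps the stack of per-LED raw images directly to a unit normal map, amortizing the slow per-pixel reconstruction into a single fast forward pass. The architecture and names are assumptions.

```python
import torch
import torch.nn as nn

class NormalNet(nn.Module):
    """Toy CNN mapping an (n_leds, h, w) stack of raw LED images
    to a per-pixel unit surface-normal map."""
    def __init__(self, n_leds: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(n_leds, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),  # 3 outputs: nx, ny, nz
        )

    def forward(self, x):                    # x: (batch, n_leds, h, w)
        n = self.net(x)
        return n / n.norm(dim=1, keepdim=True).clamp_min(1e-8)
```

Trained against ground-truth normals (for example, from the classical per-pixel solve), such a network would trade minutes of reconstruction per frame for millisecond-scale inference.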

"Current video surveillance systems such as the ones used for public transport rely on cameras that provide only 2D information," said Emma Le Francois. "Our new approach could be used to illuminate different indoor areas to allow better surveillance with 3D images, create a smart work area in a factory, or to give robots a more complete sense of their environment."
