Optics.org
Research & Development

Compact lens design improves cameras for AR-VR devices...

26 Nov 2024

...and a second Korea-based team’s metalenses harness AI to boost performance

Researchers at Seoul National University College of Engineering have developed an optical design technology that significantly reduces the volume of cameras with a folded lens system utilizing metasurfaces.

By arranging metasurfaces on a glass substrate so that light is reflected back and forth inside the substrate along a folded path, the researchers have realized a lens system with a thickness of 0.7 mm, far thinner than existing refractive lens systems. The research, supported by the Samsung Future Technology Development Program and the Institute of Information & Communications Technology Planning & Evaluation (IITP), was published in Science Advances.

Traditional cameras stack multiple glass lenses to refract light when capturing images. While this structure delivers excellent image quality, the thickness of each lens and the wide spacing between lenses increase the camera's overall bulk, making it difficult to use in devices that require ultra-compact cameras, such as virtual and augmented reality headsets, smartphones, endoscopes, and drones.

Ultra-thin system

To address this limitation, the researchers developed an ultra-thin camera system that reduces the thickness of a conventional lens system to less than half, using a new lens module design that incorporates metasurfaces. Metasurfaces can precisely control three properties of light – intensity, phase, and polarization – on a pixel-by-pixel basis.
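For context on what such pixel-level phase control is used for (a standard textbook relation, not a design detail from the paper): an ideal metalens of focal length f imposes a hyperbolic phase profile, with the nanostructure at position (x, y) contributing a local phase of

\[ \varphi(x, y) = \frac{2\pi}{\lambda}\left( f - \sqrt{x^2 + y^2 + f^2} \right), \]

so that light from every point on the aperture arrives at the focus in phase.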

The nanostructures that make up a metasurface are arranged with periods of a few hundred nanometers. According to the team, by designing a metasurface optimized for a specific wavelength (852 nm) and arranging several of them horizontally on a glass substrate, light can be reflected multiple times inside the substrate, securing the required optical path length in a space-efficient, folded manner. On this basis the team presents a miniaturized camera that captures images with a thin, folded lens module that redirects the path of light.
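To see why folding the path saves space (an illustrative geometry with example numbers, not figures from the paper): if light bounces between the two faces of a substrate of thickness t at an internal angle θ from the surface normal, each pass through the substrate contributes a geometric path of t / cos θ, so N passes give

\[ L = \frac{N\,t}{\cos\theta}. \]

For example, with t = 0.7 mm, θ = 45° and N = 4 passes, roughly 4 mm of path is folded into a package only 0.7 mm thick (the optical path length is longer still, by the refractive index of the glass).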

The system not only overcomes the physical limitations of thick conventional lens modules but also delivers superior image quality: it provides a 10-degree field of view within a total thickness of just 0.7 mm and produces high-resolution images close to the diffraction limit at an aperture of f/4 and a wavelength of 852 nm.
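As a rough sense of what "close to the diffraction limit" means here (an illustrative calculation, not a figure reported by the authors), the Airy-disk diameter for an f/4 aperture at 852 nm is

\[ d_{\mathrm{Airy}} = 2.44\,\lambda\,N_{f} = 2.44 \times 0.852\,\mu\mathrm{m} \times 4 \approx 8.3\,\mu\mathrm{m}, \]

so the sharpest spot any ideal f/4 lens can form at this wavelength is a few micrometers across; approaching that bound is what near-diffraction-limited performance refers to.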

“This research provides a creative breakthrough to innovatively reduce the thickness of cameras by introducing nano-optical devices,” said Youngjin Kim, first author of the paper. “We will continue our research to lead the innovation of thin and light cameras with metasurfaces that combine excellent performance and industrial benefits and fabrication through semiconductor processes.”

“Our research focuses on efficiently utilizing the lens space by using metasurfaces,” said Taewon Choi, co-first author of the study. “The folded lens system is very thin, so it will play an important role in the virtual and augmented reality industry, where device miniaturization and lightweight are essential.”

Metalenses harness AI for superior performance

Modern imaging systems, such as those used in smartphones and virtual and augmented reality devices, are constantly evolving to become more compact, efficient, and high-performing. Traditional optical systems rely on bulky glass lenses, which suffer from limitations such as chromatic aberration, low efficiency across multiple wavelengths, and large physical size.

To overcome these issues, researchers have developed metalenses—ultra-thin lenses composed of tiny nanostructures that can manipulate light at the nanoscale. Metalenses offer tremendous potential for miniaturizing optical systems, but they are not without their own challenges, particularly when it comes to capturing full-color images without distortions.

In a recent study published in Advanced Photonics, researchers have introduced an innovative, deep-learning-powered, end-to-end metalens imaging system that overcomes many of these limitations. This system pairs a mass-produced metalens with a specialized image restoration framework driven by deep learning. By combining advanced optical hardware with artificial intelligence, the team has achieved high-resolution, aberration-free, full-color images, all while maintaining the compact form factor that metalenses promise.

The metalens itself is fabricated using nanoimprint lithography, a scalable and cost-effective method, followed by atomic layer deposition, allowing for large-scale production of these lenses. The metalens is designed to focus light efficiently but, like most metalenses, suffers from chromatic aberration and other distortions due to its interaction with light of different wavelengths.

To address this, the deep learning model is trained to recognize and correct the color distortions and blurring caused by the metalens. This approach is unique because it learns from a large dataset of images and applies these corrections to future images captured by the system.

The image restoration framework uses adversarial learning, where two neural networks are trained together. One network generates corrected images, and the other assesses their quality, pushing the system to improve continuously. Additionally, advanced techniques like positional embedding help the model understand how image distortions change depending on the viewing angle.
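As a concrete sketch of what such an adversarial restoration loop can look like in code (a minimal, hypothetical PyTorch example for illustration; the network architectures, loss weights, and positional-embedding scheme below are assumptions, not the published system), one restoration network and one critic are trained against each other, with normalized pixel coordinates appended as extra input channels so the correction can vary across the field:

# Minimal sketch of adversarial training for metalens image restoration.
# Hypothetical architectures and loss weights for illustration only.
import torch
import torch.nn as nn

def positional_embedding(h, w):
    # Two channels holding normalized (x, y) pixel coordinates, so the
    # restoration net can condition its correction on image position
    # (distortions vary with field angle).
    ys = torch.linspace(-1, 1, h).view(1, 1, h, 1).expand(1, 1, h, w)
    xs = torch.linspace(-1, 1, w).view(1, 1, 1, w).expand(1, 1, h, w)
    return torch.cat([xs, ys], dim=1)          # shape (1, 2, h, w)

class Restorer(nn.Module):                      # "generator": degraded -> corrected RGB
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + 2, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1))
    def forward(self, x):
        pe = positional_embedding(x.shape[2], x.shape[3]).to(x.device)
        pe = pe.expand(x.shape[0], -1, -1, -1)
        return self.net(torch.cat([x, pe], dim=1))

class Critic(nn.Module):                        # "discriminator": scores image realism
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, stride=2, padding=1))
    def forward(self, x):
        return self.net(x)

G, D = Restorer(), Critic()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

def train_step(degraded, sharp):
    # One adversarial step on a batch of (metalens capture, ground truth) pairs.
    # 1) Update the critic: real images -> 1, restored images -> 0.
    real_logits = D(sharp)
    fake_logits = D(G(degraded).detach())
    d_loss = (bce(real_logits, torch.ones_like(real_logits))
              + bce(fake_logits, torch.zeros_like(fake_logits)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # 2) Update the restorer: fool the critic while staying close to ground truth
    #    (the 100x fidelity weight is an assumed, pix2pix-style choice).
    restored = G(degraded)
    adv_logits = D(restored)
    g_loss = bce(adv_logits, torch.ones_like(adv_logits)) + 100.0 * l1(restored, sharp)
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

# Example with random tensors standing in for a real dataset:
degraded = torch.rand(2, 3, 64, 64)   # blurred, color-distorted metalens captures
sharp = torch.rand(2, 3, 64, 64)      # corresponding ground-truth images
print(train_step(degraded, sharp))

The actual system is trained on a large dataset of images, as the article notes; the loop above only illustrates the adversarial structure and the role of the positional embedding.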

The system produces images that rival those from traditional, bulky lenses, but in a much smaller, more efficient package. This innovation has the potential to revolutionize a wide range of industries, from consumer electronics such as smartphones and cameras to more specialized applications in VR and AR.

According to senior and corresponding author Junsuk Rho, Mu-Eun-Jae endowed chair professor with a joint appointment in mechanical engineering, chemical engineering, and electrical engineering at Pohang University of Science and Technology (POSTECH, Korea), “This deep-learning-driven system marks a significant advancement in the field of optics, offering a new pathway to creating smaller, more efficient imaging systems without sacrificing quality.”

• The second (POSTECH) article was published first on spie.org.
