21 Apr 2022
Tiny metalenses that can focus on both near and far objects are inspired by the compound eyes of trilobites.
One group of trilobites, Dalmanitina socialis, was exceptionally farsighted. Their bifocal eyes, each composed of two lenses that bent light at different angles, enabled these sea creatures to simultaneously view prey floating nearby and enemies approaching from more than a kilometer away.
Inspired by the eyes of D. socialis, researchers at the US National Institute of Standards and Technology (NIST) have developed a miniature camera featuring a bifocal lens with what they describe as a “record-setting depth of field”.
The camera can simultaneously image objects as close as 30 mm and as distant as 1.7 km. The researchers also devised a computer algorithm that corrects for aberrations, sharpens objects at intermediate distances between these near and far limits, and generates a final all-in-focus image covering this enormous depth of field.
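The paper's reconstruction algorithm is learning-based and is not spelled out here. Purely as an illustrative stand-in, the sketch below fuses a near-focused and a far-focused frame with a classic focus-stacking heuristic, keeping whichever frame is locally sharper at each pixel; all function names and parameters are hypothetical.

```python
import numpy as np

def local_sharpness(img, k=5):
    """Sharpness proxy: absolute Laplacian, averaged over a k x k window."""
    lap = np.abs(
        4 * img
        - np.roll(img, 1, axis=0) - np.roll(img, -1, axis=0)
        - np.roll(img, 1, axis=1) - np.roll(img, -1, axis=1)
    )
    box = np.ones((k, k)) / (k * k)
    # Circular convolution via FFT keeps the sketch dependency-free
    return np.real(np.fft.ifft2(np.fft.fft2(lap) * np.fft.fft2(box, s=lap.shape)))

def fuse(near_focus, far_focus):
    """Per-pixel, keep the frame that is locally sharper."""
    mask = local_sharpness(near_focus) >= local_sharpness(far_focus)
    return np.where(mask, near_focus, far_focus)
```

A real focus-stacking pipeline would also align the frames and smooth the selection mask; the point here is only the per-pixel "pick the sharper frame" idea.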
Such lightweight, large-depth-of-field cameras, which integrate photonic technology at the nanometer scale with software-driven photography, promise to revolutionize future high-resolution imaging systems, says NIST. In particular, the cameras would greatly boost the capacity to produce highly detailed images of cityscapes, groups of organisms in a large field of view, and other photographic applications featuring both near and far objects.
NIST researchers Amit Agrawal and Henri Lezec, along with their colleagues from the University of Maryland in College Park and Nanjing University, have described their work in Nature Communications.
The researchers fabricated an array of tiny metalenses – ultrathin films etched or imprinted with groupings of nanoscale pillars tailored to manipulate light. Agrawal and colleagues studded a flat surface of glass with millions of tiny, rectangular nanometer-scale pillars.
The shape and orientation of the constituent nanopillars focused light in such a way that the metasurface simultaneously acted as both a macro lens and a telephoto lens.
The team arranged the rectangular nanopillars so that some of the incoming light traveled through the longer dimension of each rectangle and some through the shorter one. Along the longer path, light passed through more material and was therefore bent more; along the shorter path, it encountered less material and was bent less.
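In wave terms, "more bending" corresponds to a larger accumulated phase: light traversing a path of effective refractive index n_eff and length L picks up a phase of 2*pi*n_eff*L/lambda. The sketch below evaluates this for the two pillar axes; the indices and dimensions are illustrative assumptions, not the paper's design values.

```python
import math

def phase_delay(n_eff, length_m, wavelength_m):
    """Phase (radians) accumulated over an optical path:
    phi = 2*pi * n_eff * L / lambda."""
    return 2 * math.pi * n_eff * length_m / wavelength_m

# Illustrative values only (not the published design parameters):
wavelength = 550e-9                   # green light
pillar_height = 600e-9                # optical path length through the pillar
n_eff_long, n_eff_short = 2.2, 1.8    # effective index seen along each axis

phi_long = phase_delay(n_eff_long, pillar_height, wavelength)
phi_short = phase_delay(n_eff_short, pillar_height, wavelength)
# Light polarized along the long axis sees more material, hence more phase delay
assert phi_long > phi_short
```

Because the two polarizations accumulate different phase profiles across the metasurface, a single arrangement of pillars can impose two different focusing behaviors at once.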
Without further processing, however, that arrangement would leave objects at intermediate distances out of focus. Agrawal and his colleagues therefore trained a neural network to recognize and correct defects such as blurring and chromatic aberration in objects that lie midway between the near and far foci of the metalens.
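The team's correction step is a trained neural network and is not reproduced here. As a classical stand-in for the same idea, computationally sharpening an image whose blur is known, the sketch below applies Wiener deconvolution; the point-spread function and noise constant are illustrative assumptions.

```python
import numpy as np

def wiener_deconvolve(blurred, psf, noise_power=1e-2):
    """Frequency-domain Wiener filter: F_hat = conj(H) / (|H|^2 + K) * G."""
    H = np.fft.fft2(psf, s=blurred.shape)
    G = np.fft.fft2(blurred)
    F_hat = np.conj(H) / (np.abs(H) ** 2 + noise_power) * G
    return np.real(np.fft.ifft2(F_hat))

# Toy demo: blur a random "scene" with a 3x3 box PSF, then recover it
rng = np.random.default_rng(0)
scene = rng.random((64, 64))
psf = np.zeros((64, 64))
psf[:3, :3] = 1 / 9                   # 3x3 box-blur point-spread function
blurred = np.real(np.fft.ifft2(np.fft.fft2(scene) * np.fft.fft2(psf)))
restored = wiener_deconvolve(blurred, psf)
```

Unlike this fixed filter, a learned correction can handle blur that varies with depth, color and field position, which is what makes the all-in-focus reconstruction possible.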
The team tested its camera by placing objects of various colors, shapes and sizes at different distances in a scene of interest and applying the software correction to generate a final image that was focused and free of aberrations over the camera's entire depth of field.
The metalenses developed by the team boost light-gathering ability without sacrificing image resolution. In addition, because the system automatically corrects for aberrations, it has a high tolerance for error, enabling the researchers to use simple, easy-to-fabricate designs for the miniature lenses, Agrawal said.