01 Aug 2017
Wider field-of-view single-lens platform could enhance suitability for several applications.

Stanford University has unveiled the latest iteration of the light field (LF) camera technology that the university has been developing for over a decade, and believes it surpasses current vision systems for robotics and augmented reality.
The instrument, presented last week at the CVPR 2017 computer vision conference, is said to be the first single-lens wide field-of-view light field camera, and has been designed with robotics applications specifically in mind.
Light field, or plenoptic, cameras are intended to capture not just the intensity of incident light, as a conventional camera does, but also the angle from which it arrives.
In the initial configuration conceived in 2005 and subsequently commercialized by Lytro, this is achieved by placing a micro-lens array (MLA) in front of a conventional image sensor, recording directional information as well as color and intensity. A "4D" image can then be reconstructed from the data, which among other useful properties can be refocused after capture, something impossible with a conventional photograph.
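The post-capture refocusing described above is commonly performed with the shift-and-sum algorithm: each sub-aperture view of the 4D light field is translated in proportion to its angular coordinate, then the views are averaged, so that objects at the chosen depth align and sharpen while everything else blurs. The following is a minimal sketch of that principle in NumPy, using a synthetic light field; the function names, array shapes, and the `slope` parameter are illustrative, not taken from the Stanford system.

```python
import numpy as np

def refocus(lightfield, slope):
    """Shift-and-sum refocusing of a 4D light field.

    lightfield: array of shape (U, V, H, W) -- a U x V grid of
    sub-aperture views, each H x W pixels.
    slope: refocus parameter; view (u, v) is shifted by
    slope * (u - U//2, v - V//2) pixels before averaging.
    """
    U, V, H, W = lightfield.shape
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            du = int(round(slope * (u - U // 2)))
            dv = int(round(slope * (v - V // 2)))
            # np.roll applies the integer-pixel shift; a real
            # implementation would interpolate sub-pixel shifts.
            out += np.roll(lightfield[u, v], (du, dv), axis=(0, 1))
    return out / (U * V)

# Synthetic example: a single bright point whose position shifts
# with (u, v), as it would for an object off the focal plane.
U = V = 5
H = W = 32
lf = np.zeros((U, V, H, W))
for u in range(U):
    for v in range(V):
        lf[u, v, 16 + (u - 2), 16 + (v - 2)] = 1.0

# Refocusing with the matching slope re-aligns the point sharply;
# slope 0 leaves its energy spread over 25 positions.
sharp = refocus(lf, slope=-1.0)
blurred = refocus(lf, slope=0.0)
```

The design choice here is brute-force and integer-pixel for clarity; production refocusing is usually done with Fourier-domain methods or interpolated shifts.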
Stanford's new prototype platform adds a much wider field of view (FOV) to the existing light field camera's capabilities, tackling a limitation that was likely to prove a significant hindrance in moving the technology into machine vision scenarios.
"This could enable various types of artificially intelligent technology to understand how far away objects are, whether they are moving and what they are made of," commented Stanford's Gordon Wetzstein. "The system could be helpful in any situation where you have limited space and you want the computer to understand the entire world around it."
In its CVPR paper, the project team describes a design that exploits advances in monocentric lenses, a form of lens constructed from concentric glass spheres of differing refractive index. A monocentric lens forms its image on a spherical focal surface, rather than a plane, which has the effect of avoiding many classical lens aberrations. As few as two concentric glass shells are needed to form a high-resolution image over a field of 120 degrees or more.
Robot vision and virtual reality
For its new prototype, the team used a single sensor and MLA, rotating them around the monocentric lens on a mechanical arm to emulate a multisensor set-up.
"We replace the expensive and resolution-limited sensor-coupling fiber bundles conventionally employed in monocentric systems with MLAs, as employed in lenslet-based LF cameras," noted the team. "This allows the optical coupling to be carried out in software, effectively leveraging the capacity of MLAs to enable light field capture and processing."
In combination with a suitable data processing workflow, the prototype system was able to capture 138-degree panoramas of 1600 x 200 pixels, and retained the advantages of relative depth estimation and post-capture refocus inherent in the LF principle.
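The relative depth estimation mentioned above exploits the fact that a scene point's apparent position shifts between sub-aperture views in proportion to its inverse depth. The sketch below recovers that shift for a synthetic pair of views by maximizing correlation over candidate disparities; the function name, image sizes, and search range are illustrative assumptions, not details of the Stanford pipeline.

```python
import numpy as np

def disparity(view_a, view_b, max_shift=8):
    """Estimate the integer horizontal disparity between two
    sub-aperture views by maximizing correlation over shifts.

    Returns s such that view_b shifted left by s pixels best
    aligns with view_a, i.e. features moved s pixels to the
    right between the two views. Disparity between adjacent
    views is proportional to inverse depth, which is what makes
    relative depth estimation from a light field possible.
    """
    best_shift, best_score = 0, -np.inf
    for s in range(-max_shift, max_shift + 1):
        score = np.sum(view_a * np.roll(view_b, -s, axis=1))
        if score > best_score:
            best_shift, best_score = s, score
    return best_shift

# Synthetic pair: the same bright point, 3 pixels apart between
# the two views (a near object with disparity 3).
a = np.zeros((32, 32)); a[16, 10] = 1.0
b = np.zeros((32, 32)); b[16, 13] = 1.0
d = disparity(a, b)
```

Real systems match dense patches (or epipolar-plane-image slopes) rather than single points, but the underlying shift-versus-depth relationship is the same.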
One challenge for the project now is to move from its single-sensor prototype to genuine multisensor platforms based around the same optical principles, small enough to suit existing robots and other machine vision applications. Such compact light field technology with enhanced FOV could be particularly valuable for robot systems navigating small spaces, or for self-driving cars.
It could also play a role in virtual reality systems, where the depth information it provides should lead to improved renderings of real-world scenes, and better integration between those scenes and virtual components.
"Many research groups are looking at what we can do with light fields, but at present no one has great cameras," said Donald Dansereau of Stanford. "This is the first example of a light field camera built specifically for robotics and augmented reality. I’m stoked to put it into peoples' hands and to see what they can do with it."