Tiny pixels aid 3D imaging

22 Apr 2008

An image sensor that contains submicron pixels arranged as subarrays could be a key component in compact and low-cost 3D cameras.

The first submicron pixels, measuring just 0.7 µm, have been created by researchers in the US. They are much smaller than the pixels in today's mobile-phone cameras, which until now have been the smallest available at around 2.2 µm. The team arranged its submicron pixels in a multi-aperture architecture, which it says could lead to 3D cameras that overcome cost and size restrictions.
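
For a sense of scale (a back-of-the-envelope calculation, not a figure quoted by the researchers), shrinking the pitch from 2.2 µm to 0.7 µm reduces the area of each pixel by roughly a factor of ten:

```python
# Rough area comparison between the new 0.7 um pixels and the ~2.2 um
# pixels used in current mobile-phone sensors (illustrative only).
old_pitch_um = 2.2
new_pitch_um = 0.7

area_ratio = (old_pitch_um / new_pitch_um) ** 2
print(f"One 2.2 um pixel covers ~{area_ratio:.1f}x the area of a 0.7 um pixel")
# prints: One 2.2 um pixel covers ~9.9x the area of a 0.7 um pixel
```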

"We have created the first submicron pixels and created a multi-aperture architecture that makes use of these pixels," Keith Fife, a researcher at Stanford University, told optics.org. "One of the unique capabilities of the imager is that depth can be extracted from the scene, which opens up several applications in photography, robotics, biometrics, security and surveillance, and biomedicine."

Although the camera has not yet been made, the team has found a way to construct an image sensor consisting of subarrays of the submicron pixels. What's more, the image sensor is based on charge-coupled devices (CCDs) made on a silicon chip built in a CMOS process.

"Most image sensors today are CMOS chips fabricated in a relatively standard process with some modifications to decrease dark current and improve optical efficiency," commented Fife. "The chip that we built is more than just arrays of pixels – we have integrated timing circuits and analogue-to-digital converters so that the data that comes off the sensor is digital."

The image sensor comprises an array of 166 × 76 (12,616) apertures, each with its own local optics and a 16 × 16 (256-pixel) CCD array with readout circuit. "The sensor is an array of integrated cameras where all of the critical dimensions of the array and optics are controlled through the semiconductor processing techniques such as photolithography," commented Fife.
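
As a rough illustration of that layout (the array dimensions are those quoted above, but the variable names, data type and readout order are assumptions for the sketch, not details from the paper), the raw output of one frame can be pictured as a grid of sub-images:

```python
import numpy as np

# Hypothetical container for one raw frame from the multi-aperture sensor:
# a 166 x 76 grid of apertures, each delivering a 16 x 16 pixel sub-image.
APERTURES_X, APERTURES_Y = 166, 76
SUB_W, SUB_H = 16, 16

raw_frame = np.zeros((APERTURES_Y, APERTURES_X, SUB_H, SUB_W), dtype=np.uint16)

n_apertures = APERTURES_X * APERTURES_Y        # 12,616 local cameras
n_pixels = n_apertures * SUB_W * SUB_H         # 3,229,696 submicron pixels in total
print(n_apertures, n_pixels)                   # 12616 3229696

# The view from a single local camera is then just one slice of the grid:
sub_image = raw_frame[10, 20]                  # shape (16, 16)
```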

Unlike in a conventional imaging system, where the objective lens focuses the image directly onto the image sensor, here the image is focused above the sensor plane and re-imaged by the local optics to form partially overlapping images of the scene. "The local optics demagnifies the image from the objective by a factor of 2–5 depending on the depth," explained Fife. "These images are then combined to form 2D and 3D representations of the scene."
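
The article does not describe the team's reconstruction algorithm, but the depth cue comes from the small viewpoint shift between neighbouring sub-images of the same scene region. Below is a minimal sketch of the generic disparity-to-depth relation that such a multi-aperture system relies on, with every numerical value purely illustrative:

```python
# Generic disparity-to-depth relation (z = f * B / d) between two
# neighbouring apertures. All parameter values are illustrative and
# are not taken from the Stanford sensor.

def depth_from_disparity(baseline_m, focal_m, disparity_px, pixel_pitch_m):
    """Distance to a point seen with a given pixel disparity."""
    disparity_m = disparity_px * pixel_pitch_m
    return focal_m * baseline_m / disparity_m

# Example: apertures ~11.2 um apart (16 pixels x 0.7 um pitch), a short
# hypothetical local focal length, and a measured disparity of 0.5 pixel.
z = depth_from_disparity(baseline_m=11.2e-6,
                         focal_m=50e-6,
                         disparity_px=0.5,
                         pixel_pitch_m=0.7e-6)
print(f"estimated distance to the imaged point: {z * 1e3:.1f} mm")  # ~1.6 mm
```

In the real device the disparities would be measured between the partially overlapping sub-images and mapped back through the objective to scene depth; the sketch only captures the underlying geometry.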

The next step for the team is to implement the local image forming optics into the semiconductor process. "One of the key challenges is to create pixels with a low noise-floor," concluded Fife. "Because we split the image formation into multiple channels it is important to keep the noise in each of these channels low."

The team presented its work at the 2008 IEEE International Solid-State Circuits Conference (ISSCC; proceedings ISBN 978-1-4244-2010-0).
