20 Mar 2008
Computer models of what happens when light hits the pixels of a CMOS image sensor are so complex that each simulation can take days to run. Tim Hayes hears how recent advances in software and hardware can speed up the process by a factor of twenty or even more.
Optoelectronics manufacturers are under constant pressure to produce higher performance devices while also shortening the time taken to develop them. CMOS image sensors, which are widely used in camera phones, webcams, digital cameras and camcorders, have to pack more pixels into each new generation of device, while remaining cost-effective to produce.
Effective computer modelling is an essential part of this development process, enabling producers to simulate critical aspects of the process and the device design. For CMOS image sensors these simulation tasks include not just process optimization, but also electrical and optical analyses to model factors such as dark current, microlens design and, crucially, cross-talk between pixels.
The complexity of the devices means that such modelling is a demanding operation. "Over the last few years there has been growing interest in simulating CMOS image sensors, as the number of pixels in these applications continues to rise," said Ric Borges, senior manager of product marketing at Synopsys, a developer of semiconductor design software. "But that increase in pixels has meant that the number-crunching involved has become formidable. It could take two or three days just to run one single simulation."
A hardware solution has now been developed that has a dramatic effect on the time taken to complete these models. It links Synopsys' TCAD Sentaurus Device simulation software with the ClusterInABox Quad Q30 workstation made by Acceleware, designers of special-purpose software/hardware acceleration products. The result is claimed to be a substantial speed-up of electromagnetic simulations of optoelectronic devices, achieved through Acceleware's multiple-thread, massively parallel processing in the system hardware. "The value we put in is by enabling applications to run in a massively parallel way, and that gives a performance boost," commented Rob Miller, vice-president of marketing and product management at Acceleware.
In designing a CMOS sensor, the main optical effects to simulate are diffraction and interference. "They're both very important, since the relative sizes of the structures are now so small," observed Borges. "There could, for example, be some charges in the silicon that are meant to register a blue signal but which are actually collected by the red pixel, adding to the noise."
The wavelength of incident light is of the same order of magnitude as the dimensions of the pixels, so geometrical optics is not sufficient to predict what will happen. Instead, a full solution to Maxwell's equations that can treat interference and diffraction in full-wave optics is required, and that is where the computational complexity comes in. Maxwell's equations are typically solved with a highly accurate numerical method called finite-difference time-domain (FDTD) analysis, which discretizes the fields on a grid in space and time and can cover a wide frequency range with a single simulation run.
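To give a flavour of what an FDTD solver does, the following is a minimal one-dimensional sketch of the leapfrog field update at its core (in normalized units, with a soft Gaussian pulse source). Real CMOS-sensor simulations are three-dimensional and include material models, absorbing boundaries and far more cells, so this is illustrative only, not the vendors' implementation.

```python
import numpy as np

def fdtd_1d(n_cells=200, n_steps=400):
    """Toy 1D FDTD run: E and H fields on a staggered grid,
    updated in leapfrog fashion with a Courant factor of 0.5."""
    ez = np.zeros(n_cells)  # electric field
    hy = np.zeros(n_cells)  # magnetic field, offset half a cell
    for t in range(n_steps):
        # update E from the spatial difference of H
        ez[1:] += 0.5 * (hy[:-1] - hy[1:])
        # inject a soft Gaussian pulse at the grid centre
        ez[n_cells // 2] += np.exp(-0.5 * ((t - 30) / 10.0) ** 2)
        # update H from the spatial difference of E
        hy[:-1] += 0.5 * (ez[:-1] - ez[1:])
    return ez

field = fdtd_1d()
```

Each time step touches every cell, which is why cell counts in the tens of millions make 3D runs so expensive, and also why the update maps so naturally onto massively parallel hardware.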
FDTD simulations in the past have typically required tens of hours to fully characterize a CMOS image sensor design in 3D, but Acceleware's hardware acceleration can drastically improve things. "Acceleware's founders are experts in this field, who first ran into these issues while at university and looked for ways to accelerate their research," said Miller.
"There are some details of the CMOS process that need to be fine tuned for optical applications, and there is a level of complexity in doing that even when designing just a single pixel," explained Borges. "When trying to model the interplay between four pixels or more the simulation's complexity grows rapidly and you need a lot of high-end analysis."
Modelling the real world
The use of simulations in the development process for a CMOS image sensor usually follows a set sequence. "Simulation of a single pixel is done first, to get the major details correct," said Borges. "Factors such as the colour filter, the shape of the lens and the photodiode piece within the silicon are all modelled to see what their effect will be."
Once the single pixel is optimized, multi-pixel simulations can begin. The typical benchmark is to simulate four pixels, and at this stage modelling the extent of optical cross-talk between pixels becomes crucial. "A typical scheme for RGB would be a four-pixel sensor with one red pixel, one blue pixel and two green pixels. So, for example, we will simulate covering three pixels and shining light on the fourth to see if any signal is registered in the three covered pixels," said Borges.
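The bookkeeping behind that covered-pixel experiment can be sketched as follows. The collected-charge numbers here are made up for illustration; in practice each row would come from a full-wave device simulation with light on a single pixel of the RGGB block.

```python
pixels = ["R", "G1", "G2", "B"]

# collected[i][j]: charge registered in pixel j when only pixel i
# is illuminated (hypothetical values, one simulation run per row)
collected = [
    [0.95, 0.02, 0.02, 0.01],  # light on R
    [0.03, 0.93, 0.01, 0.03],  # light on G1
    [0.03, 0.01, 0.93, 0.03],  # light on G2
    [0.01, 0.02, 0.02, 0.95],  # light on B
]

def crosstalk_matrix(collected):
    """Normalize each run so entries give the fraction of that
    run's total signal landing in each pixel; off-diagonal
    entries are the optical cross-talk."""
    return [[q / sum(row) for q in row] for row in collected]

for src, row in zip(pixels, crosstalk_matrix(collected)):
    leaked = sum(q for dst, q in zip(pixels, row) if dst != src)
    print(f"light on {src}: {leaked:.1%} of signal leaks to neighbours")
```

A design iteration then aims to drive those off-diagonal fractions down, for instance by reshaping the microlens or the photodiode.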
Other simulations model real-world effects. "There are good reasons for simulating light hitting the sensor at an oblique angle to see what the impact of incident angle is on performance, since that's a situation similar to the sensor's environment in a commercial application," noted Borges. "At some point the angle will be shallow enough to stop the sensor working properly."
The possible time-saving provided by the new hardware acceleration during these simulations can be considerable. Figures from the companies claim a halving of the simulation time for a small one-pixel CMOS image sensor of 356 000 cells. For a more complex single-pixel CMOS image sensor with 31 million cells, the simulation time is said to drop from nearly 13.5 hours without the acceleration technology to 22 minutes, some 35 times faster.
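The quoted figures for the larger pixel can be sanity-checked with a line of arithmetic:

```python
# Speed-up implied by the companies' figures for the
# 31-million-cell single-pixel simulation
baseline_min = 13.5 * 60   # "nearly 13.5 hours", in minutes
accelerated_min = 22       # 22 minutes with acceleration
speedup = baseline_min / accelerated_min
print(f"{speedup:.0f}x")   # close to the quoted ~35x
```

Taking "nearly 13.5 hours" at face value gives a factor of roughly 37, consistent with the "some 35 times" claim once the rounding in the quoted runtime is allowed for.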
After reaching the optimal configuration, the final detail will be to take account of manufacturing tolerances. "The shape of the lens and its location may vary, drifting around somewhat depending on how tightly controlled the manufacturing process is," said Borges. "So another purpose of the modelling analysis is to understand the impact of, for example, the lens being misaligned on top of the photodiode and what effect that would have on performance. Overall there will be an incremental scale of simulations, starting with modelling the simplest pixel units and ending with simulations of these real-world manufacturing variables."
As Borges points out, the CMOS image sensor market is very competitive. The overall development process will actually involve a sequence of several individual development cycles, and simulation can help reduce the number of these cycles or assist with designing experiments that home in more efficiently on the proper optical design.
"Time is of the essence, and if you can get to market a couple of months sooner than your competitor then that's a really big deal. Typically you can count on it taking between one and two years for a manufacturer to develop a new product and bring it to market. Using simulations can allow developers to shave six months off that period, and that's huge for them."
• This article originally appeared in the March 2008 issue of Optics & Laser Europe magazine.