Research & Development

Inside Intel's photonics revolution - part two

11 Oct 2010

In the second of two articles covering the advanced photonics technologies under development at Intel, we focus on silicon.

Photonics development at the world's largest silicon chip producer is focused on two related, yet fundamentally different, technologies.

Light Peak, Intel’s souped-up version of the USB connection, is close to becoming a commercial reality and will be aimed squarely at consumers. Meanwhile, the development of silicon photonics presents an altogether more radical approach to optical technologies. Whereas Light Peak is based on conventional optoelectronic components, produced and assembled in a highly cost-effective manner, developing silicon as a photonic medium presents a much more fundamental challenge – but one that is likely to have an equally fundamental impact on the photonics industry.

Basic physics tells us that trying to get silicon to emit light in any meaningful way is doomed to failure: its indirect bandgap means that radiative conduction-to-valence band transitions require phonon assistance, making them highly inefficient. With Intel’s entire technological history built around silicon and silicon manufacturing processes, this might appear to be an insurmountable problem.

Sensibly enough, Mario Paniccia and his team have not been trying to get silicon to emit light. Their idea is based more around finding a silicon-like manufacturing platform that can also accommodate the compound semiconductors that do a much more efficient job of emitting light, all at an acceptable cost and speed.

Essentially, that means working on materials and device integration, and this was really the key aspect of the live demonstration of a 50 Gb/s link using silicon photonics earlier this summer. “The combination of devices is complex, and it was a monumental challenge to do the big integration thing,” admitted Paniccia. “But we learned an enormous amount of stuff along the way.”

No major hurdles
Paniccia now sees no major technological hurdles to achieving the overall goal – of delivering high-speed silicon photonics links, in large volumes and at an appropriate cost. “We have convinced ourselves [that it works]. There are not a lot of showstoppers now, and we feel comfortable saying that we know how to ramp this into high volume.”

So just how did they do it? The key part was finding a cost-effective way to integrate light-emitting InP within a silicon-like process. As Paniccia points out, InP is by far the most expensive material used in the process, so the basic idea was to minimize the amount that is needed. Use too much of it, and the economics of “silicon” photonics would be no better than the conventional III-V approach.

“The performance of the laser is 100 percent dependent on the silicon,” Paniccia told optics.org. “It is all determined by lithography. The InP is simply there to emit and then amplify the light. Nobody thought that silicon could act as an effective microcavity, and this has been a key achievement.”

Not only does InP represent the most expensive part of the overall architecture; much of it is wasted in a typical manufacturing process, discarded as a wafer of the semiconductor material is etched away. So instead of processing an entire InP wafer, the Intel approach is to dice it up into tiny pieces at the very start. This is in complete contrast to the conventional processing method, where dicing – the step that turns a wafer into thousands of discrete chips – is usually the last of the manufacturing operations before packaging and final testing.

Radical approach
Once diced up, the InP pieces are attached to a handle wafer before being bonded to an 8-inch silicon wafer. Transferring to the larger wafer size is crucial, because Intel’s existing fabs no longer handle 6-inch wafers. The handle wafer is then discarded, leaving a hybrid silicon/InP wafer. Although this doesn’t really sound like the kind of “wafer-level” process upon which Intel would normally base commercial production, Paniccia is convinced that it represents the most cost-effective way to produce “silicon” emitters. Key collaborator John Bowers, of the University of California, Santa Barbara, has previously confirmed that the team tried bonding full 150 mm InP wafers to SOI; at that size, however, the cost of the InP becomes economically unviable, and the format is incompatible with Intel’s fabs.

Intel’s radical approach, markedly different from other “silicon photonics” research under way at CEA-Leti and elsewhere, means that a single 8-inch SOI wafer can produce more than 100,000 lasers. But perhaps even more valuable than that is the additional ability to integrate other optoelectronic devices within the wafer process. “That’s the key to the scalability,” said Paniccia.
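The 100,000-laser figure lends itself to a quick back-of-envelope check. The sketch below is ours, not Intel’s: it assumes the full area of a 200 mm (8-inch) wafer is usable, and works out the footprint per laser that the figure implies.

```python
import math

# Back-of-envelope check of the ">100,000 lasers per 8-inch wafer" figure.
# Assumption (ours): the whole wafer area is available for devices.
wafer_diameter_mm = 200.0  # 8-inch SOI wafer
wafer_area_mm2 = math.pi * (wafer_diameter_mm / 2) ** 2

lasers_per_wafer = 100_000  # figure quoted by Intel
area_per_laser_mm2 = wafer_area_mm2 / lasers_per_wafer

print(f"wafer area: {wafer_area_mm2:.0f} mm^2")              # ~31416 mm^2
print(f"implied footprint per laser: {area_per_laser_mm2:.2f} mm^2")  # ~0.31 mm^2
```

An implied footprint of roughly 0.3 mm² per laser is plausible for a waveguide device a few hundred micrometres long, which is consistent with the claim that the expensive InP is used only in tiny quantities.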

What is more, the relative simplicity of a laser structure means that there is no requirement to use the latest semiconductor lithography tools. “We don’t need leading-edge lithography to do this,” said Paniccia. Instead, the team can use older, 8-inch wafer fabs running 90 nm processes – fully depreciated in cost, but easily good enough for hybrid laser production.

Innovation motivation
Part of the Intel mantra is to push technology and innovation to its limits. The same is true when it comes to setting targets and motivating its staff, it appears. When Justin Rattner, Paniccia’s boss, found himself scheduled to give a presentation at the Integrated Photonics Research conference in Monterey last July, he set one of those typically tough challenges. Rattner decided that a 50 Gb/s silicon photonics demo would be the sort of thing that the conference organizers were looking for in a keynote talk. And while a six-week schedule to achieve that may have given Paniccia’s development team a few sleepless nights, it is exactly the kind of push for innovation that seems to be standard procedure at Intel.

One of the major innovations since the initial demonstration of the silicon/InP hybrid laser platform back in 2006 is the laser transmission wavelength. In that early inception, the hybrid laser operated in the 1550 nm range. But the Monterey demo actually featured four hybrid lasers, operating at 1290, 1310, 1330 and 1350 nm.
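Although the article does not break the figure down, a 50 Gb/s aggregate carried on four wavelengths implies 12.5 Gb/s per channel. A minimal sketch of that WDM arithmetic:

```python
# Coarse WDM budget for the Monterey demo: four hybrid lasers, one
# wavelength each, multiplexed onto a single fiber.
channel_wavelengths_nm = [1290, 1310, 1330, 1350]  # from the demo
aggregate_gbps = 50.0                              # demonstrated link rate

per_channel_gbps = aggregate_gbps / len(channel_wavelengths_nm)
print(f"{len(channel_wavelengths_nm)} channels x {per_channel_gbps} Gb/s "
      f"= {aggregate_gbps} Gb/s")
# prints: 4 channels x 12.5 Gb/s = 50.0 Gb/s
```

Scaling to 100 Gb/s and beyond, as discussed later in the article, then becomes a matter of adding wavelengths, raising the per-channel rate, or both.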

So why the wavelength change? Paniccia explains that the relatively high germanium content used in the SiGe receiver component means that it responds better at these shorter wavelengths. “The idea is to solve a five-meter problem, not to do long-haul transmission,” he said.

Having now shown what is possible with silicon photonics, proving process repeatability and reliability is one of the next key steps for the Intel team. Right now, they are working on reliability testing and optimization, which includes reducing the lasing threshold and tweaking the epitaxial stack to optimize the laser output.

Since reliability testing takes at least 5000 hours, this will occupy the silicon photonics team for much of the next few months, but Paniccia is expecting a positive outcome: “We feel very good about it, and we’ve [already] addressed a lot of the concerns people had. But you just don’t know for sure until you complete the reliability testing.”

Ultimately, Paniccia believes that silicon photonics will become pervasive across all interconnects. But at first it will be a question of timing – when will it make sense to deploy the technology in each application sector – and thus a question of economics, rather than technology. High-performance computing is one of the more obvious early applications, and one where optical links have always been desired but have traditionally been too costly.

Within five years, links based on the technology should appear in the interconnects between server racks, and between the CPUs and memory slots. Electronic interconnects are constrained by heat and electromagnetic interference, making it difficult to bring them too close to each other. With an optical link, that constraint is removed, potentially enabling radical re-designs of server farms and other enterprise computing applications, such as separating the CPU and memory elements in more distributed architectures. Those designs could be optimized for specific applications, and to minimize energy consumption: another key opportunity for “green” photonics.

The benefits for server applications are compelling, and have long been recognized. What silicon photonics will do is make that application economically viable for the first time. With CPUs typically developed on a cycle lasting three to four years, designers can now seriously look at ways to deploy optical technology.

As both Paniccia and his colleague Vic Krutul point out, silicon photonics remains at the research stage right now. But both are firmly of the belief that this will change soon. “This is not a decade out,” they say, and their best guess – assuming that no red flags are raised during the reliability tests – is that we will see silicon photonics products in their first applications by 2015.

“We have merged the silicon and laser concept,” they explain. What Intel’s photonics pioneers have begun to demonstrate, with both silicon photonics and Light Peak, is that photonics, like electronics, is scalable. “What this all means is that optics [in the form of optical interconnects] is coming to the mass market,” said Paniccia. That means a technological roadmap, a path from 50G to 100G and beyond – to whatever speeds the market requires. Photonics doesn’t yet have its own version of Moore’s law. But after 2015, when Intel begins to roll out silicon photonics, we may see the start of it.

About the Author

Mike Hatcher is the Editor in Chief of Optics.org
