19 Mar 2025
Lumentum and Coherent both involved in effort announced at the AI computing pioneer's 'GTC' event in San Jose.
Artificial intelligence (AI) computing giant Nvidia has announced details of how it is adopting silicon photonics and co-packaged optics (CPO) technology to deliver massive scaling in AI architecture while reducing power consumption.
During a keynote talk at the firm’s “GTC 2025” event in San Jose, California, Nvidia’s CEO Jensen Huang introduced new silicon photonics network switch technologies, called “Spectrum-X” and “Quantum-X”.
They are designed to scale AI “factories” to millions of GPUs connected with 1.6 Tb/s port switches, using CPO to provide short-range connections as a way to improve overall electrical efficiency.
“Nvidia has achieved the fusion of electronic circuits and optical communications at massive scale,” announced the company.
Photonics firms Lumentum and Coherent, already key suppliers of optical transceivers to Nvidia, are both involved in the effort, with Lumentum providing lasers for the Spectrum-X product and Coherent collaborating on silicon photonics.
Also involved are chip foundry giant TSMC, Corning, Foxconn, and Senko.
‘Crazy technology’
During his keynote address, Huang said: “The challenge with scaling out GPUs to many hundreds of thousands is the connection of the scale-out. When the data centers are now the size of a stadium, we need something [new] - and that’s where silicon photonics comes in.”
He added: “The challenge of silicon photonics has been that the transceivers consume a lot of energy. We’re announcing Nvidia’s first co-packaged [optics] silicon photonics system.
“It is the world’s first 1.6 Tb/s CPO, based on a technology called a micro ring modulator (MRM). It is completely built with this incredible process technology at TSMC.”
Describing the development as “really crazy, crazy technology”, Huang compared Nvidia’s new approach with the conventional pluggable transceivers, based on Mach-Zehnder modulators, that are typically used to connect GPUs via network switches.
He pointed out that, today, each of those GPUs would require six individual transceivers, meaning a power consumption of 180 watts and a cost of $6000 per GPU.
“The question is: how do we scale up now to millions of GPUs?” said the CEO. “Because if we have a million GPUs [we need] 6 million transceivers [consuming] 180 megawatts.”
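As a rough back-of-the-envelope sketch based only on the figures Huang quoted, the arithmetic works out as follows. The per-transceiver values of roughly 30 W and $1000 are inferred by dividing his per-GPU totals by six; they are illustrative, not Nvidia specifications.

```python
# Back-of-envelope check of the pluggable-transceiver figures quoted above.
# The per-transceiver values (30 W, $1,000) are inferred by dividing the
# per-GPU totals Huang cited by six; they are illustrative, not Nvidia specs.

TRANSCEIVERS_PER_GPU = 6
WATTS_PER_TRANSCEIVER = 180 / TRANSCEIVERS_PER_GPU   # -> 30.0 W (inferred)
COST_PER_TRANSCEIVER = 6000 / TRANSCEIVERS_PER_GPU   # -> $1,000 (inferred)

def scale_out(num_gpus: int) -> dict:
    """Total transceiver count, power and cost for a GPU cluster
    built with conventional pluggable optics."""
    transceivers = num_gpus * TRANSCEIVERS_PER_GPU
    return {
        "transceivers": transceivers,
        "power_mw": transceivers * WATTS_PER_TRANSCEIVER / 1e6,  # megawatts
        "cost_usd": transceivers * COST_PER_TRANSCEIVER,
    }

print(scale_out(1_000_000))
# {'transceivers': 6000000, 'power_mw': 180.0, 'cost_usd': 6000000000.0}
```

For a million GPUs this reproduces Huang’s 6 million transceivers and 180 MW for the optics alone.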
Megawatt savings
Huang then showed a video detailing how the “Quantum-X” package, made by TSMC, combines 18 silicon photonics engines, enabling 324 optical connections and 288 data links from 36 laser inputs.
Connecting to the package are six detachable optical sub-assemblies, each comprising three silicon photonics engines and providing an aggregate data throughput of 4.8 Tb/s. Each silicon photonics engine features 200 Gb/s MRMs fabricated using TSMC’s “COUPE” (Compact Universal Photonic Engine) process, which stacks an electronic integrated circuit containing 220 million transistors on top of a photonic integrated circuit (PIC) with 1000 photonic components.
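For readers wanting to sanity-check those package figures, the short sketch below simply restates them and derives a few ratios by division; the per-engine breakdown is arithmetic on the stated totals, not an official Nvidia specification.

```python
# Consistency check on the Quantum-X package figures quoted above.
# The per-engine ratios are simple divisions of the stated totals.

ENGINES = 18                 # silicon photonics engines per package
OPTICAL_CONNECTIONS = 324
DATA_LINKS = 288
LASER_INPUTS = 36
SUB_ASSEMBLIES = 6           # detachable optical sub-assemblies
TBPS_PER_SUB_ASSEMBLY = 4.8  # aggregate throughput per sub-assembly

print(ENGINES / SUB_ASSEMBLIES)      # 3.0  engines per sub-assembly
print(DATA_LINKS / ENGINES)          # 16.0 data links per engine
print(LASER_INPUTS / ENGINES)        # 2.0  laser inputs per engine
print(round(SUB_ASSEMBLIES * TBPS_PER_SUB_ASSEMBLY, 1))  # 28.8 Tb/s across the sub-assemblies
```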
Overall, the approach is said to reduce power consumption by a factor of 3.5 compared with conventional pluggable transceivers.
“Just a technology marvel,” said Huang of the development, adding that Nvidia would begin shipping the Quantum-X silicon photonics switch in the second half of this year, followed by the Spectrum-X product in the second half of 2026.
He added that data centers could save “tens of megawatts” of power consumption as a result of the silicon photonics technology, which Nvidia has been working on for the past five years.
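To illustrate how a 3.5-fold reduction in optics power could add up to “tens of megawatts”, the sketch below applies the 180 W-per-GPU pluggable-optics figure from the keynote to a few cluster sizes. The cluster sizes are purely illustrative assumptions, and the savings are derived rather than quoted.

```python
# Illustrative only: how a 3.5x reduction in optics power translates into
# the "tens of megawatts" Huang mentioned. The 180 W per-GPU optics figure
# comes from the keynote; the cluster sizes and savings are assumptions.

WATTS_PER_GPU_PLUGGABLE = 180   # six pluggable transceivers per GPU
REDUCTION_FACTOR = 3.5          # claimed CPO advantage

def optics_power_saving_mw(num_gpus: int) -> float:
    pluggable_mw = num_gpus * WATTS_PER_GPU_PLUGGABLE / 1e6
    cpo_mw = pluggable_mw / REDUCTION_FACTOR
    return pluggable_mw - cpo_mw

for gpus in (100_000, 250_000, 1_000_000):
    print(f"{gpus:>9} GPUs: ~{optics_power_saving_mw(gpus):.0f} MW saved")
# ~13 MW, ~32 MW and ~129 MW respectively
```

On these assumptions, clusters of a few hundred thousand GPUs land in the “tens of megawatts” range Huang described.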
Fewer lasers; more photonics
Part of the reason for the improved power efficiency is that the new architecture requires only a quarter as many lasers as an equivalent deployment of pluggable transceivers would, with Huang saying in a company release:
“AI factories are a new class of data centers with extreme scale, and networking infrastructure must be reinvented to keep pace. By integrating silicon photonics directly into switches, Nvidia is shattering the old limitations of hyperscale and enterprise networks and opening the gate to million-GPU AI factories.”
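Reading that four-fold reduction in lasers against the Quantum-X package figures above gives a rough sense of the ratio. The 144-laser pluggable equivalent in the sketch below is inferred from the stated 4x reduction and is not a number Nvidia has published.

```python
# Back-of-envelope reading of the "four times fewer lasers" claim, using the
# Quantum-X package figures quoted earlier. The pluggable-equivalent laser
# count is inferred from the 4x ratio, not quoted by Nvidia.

CPO_LASERS = 36
DATA_LINKS = 288
LASER_REDUCTION = 4

pluggable_lasers = CPO_LASERS * LASER_REDUCTION   # ~144 lasers (inferred)
print(DATA_LINKS / CPO_LASERS)        # 8.0 data links served per CPO laser
print(DATA_LINKS / pluggable_lasers)  # 2.0 data links per laser in the pluggable case
```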
Lumentum is providing lasers for the new switches, and Wupen Yuen, president of the company’s cloud and networking technology business, commented:
“Innovations that drastically lower networking power consumption will expand the market for photonic components, supporting larger AI installations and accelerating the transition from copper to photonic interconnects.”
In its own announcement, Coherent’s CEO Jim Anderson added: “We are pleased to be Nvidia’s collaborator on this new transceiver form factor. We expect that CPO will further accelerate the expansion of optical networking in the data center.”
Gilad Shainer, Nvidia’s senior VP of networking, said that Nvidia’s collaborations with Lumentum and Coherent on lasers and silicon photonics would enable the next generation of “million-scale AI”.
© 2025 SPIE Europe