SPIE O+P 2024: Emerging topics in AI

21 Aug 2024

Yale’s Logan Wright details his group’s achievements in “Optical Computing at the Quantum Limit”.

By Matthew Peach in San Diego

As part of the Emerging Topics in Artificial Intelligence program at this week’s SPIE Optics + Photonics conference, Asst. Prof. Logan G. Wright of Yale University presented an invited paper entitled “Optical Computing at the Quantum Limit”. Wright’s lab works on programmable physical systems and on multimode nonlinear and quantum optics.

On Sunday, he gave an overview of his group’s work testing the absolute limits of optical computing, including the development of optical neural networks that use “vastly less than one photon per multiplication, operating deep in the stochastic regime dominated by quantum noise”.

He also discussed how optical neural networks can be scaled up, and how they may offer advantages, including up to five orders of magnitude more energy-efficient inference, for implementing large language models.

Wright began his talk by showing a slide listing many of the better-known benefits and capabilities of AI: computer vision, natural language processing, strategy making, physical system simulation, and generative tasks.

He said, “If you’re in this room, you probably have seen this before, but the reason that I like to put a slide like this in the talk is that there’s an enormous amount of hype about these technologies. But it is important to remember that within this hype, there is actual true substance. These are really, truly amazing algorithms, and it is completely astonishing to me that such a simple algorithm can do all these things.”

He continued, “As you put more and more compute into these models, they get better and better; this is an important observation. What that has led to is that the size of these models has gotten bigger and bigger, and more and more compute is being used in these algorithms. It’s rapidly outpacing the amount of compute that is available at a fixed cost.”

“This is where optics and other analog physical platforms come in. Most of the operations that are performed in a deep neural network are just a matrix-vector multiplication, and that operation scales as n-squared, so if the vector has n elements it takes n-squared digital operations to perform,” he said.
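
To see that scaling concretely, here is a minimal NumPy sketch (illustrative only, not from the talk) of why a dense matrix-vector product costs n-squared multiply-accumulate operations for an n-element vector:

```python
import numpy as np

def matvec(W, x):
    """Dense matrix-vector product: n * n multiply-accumulates for an n-element vector."""
    n = len(x)
    y = np.zeros(n)
    for i in range(n):          # n output elements ...
        for j in range(n):      # ... each requiring n multiplications
            y[i] += W[i, j] * x[j]
    return y

n = 4
W = np.random.rand(n, n)
x = np.random.rand(n)
assert np.allclose(matvec(W, x), W @ x)  # matches NumPy's optimized product
```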

“So the way that we perform optical matrix-vector multiplication looks like this: we take an optical image, and we make a bunch of copies of it, and we then attenuate these copies using a spatial light modulator; attenuation is like multiplication. The output of an attenuation is the input intensity times the attenuation.”

“We use a microlens array to create these optical copies, which look like a fly’s compound eye, so that creates multiple different copies. These get spread out over a liquid crystal display, which attenuates these different copies, and then we sum them up with a photodetector; that is the addition part.”
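
The copy / attenuate / sum pipeline Wright describes can be mimicked digitally. The sketch below is a loose analogue under the assumption that the SLM transmissions encode one row of non-negative weights per copy; the function and variable names are illustrative, not the group’s code:

```python
import numpy as np

def optical_matvec(image, transmissions):
    """Simulate the fan-out / attenuate / sum scheme for a matrix-vector product.

    image         -- input intensity pattern, shape (n,), non-negative
    transmissions -- SLM transmissions in [0, 1], shape (m, n); row i attenuates copy i
    Returns the m detector readings, i.e. transmissions @ image.
    """
    m, n = transmissions.shape
    copies = np.broadcast_to(image, (m, n))   # microlens array: m copies of the input
    attenuated = copies * transmissions       # attenuation of each copy = multiplication
    return attenuated.sum(axis=1)             # one photodetector per copy sums = addition

rng = np.random.default_rng(0)
x = rng.random(8)                             # pixel intensities
T = rng.random((4, 8))                        # transmissions encoding non-negative weights
assert np.allclose(optical_matvec(x, T), T @ x)
```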

After describing in detail his group’s approach to optical neural networks and their applicability to AI, Wright concluded that his group has demonstrated optical neural networks that can consume extremely low optical energy: “I would say, bewilderingly small optical energy, and we have shown that we can use single-photon detectors as the neuron activation function, and that allows us to perform optical neural networks at the level of single photons per neuron.

“That is less than one thousandth of a photon per operation. Finally, we have shown that this is not just true for simple, small-scale machine learning operations, but that it also translates to a fundamentally different scaling law in terms of how much energy it costs to perform inference of large machine learning tasks such as large language models.”
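
To give a feel for what operating “deep in the stochastic regime dominated by quantum noise” implies, the toy model below (an assumption-laden illustration, not the group’s method; names and scaling are invented for the example) reads out a dot product as a Poisson-distributed photon count. A single shot is extremely noisy, but the estimate is unbiased on average:

```python
import numpy as np

def photon_limited_dot(x, w, photons_per_mult=1e-3, rng=None):
    """Toy model of a dot product read out in the photon-counting regime.

    Each elementwise product x_j * w_j (both scaled to [0, 1]) contributes at most
    `photons_per_mult` expected photons, so the whole dot product is encoded in a
    Poisson-distributed detector count dominated by shot noise.
    """
    rng = rng or np.random.default_rng()
    mean_count = photons_per_mult * float(x @ w)   # expected photons at the detector
    count = rng.poisson(mean_count)                # quantum (shot) noise
    return count / photons_per_mult                # noisy but unbiased estimate of x . w

rng = np.random.default_rng(1)
x, w = rng.random(1000), rng.random(1000)
shots = [photon_limited_dot(x, w, 1e-3, rng) for _ in range(500)]
print("true value:", x @ w)
print("mean of noisy estimates:", np.mean(shots))  # converges toward the true value
```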
