02 Feb 2021
Stanford University optimizes image quality through Michelson holography.
Holographic displays offer a promising route to high-quality images for augmented- and virtual-reality (AR/VR) platforms, potentially delivered in a compact form factor while maintaining large fields of view through the use of spatial light modulators (SLMs).
Although SLMs can create a holographic image indirectly, by shaping a wave field in the particular way needed to form a target image, achieving high image quality is a challenge, often because of the SLMs' low diffraction efficiency. Any significant amount of undiffracted light interferes with the user-controlled diffraction orders and degrades the observed image.
A project based at Stanford University has now developed a possible solution involving a technique termed Michelson holography (MH), named after its inspiration in Michelson interferometry. As described in Optica, the new system is an architecture in which two SLMs are employed.
"The core idea of Michelson holography is to destructively interfere with the diffracted light of one SLM using the undiffracted light of the other," said Jonghyun Kim of Stanford University and project partner Nvidia. "This allows the undiffracted light to contribute to forming the image, rather than creating speckle and other artifacts."
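As a rough illustration of this cancellation (a toy model, not the authors' actual optical configuration), each SLM's output can be written as a diffracted, phase-modulated component plus an undiffracted, unmodulated component. When the two interferometer arms recombine at the beamsplitter, one arm acquires a π phase shift, so the undiffracted components cancel while the diffracted fields from both SLMs survive. The diffraction efficiency `eta` and the tiny 4×4 phase patterns below are hypothetical values chosen only for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
eta = 0.8                                 # hypothetical diffraction efficiency per SLM
phi1 = rng.uniform(0, 2 * np.pi, (4, 4))  # phase pattern on SLM 1
phi2 = rng.uniform(0, 2 * np.pi, (4, 4))  # phase pattern on SLM 2

# Each arm carries a diffracted (phase-modulated) component plus an
# undiffracted, unmodulated component of relative weight (1 - eta):
u1 = eta * np.exp(1j * phi1) + (1 - eta)
u2 = eta * np.exp(1j * phi2) + (1 - eta)

# Recombining at the beamsplitter, one arm picks up a pi phase shift
# (a minus sign), so the two undiffracted components cancel exactly:
u_out = 0.5 * (u1 - u2)
# What remains is purely diffracted light from both SLMs:
#   u_out == 0.5 * eta * (exp(1j*phi1) - exp(1j*phi2))
```

In this simplified picture the second SLM's diffracted field contributes with a sign flip, which a joint optimization of both phase patterns can account for.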
The Stanford platform builds on a design methodology termed camera-in-the-loop (CITL) optimization, a computational approach in which deviations of a result from ideal light transport are assessed from camera captures of the displayed image, and the findings are fed back into the computation.
CITL can already allow holography to partially compensate for the undiffracted light of an SLM without having to explicitly model all of the terms involved, but the Stanford project has now developed the principle further.
"Although holographic displays with multiple SLMs have been investigated for viewing angle enhancement, improving resolution, or complex modulation, our approach is the first to leverage CITL optimization to optimize image quality of two phase-only SLMs by mitigating the effect of undiffracted light in a fully automatic manner," noted the team in its published paper.
Transformative impact on society
In use, the Stanford CITL procedure captures the superposition of all diffracted and undiffracted light of the display, assesses the apparent error against a target image, and then backpropagates that error into both SLM patterns simultaneously.
"This procedure does not require us to explicitly model the SLM pixel structure or the undiffracted light," said the team. "We simply need a camera that captures intermediate images of this iterative holography algorithm, which automatically optimizes the resulting phase patterns for both SLMs."
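The loop described above can be sketched numerically. The following is a minimal stand-in, not the authors' implementation: the camera capture is replaced by a simulated forward model (a single FFT standing in for free-space propagation), and the image-plane error is backpropagated to both phase patterns by plain gradient descent. The resolution, learning rate, and forward model are all illustrative assumptions:

```python
import numpy as np

def forward(phi1, phi2):
    """Toy forward model: the two SLM fields superpose and propagate to the
    image plane (modelled as one FFT).  In the real system, the resulting
    intensity would instead be captured by the in-loop camera."""
    u = np.exp(1j * phi1) + np.exp(1j * phi2)
    U = np.fft.fft2(u, norm="ortho")
    return U, np.abs(U) ** 2

rng = np.random.default_rng(0)
n = 8                                          # tiny toy resolution
# Target intensity chosen to be achievable (made from random phase patterns)
_, target = forward(rng.uniform(0, 2 * np.pi, (n, n)),
                    rng.uniform(0, 2 * np.pi, (n, n)))

phi1 = rng.uniform(0, 2 * np.pi, (n, n))       # phase patterns to optimize
phi2 = rng.uniform(0, 2 * np.pi, (n, n))
lr, m = 0.05, n * n

losses = []
for _ in range(300):
    U, intensity = forward(phi1, phi2)         # "capture" the displayed image
    losses.append(np.mean((intensity - target) ** 2))
    # Backpropagate the image-plane error to the SLM plane
    # (Wirtinger gradient of the mean-squared intensity error):
    g_img = (2 / m) * (intensity - target) * np.conj(U)
    g_slm = np.fft.fft2(g_img, norm="ortho")   # adjoint: the DFT matrix is symmetric
    # Update both phase patterns simultaneously:
    phi1 -= lr * 2 * np.real(1j * np.exp(1j * phi1) * g_slm)
    phi2 -= lr * 2 * np.real(1j * np.exp(1j * phi2) * g_slm)
```

The key point mirrored from the article is that the error is computed on the superposed output of both SLMs and pushed back into both phase patterns at once, without an explicit model of the pixel structure or the undiffracted light.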
In trials, the bench-top MH architecture was used to display several 2D and 3D holograms, including images of an insect and a bird. The demonstrations showed that the dual-SLM holographic display with CITL calibration delivered significantly better image quality than existing computer-generated holography approaches.
"Once the computer model is trained, it can be used to precisely figure out what a captured image would look like without physically capturing it," said Kim. "This means that the entire optical setup can be simulated in the cloud to perform real-time inference of computationally heavy problems with parallel computing."
The next steps will include translating the bench-top setup into a system that could be built into a wearable AR or VR device. More broadly, the team believes that its approach of co-designing hardware and software could prove useful for other applications of computational displays, and for computational imaging in general.
"Augmented and virtual reality systems are poised to have a transformative impact on our society by providing a seamless interface between a user and the digital world," commented Kim. "Holographic displays could overcome some of the biggest remaining challenges for these systems by improving the user experience and enabling more compact devices."