30 Apr 2019
RMIT University gathers data for stereo imaging from the fiber bundles commonly used in 2D endoscopy.
The optical fiber bundles employed in microendoscopy are normally limited to 2D imaging, primarily because size constraints make the use of tunable focusing optics problematic. A project at Melbourne's RMIT University has now demonstrated a way around this limitation, showing that depth information can in fact be extracted from the optical data carried by a conventional fiber bundle and potentially opening up new routes to minimally invasive bioimaging. The work was published in Science Advances.
"It turns out these optical fibres naturally capture images from multiple perspectives, giving us depth perception at the microscale," commented Antony Orth of RMIT University. "Our approach can process all those microscopic images and combine the viewpoints to deliver a depth-rendered visualization of the tissue being examined: an image in three dimensions."
The breakthrough hinges on the mode structure of the light within the cores of a fiber bundle, and on the data that structure can carry. The potential for unscrambling 3D information from single-core multi-mode optical fibers has been known for some time, but an extreme sensitivity to fiber bending has so far made the approach too slow and impractical to deploy in medical imaging.
At RMIT, the team discovered that the light field's angular dimension is in fact contained within intracore intensity patterns in the fiber bundle, generated by angle-dependent coupling effects when the returning light enters the fiber. These patterns have traditionally been ignored, but RMIT has successfully related them to the angular structure of the light field, and used bespoke data analysis to reveal depth information about the object being imaged.
"The key observation is that the angular distribution of light is subtly hidden in the details of how these optical fiber bundles transmit light," Orth said. "The fibres essentially 'remember' how light was initially sent in, and the pattern of light at the other side depends on the angle at which light entered the fiber."
Thinnest light field imaging device
The published paper describes employing the technique to generate 3D images of two-micron fluorescent beads, followed by imaging of a five-millimeter-thick slice of mouse brain. Cell nuclei were visible in the resulting stereo image, and an animation of the shifting viewpoint was successfully created.
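The stereo rendering and shifting-viewpoint animation are standard light-field operations once per-core angle estimates are available: a perspective view is synthesized by keeping only the rays tilted towards one side. The fragment below extends the illustrative sketch above and is, again, an assumption-laden toy rather than the published method; the angular threshold and viewing directions are arbitrary.

import numpy as np

def viewpoint(samples, direction, half_angle):
    """Render one perspective view by keeping rays tilted towards 'direction'.

    direction  : (dy, dx) unit vector picking the viewing side, e.g. (0, 1)
    half_angle : angular acceptance half-width in radians (assumed value)
    """
    y, x, inten, ty, tx = samples.T
    tilt = ty * direction[0] + tx * direction[1]
    keep = (tilt > 0) & (np.hypot(ty, tx) < half_angle)
    return y[keep], x[keep], inten[keep]

# Hypothetical usage: a left/right pair for a stereo view, or a sweep of
# 'direction' around a circle for a shifting-viewpoint animation.
# left  = viewpoint(samples, (0.0, -1.0), 0.05)
# right = viewpoint(samples, (0.0,  1.0), 0.05)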
Surface and depth mapping was achieved with an accuracy better than ten microns, on samples up to 80 microns away from the fiber bundle. This depth-ranging resolution is better than the confocal slice thickness of commercial bare-fiber, fixed-focus microendoscopes, according to the project team.
Although the new approach cannot recover data that has been fundamentally scrambled by scattering or attenuation within the sample, it could still prove a valuable way to record depth information from biological samples in a single shot.
"To our knowledge, this is the thinnest light field imaging device reported to date," noted the team in its paper. "The ultimate form factor limit of our approach is reached when cores are shrunk until they are single mode, at which point no angular information can be obtained. But our work establishes suitable optical fiber bundles as a new class of light field sensor, alongside microlens arrays, aperture masks, angle-sensitive pixels, and camera arrays."
The new approach could prove a valuable step towards 3D optical biopsies carried out using optical fiber bundles, and towards in vivo 3D fluorescence microscopy in biological research.
"The exciting thing is that our approach is fully compatible with the optical fibre bundles that are already in clinical use," said Orth. "So it’s possible that 3D optical biopsies could be a reality sooner rather than later."