08 Jul 2015
MIT Media Lab platform promises significant impact on global health.
Usually the problem is alignment: unless the camera is located and orientated correctly, the resulting picture of the retina will be sub-standard.
Unfortunately, the design of a conventional fundus camera makes it difficult for operators to be sure that the positioning is correct. Some method of securing the patient's head to prevent involuntary movement is often needed to ensure success.
The alternative is for the patient to position the camera themselves and keep it correctly orientated during the imaging procedure - an approach that would simplify the logistics, but one that most patients find nearly impossible in practice.
A project at MIT Media Lab has now developed a solution by thinking of the problem in terms of the user interface (UI) involved. The team has built a prototype device christened eyeSelfie (as in Eye Self-Imaging), a retinal imaging platform that uses novel optics and an interactive UI to give users a visual fixation cue indicating when the alignment is correct - the first time interactive self-imaging of the retina has been demonstrated.
As well as being more straightforward than standard focusing methodologies in retinal photography - and using less light than some existing infrared-based alignment procedures - the principles behind eyeSelfie could also now be extended outside ophthalmology and into other sectors.
"Any self-imaging of the retina could previously only be accomplished through luck," commented Karin Roesch of MIT Media Lab's Camera Culture Group. "In traditional approaches, where the image of a fixation target passes through only one point of the pupil but the imaging optics capture rays moving through all parts, a user has no way of knowing if the edge of their pupil is occluding the image of the retina."
The key development behind eyeSelfie involves the idea of "virtual pinholes," a simple light pattern seen by the user and produced at their pupil. A novel interactive ray-based approach developed by the team allows images with the same field of view to be projected onto the retina simultaneously, with each passing through a different part of the eye's cornea, pupil and lens. Lateral and axial movement of the eye is then perceived by the user as a shift in this pattern of pinhole light.
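The geometry behind this can be illustrated with a toy model. The sketch below is an assumption-laden 1-D simplification, not the published eyeSelfie optics: each of five beams leaves a display point and is aimed at a distinct point inside the pupil at an assumed design eye relief, and any misalignment moves each beam's crossing point at the pupil plane, so beams that miss the pupil vanish from the perceived pattern.

```python
# Toy 1-D model (assumption: simplified geometry, illustrative values only).
PUPIL_RADIUS_MM = 1.5     # a small, undilated pupil - the hard case
EYE_RELIEF_MM = 25.0      # assumed design eye relief

def visible_dots(display_x, pupil_x, lateral_mm=0.0, axial_mm=0.0):
    """Return indices of beams that still enter the pupil.

    display_x: beam launch positions on the display (mm)
    pupil_x:   intended entry points spread across the pupil (mm)
    lateral_mm, axial_mm: eye displacement from the aligned position
    """
    z = EYE_RELIEF_MM + axial_mm
    visible = []
    for i, (xd, xp) in enumerate(zip(display_x, pupil_x)):
        # Straight-line beam from (xd, 0) aimed at (xp, EYE_RELIEF_MM),
        # evaluated where the pupil actually is (z).
        x_at_pupil = xd + (xp - xd) * z / EYE_RELIEF_MM
        if abs(x_at_pupil - lateral_mm) <= PUPIL_RADIUS_MM:
            visible.append(i)
    return visible

display = [-10.0, -5.0, 0.0, 5.0, 10.0]
targets = [-1.2, -0.6, 0.0, 0.6, 1.2]   # spread across the pupil

print(visible_dots(display, targets))                  # aligned: all dots seen
print(visible_dots(display, targets, lateral_mm=1.0))  # shifted: edge dots vanish
```

Because each dot enters through its own part of the pupil, a lateral shift occludes the dots on one side first, which is exactly the directional cue the user perceives.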
"This is the first demonstration of a class of static light-field patterns in which the perceived image changes with eye relief and lateral movement," said Roesch. "Multi-view or glasses-free 3D displays have used a similar concept; but this work differs because the perceived images change with small movements in the near-field."
Another breakthrough involves the "eye box" - the spatial area within which the patient's pupil must remain in order to sample all desired ray angles. Traditional systems, including applications found outside ophthalmology in head-mounted displays, have tried to enlarge the eye box so that the image appears the same even for slight misalignments. However, the developers of eyeSelfie took a different approach.
"Our design has a large 'partial' eye box, in the sense that it's easy for a user to see part of the pattern when partially aligned," Roesch said. "That partial pattern indicates how the user needs to re-align. We believe this is the first time such a 'layered' or 'hierarchical' eye box has been used. Another way to describe it is that we have a large optical eye box, and a small perceptual eye box."
Challenges during the development process included finding a display approach that reduced any ambiguity about whether the system was aligned or not. Another important consideration was determining the perceptual cues common to different users while accounting for differences in pupil size and corneal shape.
With those hurdles now tackled, the general principle could be applied in a number of different scenarios. Within ophthalmology, self-imaging would allow patients to take retina photos in their own home, enabling clinicians to better observe changes after treatment or perhaps opening new ways to monitor diabetes.
"Self imaging of the retina also makes possible a new class of highly accurate gaze trackers, or improvements to the accuracy of biometric devices such as iris and retinal scanners," noted Roesch. "Furthermore, our light-field pattern can be readily incorporated into artificial reality headsets for self-calibration of near eye displays. Emerging light-field-based near-eye displays could use such patterns for user alignment straight out of the box."
The hope is that the principles behind eyeSelfie could also have a global impact, by enabling high-quality retinal imaging to be carried out more frequently outside of a clinical setting and perhaps be integrated with other health data.
This could soon become a reality through LVPEI-MITRA, a collaborative program between the LV Prasad Eye Institute in Hyderabad and the MIT Media Lab Camera Culture group, which aims to build and deploy the next generation of screening, diagnostic and therapeutic tools for eye care.
One of these solutions will be eyeMITRA, a mobile retinal imaging system intended to bring routine diagnostic retinal examinations - in particular for diabetic retinopathy - to developing countries where standards of eye care are low, while reducing the cost of the imaging device significantly.
About the Author
Tim Hayes is a contributor to Optics.org.