12 Oct 2021
EPFL and Dartmouth College create detector array to spot fluorescent molecules in scattering media.
SPAD detectors have also played a part in the development of lidar platforms able to operate in real-world environments and capture multiple data points near-simultaneously.
As reported in Optica, the new platform is capable of localizing and quantifying the concentration of fluorescent molecules in heavily scattering media with sub-millimeter depth accuracy, potentially valuable for the imaging of tumors and other diseases.
The breakthrough builds on EPFL's previous development of SwissSPAD2, a sensor designed to allow low-cost picosecond temporal resolution in fluorescence imaging through its architecture of 512 x 512 individual SPAD pixels and a specialized time-gating system.
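The time-gating principle behind such sensors can be illustrated with a minimal sketch: only photons whose arrival times fall inside a programmed gate window are counted, rejecting early excitation leakage and late stray light. The gate timing and photon timestamps below are invented for illustration; they are not SwissSPAD2 specifications.

```python
# Minimal illustration of time-gated photon counting, as used in
# gated SPAD imaging. Timestamps and gate values are illustrative
# assumptions, not SwissSPAD2 parameters.

def gated_count(timestamps_ps, gate_open_ps, gate_width_ps):
    """Count photon arrivals falling inside the gate window."""
    gate_close_ps = gate_open_ps + gate_width_ps
    return sum(gate_open_ps <= t < gate_close_ps for t in timestamps_ps)

# Photons arriving at various delays after the laser pulse (ps).
arrivals = [120, 450, 480, 510, 900, 1500]

# A gate opened at 400 ps for 200 ps keeps only the 400-600 ps
# window, rejecting the early and late arrivals.
print(gated_count(arrivals, 400, 200))  # -> 3
```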
EPFL and Dartmouth have now applied the detector in a fluorescence lidar platform, with the same sensor recording both fluorescence and diffuse reflectance images collected from biological tissues at 635 nanometers. Photons that have encountered a tumor are delayed in their travel back to the detector, a time differential that allows an image of the tumor to be reconstructed.
"The delay is less than a nanosecond, but it’s enough for us to be able to generate an image," commented Edoardo Charbon of EPFL. "The deeper into a tumor the light travels, the more time it will take to return. That allows us to construct an image in three dimensions."
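As a rough back-of-the-envelope sketch of the geometry Charbon describes, the round-trip delay maps to depth via the speed of light in the medium. The tissue refractive index and delay value below are illustrative assumptions, not figures from the study.

```python
# Back-of-the-envelope conversion of round-trip photon delay to depth
# in tissue. The refractive index and delay are illustrative
# assumptions, not values from the study.

C_MM_PER_PS = 0.2998  # speed of light in vacuum, mm per picosecond

def depth_from_delay(delay_ps, n_tissue=1.4):
    """Depth implied by a round-trip delay in a medium of index n."""
    return C_MM_PER_PS * delay_ps / (2 * n_tissue)

# A 10 ps extra delay corresponds to roughly 1 mm of extra depth,
# consistent with the sub-nanosecond, sub-millimeter regime quoted.
print(round(depth_from_delay(10), 2))  # -> 1.07
```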
Trials of in vivo fluorescence lidar on head and neck tumors selectively labelled with suitable markers demonstrated the platform's ability to detect picomolar fluorophore concentrations in clinical scenarios, according to the project team, and the technology could ultimately help surgeons with the always-critical task of identifying tumor margins when excising a malignancy.
"Scientists have had to choose between identifying a tumor’s depth or its location," noted EPFL. "But with this new technology, they can have both."
Easier application of AI in bioimaging
Also at EPFL, a project from the university's Center for Imaging has developed a software plugin intended to make it easier to incorporate artificial intelligence into image analysis for life-science research.
The open-source plugin, called deepImageJ and described in Nature Methods, could help non-experts to perform common image processing tasks in life-science research, potentially saving both time and money.
The research builds upon ImageJ, an established image processing tool with an open architecture designed to facilitate subsequent modifications and enhancements, such as the one now developed and released by EPFL.
"Deep-learning, a sub-field of artificial intelligence, is showing an increasing potential for bioimage analysis, but using the deep-learning models often requires coding skills that few life scientists possess," commented EPFL.
A consortium of European researchers is currently developing a community-driven repository of pre-trained deep-learning AI models, called the BioImage Model Zoo, and deepImageJ is designed to allow clinicians to access this repository without expert knowledge.
“To train their models researchers need specific resources and technical knowledge, especially in Python coding, that many life scientists do not have, but ideally these models should be available to everyone," said Daniel Sage, overseeing the deepImageJ development at EPFL. "We expect deepImageJ to contribute to the broader dissemination and reuse of deep learning models in life sciences applications and bioimage informatics."
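To illustrate the kind of coding step deepImageJ is meant to spare its users, a typical Python-side inference workflow looks roughly like the sketch below. The "model" here is a trivial stand-in thresholding function that only mimics the call pattern; a real BioImage Model Zoo model would be loaded through a deep-learning framework.

```python
import numpy as np

# Stand-in "pretrained model": a trivial threshold segmenter used as
# a placeholder for a real deep-learning model.
def model(batch):
    return (batch > batch.mean()).astype(np.float32)

# Typical Python-side chores that a plugin like deepImageJ hides:
image = np.random.default_rng(0).random((256, 256))  # fake micrograph
batch = image[np.newaxis, np.newaxis, :, :]          # add batch/channel axes
mask = model(batch)[0, 0]                            # run and unpack result
print(mask.shape, mask.dtype)  # -> (256, 256) float32
```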