22 Jul 2015
RMIT University platform could speed up image-based assays during drug development.
Multispectral imaging techniques are designed to capture optical data at several specific wavelengths during the same analysis, an approach that allows a higher density of both spectral and spatial information about a sample to be gathered than more conventional fluorescence microscopy methods. This has always had potential advantages for high-content imaging assays in biology, where large numbers of cells need to be analyzed for multiple parameters. But in practice, different multispectral imaging approaches have their own drawbacks.
Reading the spectra sequentially from one line of pixels at a time - the so-called "pushbroom" approach - is relatively slow and disperses the light from each spatial point over several pixels, while analyzing the spectrum from each pixel individually - known as "whiskbroom" imaging, after the sweeping pattern involved - is slower still.
Now a team including researchers from the Centre of Excellence for Nanoscale Biophotonics (CNBP) at RMIT University in Melbourne, Australia, has developed a multispectral microscopy platform that could improve matters considerably. The work was published in Optica.
It employs a centimeter-scale microlens array to massively parallelize a whiskbroom multispectral approach, effectively recording the spectra from thousands of points in the sample simultaneously, rather than the individual point examined at each stage of a traditional whiskbroom method. Use of a single physical aperture provides the confocal filtering necessary for successful multispectral imaging.
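To get a feel for the scaling, the sketch below (a toy Python calculation with assumed numbers, not the published system's specifications) compares the number of stage positions a conventional single-point whiskbroom scan would need against a scan parallelized across thousands of microlenses:

```python
# Toy comparison of the stage positions needed to cover one field of view.
# All numbers here are illustrative assumptions, not the RMIT system's specs.

FOV_PIXELS = 10_000 * 10_000  # hypothetical 100-megapixel spatial field
N_LENSES = 10_000             # hypothetical number of microlenses in the array

def whiskbroom_positions(fov_pixels: int) -> int:
    """Classic whiskbroom: one spatial point is measured per stage position."""
    return fov_pixels

def parallel_whiskbroom_positions(fov_pixels: int, n_lenses: int) -> int:
    """Microlens-parallelized whiskbroom: n_lenses points per stage position."""
    return -(-fov_pixels // n_lenses)  # ceiling division

print(whiskbroom_positions(FOV_PIXELS))                     # 100,000,000
print(parallel_whiskbroom_positions(FOV_PIXELS, N_LENSES))  # 10,000
```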
"If you use a regular microscope objective, you have to construct a large image mosaic by recording small fields of view, one at a time," said Antony Orth of RMIT. "Every time you record an image, you need to bring the sample stage to a halt, refocus and change through all your various filters, which is inherently slow for large samples. Our microlens array spreads out imaging over a larger area, enabling continuous scanning and image collection. We are able to record large images with fewer mechanical motions, which saves a lot of time."
The full potential of this approach becomes clear when there are different types of fluorophores in one sample, in which case the quantity of data involved can be substantial. The team reported multispectral image cubes (i.e. positional and spectral data for individual points) of up to 1.3 gigapixels in the spatial domain, with as many as 13 spectral samples recorded per pixel and six independent fluorescence channels - making a total image size of 16.8 billion spatial–spectral samples.
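As a quick consistency check, the headline numbers multiply out as expected (the exact spatial pixel count below is an assumption, chosen so that the rounded totals match):

```python
# Back-of-envelope check of the reported dataset size. Values come from the
# article; the precise spatial pixel count is assumed (1.29e9 rounds to the
# quoted "1.3 gigapixels" and reproduces the quoted total).
spatial_pixels   = 1.29e9  # "up to 1.3 gigapixels in the spatial domain"
spectral_samples = 13      # "as many as 13 spectral samples ... per pixel"

total = spatial_pixels * spectral_samples
print(f"{total / 1e9:.1f} billion spatial-spectral samples")  # 16.8 billion

# Raw storage at an assumed 16 bits per sample, before any tile overlap,
# metadata or extra channels - the same order of magnitude as the ~100 GB
# per-experiment files mentioned later in the article.
print(f"{total * 2 / 1e9:.1f} GB")  # ~33.5 GB
```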
"We have never previously seen any multispectral or hyperspectral microscopy dataset larger than a standard field of view, say five megapixels or so," commented Orth. "I don't think anyone had previously attempted to create such a large multispectral image because the hardware to do it efficiently simply didn't exist."
Although the new dataset sets a benchmark in microscopy, Orth pointed out that the astronomy community has a history of creating massive datasets with many spectral windows, and the cross-pollination between the two fields can be considerable. Objective prism spectroscopy, for example, uses an analogous approach to measure several spectra at once within a telescope's field of view (FOV).
A chicken-and-egg problem
In the RMIT platform, post-processing techniques separate out the signals from a multiplicity of fluorophores. This brings its own data-handling challenges, since file sizes from a single experiment can routinely reach 100 GB.
"There are two main computational aspects here: image stitching and spectral unmixing," noted Orth. "Spectral unmixing has been around for a while, so we more or less implemented it directly in our system; but image stitching was a bit more involved. As far as we know, no one had ever stitched together upwards of 10,000 multichannel images before. So we had to write some custom routines for that task, addressing issues such as equalizing the exposure of each small image."
The clinical impact of the system could ultimately be considerable, especially in fields such as drug discovery. Assessing the impact of a new drug involves testing large numbers of cells with different doses under various conditions, and imaging can be a significant bottleneck in the process. But despite the appeal of multispectral imaging, no systems to date have been purpose-built for large-FOV imaging and high-throughput screening.
"I think there has been a bit of a chicken-and-egg problem," commented Orth. "Researchers won't develop assays with more than a few fluorophores because they know that they can't read out the assay with their microscope, while microscope manufacturers are reluctant to spend R&D and marketing money on a microscope for which the market is not already well established. Being an academic institution, we were able to take the risk and develop this microscope to the point where we have shown that it works well."
Recent developments and price cuts in both machine-vision cameras and computing hardware also played a part: the team employed a USB3-connected camera costing under $1000 for this research, and still did not push it to its limits.
"Five years ago the camera and computer portion of this system probably would have cost five- to ten-times what it does today," Orth said. "The computing power needed to record, store and recall the data is now available in more or less standard desktop computers."
The next leap
The new gigapixel platform does still involve an implicit compromise between imaging speed and achievable resolution. The microlenses used by the team for illumination and collection have a numerical aperture (NA) of about 0.24; going higher would mean either reducing the focal length or increasing the size of the microlenses, and neither change is ideal.
"Reducing the focal length is tricky because of the working distance needed to focus through the sample coverslip, while increasing the microlens size means that there will be fewer microlenses within a field of view," Orth noted. "This in turn decreases the imaging speed - which is one of the main advantages of our system. We settled on using an NA of about 0.24 with one-micron resolution, because high-throughput screens typically have a Nyquist-limited resolution of about that size."
Having now completed its proof-of-principle study, the project is seeking partners to invest in the IP and implement the technology. Pharmaceutical companies are among the obvious candidates, since integrating the system into their production pipelines should provide a competitive advantage in terms of assay speed.
"Whoever embraces the next leap in high-content imaging will be developing drugs more quickly and at a lower cost," commented Orth. "I think there is a big commercial opportunity here."
About the Author
Tim Hayes is a contributor to Optics.org.