From AI@MBL: Researchers use Machine Learning to Create Virtual Labels in Cells
Fluorescent labeling of cells is an arduous task – as biologists know well. It is error-prone and inefficient, lighting up specific cell structures only one at a time. Staining is also harsh on cell cultures, often bleaching samples and shortening their useful life under the microscope.
Enter virtual staining, an AI approach to cell labeling that needs no chemicals. Using machine learning, researchers feed large volumes of microscopy images to artificial neural networks, training them to recognize patterns in the inputs. With enough training, these deep neural networks learn to distinguish landmark organelles and cell states within images. The models can then generate fluorescent-like images without any physical labeling from a fluorescent probe; the outputs are entirely digital, painted in pixels matched to a fluorescent stain.
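The core idea – learn a mapping from paired label-free and fluorescent images, then predict a "stain" for new, unlabeled images – can be sketched with a toy linear model. This is a simplified stand-in for the deep networks the article describes (the actual Cytoland models are far more sophisticated), with synthetic data in place of real microscopy:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: n tiny "label-free" image patches (stand-in for brightfield
# input) paired with "fluorescence" patches produced by a hidden linear map
# plus noise, standing in for the true structure-to-stain relationship.
n, p = 500, 64                       # 500 patches of 8x8 = 64 pixels, flattened
X = rng.normal(size=(n, p))          # label-free inputs
W_true = rng.normal(size=(p, p))     # unknown input-to-stain mapping
Y = X @ W_true + 0.05 * rng.normal(size=(n, p))  # fluorescent-like targets

# "Training": recover the mapping by least squares on the paired images.
# A deep network would minimize the same kind of pixel-wise loss, but with a
# nonlinear model trained by gradient descent instead of a closed-form solve.
W_hat, *_ = np.linalg.lstsq(X, Y, rcond=None)

# "Virtual staining": predict a fluorescent-like patch for a new, unstained
# input - the output is entirely digital, with no chemical label involved.
x_new = rng.normal(size=(1, p))
virtual_stain = x_new @ W_hat

# With enough paired training data, the learned map approaches the true one.
err = np.linalg.norm(W_hat - W_true) / np.linalg.norm(W_true)
```

The generalization problem the article raises shows up even in this toy: a model fit on one imaging condition (one distribution of `X`) can fail badly when the inputs shift, which is the gap Cytoland's training across diverse data aims to close.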
Despite these advances, AI models still struggle to generalize across cell types, experimental conditions, and microscopes. Now, a team of scientists has developed Cytoland – a collection of AI models that closes this generalization gap. As recently published in Nature Machine Intelligence, Cytoland’s models produce robust virtual stains across varied – and even imperfect – experimental conditions. Senior author Shalin Mehta of CZ Biohub, co-director of the AI@MBL course at the Marine Biological Laboratory, credits some of the paper’s experiments to discussions with former students in the course. Students also helped develop the published models, working with course teaching assistants and co-authors Ziwen Liu and Eduardo Hirata-Miyasaki. This August, a new class of AI@MBL students will train and test these models, as well as deep learning networks built on their own microscopy data – and perhaps inspire the next generation of AI models.
The Chan Zuckerberg Initiative and Howard Hughes Medical Institute support the AI@MBL course.