A planarian’s guide to growing a new head

Researchers at the Whitehead Institute have described a pathway by which planarians, freshwater flatworms with spectacular regenerative capabilities, can restore large portions of their nervous system, even regenerating a new head with a fully functional brain.

Shafaq Zia | Whitehead Institute
February 6, 2025

Cut off any part of this worm’s body and it will regrow. This is the spectacular yet mysterious regenerative ability of freshwater flatworms known as planarians. The lab of Whitehead Institute Member Peter Reddien investigates the principles underlying this remarkable feat. In their latest study, published in PLOS Genetics on February 6, first author staff scientist M. Lucila Scimone, Reddien, and colleagues describe how planarians restore large portions of their nervous system—even regenerating a new head with a fully functional brain—by manipulating a signaling pathway.

This pathway, called the Delta-Notch signaling pathway, enables neurons to guide the differentiation of a class of progenitors—immature cells that will differentiate into specialized types—into glia, the non-neuronal cells that support and protect neurons. The mechanism ensures that the spatial pattern and relative numbers of neurons and glia at a given location are precisely restored following injury.

“This process allows planarians to regenerate neural circuits more efficiently because glial cells form only where needed, rather than being produced broadly within the body and later eliminated,” said Reddien, who is also a professor of biology at Massachusetts Institute of Technology and an Investigator with the Howard Hughes Medical Institute.

Coordinating regeneration

Multiple cell types work together to form a functional human brain. These include neurons and a more abundant group of cells called glial cells—astrocytes, microglia, and oligodendrocytes. Although glial cells are not the fundamental units of the nervous system, they perform critical functions in maintaining the connections between neurons, called synapses, clearing away dead cells and other debris, and regulating neurotransmitter levels, effectively holding the nervous system together like glue. A few years ago, Reddien and colleagues discovered cells in planarians that looked like glial cells and performed similar neuro-supportive functions. This led to the first characterization of glial cells in planarians in 2016.

Unlike in mammals, where the same set of neural progenitors gives rise to both neurons and glia, glial cells in planarians originate from a separate, specialized group of progenitors. These progenitors, called phagocytic progenitors, can give rise not only to glial cells but also to pigment cells that determine the worm’s coloration, as well as other, less well understood cell types.

Why neurons and glia in planarians originate from distinct progenitors—and what factors ultimately determine the differentiation of phagocytic progenitors into glia—were questions that puzzled Reddien and his team. Then, a study showing that planarian neurons regenerate before glia formation led the researchers to wonder whether a signaling mechanism between neurons and phagocytic progenitors guides the specification of glia in planarians.

The first step to unravel this mystery was to look at the Notch signaling pathway, which is known to play a crucial role in the development of neurons and glia in other organisms, and determine its role in planarian glia regeneration. To do this, the researchers used RNA interference (RNAi)—a technique that decreases or completely silences the expression of genes—to turn off key genes involved in the Notch pathway and amputated the planarian’s head. It turned out Notch signaling is essential for glia regeneration and maintenance in planarians—no glial cells were found in the animal following RNAi, while the differentiation of other types of phagocytic cells was unaffected.

Of the different Notch signaling pathway components the researchers tested, turning off the genes notch-1, delta-2, and suppressor of hairless produced this phenotype. Interestingly, the signaling molecule Delta-2 was found on the surface of neurons, whereas Notch-1 was expressed in phagocytic progenitors.

With these findings in hand, the researchers hypothesized that interaction between Delta-2 on neurons and Notch-1 on phagocytic progenitors could be governing the final fate determination of glial cells in planarians.

To test the hypothesis, the researchers transplanted eyes either from planarians lacking the notch-1 gene or from planarians lacking the delta-2 gene into wild-type animals and assessed the formation of glial cells around the transplant site. They observed that glial cells still formed around the notch-1 deficient eyes, as notch-1 was still active in the glial progenitors of the host wild-type animal. However, no glial cells formed around the delta-2 deficient eyes, even with the Notch signaling pathway intact in phagocytic progenitors, confirming that delta-2 in the photoreceptor neurons is required for the differentiation of phagocytic progenitors into glia near the eye.

“This experiment really showed us that you have two faces of the same coin—one is the phagocytic progenitors expressing Notch-1, and one is the neurons expressing Delta-2—working together to guide the specification of glia in the organism,” said Scimone.

The researchers have named this phenomenon coordinated regeneration, as it allows neurons to influence the pattern and number of glia at specific locations without the need for a separate mechanism to adjust the relative numbers of neurons and glia.

The group is now interested in investigating whether the same phenomenon might also be involved in the regeneration of other tissue types.

AI model deciphers the code in proteins that tells them where to go

Whitehead Institute and CSAIL researchers created a machine-learning model to predict and generate protein localization, with implications for understanding and remedying disease.

Greta Friar | Whitehead Institute
February 13, 2025

Proteins are the workhorses that keep our cells running, and there are many thousands of types of proteins in our cells, each performing a specialized function. Researchers have long known that the structure of a protein determines what it can do. More recently, researchers are coming to appreciate that a protein’s localization is also critical for its function. Cells are full of compartments that help to organize their many denizens. Along with the well-known organelles that adorn the pages of biology textbooks, these spaces also include a variety of dynamic, membrane-less compartments that concentrate certain molecules together to perform shared functions. Knowing where a given protein localizes, and who it co-localizes with, can therefore be useful for better understanding that protein and its role in the healthy or diseased cell, but researchers have lacked a systematic way to predict this information.

Meanwhile, protein structure has been studied for over half a century, culminating in the artificial intelligence tool AlphaFold, which can predict protein structure from a protein’s amino acid code, the linear string of building blocks within it that folds to create its structure. AlphaFold and models like it have become widely used tools in research.

Proteins also contain regions of amino acids that do not fold into a fixed structure, but are instead important for helping proteins join dynamic compartments in the cell. MIT Professor Richard Young and colleagues wondered whether the code in those regions could be used to predict protein localization in the same way that other regions are used to predict structure. Other researchers have discovered some protein sequences that code for protein localization, and some have begun developing predictive models for protein localization. However, researchers did not know whether a protein’s localization to any dynamic compartment could be predicted based on its sequence, nor did they have a comparable tool to AlphaFold for predicting localization.

Now, Young, also a member of the Whitehead Institute for Biological Research; Young lab postdoc Henry Kilgore; Regina Barzilay, the School of Engineering Distinguished Professor for AI and Health at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL); and colleagues have built such a model, which they call ProtGPS. In a paper published on Feb. 6 in the journal Science, with first authors Kilgore and Barzilay lab graduate students Itamar Chinn, Peter Mikhael, and Ilan Mitnikov, the cross-disciplinary team debuts their model. The researchers show that ProtGPS can predict to which of 12 known types of compartments a protein will localize, as well as whether a disease-associated mutation will change that localization. Additionally, the research team developed a generative algorithm that can design novel proteins to localize to specific compartments.

“My hope is that this is a first step towards a powerful platform that enables people studying proteins to do their research,” Young says, “and that it helps us understand how humans develop into the complex organisms that they are, how mutations disrupt those natural processes, and how to generate therapeutic hypotheses and design drugs to treat dysfunction in a cell.”

The researchers also validated many of the model’s predictions with experimental tests in cells.

“It really excited me to be able to go from computational design all the way to trying these things in the lab,” Barzilay says. “There are a lot of exciting papers in this area of AI, but 99.9 percent of those never get tested in real systems. Thanks to our collaboration with the Young lab, we were able to test, and really learn how well our algorithm is doing.”

Developing the model

The researchers trained and tested ProtGPS on two batches of proteins with known localizations. They found that it could correctly predict where proteins end up with high accuracy. The researchers also tested how well ProtGPS could predict changes in protein localization based on disease-associated mutations within a protein. Many mutations — changes to the sequence for a gene and its corresponding protein — have been found to contribute to or cause disease based on association studies, but the ways in which the mutations lead to disease symptoms remain unknown.

Figuring out the mechanism for how a mutation contributes to disease is important because then researchers can develop therapies to fix that mechanism, preventing or treating the disease. Young and colleagues suspected that many disease-associated mutations might contribute to disease by changing protein localization. For example, a mutation could make a protein unable to join a compartment containing essential partners.

They tested this hypothesis by feeding ProtGPS more than 200,000 proteins with disease-associated mutations, and then asking it to both predict where those mutated proteins would localize and measure how much its prediction changed for a given protein from the normal to the mutated version. A large shift in the prediction indicates a likely change in localization.
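The screening logic described here can be sketched in a few lines. Everything below is illustrative: the predictor is a toy stand-in, not ProtGPS, and the compartment list is truncated from the 12 types the model actually covers.

```python
# Sketch of the mutation-screening logic (illustrative only; the real model
# is a trained classifier, and the scoring heuristic here is a placeholder).

COMPARTMENTS = ["nucleolus", "nuclear_speckle", "stress_granule"]  # truncated list

def predict_localization(sequence: str) -> dict:
    """Placeholder predictor: return a probability per compartment.
    A trained model such as ProtGPS would go here."""
    # Toy heuristic: arginine/glycine content nudges the nucleolus score.
    rg = sum(sequence.count(a) for a in "RG") / max(len(sequence), 1)
    scores = {"nucleolus": rg, "nuclear_speckle": 0.2, "stress_granule": 0.1}
    total = sum(scores.values())
    return {k: v / total for k, v in scores.items()}

def localization_shift(wild_type: str, mutant: str) -> float:
    """Total variation distance between the two predicted distributions.
    A large shift flags a candidate mis-localizing mutation."""
    p, q = predict_localization(wild_type), predict_localization(mutant)
    return 0.5 * sum(abs(p[c] - q[c]) for c in COMPARTMENTS)
```

Running `localization_shift` over every wild-type/mutant pair and ranking by the shift is the kind of screen the paragraph describes, with experiments then confirming the largest shifts.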

The researchers found many cases in which a disease-associated mutation appeared to change a protein’s localization. They tested 20 examples in cells, using fluorescence to compare where in the cell a normal protein and the mutated version of it ended up. The experiments confirmed ProtGPS’s predictions. Altogether, the findings support the researchers’ suspicion that mis-localization may be an underappreciated mechanism of disease, and demonstrate the value of ProtGPS as a tool for understanding disease and identifying new therapeutic avenues.

“The cell is such a complicated system, with so many components and complex networks of interactions,” Mitnikov says. “It’s super interesting to think that with this approach, we can perturb the system, see the outcome of that, and so drive discovery of mechanisms in the cell, or even develop therapeutics based on that.”

The researchers hope that others begin using ProtGPS in the same way that they use predictive structural models like AlphaFold, advancing various projects on protein function, dysfunction, and disease.

Moving beyond prediction to novel generation

The researchers were excited about the possible uses of their prediction model, but they also wanted their model to go beyond predicting localizations of existing proteins, and allow them to design completely new proteins. The goal was for the model to make up entirely new amino acid sequences that, when formed in a cell, would localize to a desired location. Generating a novel protein that can actually accomplish a function — in this case, the function of localizing to a specific cellular compartment — is incredibly difficult. In order to improve their model’s chances of success, the researchers constrained their algorithm to only design proteins like those found in nature. This is an approach commonly used in drug design, for logical reasons; nature has had billions of years to figure out which protein sequences work well and which do not.

Because of the collaboration with the Young lab, the machine learning team was able to test whether their protein generator worked. The model had good results. In one round, it generated 10 proteins intended to localize to the nucleolus. When the researchers tested these proteins in the cell, they found that four of them strongly localized to the nucleolus, and others may have had slight biases toward that location as well.

“The collaboration between our labs has been so generative for all of us,” Mikhael says. “We’ve learned how to speak each other’s languages, in our case learned a lot about how cells work, and by having the chance to experimentally test our model, we’ve been able to figure out what we need to do to actually make the model work, and then make it work better.”

Being able to generate functional proteins in this way could improve researchers’ ability to develop therapies. For example, if a drug must interact with a target that localizes within a certain compartment, then researchers could use this model to design a drug to also localize there. This should make the drug more effective and decrease side effects, since the drug will spend more time engaging with its target and less time interacting with other molecules and causing off-target effects.

The machine learning team members are enthused about the prospect of using what they have learned from this collaboration to design novel proteins with other functions beyond localization, which would expand the possibilities for therapeutic design and other applications.

“A lot of papers show they can design a protein that can be expressed in a cell, but not that the protein has a particular function,” Chinn says. “We actually had functional protein design, and a relatively huge success rate compared to other generative models. That’s really exciting to us, and something we would like to build on.”

All of the researchers involved see ProtGPS as an exciting beginning. They anticipate that their tool will be used to learn more about the roles of localization in protein function and mis-localization in disease. In addition, they are interested in expanding the model’s localization predictions to include more types of compartments, testing more therapeutic hypotheses, and designing increasingly functional proteins for therapies or other applications.

“Now that we know that this protein code for localization exists, and that machine learning models can make sense of that code and even create functional proteins using its logic, that opens up the door for so many potential studies and applications,” Kilgore says.

A sum of their parts

Researchers in the Department of Biology at MIT use an AI-driven approach to computationally predict short amino acid sequences that can bind to or inhibit a target, with a potential for great impact on fundamental biological research and therapeutic applications.

Lillian Eden | Department of Biology
February 6, 2025

All biological function is dependent on how different proteins interact with each other. Protein-protein interactions facilitate everything from transcribing DNA and controlling cell division to higher-level functions in complex organisms.

Much remains unclear about how these functions are orchestrated on the molecular level, however, and how proteins interact with each other — either with other proteins or with copies of themselves. 

Recent findings have revealed that small protein fragments have a lot of functional potential. Even though they are incomplete pieces, short stretches of amino acids can still bind to interfaces of a target protein, recapitulating native interactions. Through this process, they can alter that protein’s function or disrupt its interactions with other proteins. 

Protein fragments could therefore empower basic research on protein interactions and cellular processes, and could potentially have therapeutic applications. 

Recently published in Proceedings of the National Academy of Sciences, a new computational method developed in the Department of Biology at MIT builds on existing AI models to computationally predict protein fragments that can bind to and inhibit full-length proteins in E. coli. Theoretically, this tool could lead to genetically encodable inhibitors against any protein. 

The work was done in the lab of Associate Professor of Biology and HHMI Investigator Gene-Wei Li in collaboration with the lab of Jay A. Stein (1968) Professor of Biology, Professor of Biological Engineering and Department Head Amy Keating.

Leveraging machine learning

The program, called FragFold, leverages AlphaFold, an AI model that has led to phenomenal advancements in biology in recent years due to its ability to predict protein folding and protein interactions. 

The goal of the project was to predict fragment inhibitors, which is a novel application of AlphaFold. The researchers on this project confirmed experimentally that more than half of FragFold’s predictions for binding or inhibition were accurate, even when researchers had no previous structural data on the mechanisms of those interactions. 

“Our results suggest that this is a generalizable approach to find binding modes that are likely to inhibit protein function, including for novel protein targets, and you can use these predictions as a starting point for further experiments,” says co-first and corresponding author Andrew Savinov, a postdoc in the Li Lab. “We can really apply this to proteins without known functions, without known interactions, without even known structures, and we can put some credence in these models we’re developing.”

One example is FtsZ, a protein that is key for cell division. It is well-studied but contains a region that is intrinsically disordered and, therefore, especially challenging to study. Disordered proteins are dynamic, and their functional interactions are very likely fleeting — occurring so briefly that current structural biology tools can’t capture a single structure or interaction. 

The researchers leveraged FragFold to explore the activity of fragments of FtsZ, including fragments of the intrinsically disordered region, to identify several new binding interactions with various proteins. This leap in understanding confirms and expands upon previous experiments measuring FtsZ’s biological activity. 

This progress is significant in part because it was made without solving the disordered region’s structure, and because it exhibits the potential power of FragFold.

“This is one example of how AlphaFold is fundamentally changing how we can study molecular and cell biology,” Keating says. “Creative applications of AI methods, such as our work on FragFold, open up unexpected capabilities and new research directions.”

Inhibition, and beyond

The researchers accomplished these predictions by computationally fragmenting each protein and then modeling how those fragments would bind to interaction partners they thought were relevant.
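The fragmentation step amounts to a sliding window over the amino acid sequence. A minimal sketch follows; the window length and step size are illustrative choices, not the parameters used in the paper.

```python
def tile_fragments(sequence: str, length: int = 30, step: int = 5):
    """Yield (start, fragment) tiles covering the full-length protein.
    Each tile would then be modeled against a candidate binding partner."""
    for start in range(0, max(len(sequence) - length, 0) + 1, step):
        yield start, sequence[start:start + length]

# e.g. a 100-residue protein yields overlapping 30-mers every 5 residues
frags = list(tile_fragments("A" * 100))
```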

They compared the maps of predicted binding across the entire sequence to the effects of those same fragments in living cells, determined using high-throughput experimental measurements in which millions of cells each produce one type of protein fragment. 

AlphaFold uses co-evolutionary information to predict folding, and typically evaluates the evolutionary history of proteins using multiple sequence alignments (MSAs) for every single prediction run. The MSAs are critical, but are a bottleneck for large-scale predictions — they can take a prohibitive amount of time and computational power. 

For FragFold, the researchers instead pre-calculated the MSA for a full-length protein once and used that result to guide the predictions for each fragment of that full-length protein. 
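The reuse idea can be illustrated with a minimal sketch: build one alignment for the full-length protein, then slice its columns for each fragment rather than re-running the sequence search. The toy alignment below is hypothetical, and a real MSA carries gaps and metadata this sketch ignores.

```python
# Illustrative sketch of MSA reuse: one full-length alignment, sliced per fragment.
# Rows are aligned homolog sequences; columns correspond to residue positions.

def slice_msa(full_msa, start, end):
    """Reuse the full-protein MSA for a fragment spanning residues [start, end)."""
    return [row[start:end] for row in full_msa]

full_msa = ["MKTAYIAKQR",   # toy 10-column alignment (hypothetical sequences)
            "MKSAYLAKQR",
            "MRTAYIARQR"]
fragment_msa = slice_msa(full_msa, 2, 7)  # columns for a fragment at residues 2-6
```

Slicing a precomputed alignment is cheap, which is what makes predictions over every fragment of every target tractable.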

Savinov, together with Keating Lab alum Sebastian Swanson, PhD ’23, predicted inhibitory fragments of a diverse set of proteins in addition to FtsZ. Among the interactions they explored was a complex between lipopolysaccharide transport proteins LptF and LptG. A protein fragment of LptG inhibited this interaction, presumably disrupting the delivery of lipopolysaccharide, which is a crucial component of the E. coli outer cell membrane essential for cellular fitness.

“The big surprise was that we can predict binding with such high accuracy and, in fact, often predict binding that corresponds to inhibition,” Savinov says. “For every protein we’ve looked at, we’ve been able to find inhibitors.”

The researchers initially focused on protein fragments as inhibitors because whether a fragment could block an essential function in cells is a relatively simple outcome to measure systematically. Looking forward, Savinov is also interested in exploring fragment function outside inhibition, such as fragments that can stabilize the protein they bind to, enhance or alter its function, or trigger protein degradation. 

Design, in principle 

This research is a starting point for developing a systematic understanding of cellular design principles, and what elements deep-learning models may be drawing on to make accurate predictions. 

“There’s a broader, further-reaching goal that we’re building towards,” Savinov says. “Now that we can predict them, can we use the data we have from predictions and experiments to pull out the salient features to figure out what AlphaFold has actually learned about what makes a good inhibitor?” 

Savinov and collaborators also delved further into how protein fragments bind, exploring other protein interactions and mutating specific residues to see how those interactions change how the fragment interacts with its target. 

Experimentally examining the behavior of thousands of mutated fragments within cells, an approach known as deep mutational scanning, revealed key amino acids that are responsible for inhibition. In some cases, the mutated fragments were even more potent inhibitors than their natural, full-length sequences. 
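A deep mutational scan enumerates every single-residue variant of a fragment. The sketch below shows that enumeration; the variant naming follows the conventional wild-type/position/mutant shorthand, and the fragment is a toy example.

```python
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # the 20 standard amino acids

def single_mutants(fragment: str):
    """Enumerate every single-residue variant of a fragment, as a deep
    mutational scan would test. Yields (variant_name, mutated_sequence)."""
    for i, wt in enumerate(fragment):
        for aa in AMINO_ACIDS:
            if aa != wt:
                yield f"{wt}{i + 1}{aa}", fragment[:i] + aa + fragment[i + 1:]

variants = list(single_mutants("MKT"))  # 3 positions x 19 substitutions = 57 variants
```

Measuring inhibition for each variant in cells, then comparing across positions, is what reveals which residues are responsible for the effect.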

“Unlike previous methods, we are not limited to identifying fragments in experimental structural data,” says Swanson. “The core strength of this work is the interplay between high-throughput experimental inhibition data and the predicted structural models: the experimental data guides us towards the fragments that are particularly interesting, while the structural models predicted by FragFold provide a specific, testable hypothesis for how the fragments function on a molecular level.”

Savinov is excited about the future of this approach and its myriad applications.

“By creating compact, genetically encodable binders, FragFold opens a wide range of possibilities to manipulate protein function,” Li agrees. “We can imagine delivering functionalized fragments that can modify native proteins, change their subcellular localization, and even reprogram them to create new tools for studying cell biology and treating diseases.” 

Alumni Profile: Desmond Edwards, SB ’22

An interest in translating medicine for a wider audience

School of Science
February 6, 2025

Growing up hearing both English and Patois in rural Jamaica, Desmond Edwards always had an interest in understanding other languages, so he studied French in high school and minored in it at MIT. As a child with persistent illnesses, he was frustrated that doctors couldn’t explain the “how” and “why” of what was happening in his body. “I wanted to understand how an entity so small that we can’t even see it with most microscopes is able to get into a massively intricate human body and completely shut it down in a matter of days,” he says.

Edwards, now an MIT graduate and a PhD candidate in microbiology and immunology at Stanford University—with a deferred MD admission in hand as well—feels closer to answering those questions. The financial support he received at MIT from the Class of 1975 Scholarship Fund, he says, was one major reason that he chose MIT.

Support for research and discovery

I took a three-week Independent Activities Period boot camp designed to expose first-years with little or no research background to basic molecular biology and microbiology techniques. We had guidance from the professor and teaching assistants, but it was up to us what path we took. That intellectual freedom was part of what made me fall in love with academic research. The lecturer, Mandana Sassanfar, made it her personal mission to connect interested students to Undergraduate Research Opportunities Program placements, which is how I found myself in Professor Rebecca Lamason’s lab.

At the end of my first year, I debated whether to prioritize my academic research projects or leave for a higher-paying summer internship. My lab helped me apply for the Peter J. Eloranta Summer Undergraduate Research Fellowship, which provided funding that allowed me to stay for the summer, and I ended up staying in the lab for the rest of my time at MIT. One paper I coauthored (about developing new genetic tools to control pathogenic bacteria’s gene expression) was published this year.

French connections

French is one of the working languages of many global health programs, and being able to read documents in their original language has been helpful because many diseases that I care about impact Francophone countries like those in sub-Saharan and West Africa. In one French class, we had to analyze an original primary historical text, so I was able to look at an outbreak of plague in the 18th century and compare their public health response with ours to Covid-19. My MIT French classes have been useful in some very cool ways that I did not anticipate.

Translating medicine for the masses

When I go home and talk about my research, I often adapt folk stories, analogies, and relatable everyday situations to get points across since there might not be exact Patois words or phrases to directly convey what I’m describing. Taking these scientific concepts and breaking them all into bite-size pieces is important for the general American public too. I want to lead a scientific career that not only advances our understanding and treatment of infectious diseases, but also positively impacts policy, education, and outreach. Right now, this looks like a combination of being an academic/medical professor and eventually leading the Centers for Disease Control and Prevention.

Kingdoms collide as bacteria and cells form captivating connections

Studying the pathogen R. parkeri, researchers discovered the first evidence of extensive and stable interkingdom contacts between a pathogen and a eukaryotic organelle.

Lillian Eden | Department of Biology
January 24, 2025

In biology textbooks, the endoplasmic reticulum is often portrayed as a distinct, compact organelle near the nucleus, and is commonly known to be responsible for protein trafficking and secretion. In reality, the ER is vast and dynamic, spread throughout the cell and able to establish contact and communication with and between other organelles. These membrane contacts regulate processes as diverse as fat metabolism, sugar metabolism, and immune responses.

Exploring how pathogens manipulate and hijack essential processes to promote their own life cycles can reveal much about fundamental cellular functions and provide insight into viable treatment options for understudied pathogens.

New research from the Lamason Lab in the Department of Biology at MIT recently published in the Journal of Cell Biology has shown that Rickettsia parkeri, a bacterial pathogen that lives freely in the cytosol, can interact in an extensive and stable way with the rough endoplasmic reticulum, forming previously unseen contacts with the organelle.

It’s the first known example of a direct interkingdom contact site between an intracellular bacterial pathogen and a eukaryotic membrane.

The Lamason Lab studies R. parkeri as a model for infection of the more virulent Rickettsia rickettsii. R. rickettsii, carried and transmitted by ticks, causes Rocky Mountain Spotted Fever. Left untreated, the infection can cause symptoms as severe as organ failure and death.

Rickettsia is difficult to study because it is an obligate pathogen, meaning it can only live and reproduce inside living cells, much like a virus. Researchers must get creative to parse out fundamental questions and molecular players in the R. parkeri life cycle, and much remains unclear about how R. parkeri spreads.

Detour to the junction

First author Yamilex Acevedo-Sánchez, a BSG-MSRP-Bio program alum and a graduate student at the time, stumbled across the ER and R. parkeri interactions while trying to observe Rickettsia reaching a cell junction.

The current model for Rickettsia infection involves R. parkeri spreading cell to cell by traveling to the specialized contact sites between cells and being engulfed by the neighboring cell in order to spread. Listeria monocytogenes, which the Lamason Lab also studies, uses actin tails to forcefully propel itself into a neighboring cell. By contrast, R. parkeri can form an actin tail, but loses it before reaching the cell junction. Somehow, R. parkeri is still able to spread to neighboring cells.

After an MIT seminar about the ER’s lesser-known functions, Acevedo-Sánchez developed a cell line to observe whether Rickettsia might be spreading to neighboring cells by hitching a ride on the ER to reach the cell junction.

Instead, she saw an unexpectedly high percentage of R. parkeri surrounded and enveloped by the ER, at a distance of about 55 nanometers. This distance is significant because membrane contacts for interorganelle communication in eukaryotic cells form connections from 10-80 nanometers wide. The researchers ruled out that what they saw was an immune response, and the sections of the ER interacting with the R. parkeri were still connected to the wider network of the ER.

“I’m of the mind that if you want to learn new biology, just look at cells,” Acevedo-Sánchez says. “Manipulating the organelle that establishes contact with other organelles could be a great way for a pathogen to gain control during infection.”

The stable connections were unexpected because the ER is constantly breaking and reforming connections, lasting seconds or minutes. It was surprising to see the ER stably associating around the bacteria. Because R. parkeri is a cytosolic pathogen that exists freely in the cytosol of the cells it infects, it was also unexpected to see it surrounded by a membrane at all.

Small margins

Acevedo-Sánchez collaborated with the Center for Nanoscale Systems at Harvard University to view her initial observations at higher resolution using focused ion beam scanning electron microscopy. FIB-SEM involves taking a sample of cells and blasting them with a focused ion beam in order to shave off a section of the block of cells. With each layer, a high-resolution image is taken. The result of this process is a stack of images.

From there, Acevedo-Sánchez annotated regions of the images, such as mitochondria, Rickettsia, or the ER, and ORS Dragonfly, a machine learning program, sorted through the thousand or so images to classify those regions. That information was then used to create 3D models of the samples.
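ORS Dragonfly is commercial software, but the general workflow it automates, training a voxel classifier on a few hand-labeled slices and then applying it across the whole stack, can be pictured in a few lines. The sketch below is purely illustrative: it uses synthetic data and scikit-learn in place of the actual tools and images from the study.

```python
# Illustrative sketch: train a per-voxel classifier from sparsely labeled
# slices of an image stack, then segment the remaining slices.
# Synthetic data stands in for real FIB-SEM images.
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic stack: 20 slices of 64x64 pixels with a bright blob ("organelle")
stack = rng.normal(0.2, 0.05, size=(20, 64, 64))
zz, yy, xx = np.mgrid[0:20, 0:64, 0:64]
blob = ((yy - 32) ** 2 + (xx - 32) ** 2) < 15 ** 2
stack[blob] += 0.6  # the blob is brighter than background

def features(img):
    """Per-pixel features: raw intensity plus a smoothed version."""
    return np.stack([img, gaussian_filter(img, 2)], axis=-1)

# "Manually" label two slices (here, ground truth stands in for annotation)
train_slices = [0, 10]
X = np.concatenate([features(stack[i]).reshape(-1, 2) for i in train_slices])
y = np.concatenate([blob[i].reshape(-1).astype(int) for i in train_slices])

clf = RandomForestClassifier(n_estimators=20, random_state=0).fit(X, y)

# Apply the classifier to every slice to build a 3D label volume
segmented = np.stack(
    [clf.predict(features(stack[i]).reshape(-1, 2)).reshape(64, 64)
     for i in range(stack.shape[0])]
)
print(segmented.shape)
```

Real FIB-SEM segmentation uses far richer features and much larger volumes, but the label-a-little, predict-the-rest structure is the same.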

Acevedo-Sánchez noted that less than 5 percent of R. parkeri formed connections with the ER — but features present in only small proportions are known to be critical for R. parkeri infection. R. parkeri can exist in two states: motile, with an actin tail, and nonmotile, without one. Mutants unable to form actin tails cannot spread to adjacent cells, yet even in nonmutants the percentage of R. parkeri with tails starts at about 2 percent in early infection and never exceeds 15 percent at its height.

The ER only interacts with nonmotile R. parkeri, and those interactions increased 25-fold in mutants that couldn’t form tails.

Creating connections

Co-authors Acevedo-Sánchez, Patrick Woida, and Caroline Anderson also investigated possible ways the connections with the ER are mediated. VAP proteins, which mediate ER interactions with other organelles, are known to be co-opted by other pathogens during infection.

During infection by R. parkeri, VAP proteins were recruited to the bacteria; when VAP proteins were knocked out, the frequency of interactions between R. parkeri and the ER decreased, indicating R. parkeri may be taking advantage of these cellular mechanisms for its own purposes during infection.

Although Acevedo-Sánchez now works as a senior scientist at AbbVie, the Lamason Lab is continuing the work of exploring the molecular players that may be involved, how these interactions are mediated, and whether the contacts affect the host or bacteria’s life cycle.

Senior author and associate professor of biology Rebecca Lamason noted that these potential interactions are particularly interesting because bacteria and mitochondria are thought to have evolved from a common ancestor. The Lamason Lab has been exploring whether R. parkeri could form the same membrane contacts that mitochondria do, although they haven’t proven that yet. So far, R. parkeri is the only cytosolic pathogen that has been observed behaving this way.

“It’s not just bacteria accidentally bumping into the ER. These interactions are extremely stable. The ER is clearly extensively wrapping around the bacterium, and is still connected to the ER network,” Lamason says. “It seems like it has a purpose — what that purpose is remains a mystery.”

Alumni Profile: Matthew Dolan, SB ’81

From Bench to Bedside and Beyond

Lillian Eden | Department of Biology
January 16, 2025

Matthew Dolan, SB ’81, worked in the U.S. and abroad during a fascinating time in the field of immunology and virology.

In medical school, Matthew Dolan, SB ’81, briefly considered specializing in orthopedic surgery because of the materials science nature of the work, but he soon realized that he didn’t have the innate skills it required.

“I’ll be honest with you — I can’t parallel park,” he jokes. “You can consider a lot of things, but if you find the things that you’re good at and that excite you, you can hopefully move forward with those.” 

Dolan certainly has, tackling problems from bench to bedside and beyond. Both in the U.S. and abroad through the Air Force, Dolan has emerged as a leader in immunology and virology, and has served as Director of the Defense Institute for Medical Operations. He’s worked on everything from foodborne illnesses and Ebola to biological weapons and COVID-19, and has even been a guest speaker on NPR’s Science Friday.

“This is fun and interesting, and I believe that, and I work hard to convey that — and it’s contagious,” he says. “You can affect people with that excitement.” 

Pieces of the Puzzle

Dolan fondly recalls his years at MIT, and is still in touch with many of the “brilliant” and “interesting” friends he made while in Cambridge. 

He notes that the challenges that were the most rewarding in his career were also the ones that MIT had uniquely prepared him for. Dolan, a Course 7 major, naturally took many classes outside of Biology as part of his undergraduate studies: organic chemistry was foundational for understanding toxicology when he studied chemical weapons, and outbreaks of pathogens like Legionella, which causes pneumonia and can spread through water systems such as ice machines or air conditioners, are tackled at the interface between public health and ecology.

Matthew Dolan stateside with his German Shepherd Sophie. Photo courtesy of Matthew Dolan.

“I learned that learning can be a high-intensity experience,” Dolan recalls. “You can be aggressive in your learning; you can learn and excel in a wide variety of things and gather up all the knowledge and knowledgeable people to work together towards solutions.”

Dolan, for example, worked in the Amazon Basin in Peru on a public health crisis: a sharp rise in childhood mortality due to malaria. The cause was a few degrees removed from the immediate problem: human agriculture had altered the Amazon’s tributaries, producing still, stagnant water where there had been rushing streams and rivers. This change in the environment allowed a mosquito species of “avid human biters” to thrive.

“It can be helpful and important for some people to have a really comprehensive and contextual view of scientific problems and biological problems,” he says. “It’s very rewarding to put the pieces in a puzzle like that together.” 

Choosing To Serve

Dolan says a key to finding meaning in his work, especially during difficult times, is a sentiment from Alsatian polymath and Nobel Peace Prize winner Albert Schweitzer: “The only ones among you who will be really happy are those who will have sought and found how to serve.”

One of Dolan’s early formative experiences was working in the heart of the HIV/AIDS epidemic, at a time when there was no effective treatment. No matter how hard he worked, the patients would still die. 

“Failure is not an option — unless you have to fail. You can’t let the failures destroy you,” he says. “There are a lot of other battles out there, and it’s self-indulgent to ignore them and focus on your woe.” 

Lasting Impacts

Dolan couldn’t pick a favorite country, but notes that he’s always impressed seeing how people value the chance to excel with science and medicine when offered resources and respect. Ultimately, everyone he’s worked with, no matter their differences, was committed to solving problems and improving lives. 

Dolan worked in Russia after the Berlin Wall fell, on HIV/AIDS in Moscow and tuberculosis in the Russian Far East. Although relations with Russia are currently tense, to say the least, Dolan remains optimistic about a brighter future.

“People that were staunch adversaries can go on to do well together,” he says. “Sometimes, peace leads to partnership. Remembering that it was once possible gives me great hope.” 

Dolan understands that the most lasting impact he has had is, likely, teaching: time marches on, and discoveries can be lost to history, but teaching and training people continues and propagates. In addition to guiding the next generation of healthcare specialists, Dolan also developed programs in laboratory biosafety and biosecurity with the State Department and the Defense Department, and taught those programs around the world. 

“Working in prevention gives you the chance to take care of process problems before they become people problems — patient care problems,” he says. “I have been so impressed with the courageous and giving people that have worked with me.” 

Cellular interactions help explain vascular complications due to COVID-19 virus infection

Whitehead Institute Founding Member Rudolf Jaenisch and colleagues have found that cellular interactions help explain how SARS-CoV-2, the virus that causes COVID-19, could have such significant vascular complications, including blood clots, heart attacks, and strokes.

Greta Friar | Whitehead Institute
December 31, 2024

COVID-19 is a respiratory disease primarily affecting the lungs. However, the SARS-CoV-2 virus that causes COVID-19 surprised doctors and scientists by triggering an unusually large percentage of patients to experience vascular complications – issues related to blood flow, such as blood clots, heart attacks, and strokes.

Whitehead Institute Founding Member Rudolf Jaenisch and colleagues wanted to understand how this respiratory virus could have such significant vascular effects. They used pluripotent stem cells to generate three relevant vascular and perivascular cell types—cells that surround and help maintain blood vessels—so they could closely observe the effects of SARS-CoV-2 on the cells. Instead of using existing methods to generate the cells, the researchers developed a new approach, providing them with fresh insights into the mechanisms by which the virus causes vascular problems. The researchers found that SARS-CoV-2 primarily infects perivascular cells and that signals from these infected cells are sufficient to cause dysfunction in neighboring vascular cells, even when the vascular cells are not themselves infected.

In a paper published in the journal Nature Communications on December 30, Jaenisch, postdoc in his lab Alexsia Richards, Harvard University Professor and Wyss Institute for Biologically Inspired Engineering Member David Mooney, and then-postdoc in the Jaenisch and Mooney labs Andrew Khalil share their findings and present a scalable stem cell-derived model system with which to study vascular cell biology and test medical therapies.

A new problem requires a new approach

When the COVID-19 pandemic began, Richards, a virologist, quickly pivoted her focus to SARS-CoV-2. Khalil, a bioengineer, had already been working on a new approach to generate vascular cells. The researchers realized that a collaboration could provide Richards with the research tool she needed and Khalil with an important research question to which his tool could be applied.

The three cell types that Khalil’s approach generated were endothelial cells, the vascular cells that form the lining of blood vessels; and smooth muscle cells and pericytes, perivascular cells that surround blood vessels and provide them with structure and maintenance, among other functions. Khalil’s biggest innovation was to generate all three cell types in the same media—the mixture of nutrients and signaling molecules in which stem cell-derived cells are grown.

The combination of signals in the media determines the final cell type into which a stem cell will mature, so it is much easier to grow each cell type separately in specially tailored media than to find a mixture that works for all three. Typically, Richards explains, virologists will generate a desired cell type using the easiest method, which means growing each cell type and then observing the effects of viral infection on it in isolation. However, this approach can limit results in several ways. Firstly, it can make it challenging to distinguish the differences in how cell types react to a virus from the differences caused by the cells being grown in different media.

“By making these cells under identical conditions, we could see in much higher resolution the effects of the virus on these different cell populations, and that was essential in order to form a strong hypothesis of the mechanisms of vascular symptom risk and progression,” Khalil says.

Secondly, infecting isolated cell types with a virus does not accurately represent what happens in the body, where cells are in constant communication as they react to viral exposure. Indeed, Richards’ and Khalil’s work ultimately revealed that the communication between infected and uninfected cell types plays a critical role in the vascular effects of COVID-19.

“The field of virology often overlooks the importance of considering how cells influence other cells and designing models to reflect that,” Richards says. “Cells do not get infected in isolation, and the value of our model is that it allows us to observe what’s happening between cells during infection.”

Viral infection of smooth muscle cells has broader, indirect effects

When the researchers exposed their cells to SARS-CoV-2, the smooth muscle cells and pericytes became infected, the former at especially high levels, and the infection triggered strong inflammatory gene expression; the endothelial cells, however, resisted infection. Endothelial cells did show some response to viral exposure, likely due to interactions with proteins on the virus’ surface. Typically, endothelial cells press tightly together to form a firm barrier that keeps blood inside of blood vessels and prevents viruses from getting out. When exposed to SARS-CoV-2, the junctions between endothelial cells appeared to weaken slightly. The cells also had increased levels of reactive oxygen species, damaging byproducts of certain cellular processes.

However, big changes in endothelial cells occurred only after the cells were exposed to infected smooth muscle cells. This triggered high levels of inflammatory signaling within the endothelial cells and changed the expression of many genes relevant to immune response. Some of the affected genes are involved in coagulation pathways, which thicken blood and so can cause blood clots and related vascular events. The junctions between endothelial cells weakened much more significantly after exposure to infected smooth muscle cells, which would lead to blood leakage and viral spread. All of these changes occurred without SARS-CoV-2 ever infecting the endothelial cells.

This work shows that viral infection of smooth muscle cells, and their resultant signaling to endothelial cells, is the lynchpin in the vascular damage caused by SARS-CoV-2. This would not have been apparent if the researchers had not been able to observe the cells interacting with each other.

Clinical relevance of stem cell results

The effects that the researchers observed were consistent with patient data. Some of the genes whose expression changed in their stem cell-derived model had been identified as markers of high risk for vascular complications in COVID-19 patients with severe infections. Additionally, the researchers found that a later strain of SARS-CoV-2, an Omicron variant, had much weaker effects on the vascular and perivascular cells than did the original viral strain. This is consistent with the reduced levels of vascular complications seen in COVID-19 patients infected with recent strains.

Having identified smooth muscle cells as the main site of SARS-CoV-2 infection in the vascular system, the researchers next used their model system to test one drug’s ability to prevent infection of smooth muscle cells. They found that the drug, N,N-dimethyl-D-erythro-sphingosine, could reduce infection of that cell type without harming smooth muscle or endothelial cells. Although preventing vascular complications of COVID-19 is not as pressing a need with current viral strains, the researchers see this experiment as proof that their stem cell model could be used for future drug development. New coronaviruses and other pathogens are frequently evolving, and when a future virus causes vascular complications, this model could be used to quickly test drugs to find potential therapies while the need is still high. The model system could also be used to answer other questions about vascular cells, how these cells interact, and how they respond to viruses.

“By integrating bioengineering strategies into the analysis of a fundamental question in viral pathology, we addressed important practical challenges in modeling human disease in culture and gained new insights into SARS-CoV-2 infection,” Mooney says.

“Our interdisciplinary approach allowed us to develop an improved stem cell model for infection of the vasculature,” says Jaenisch, who is also a professor of biology at the Massachusetts Institute of Technology. “Our lab is already applying this model to other questions of interest, and we hope that it can be a valuable tool for other researchers.”

An abundant phytoplankton feeds a global network of marine microbes

New findings illuminate how Prochlorococcus’ nightly “cross-feeding” plays a role in regulating the ocean’s capacity to cycle and store carbon.

Jennifer Chu | MIT News
January 3, 2025

One of the hardest-working organisms in the ocean is the tiny, emerald-tinged Prochlorococcus marinus. These single-celled “picoplankton,” which are smaller than a human red blood cell, can be found in staggering numbers throughout the ocean’s surface waters, making Prochlorococcus the most abundant photosynthesizing organism on the planet. (Collectively, Prochlorococcus fix as much carbon as all the crops on land.) Scientists continue to find new ways that the little green microbe is involved in the ocean’s cycling and storage of carbon.

Now, MIT scientists have discovered a new ocean-regulating ability in the small but mighty microbes: cross-feeding of DNA building blocks. In a study appearing today in Science Advances, the team reports that Prochlorococcus shed these excess compounds into their surroundings, where they are then “cross-fed,” or taken up by other ocean organisms, as nutrients, as an energy source, or for regulating metabolism. Prochlorococcus’ rejects, then, are other microbes’ resources.

What’s more, this cross-feeding occurs on a regular cycle: Prochlorococcus tend to shed their molecular baggage at night, when enterprising microbes quickly consume the cast-offs. For a microbe called SAR11, the most abundant bacteria in the ocean, the researchers found that the nighttime snack acts as a relaxant of sorts, forcing the bacteria to slow down their metabolism and effectively recharge for the next day.

Through this cross-feeding interaction, Prochlorococcus could be helping many microbial communities to grow sustainably, simply by giving away what it doesn’t need. And they’re doing so in a way that could set the daily rhythms of microbes around the world.

“The relationship between the two most abundant groups of microbes in ocean ecosystems has intrigued oceanographers for years,” says co-author and MIT Institute Professor Sallie “Penny” Chisholm, who played a role in the discovery of Prochlorococcus in 1986. “Now we have a glimpse of the finely tuned choreography that contributes to their growth and stability across vast regions of the oceans.”

Given that Prochlorococcus and SAR11 suffuse the surface oceans, the team suspects that the exchange of molecules from one to the other could amount to one of the major cross-feeding relationships in the ocean, making it an important regulator of the ocean carbon cycle.

“By looking at the details and diversity of cross-feeding processes, we can start to unearth important forces that are shaping the carbon cycle,” says the study’s lead author, Rogier Braakman, a research scientist in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS).

Other MIT co-authors include Brandon Satinsky, Tyler O’Keefe, Shane Hogle, Jamie Becker, Robert Li, Keven Dooley, and Aldo Arellano, along with Krista Longnecker, Melissa Soule, and Elizabeth Kujawinski of Woods Hole Oceanographic Institution (WHOI).

Spotting castaways

Cross-feeding occurs throughout the microbial world, though the process has mainly been studied in close-knit communities. In the human gut, for instance, microbes are in close proximity and can easily exchange and benefit from shared resources.

By comparison, Prochlorococcus are free-floating microbes that are regularly tossed and mixed through the ocean’s surface layers. While scientists assume that the plankton are involved in some amount of cross-feeding, exactly how this occurs, and who would benefit, have historically been challenging to probe; any stuff that Prochlorococcus cast away would have vanishingly low concentrations, and be exceedingly difficult to measure.

But in work published in 2023, Braakman teamed up with scientists at WHOI, who pioneered ways to measure small organic compounds in seawater. In the lab, they grew various strains of Prochlorococcus under different conditions and characterized what the microbes released. They found that among the major “exudates,” or released molecules, were purines and pyrimidines, which are molecular building blocks of DNA. The molecules also happen to be nitrogen-rich, a fact that puzzled the team. Prochlorococcus are mainly found in ocean regions that are low in nitrogen, so it was assumed they’d want to retain any and all nitrogen-containing compounds they can. Why, then, were they instead throwing such compounds away?

Global symphony

In their new study, the researchers took a deep dive into the details of Prochlorococcus’ cross-feeding and how it influences various types of ocean microbes.

They set out to study how Prochlorococcus use purines and pyrimidines in the first place, before expelling the compounds into their surroundings. They compared published genomes of the microbes, looking for genes that encode purine and pyrimidine metabolism. Tracing these genes through the genomes, the team found that once the compounds are produced, they are used to make DNA and replicate the microbes’ genome. Any leftover purines and pyrimidines are recycled and used again, though a fraction is ultimately released into the environment. Prochlorococcus appear to make the most of the compounds, then cast off what they can’t.

The team also looked to gene expression data and found that genes involved in recycling purine and pyrimidine peak several hours after the recognized peak in genome replication that occurs at dusk. The question then was: What could be benefiting from this nightly shedding?

For this, the team looked at the genomes of more than 300 heterotrophic microbes — organisms that consume organic carbon rather than making it themselves through photosynthesis. They suspected that such carbon-feeders could be likely consumers of Prochlorococcus’ organic rejects. They found most of the heterotrophs contained genes for taking up either purines or pyrimidines, or in some cases both, suggesting microbes have evolved along different paths in terms of how they cross-feed.

The group zeroed in on one purine-preferring microbe, SAR11, as it is the most abundant heterotrophic microbe in the ocean. When they then compared the genes across different strains of SAR11, they found that various types use purines for different purposes, from simply taking them up and using them intact to breaking them down for their energy, carbon, or nitrogen. What could explain the diversity in how the microbes were using Prochlorococcus’ cast-offs?

It turns out the local environment plays a big role. Braakman and his collaborators performed a metagenome analysis, comparing the collectively sequenced genomes of all microbes in over 600 seawater samples from around the world and focusing on SAR11 bacteria. Each metagenome was collected alongside measurements of the environmental conditions and geographic location of its sample. This analysis showed that the bacteria gobble up purine for its nitrogen when the nitrogen in seawater is low, and for its carbon or energy when nitrogen is in surplus, revealing the selective pressures shaping these communities in different ocean regimes.
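The logic of that comparison can be pictured with a toy analysis: bin samples by nitrogen level, then compare how often SAR11 metagenomes carry genes for salvaging purine nitrogen versus burning purines for carbon or energy. All column names and numbers below are invented for illustration; they are not data from the study.

```python
# Toy illustration of a metagenome comparison: invented sample data relating
# seawater nitrogen levels to how SAR11 strains use purines.
import pandas as pd

samples = pd.DataFrame({
    "nitrate_uM":           [0.1, 0.2, 0.1, 5.0, 6.5, 7.2],
    # fraction of SAR11 reads carrying nitrogen-salvage genes (invented)
    "purine_N_salvage":     [0.90, 0.80, 0.85, 0.20, 0.15, 0.10],
    # fraction carrying carbon/energy-catabolism genes (invented)
    "purine_C_catabolism":  [0.10, 0.15, 0.10, 0.70, 0.80, 0.85],
})

# Classify each sample as nitrogen-poor or nitrogen-rich
samples["regime"] = pd.cut(samples["nitrate_uM"], bins=[0, 1, 100],
                           labels=["N-poor", "N-rich"])

# Average gene frequencies per nitrogen regime
summary = samples.groupby("regime", observed=True)[
    ["purine_N_salvage", "purine_C_catabolism"]].mean()
print(summary)
```

In this mock-up, nitrogen-salvage genes dominate in nitrogen-poor samples and carbon/energy-use genes dominate in nitrogen-rich ones, mirroring the pattern the researchers describe.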

“The work here suggests that microbes in the ocean have developed relationships that advance their growth potential in ways we don’t expect,” says co-author Kujawinski.

Finally, the team carried out a simple experiment in the lab to see if they could directly observe a mechanism by which purine acts on SAR11. They grew the bacteria in cultures, exposed them to various concentrations of purine, and unexpectedly found that it caused the cells to slow down their normal metabolic activities and even their growth. However, when the researchers put these same cells under environmentally stressful conditions, the cells kept growing strong and healthy, as if the metabolic pause induced by purines had primed them for growth and helped them avoid the effects of the stress.

“When you think about the ocean, where you see this daily pulse of purines being released by Prochlorococcus, this provides a daily inhibition signal that could be causing a pause in SAR11 metabolism, so that the next day when the sun comes out, they are primed and ready,” Braakman says. “So we think Prochlorococcus is acting as a conductor in the daily symphony of ocean metabolism, and cross-feeding is creating a global synchronization among all these microbial cells.”

This work was supported, in part, by the Simons Foundation and the National Science Foundation.

From Molecules to Memory

On a biological foundation of ions and proteins, the brain forms, stores, and retrieves memories to inform intelligent behavior.

Noah Daly | Department of Biology
December 23, 2024

Whenever you go out to a restaurant to celebrate, your brain retrieves memories while forming new ones. You notice the room is elegant, that you’re surrounded by people you love, having meaningful conversations, and doing it all with good manners. Encoding these precious moments (and not barking at your waiter, expecting dessert before your appetizer), you rely heavily on plasticity, the ability of neurons to change the strength and quantity of their connections in response to new information or activity. The very existence of memory and our ability to retrieve it to guide our intelligent behavior are hypothesized to be movements of a neuroplastic symphony, manifested through chemical processes occurring across vast, interconnected networks of neurons.

During infancy, brain connectivity grows exponentially, rapidly increasing the number of synapses between neurons, some of which are then pruned back to select the most salient for optimal performance. This exuberant growth followed by experience-dependent optimization lays a foundation of connections to produce a functional brain, but the action doesn’t cease there. Faced with a lifetime of encountering and integrating new experiences, the brain will continue to produce and edit connections throughout adulthood, decreasing or increasing their strength to ensure that new information can be encoded.

There are a thousand times more connections in the brain than stars in the Milky Way galaxy. Neuroscientists have spent more than a century exploring that vastness for evidence of the biology of memory. In the last 30 years, advancements in microscopy, genetic sequencing and manipulation, and machine learning technologies have enabled researchers, including four MIT Professors of Biology working in The Picower Institute for Learning and Memory – Elly Nedivi, Troy Littleton, Matthew Wilson, and Susumu Tonegawa – to help refine and redefine our understanding of how plasticity works in the brain, what exactly memories are, how they are formed, consolidated, and even changed to suit our needs as we navigate an uncertain world.

Circuits and Synapses: Our Information Superhighway

Neuroscientists hypothesize that how memories come to be depends on how neurons are connected and how they can rewire these connections in response to new experiences and information. This connectivity occurs at the junction between two neurons, called a synapse. When a neuron wants to pass on a signal, it releases chemical messengers called neurotransmitters into the synaptic cleft from the end of a long protrusion called the axon, often called the “pre-synaptic” area.

These neurotransmitters, whose release is triggered by electrical impulses called action potentials, can bind to specialized receptors on the root-like structures of the receiving neuron, known as dendrites (the “post-synaptic” area). Dendrites are covered with receptors that are either excitatory or inhibitory, meaning they can increase or decrease the post-synaptic neuron’s chance of firing its own action potential and carrying the message further.

Not long ago, the scientific consensus was that the brain’s circuitry became hardwired in adulthood. However, a completely fixed system does not lend itself to incorporating new information.

“While the brain doesn’t make any new neurons, it constantly adds and subtracts connections between those neurons to optimize our most basic functions,” explains Nedivi. Unused synapses are pruned away to make room for more regularly used ones. Nedivi has pioneered techniques of two-photon microscopy to examine the plasticity of synapses on axons and dendrites in vivid, three-dimensional detail in living, behaving, and learning animals.

But how does the brain determine which synapses to strengthen and which to prune? “There are three ways to do this,” Littleton explains. “One way is to make the presynaptic side release more neurotransmitters to instigate a bigger response to the same behavioral stimulus. Another is to have the postsynaptic cell respond more strongly. This is often accomplished by adding glutamate receptors to the dendritic spine so that the same signal is detected at a higher level, essentially turning the radio volume up or down.” (Glutamate, one of the most prevalent neurotransmitters in the brain, is our main excitatory messenger and can be found in every region of our neural network.)

Littleton’s lab studies how neurons can turn that radio volume up or down by changing presynaptic as well as postsynaptic output. Characterizing many of the dozens of proteins involved helped Littleton discover in 2005, for instance, how signals from the post-synaptic area can make some pre-synaptic signals stronger and more active than others. “Our interest is really understanding how the building blocks of this critical connection between neurons work, so we study Drosophila, the simple fruit fly, as a model system to address these questions. We usually take genetic approaches where we can break the system by knocking out a gene or overexpressing it; that allows us to figure out precisely what the protein is doing.”

In general, the release of neurotransmitters can make it more or less likely that the receiving cell will continue the line of communication through activation of voltage-gated channels that initiate action potentials. When these action potentials arrive at presynaptic terminals, they can trigger that neuron to release its own neurotransmitters to influence downstream partners. The conversion of electrical signals to chemical transmitters requires presynaptic calcium channels, pores in the cell membrane that act as a switch, telling the cell to pass along the message in full, reduce the volume, or change the tune completely. By altering calcium channel function, which can be done using a host of neuromodulators or clinically relevant drugs, synaptic function can be tuned up or down to change communication between neurons.
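The steep relationship between calcium entry and transmitter release is classically approximated by a fourth-power, Hill-type dependence (the Dodge-Rahamimoff relation). The toy function below is a generic textbook sketch, not a model from the Littleton lab, but it shows why modest changes in calcium channel function can swing synaptic output dramatically.

```python
# Toy model: neurotransmitter release rises steeply with calcium influx,
# approximated by a Hill-type (fourth-power) saturation curve, so small
# changes in calcium channel function tune synaptic strength up or down.
def release_probability(ca, k=1.0, n=4):
    """Hill-type saturation: p = ca^n / (ca^n + k^n)."""
    return ca ** n / (ca ** n + k ** n)

# Halving calcium entry (e.g., via a neuromodulator acting on the channel)
# cuts release far more than twofold near the half-saturation point:
baseline = release_probability(1.0)   # 0.5 at the half-saturation point
modulated = release_probability(0.5)  # 0.5**4 / (0.5**4 + 1) ~ 0.059
print(baseline, modulated, baseline / modulated)
```

Near the half-saturation point, halving calcium entry cuts release roughly eightfold rather than twofold, which is what makes calcium channels such an effective volume knob for the synapse.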

The third mechanism, adding new synapses, has been one of the focal points of Nedivi’s research. Nedivi models this in the visual cortex, labeling and tracking cells in lab mice exposed to different visual experiences that stimulate plasticity.

In a 2016 study, Nedivi showed that the distribution of excitatory and inhibitory synaptic sites on dendrites fluctuates rapidly, with the number of inhibitory sites disappearing and reappearing in the course of a single day. The action, she explains, is in the spines that protrude from dendrites along their length and house post-synaptic areas.

“We found that some spines which were previously thought to have only excitatory synapses are actually dually innervated, meaning they have both excitatory and inhibitory synapses,” Nedivi says. “The excitatory synapses are always stable, and yet on the same spine, about 70% of the inhibitory synapses are dynamic, meaning they can come and go. It’s as if the excitatory synapses on the dually innervated spines are hard-wired, but their activity can be attenuated by the presence of an inhibitory synapse that can gate their activity.” Thus, Nedivi found that inhibitory synapses, which make up roughly 15% of the synaptic density of the brain as a whole, play an outsized role in managing the passage of signals that lead to the formation of memory.

“We didn’t start out thinking about it this way, but the inhibitory circuitry is so much more dynamic,” she says. “That’s where the plasticity is.”

Inside Engrams: Memory Storage & Recall

A brain that has made many connections and can continually edit them to process information is well set up for its neurons to work together to form a memory. Understanding the mystery of how it does this has fascinated Susumu Tonegawa, a molecular biologist who won the Nobel Prize for his earlier work in immunology.

“More than 100 years ago, it was theorized that, for the brain to form a biological basis for storing information, neurons form localized groupings called engrams,” Tonegawa explains. Whenever an experience exposes the brain to new information, synapses among ensembles of neurons undergo persistent chemical and physical changes to form an engram.

Engram cells can be reactivated and modified physically or chemically by a new learning experience. Repeating stimuli present during a prior learning experience (or at least some part of it) also allows the brain to retrieve some of that information.

In 1992, Tonegawa’s lab was the first to show that knocking out a gene for the synaptic protein alpha-CaMKII could disrupt memory formation, helping to establish molecular biology as a tool for understanding how memories are encoded. The lab has made numerous contributions on that front since then.

By 2012, neuroscience approaches had advanced to the point where Tonegawa and colleagues could directly test for the existence of engrams. In a study in Nature, Tonegawa’s lab reported that directly activating a subset of neurons involved in the formation of a memory (an engram) was sufficient to induce the behavioral expression of that memory. They pinpointed cells involved in forming a memory (a moment of fear instilled in a mouse by giving its foot a little shock) by tracking the activity-dependent expression of the protein c-Fos in neurons in the hippocampus. They then labeled these cells with light-sensitive ion channels that activate the neurons when exposed to light. After observing which cells were activated during the formation of a fear memory, the researchers traced the synaptic circuits linking them.

It turned out that they only needed to optically activate the neurons involved in the memory of the footshock to trigger the mouse to freeze (just like it does when returned to the fearful scene), which proved those cells were sufficient to elicit the memory. Later, Tonegawa and his team also found that when this memory forms, it forms simultaneously in the cortex and the basolateral amygdala, where the brain forms emotional associations. This discovery contradicted the standard theory of memory consolidation, where memories form in the hippocampus before migrating to the cortex for retrieval later.

Tonegawa has also found key distinctions between memory storage and recall. In 2017, he and colleagues induced a form of amnesia in mice by disrupting their ability to make proteins needed for strengthening synapses. The lab found that engrams could still be reactivated artificially, instigating the freezing behavior, even though they could not be retrieved anymore through natural recall cues. They dubbed these no-longer naturally retrievable memories “silent engrams.” The research showed that while synapse strengthening was needed to recall a memory, the mere pattern of connectivity in the engram was enough to store it.

While recalling memories stored in silent engrams is possible, they require stronger-than-normal stimuli to be activated. “This is caused in part by the lower density of dendritic spines on neurons that participate in silent engrams,” Tonegawa says. Notably, Tonegawa sees applications of this finding in studies of Alzheimer’s disease. Working with a mouse model that presents the early stages of the disease, Tonegawa’s lab could stimulate silent engrams to help the mice retrieve memories.

Making memory useful

Our neural circuitry is far from a hard drive or a scrapbook. Instead, the brain actively evaluates the information stored in our memories to build models of the world and then make modifications to better utilize our accumulated knowledge in intelligent behavior.

Processing memory involves structural and chemical changes that continue throughout life. Much of this work happens during offline periods such as sleep or quiet waking rest: to hit replay on essential events and simulate how they might be replicated in the future, we need to power down and let the mind work. These so-called “offline states,” and the processes of memory refinement and prediction they enable, fascinate Matt Wilson. Wilson has spent the last several decades examining the ways different regions of the brain communicate with one another during various states of consciousness to learn, retrieve, and augment memories in the service of an animal’s intelligent behavior.

“An organism that has successfully evolved an adaptive intelligent system already knows how to respond to new situations,” Wilson says. “They might refine their behavior, but the fact that they had adaptive behavior in the first place suggests that they have to have embedded some kind of a model of expectation that is good enough to get by with. When we experience something for the first time, we make refinements to the model–we learn–and then what we retain from that is what we think of as memory. So the question becomes, how do we refine those models based on experiences?”

Wilson’s fascination with resting states began during his postdoctoral research at the University of Arizona, where he noticed that a sleeping lab rat was producing the same electrical activity in its brain as it did while running through a maze. Since then, he has shown that different offline states, including different stages of sleep, serve different offline functions, such as replaying experiences or simulating them. In 2002, Wilson’s work with slow-wave sleep showed the important role the hippocampus plays in spatial learning. Using electrophysiology, in which probes are inserted directly into brain tissue, Wilson found that the sequential firing of the same hippocampal neurons that had activated while a rat sought pieces of chocolate at either end of a linear track recurred 20 times faster once the rat was in slow-wave sleep.

In 2006, Wilson co-authored a study in Nature showing that rats can retrace their steps after completing a maze. Using electrophysiological recordings of the activity of many individual neurons, Wilson showed that the rats replayed the memory of each turn they took in reverse, doing so multiple times whenever they had an opportunity to rest between trials. These replays manifested as sharp ripples in electrical activity, like those that occur during slow-wave sleep.
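The logic of classifying a candidate replay event as forward or reverse can be sketched in a toy form: compare the order in which place cells fire during the event against the order in which they fired during the run. This is only an illustration under a strong simplifying assumption (each cell fires exactly once per event); real analyses use rank-order statistics or Bayesian decoding on spike trains recorded during sharp-wave ripples.

```python
# Toy sketch of replay-direction classification. Cell names and the
# single-spike-per-event assumption are illustrative, not from the study.
def replay_direction(run_order, event_order):
    """Classify a candidate replay event as 'forward', 'reverse',
    or None relative to the firing order observed during the run."""
    if event_order == run_order:
        return "forward"
    if event_order == run_order[::-1]:
        return "reverse"
    return None

# Firing order of four hypothetical place cells along the track:
run = ["cellA", "cellB", "cellC", "cellD"]
# An event that repeats the sequence backwards is reverse replay:
print(replay_direction(run, ["cellD", "cellC", "cellB", "cellA"]))  # reverse
```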

“REM sleep, on the other hand, can produce novel recapitulation of action-based states, where long sequences and movement information are also repeated,” Wilson says. When your dog moves its legs during sleep, for example, it could be producing a full-fledged simulation of running. Three years after his initial replay study, Wilson found that rats can initiate replay from any point in the sequence of turns in the maze and can do so forward or in reverse.

“Memory is not just about storing my experience,” Wilson explains. “It’s about making modifications in an existing adaptive model, one that’s been developed based on prior experience. In the case of A.I.s such as large language models [like ChatGPT], you just dump everything in there. For biology, it’s all about the experience being folded into the evolutionary operating system, governed by developmental rules. In a sense, you can put this complexity into the machine, but you just can’t train an animal up de novo; there has to be something that allows it to work through these developmental mechanisms.”

The property of the brain that many neuroscientists believe enables this versatile, flexible, and adaptive approach to storing, recalling, and using memory is its plasticity. Because the brain’s machinery is molecular, it is constantly renewable and rewireable, allowing us to incorporate new experiences even as we apply prior experiences. Because we’ve had many dinners in many restaurants, we can navigate the familiar experience while appreciating the novelty of a celebration. We can look into the future, imagining similarly rewarding moments that have yet to come, and game out how we might get there. The marvels of memory allow us to see much of this information in real-time, and scientists at MIT continue to learn how this molecular system guides our behavior.

Imperiali Lab News Brief: combining bioinformatics and biochemistry

Parsing endless possibilities

Lillian Eden | Department of Biology
December 11, 2024

New research from the Imperiali Lab in the Department of Biology at MIT combines bioinformatics and biochemistry to reveal critical players in assembling glycans, the large sugar molecules on bacterial cell surfaces responsible for behaviors such as evading immune responses and causing infections.

In most cases, single-celled organisms such as bacteria interact with their environment through complex chains of sugars known as glycans bound to lipids on their outer membranes. Glycans orchestrate biological responses and interactions, such as evading immune responses and causing infections. 

The first step in assembling most bacterial glycans is the addition of a sugar-phosphate group onto a lipid, which is catalyzed by phosphoglycosyl transferases (PGTs) on the inner membrane. This first sugar is then further built upon by other enzymes in subsequent steps in an assembly-line-like pathway. These critical biochemical processes are challenging to explore because the proteins involved in these processes are embedded in membranes, which makes them difficult to isolate and study. 

Although glycans are found in all living organisms, the sugar molecules that compose glycans are especially diverse in bacteria. There are over 30,000 known bacterial PGTs, and hundreds of sugars for them to act upon. 

Research recently published in PNAS from the Imperiali Lab in the Department of Biology at MIT uses a combination of bioinformatics and biochemistry to predict clusters of “like-minded” PGTs and verify which sugars they will use in the first step of glycan assembly. 

Defining the biochemical machinery for these assembly pathways could reveal new strategies for tackling antibiotic-resistant strains of bacteria. This comprehensive approach could also be used to develop and test inhibitors, halting the assembly pathway at this critical first step. 

Exploring Sequence Similarity

First author Theo Durand, an undergraduate student from Imperial College London who studied at MIT for a year, worked in the Imperiali Lab as part of a research placement. Durand was first tasked with determining which sugars certain PGTs would use in the first step of glycan assembly, known as the sugar substrates of the PGTs. When those initial substrate-testing experiments didn’t work, Durand turned to the power of bioinformatics to develop predictive tools.

Strategically exploring the sugar substrates for PGTs is challenging due to the sheer number of PGTs and the diversity of bacteria, each with its own assorted set of glycans and glycoconjugates. To tackle this problem, Durand deployed a tool called a Sequence Similarity Network (SSN), part of a computational toolkit developed by the Enzyme Function Initiative. 

According to senior author Barbara Imperiali, Class of 1922 Professor of Biology and Chemistry, an SSN provides a powerful way to analyze protein sequences through comparisons of the sequences of tens of thousands of proteins. In an optimized SSN, similar proteins cluster together, and, in the case of PGTs, proteins in the same cluster are likely to share the same sugar substrate. 

For example, a previously uncharacterized PGT that appears in a cluster of PGTs whose first sugar substrate is FucNAc4N would also be predicted to use FucNAc4N. The researchers could then test that prediction to verify the accuracy of the SSN. 
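The clustering-and-prediction logic behind an SSN can be sketched in a few lines: treat each protein as a node, connect pairs whose similarity score clears a threshold, take connected components as clusters, and transfer the substrate annotation of characterized cluster members to uncharacterized ones. This is a minimal illustration with made-up protein names and toy percent-identity scores; a real SSN, such as those built with the Enzyme Function Initiative tools, is computed from all-by-all sequence alignment scores over tens of thousands of sequences.

```python
# Minimal SSN sketch: threshold pairwise scores into a graph, find
# connected components, and predict a substrate by cluster membership.
from collections import defaultdict

def build_ssn(similarity, threshold):
    """Connect proteins whose pairwise similarity meets the threshold,
    then return the connected components (clusters) of the graph."""
    edges = defaultdict(set)
    nodes = set()
    for (a, b), score in similarity.items():
        nodes.update((a, b))
        if score >= threshold:
            edges[a].add(b)
            edges[b].add(a)
    clusters, seen = [], set()
    for start in sorted(nodes):
        if start in seen:
            continue
        stack, component = [start], set()
        while stack:  # walk one connected component
            n = stack.pop()
            if n in component:
                continue
            component.add(n)
            stack.extend(edges[n] - component)
        seen |= component
        clusters.append(component)
    return clusters

def predict_substrate(clusters, annotations, query):
    """Predict a substrate for an uncharacterized PGT from annotated
    members of its cluster (None if the cluster gives no clear answer)."""
    for cluster in clusters:
        if query in cluster:
            known = {annotations[p] for p in cluster if p in annotations}
            return known.pop() if len(known) == 1 else None
    return None

# Hypothetical PGTs with toy similarity scores (names are illustrative):
scores = {("pgtA", "pgtB"): 92, ("pgtB", "pgtX"): 88,
          ("pgtA", "pgtX"): 85, ("pgtC", "pgtD"): 90,
          ("pgtA", "pgtC"): 30}
clusters = build_ssn(scores, threshold=80)
# pgtX falls in the same cluster as pgtA and pgtB, both annotated (in
# this toy data) as using FucNAc4N, so it is predicted to use FucNAc4N.
print(predict_substrate(clusters, {"pgtA": "FucNAc4N", "pgtB": "FucNAc4N"}, "pgtX"))
```

The threshold plays the role of the similarity cutoff that is tuned when optimizing a real SSN: set it too low and unrelated proteins merge into one cluster, too high and clusters fragment until no annotated neighbor remains to transfer a prediction from.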

FucNAc4N is the sugar substrate for the PGTs of Fusobacterium nucleatum (F. nucleatum), a bacterium that is normally present only in the oral cavity but is correlated with certain cancers and endometriosis, and of Streptococcus pneumoniae, a bacterium that causes pneumonia.

Adjusting the assay

The critical biochemical process of assembling glycans has historically been challenging to define, mainly because assembly is anchored to the interior side of the inner membrane of the bacterium. The purification process itself can be difficult, and the purified proteins don’t necessarily behave in the same manner once outside their native membrane environment.

To address this, the researchers modified a commercially available test to work with proteins still embedded in the membrane of the bacterium, thus saving them weeks of work to purify the proteins. They could then determine the substrate for the PGT by measuring whether there was activity. This first step in glycan assembly is chemically unique, and the test measures one of the reaction products. 

For PGTs whose substrate was unknown, Durand did a deep dive into the literature to find new substrates to test. FucNAc4N, the first sugar substrate for F. nucleatum, was, in fact, Durand’s favorite sugar – he found it in the literature and reached out to a former Imperiali Lab postdoc for the instructions and materials to make it. 

“I ended up down a rabbit hole where I was excited every time I found a new, weird sugar,” Durand recalls with a laugh. “These bacteria are doing a bunch of really complicated things and any tools to help us understand what is actually happening is useful.” 

Exploring inhibitors

Imperiali noted that this research both represents a huge step forward in our understanding of bacterial PGTs and their substrates and presents a pipeline for further exploration. She’s hoping to create a searchable database where other researchers can seed their own sequences into the SSN for their organisms of interest. 

This pipeline could also reveal antibiotic targets in bacteria. For example, she says, the team is using this approach to explore inhibitor development. 

The Imperiali lab worked with Karen Allen, a professor of Chemistry at Boston University, and graduate student Roxanne Siuda to test inhibitors, including ones for F. nucleatum, the bacterium correlated with certain cancers and endometriosis whose first sugar substrate is FucNAc4N. They are also hoping to obtain structures of inhibitors bound to the PGT to enable structure-guided optimization.

“We were able to, using the network, discover the substrate for a PGT, verify the substrate, use it in a screen, and test an inhibitor,” Imperiali says. “This is bioinformatics, biochemistry, and probe development all bundled together, and represents the best of functional genomics.”