3 Questions with new faculty member Zuri Sullivan: Exploring the mechanisms underlying changes during infection

Zuri Sullivan, a new assistant professor of biology and Whitehead Institute member, studies why we get sick, and whether aspects of illness, such as disrupted appetite, contribute to host defense.

Lillian Eden | Department of Biology
February 20, 2026

With respiratory illness season in full swing, a bad night’s sleep, sore throat, and desire to cancel dinner plans could all be considered hallmark symptoms of the flu, Covid-19, or other illnesses. Although everyone has, at some point, experienced illness and these stereotypical symptoms, the mechanisms that generate them are not well understood.

Zuri Sullivan, a new assistant professor in the MIT Department of Biology and core member of the Whitehead Institute for Biomedical Research, works at the interface of neuroscience, microbiology, physiology, and immunology to study the biological workings underlying illness. In this interview, she describes her work on immunity thus far as well as research avenues — and professional collaborations — she’s excited to explore at MIT.

Q: What is immunity, and why do we get sick in the first place?

A: We can think of immunity in two ways: the antimicrobial programs that defend against a pathogen directly, and sickness, the altered organismal state that happens when we get an infection.

Sickness itself arises from brain-immune system interaction. The immune system is talking to the brain, and then the brain has a system-wide impact on host defense via its ability to have top-down control of physiologic systems and behavior. People might assume that sickness is an unintended consequence of infection, that it happens because your immune system is active, but we hypothesize that it’s likely an adaptive process that contributes to host defense.

If we consider sickness as immunity at the organismal scale, I think of my work as bridging the dynamic immunological processes that occur at the cellular scale, the tissue scale, and the organismal scale. I’m interested in the molecular and cellular mechanisms by which the immune system communicates with the brain to generate changes in behavior and physiology, such as fever, loss of appetite, and changes in social interaction.

Q: What sickness behaviors fascinate you?

A: During my thesis work at Yale University, I studied how the gut processes different nutrients and the role of the immune system in regulating gut homeostasis in response to different kinds of food. I’m especially interested in the interaction between food, the immune system, and the brain. One of the things I’m most excited about is the reduction in appetite, or changes in food choice, because we have what I would consider pretty strong evidence that these may be adaptive.

Sleep is another area we’re interested in exploring. From their own subjective experience, everyone knows that sleep is often altered during infection.

I also don’t just want to examine snapshots in time. I want to characterize changes over the course of an infection. There’s probably going to be individual variability, which I think may be in part because pathogens are also changing over the course of an illness — we’re studying two different biological systems interacting with each other.

Q: What sorts of expertise are you hoping to recruit to your lab, and what collaborations are you excited about pursuing?

A: I really want to bring together different areas of biology to think about organism-wide questions. The thing that’s most important to me is people who are creative — I’d rather trainees come in with an interesting idea than a perfectly formed question within the bounds of what we already believe to be true. I’m also interested in people who would complement my expertise; I’m fascinated by microbiology, but I don’t have any formal training in it.

The Whitehead Institute is really invested in interdisciplinary work, and there’s a natural synergy between my work and the other labs in this small community at the Whitehead Institute.

I’ve been collaborating with Sebastian Lourido’s lab for a few years, looking at how Toxoplasma gondii influences social behavior, and I’m excited to invest more time in that project. I’m also interested in molecular neuroscience, which is a focus of Siniša Hrvatin’s lab. That lab is interested in the hypothalamus, and trying to understand the mechanisms that generate torpor. My work also focuses on the hypothalamus because it regulates homeostatic behaviors that change during sickness, such as appetite, sleep, social behavior, and body temperature.

By studying different sickness states generated by different kinds of pathogens — parasites, viruses, bacteria — we can ask really interesting questions about how and why we get sick.

How a unique class of neurons may set the table for brain development

A new MIT study from the Nedivi Lab finds that somatostatin-expressing neurons follow a unique trajectory when forming connections in the brain’s visual cortex that may help establish the conditions needed for sensory experience to refine circuits.

David Orenstein | The Picower Institute for Learning and Memory
January 14, 2026

The way the brain develops can shape us throughout our lives, so neuroscientists are intensely curious about how it happens. A new study by researchers in The Picower Institute for Learning and Memory at MIT, focused on visual cortex development in mice, reveals that an important class of neurons follows a set of rules that, while surprising, might just create the right conditions for circuit optimization.

During early brain development, multiple types of neurons emerge in the visual cortex (where the brain processes vision). Many are “excitatory,” driving the activity of brain circuits, and others are “inhibitory,” meaning they control that activity. Just like a car needs not only an engine and a gas pedal, but also a steering wheel and brakes, a healthy balance between excitation and inhibition is required for proper brain function. During a “critical period” of development in the visual cortex, soon after the eyes first open, excitatory and inhibitory neurons forge and edit millions of connections, or synapses, to adapt nascent circuits to the incoming flood of visual experience. Over many days, in other words, the brain optimizes its attunement to the world.

In the new study in The Journal of Neuroscience, a team led by MIT research scientist Josiah Boivin and Professor Elly Nedivi visually tracked somatostatin (SST)-expressing inhibitory neurons forging synapses with excitatory cells along their sprawling dendrite branches, illustrating the action before, during and after the critical period with unprecedented resolution. Several of the rules the SST cells appeared to follow were unexpected—for instance, unlike other cell types, their activity did not depend on visual input—but now that the scientists know these neurons’ unique trajectory, they have a new idea about how it may enable sensory activity to influence development: SST cells might help usher in the critical period by establishing the baseline level of inhibition needed to ensure that only certain types of sensory input will trigger circuit refinement.

“Why would you need part of the circuit that’s not really sensitive to experience? It could be that it’s setting things up for the experience-dependent components to do their thing,” said Nedivi, William R. and Linda R. Young Professor in The Picower Institute and MIT’s Departments of Biology and Brain and Cognitive Sciences.

Boivin added: “We don’t yet know whether SST neurons play a causal role in the opening of the critical period, but they are certainly in the right place at the right time to sculpt cortical circuitry at a crucial developmental stage.”

A unique trajectory

To visualize SST-to-excitatory synapse development, Nedivi and Boivin’s team used a genetic technique that pairs expression of synaptic proteins with fluorescent molecules to resolve the appearance of the “boutons” SST cells use to reach out to excitatory neurons. They then performed a technique called eMAP, developed by Kwanghun Chung’s lab in the Picower Institute, that expands and clears brain tissue to increase magnification, allowing super-resolution visualization of the actual synapses those boutons ultimately formed with excitatory cells along their dendrites. Co-author and postdoc Bettina Schmerl helped lead the eMAP work.

These new techniques revealed that SST bouton appearance and then synapse formation surged dramatically when the eyes opened and then as the critical period got underway. But while excitatory neurons during this timeframe are still maturing, first in the deepest layers of the cortex and later in its more superficial layers, the SST boutons blanketed all layers simultaneously, meaning that, perhaps counterintuitively, they sought to establish their inhibitory influence regardless of the maturation stage of their intended partners.

Many studies have shown that eye opening and the onset of visual experience set in motion the development and elaboration of excitatory cells and another major inhibitory neuron type (parvalbumin-expressing cells). Raising mice in the dark for different lengths of time, for instance, can distinctly alter what happens with these cells. Not so for the SST neurons. The new study showed that varying lengths of darkness had no effect on the trajectory of SST bouton and synapse appearance; it remained invariant, suggesting it is preordained by a genetic program or an age-related molecular signal, rather than by experience.

Moreover, after the initial frenzy of synapse formation during development, many synapses are then edited, or pruned away, so that only the ones needed for appropriate sensory responses endure. Again, the SST boutons and synapses proved to be exempt from these redactions. Though the pace of new SST synapse formation slowed at the peak of the critical period, the net number of synapses never declined and even continued increasing into adulthood.

“While a lot of people think that the only difference between inhibition and excitation is their valence, this demonstrates that inhibition works by a totally different set of rules,” Nedivi said.

In all, while other cell types were tailoring their synaptic populations to incoming experience, the SST neurons appeared to provide an early but steady inhibitory influence across all layers of the cortex. By adulthood, after excitatory synapses have been pruned back, the continued upward trickle of SST inhibition may contribute to the increase in the inhibition-to-excitation ratio that still allows the adult brain to learn, but not as dramatically or as flexibly as during early childhood.

A platform for future studies

In addition to shedding light on typical brain development, Nedivi said, the study’s techniques can enable side-by-side comparisons in mouse models of neurodevelopmental disorders such as autism or epilepsy where aberrations of excitation and inhibition balance are implicated.

Future studies using the techniques can also look at how different cell types connect with each other in brain regions other than the visual cortex, she added.

Boivin, who will soon open his own lab as a faculty member at Amherst College, said he is eager to apply the work in new ways.

“I’m excited to continue investigating inhibitory synapse formation on genetically defined cell types in my future lab,” Boivin said. “I plan to focus on the development of limbic brain regions that regulate behaviors relevant to adolescent mental health.”

In addition to Nedivi, Boivin, and Schmerl, the paper’s other authors are Kendyll Martin and Chia-Fang Lee.

Funding for the study came from the National Institutes of Health, the Office of Naval Research and the Freedom Together Foundation.

Zuri Sullivan

Education

  • Undergraduate: AB, Molecular and Cellular Biology, Harvard University, 2012
  • Graduate: Yale University, 2020

Research Summary

In animals, host defense has two modes: antimicrobial programs, which kill pathogens directly; and sickness, a state of altered physiology and behavior that is actively generated by brain-immune system interactions. The lab is interested in (1) how and (2) why infections make us sick – the neuroimmune interactions that lead to sickness, and their impact on host fitness. Our goal is to understand the mechanistic basis of sickness as a host defense strategy.

Celebrating worm science

Time and again, an unassuming roundworm has illuminated aspects of biology with major consequences for human health.

Jennifer Michalowski | McGovern Institute
December 12, 2025

For decades, scientists with big questions about biology have found answers in a tiny worm. That worm, a millimeter-long creature called Caenorhabditis elegans, has helped researchers uncover fundamental features of how cells and organisms work. The impact of that work is enormous: Discoveries made using C. elegans have been recognized with four Nobel prizes and have led to the development of new treatments for human disease.

In a perspective piece published in the November 2025 issue of the journal PNAS, eleven biologists, including Robert Horvitz, the David H. Koch (1962) Professor of Biology at MIT, celebrate Nobel Prize-winning advances made through research in C. elegans. The authors discuss how that work has led to advances for human health and highlight how a uniquely collaborative community among worm researchers has fueled the field.

MIT scientists are well represented in that community: The prominent worm biologists who coauthored the PNAS paper include former MIT graduate students Andy Fire and Paul Sternberg, now at Stanford University and the California Institute of Technology, and two past postdoctoral researchers in Horvitz’s lab, University of Massachusetts Medical School professor Victor Ambros and Massachusetts General Hospital investigator Gary Ruvkun. Ann Rougvie at the University of Minnesota is the paper’s corresponding author.

Early worm discoveries

“This tiny worm is beautiful—elegant both in its appearance and in its many contributions to our understanding of the biological universe in which we live,” says Horvitz, who in 2002 was awarded the Nobel Prize in Physiology or Medicine along with colleagues Sydney Brenner and John Sulston for discoveries that helped explain how genes regulate programmed cell death and organ development. Horvitz is also a member of MIT’s McGovern Institute for Brain Research and Koch Institute for Integrative Cancer Research as well as an investigator at the Howard Hughes Medical Institute.

Those discoveries were among the early successes in C. elegans research, made by pioneering scientists who recognized the power of the microscopic roundworm. C. elegans offers many advantages for researchers: The worms are easy to grow and maintain in labs; their transparent bodies make cells and internal processes readily visible under a microscope; they are cellularly very simple (e.g., they have only 302 nerve cells, compared with about 100 billion in a human); and their genomes can be readily manipulated to study gene function.

Most importantly, many of the molecules and processes that operate in C. elegans have been retained throughout evolution, meaning discoveries made using the worm can have direct relevance to other organisms, including humans. “Many aspects of biology are ancient and evolutionarily conserved,” Horvitz explains. “Such shared mechanisms can be most readily revealed by analyzing organisms that are highly tractable in the laboratory.”

In the 1960s, Brenner, a molecular biologist who was curious about how animals’ nervous systems develop and function, recognized that C. elegans offered unique opportunities to study these processes. Once he began developing the worm into a model for laboratory studies, it did not take long for other biologists to join him to take advantage of the new system.

In the 1970s, the unique features of the worm allowed Sulston to track the transformation of a fertilized egg into an adult animal, tracing the origins of each of the adult worm’s 959 cells. His studies revealed that in every developing worm, cells divide and mature in predictable ways. He also learned that some of the cells created during development do not survive into adulthood and are instead eliminated by a process termed programmed cell death.

By seeking mutations that perturbed the process of programmed cell death, Horvitz and his colleagues identified key regulators of that process, which is sometimes referred to as apoptosis. These regulators, which both promote and oppose apoptosis, turned out to be vital for programmed cell death across the animal kingdom.

In humans, apoptosis shapes developing organs, refines brain circuits, and optimizes other tissue structures. It also modulates our immune systems and eliminates cells that are in danger of becoming cancerous. The human version of CED-9, the anti-apoptotic regulator that Horvitz’s team discovered in worms, is BCL-2. Researchers have shown that activating apoptotic cell death by blocking BCL-2 is an effective treatment for certain blood cancers. Today, researchers are also exploring new ways of treating immune disorders and neurodegenerative disease by manipulating apoptosis pathways.

Collaborative worm community

Horvitz and his colleagues’ discoveries about apoptosis helped demonstrate that understanding C. elegans biology has direct relevance to human biology and disease. Since then, a vibrant and closely connected community of worm biologists—including many who trained in Horvitz’s lab—has continued to carry out impactful work. In their PNAS article, Horvitz and his coauthors highlight that early work, as well as the Nobel Prize-winning work of:

  • Andrew Fire and Craig Mello, whose discovery of an RNA-based system of gene silencing led to powerful new tools to manipulate gene activity. The innate process they discovered in worms, known as RNA interference, is now used as the basis of six FDA-approved therapeutics for genetic disorders, silencing faulty genes to stop their harmful effects.
  • Martin Chalfie, who used a fluorescent protein made by jellyfish to visualize and track specific cells in C. elegans, helping launch the development of a set of tools that transformed biologists’ ability to observe molecules and processes that are important for both health and disease.
  • Victor Ambros and Gary Ruvkun, who discovered a class of molecules called microRNAs that regulate gene activity not just in worms, but in all multicellular organisms. This prize-winning work was started when Ambros and Ruvkun were postdoctoral researchers in Horvitz’s lab. Humans rely on more than 1,000 microRNAs to ensure our genes are used at the right times and places. Disruptions to microRNAs have been linked to neurological disorders, cancer, cardiovascular disease, and autoimmune disease, and researchers are now exploring how these small molecules might be used for diagnosis or treatment.

Horvitz and his coauthors stress that while the worm itself made these discoveries possible, so too did a host of resources that facilitate collaboration within the worm community and enable its scientists to build upon the work of others. Scientists who study C. elegans have embraced this open, collaborative spirit since the field’s earliest days, Horvitz says, citing the Worm Breeder’s Gazette, an early newsletter where scientists shared their observations, methods, and ideas.

Today, scientists who study C. elegans—whether the organism is the centerpiece of their lab or they are looking to supplement studies of other systems—contribute to and rely on online resources like WormAtlas and WormBase, as well as the Caenorhabditis Genetics Center, to share data and genetic tools. Horvitz says these resources have been crucial to his own lab’s work; his team uses them every day.

Just as molecules and processes discovered in C. elegans have pointed researchers toward important pathways in human cells, the worm has also been a vital proving ground for developing methods and approaches later deployed to study more complex organisms. For example, C. elegans, with its 302 neurons, was the first animal for which neuroscientists successfully mapped all of the connections of the nervous system. The resulting wiring diagram, or connectome, has guided countless experiments exploring how neurons work together to process information and control behavior. Informed by both the power and limitations of the C. elegans connectome, scientists are now mapping more complex circuitry, such as the 139,000-neuron brain of the fruit fly, whose connectome was completed in 2024.

C. elegans remains a mainstay of biological research, including in neuroscience. Scientists worldwide are using the worm to explore new questions about neural circuits, neurodegeneration, development, and disease. Horvitz’s lab continues to turn to C. elegans to investigate the genes that control animal development and behavior. His team is now using the worm to explore how animals develop a sense of time and transmit that information to their offspring.

Also at MIT, Steven Flavell’s team in the Department of Brain and Cognitive Sciences and the Picower Institute for Learning and Memory is using the worm to investigate how neural connectivity, activity, and modulation integrate internal states, such as hunger, with sensory information, such as the smell of food, to produce sometimes long-lasting behaviors. Flavell is Horvitz’s academic grandson, as he trained with one of Horvitz’s postdoctoral trainees. As new technologies accelerate the pace of scientific discovery, Horvitz and his colleagues are confident that the humble worm will bring more unexpected insights.

Paper: “From nematode to Nobel: How community-shared resources fueled the rise of Caenorhabditis elegans as a research organism”

RNA editing study finds many ways for neurons to diversify

When MIT neurobiologists including Troy Littleton tracked how more than 200 motor neurons in fruit flies each edited their RNA, they cataloged hundreds of target sites and widely varying editing rates. Scores of edits altered proteins involved in neural communication and function.

David Orenstein | The Picower Institute for Learning and Memory
November 20, 2025

All starting from the same DNA, neurons ultimately take on individual characteristics in the brain and body. Differences in which genes they transcribe into RNA help determine which type of neuron they become, and from there, a new MIT study shows, individual cells edit a selection of sites in those RNA transcripts, each at their own widely varying rates.

The new study surveyed the whole landscape of RNA editing in more than 200 individual cells commonly used as models of fundamental neural biology: tonic and phasic motor neurons of the fruit fly. One of the main findings is that most sites were edited at rates between the “all or nothing” extremes many scientists have assumed based on more limited studies in mammals, said senior author Troy Littleton, Menicon Professor in the Departments of Biology and Brain and Cognitive Sciences. The resulting dataset and analyses published in eLife set the table for discoveries about how RNA editing affects neural function and what enzymes implement those edits.

“We have this ‘alphabet’ now for RNA editing in these neurons,” Littleton said. “We know which genes are edited in these neurons so we can go in and begin to ask questions as to what is that editing doing to the neuron at the most interesting targets.”

Andres Crane, who earned his PhD in Littleton’s lab based on this work, is the study’s lead author.

From a genome of about 15,000 genes, Littleton and Crane’s team found, the neurons made hundreds of edits in transcripts from hundreds of genes. For example, the team documented “canonical” edits at 316 sites in 210 genes. Canonical means that the edits were made by the well-studied enzyme ADAR, which is also found in mammals, including humans. Of the 316 edits, 175 occurred in regions that encode the contents of proteins; analysis suggested that 60 of these are likely to significantly alter amino acids. The other 141 editing sites fell in regions that don’t code for proteins but instead affect their production, which means they could affect protein levels rather than their contents.

The team also found many “non-canonical” edits that ADAR didn’t make. That’s important, Littleton said, because that information could aid in discovering more enzymes involved in RNA editing, potentially across species. That, in turn, could expand the possibilities for future genetic therapies.

“In the future, if we can begin to understand in flies what the enzymes are that make these other non-canonical edits, it would give us broader coverage for thinking about doing things like repairing human genomes where a mutation has broken a protein of interest,” Littleton said.

Moreover, by looking specifically at fly larvae, the team found many edits that were specific to juveniles vs. adults, suggesting potential significance during development. And because they looked at full gene transcripts of individual neurons, the team was also able to find editing targets that had not been cataloged before.

Widely varying rates

Some of the most heavily edited RNAs were from genes that make critical contributions to neural circuit communication such as neurotransmitter release, and the channels that cells form to regulate the flow of chemical ions that vary their electrical properties. The study identified 27 sites in 18 genes that were edited more than 90 percent of the time.

Yet neurons sometimes varied quite widely in whether they would edit a site, which suggests that even neurons of the same type can still take on significant degrees of individuality.

“Some neurons displayed ~100 percent editing at certain sites, while others displayed no editing for the same target,” the team wrote in eLife. “Such dramatic differences in editing rate at specific target sites is likely to contribute to the heterogeneous features observed within the same neuronal population.”

On average, any given site was edited about two-thirds of the time, and most sites were edited at rates well between the all-or-nothing extremes.

“The vast majority of editing events we found were somewhere between 20% and 70%,” Littleton said. “We were seeing mixed ratios of edited and unedited transcripts within a single cell.”

Also, the more a gene was expressed, the less editing it experienced, suggesting that ADAR can only partially keep up with its editing opportunities.

Potential impacts on function

One of the key questions the data enables scientists to ask is what impact RNA edits have on the function of the cells. In a 2023 study, Littleton’s lab began to tackle this question by looking at just two edits they found in the most heavily edited gene: Complexin. Complexin’s protein product restrains release of the neurotransmitter glutamate, making it a key regulator of neural circuit communication. They found that by mixing and matching edits, neurons produced up to eight different versions of the protein with significant effects on their glutamate release and synaptic electrical current. But in the new study, the team reports 13 more edits in Complexin that are yet to be studied.

Littleton said he’s intrigued by another key protein, called Arc1, that the study shows experienced a non-canonical edit. Arc is a vitally important gene in “synaptic plasticity,” which is the property neurons have of adjusting the strength or presence of their “synapse” circuit connections in response to nervous system activity. Such neural nimbleness is hypothesized to be the basis of how the brain can responsively encode new information in learning and memory. Notably, Arc1 editing fails to occur in fruit flies that model Alzheimer’s disease.

Littleton said the lab is now working hard to understand how the RNA edits they’ve documented affect function in the fly motor neurons.

In addition to Crane and Littleton, the study’s other authors are Michiko Inouye and Suresh Jetti.

The National Institutes of Health, The Freedom Together Foundation and The Picower Institute for Learning and Memory provided support for the study.

Research:

Andrés B. Crane, Michiko O. Inouye, Suresh K. Jetti, J. Troy Littleton (2025). “A stochastic RNA editing process targets a select number of sites in individual Drosophila glutamatergic motoneurons.” eLife 14:RP108282.
https://doi.org/10.7554/eLife.108282.2

Q&A: Picower researchers including MIT Biology faculty member Sara Prescott join effort to investigate the ‘Biology of Adversity’

Assistant Professor Sara Prescott and Research Affiliate Ravikiran Raju are key collaborators in a new Broad Institute research project to better understand physiological and medical effects of acute and chronic life stressors.

David Orenstein | The Picower Institute for Learning and Memory
November 3, 2025

Adverse experiences such as abuse and violence or poverty and deprivation have always been understood to be harmful, but the tools to understand how they may cause specific medical conditions and outcomes have only emerged recently. Technologies such as RNA or chromatin sequencing, for instance, can help scientists observe how stressors change gene expression, which can help establish mechanistic biological explanations for why people who’ve suffered adversity also experience higher risks of conditions such as stroke or Alzheimer’s disease.

Advancing scientific understanding of the physiological connections between adversity and disease can help pharmaceutical developers, physicians, and public officials develop meaningful interventions. The Broad Institute has launched a new research program, the “Biology of Adversity” project, led by researcher Jason Buenrostro. As leading collaborators in the effort, Picower Institute investigator Sara Prescott, assistant professor of biology, and Tsai Lab research affiliate Ravikiran Raju, a pediatrician at Boston Children’s Hospital, plan research projects in their Picower Institute labs to better elucidate how life stress leads to medical distress.

How can biology and neuroscience studies help people who’ve experienced adversity?

Prescott: Adversity comes in many flavors. But across different types of adversity, there is a common theme that it leads to psychological and emotional distress. If you were to ask a random person on the street, they’d probably tell you that distress is simply a feeling that exists only in the mind, rather than a biological process. But this is not true. We now appreciate that stress has predictable effects on the body, and there are severe long-term health consequences of experiencing chronic stress. Unfortunately, it’s been difficult to argue based on epidemiological data that stress itself (rather than other lifestyle factors like diet, smoking or access to health care services) is causally linked to poor health outcomes. This is confounded by the fact that we haven’t had good ways to empirically measure people’s levels of adversity and stress. This is part of what we want to address at the Biology of Adversity Project.

From a scientific perspective, there is still much to be understood about stress and the biological processes that lead to stress-associated diseases. And so that’s hopefully where efforts like the Biology of Adversity Project are going to come in. We can use scientific practices to come up with better guidelines for ways to track levels of stress, develop diagnostics, and then, hopefully, one day this will turn into actionable interventions. It’s not a random process of things going awry. There are going to be biological programs that are engaged in predictable ways. And we’re trying to understand, what exactly are these neural or biological programs? How many different types of programs are there? And how do each of those programs actually work down to the cellular and molecular level?

Raju: Efforts to combat adversity and stress have largely remained in the social space to date. But what we know from a growing body of epidemiological literature is that social stressors can have profound biological impact. They cause increases in mental health disorders and in physical disorders like cancer, stroke, and heart disease. Individuals who experience chronic and high levels of stress are dying sooner. I think there is an imperative to understand what these forces are doing to our biology and how they’re dysregulating our physiology. Armed with that information, we can start to be more mechanistic and evidence-based in our promotion of resilience. What are the pathways that are made vulnerable when individuals are stressed? How do we rescue those deficiencies, whether it be through existing practices or novel interventions? A lot of the research we’re doing here at Picower is focusing on pathways that could be targeted and leveraged using specific micronutrients or specific small molecules that help promote resilience and prevent the onset of premature illness in individuals who are stress exposed.

What is the Biology of Adversity Project and how are each of you involved?

Prescott: My lab studies the autonomic nervous system, and we’re involved in the project’s animal studies. We think of stress as an adaptive response to prepare the body for an impending threat. When people experience stress, what happens? You engage a fight or flight response—you sweat, start to breathe harder, your heart rate goes up, your pupils dilate. This is protective in acute settings, but can become very maladaptive when these systems are activated for too long or in inappropriate settings, like when someone is having a panic attack. We predict that a lot of the long-term health consequences associated with adversity could relate to dysregulated autonomic stress responses.

And so that’s where our lab’s tools come in. We have good ways in animals to measure their heart rate and breathing in response to stress. We also have a wide range of genetic tools to specifically target different neural pathways in the periphery, possibly blocking stress pathways at the source. With these tools, we can explore what role those circuits have in long-term changes in these animals with greater precision than what was possible in the past.

Raju: My involvement came through my work on the Environmental and Social Determinants of Child Mental Health Conference in 2023, which I co-hosted with Li-Huei Tsai. I think this conference made the scientific community in Boston more aware that this was something of deep interest to researchers at Picower and MIT. In the creation of the Biology of Adversity Project, the center director, Jason Buenrostro, was surveying the landscape of folks who were studying stress and adversity and were passionate about it, and he connected with us because of that symposium. Since then, I’ve been engaged in really exciting conversations with him and an exciting group of collaborators, including Sara Prescott. And so I’m really excited that a few of our projects are being showcased as flagship projects. We are currently using animal models of early life stress to try and build preclinical models to deepen our understanding of how stress dysregulates physiology. We’re developing pipelines for trying to think about promoting resilience through targeted interventions, using those preclinical models.

What research questions do you each plan to tackle?

Prescott: Broadly, we’re interested in the body-brain connection and how this relates to stress. How do different cues from within the body—like diet, or taking a deep breath—promote or regulate stress levels? These are interesting questions about how sensory inputs from the body feed into stress circuits in the brain. We’re also interested in the other direction—understanding how stress causes changes to peripheral organs, for example, by engaging the sympathetic nervous system. It’s well understood that sympathetic neurons are responsible for making you sweat and your heart race, but do they do other things as well? For example, the field is starting to appreciate that these same neurons regulate the immune system, and can signal to stem cells to promote or suppress tissue repair. These are important pathways to understand, as they could explain some of the links between chronic stress (where sympathetic neurons are over-activated) and increased rates of diseases like cancer. It also may have therapeutic applications down the road. I’m incredibly excited for the opportunity to work with people like Ravi, and others in the project, to apply our expertise in physiology and autonomic signaling towards this immensely important problem. I’m hoping that through this work we can move to an era where we can, from a societal perspective, understand how much our stress levels are damaging our body, be able to track that, and then find better ways to prevent the damage that’s happening.

Raju: We are leveraging three key mouse models of environmental perturbations in this work: environmental enrichment, social isolation, and resource deprivation. In studying enrichment, we are trying to identify the factors that promote resilience to stress. In our previous work on resilience, for example, we identified a transcription factor that’s specifically recruited to help ensure that neurons are resilient to the onset of Alzheimer’s pathology. So we’ve leveraged these enrichment models to study that mechanism and are able to then think of how that pathway might be leveraged in stress-exposed individuals. We are also using models of stress, specifically social isolation and resource deprivation. The idea here is that because mice are social mammals and rely on resources and social interactions and social networks in order to thrive, we can modulate these in a species-relevant way, and then study the pathways that are dysregulated. This will allow us to define vulnerable pathways in these preclinical models, and then assess if those same pathways are dysregulated in humans that are experiencing analogous environmental conditions. Armed with the right model, we can then determine how to reverse the physiological derangements induced by environmental stressors.

Neural activity helps circuit connections mature into optimal signal transmitters

By carefully tracking the formation and maturation of synaptic active zones in fruit flies, MIT scientists have discovered how neural activity helps circuit connections become tuned to the right size and degree of signal transmission capability over a period of days.

David Orenstein | The Picower Institute for Learning and Memory
October 14, 2025

Nervous system functions, from motion to perception to cognition, depend on the active zones of neural circuit connections, or “synapses,” sending out the right amount of their chemical signals at the right times. By tracking how synaptic active zones form and mature in fruit flies, researchers at The Picower Institute for Learning and Memory at MIT have revealed a fundamental model for how neural activity during development builds properly working connections.

Understanding how that happens is important, not only for advancing fundamental knowledge about how nervous systems develop, but also because many disorders such as epilepsy, autism, or intellectual disability can arise from aberrations of synaptic transmission, said senior author Troy Littleton, Menicon Professor in The Picower Institute and MIT’s Department of Biology. The new findings, funded in part by a 2021 grant from the National Institutes of Health, provide insights into how active zones develop the ability to send neurotransmitters across synapses to their circuit targets. This ability is neither instant nor predestined, the study shows: an active zone can take days to fully mature, and that maturation is regulated by neural activity.

If scientists can fully understand the process, Littleton said, then they can develop molecular strategies to intervene to tweak synaptic transmission when it’s happening too much or too little in disease.

“We’d like to have the levers to push to make synapses stronger or weaker, that’s for sure,” Littleton said. “And so knowing the full range of levers we can tug on to potentially change output would be exciting.”

Littleton Lab research scientist Yuliya Akbergenova led the study published Oct. 14 in the Journal of Neuroscience.

How newborn synapses grow up 

In the study, the researchers examined neurons that send the neurotransmitter glutamate across synapses to control muscles in the fly larvae. To study how the active zones in the animals matured, the scientists needed to keep track of their age. That hadn’t been possible before, but Akbergenova overcame the barrier by cleverly engineering the fluorescent protein mMaple, which changes its glow from green to red when zapped with 15 seconds of ultraviolet light, into a component of the glutamate receptors on the receiving side of the synapse. Then, whenever she wanted, she could shine light, and all the synapses already formed before that time would glow red while any new ones that formed subsequently would glow green.
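The birth-dating logic behind this trick is simple enough to sketch in a few lines. This is a toy illustration of the classification rule, not the lab's analysis code; the times and values below are invented:

```python
# Toy sketch of mMaple-based birth dating: any synapse formed before
# the UV photoconversion glows red; any synapse formed after it glows
# green. Times are arbitrary units, purely for illustration.

def synapse_color(birth_time, conversion_time):
    """Classify a synapse by whether it existed at photoconversion."""
    return "red" if birth_time < conversion_time else "green"

synapse_birth_times = [0.5, 1.2, 3.8, 4.1]  # hypothetical birth times
t_uv = 2.0                                  # time of the 15-second UV pulse

colors = [synapse_color(t, t_uv) for t in synapse_birth_times]
print(colors)  # ['red', 'red', 'green', 'green']
```

A single photoconversion event thus splits the whole population into "older than t_uv" and "younger than t_uv," which is what let the team assign each active zone a birthday.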

With the ability to track each active zone’s birthday, the authors could then document how active zones developed their ability to increase output over the course of days after birth. The researchers actually watched as synapses were built over many hours by tagging each of eight kinds of proteins that make up an active zone. At first, the active zones couldn’t transmit anything. Then, as some essential early proteins accumulated, they could send out glutamate spontaneously, but not if evoked by electrical stimulation of their host neuron (simulating how that neuron might be signaled naturally in a circuit). Only after several more proteins arrived did active zones possess the mature structure for calcium ions to trigger the fusion of glutamate vesicles to the cell membrane for evoked release across the synapse.

Activity matters

Of course, construction does not go on forever. At some point, the fly larva stops building one synapse and then builds new ones further down the line as the neuronal axon expands to keep up with growing muscles. The researchers wondered whether neural activity had a role in driving that process of finishing up one active zone and moving on to build the next.

To find out, they employed two different interventions to block active zones from being able to release glutamate, thereby preventing synaptic activity. Notably, one of the methods they chose was blocking the action of a protein called Synaptotagmin 1. That’s important because mutations that disrupt the protein in humans are associated with severe intellectual disability and autism. Moreover, the researchers tailored the activity-blocking interventions to just one neuron in each larva because blocking activity in all their neurons would have proved lethal.

In neurons where the researchers blocked activity, they observed two consequences: the neurons stopped building new active zones and instead kept making existing active zones larger and larger. It was as if the neuron could tell the active zone wasn’t releasing glutamate and tried to make it work by giving it more protein material to work with. That effort came at the expense of starting construction on new active zones.

“I think that what it’s trying to do is compensate for the loss of activity,” Littleton said.

Testing indicated that the enlarged active zones the neurons built in hopes of restarting activity were functional (or would have been if the researchers weren’t artificially blocking them). This suggested that the neuron likely senses the lack of glutamate release through a feedback signal from the muscle side of the synapse. To test that, the scientists knocked out a glutamate receptor component in the muscle, and when they did, they found that the neurons no longer made their active zones larger.

Littleton said the lab is already looking into the new questions the discoveries raise. In particular, what are the molecular pathways that initiate synapse formation in the first place, and what are the signals that tell an active zone it has finished growing? Finding those answers will bring researchers closer to understanding how to intervene when synaptic active zones aren’t developing properly.

In addition to Littleton and Akbergenova, the paper’s other authors are Jessica Matthias and Sofya Makeyeva.

In addition to the National Institutes of Health, The Freedom Together Foundation provided funding for the study.

Dopamine signals when a fear can be forgotten

Study shows how a dopamine circuit between two brain regions enables mice to extinguish fear after a peril has passed.

David Orenstein | The Picower Institute for Learning and Memory
May 7, 2025

Dangers come but dangers also go, and when they do, the brain has an “all-clear” signal that teaches it to extinguish its fear. A new study in mice by MIT neuroscientists shows that the signal is the release of dopamine along a specific interregional brain circuit. The research therefore pinpoints a potentially critical mechanism of mental health, restoring calm when it works, but prolonging anxiety or even post-traumatic stress disorder when it doesn’t.

“Dopamine is essential to initiate fear extinction,” says Michele Pignatelli di Spinazzola, co-author of the new study from the lab of senior author Susumu Tonegawa, Picower Professor of biology and neuroscience at the RIKEN-MIT Laboratory for Neural Circuit Genetics within The Picower Institute for Learning and Memory at MIT, and a Howard Hughes Medical Institute (HHMI) investigator.

In 2020, Tonegawa’s lab showed that learning to be afraid, and then learning when that’s no longer necessary, result from a competition between populations of cells in the brain’s amygdala region. When a mouse learns that a place is “dangerous” (because it gets a little foot shock there), the fear memory is encoded by neurons in the anterior of the basolateral amygdala (aBLA) that express the gene Rspo2. When the mouse then learns that a place is no longer associated with danger (because they wait there and the zap doesn’t recur), neurons in the posterior basolateral amygdala (pBLA) that express the gene Ppp1r1b encode a new fear extinction memory that overcomes the original dread. Notably, those same neurons encode feelings of reward, helping to explain why it feels so good when we realize that an expected danger has dwindled.

In the new study, the lab, led by former members Xiangyu Zhang and Katelyn Flick, sought to determine what prompts these amygdala neurons to encode these memories. The rigorous set of experiments the team reports in the Proceedings of the National Academy of Sciences show that it’s dopamine sent to the different amygdala populations from distinct groups of neurons in the ventral tegmental area (VTA).

“Our study uncovers a precise mechanism by which dopamine helps the brain unlearn fear,” says Zhang, who also led the 2020 study and is now a senior associate at Orbimed, a health care investment firm. “We found that dopamine activates specific amygdala neurons tied to reward, which in turn drive fear extinction. We now see that unlearning fear isn’t just about suppressing it — it’s a positive learning process powered by the brain’s reward machinery. This opens up new avenues for understanding and potentially treating fear-related disorders, like PTSD.”

Forgetting fear

The VTA was the lab’s prime suspect to be the source of the signal because the region is well known for encoding surprising experiences and instructing the brain, with dopamine, to learn from them. The first set of experiments in the paper used multiple methods for tracing neural circuits to see whether and how cells in the VTA and the amygdala connect. They found a clear pattern: Rspo2 neurons were targeted by dopaminergic neurons in the anterior and left and right sides of the VTA. Ppp1r1b neurons received dopaminergic input from neurons in the center and posterior sections of the VTA. The density of connections was greater on the Ppp1r1b neurons than for the Rspo2 ones.

The circuit tracing showed that dopamine is available to amygdala neurons that encode fear and its extinction, but do those neurons care about dopamine? The team showed that indeed they express “D1” receptors for the neuromodulator. Commensurate with the degree of dopamine connectivity, Ppp1r1b cells had more receptors than Rspo2 neurons.

Dopamine does a lot of things, so the next question was whether its activity in the amygdala actually correlated with fear encoding and extinction. Using a method to track and visualize dopamine in the brain, the team watched its activity in the amygdala as mice underwent a three-day experiment. On Day One, they went to an enclosure where they experienced three mild shocks on the feet. On Day Two, they went back to the enclosure for 45 minutes, where they didn’t experience any new shocks — at first, the mice froze in anticipation of a shock, but then relaxed after about 15 minutes. On Day Three they returned again to test whether they had indeed extinguished the fear they showed at the beginning of Day Two.

The dopamine activity tracking revealed that during the shocks on Day One, Rspo2 neurons had the larger response to dopamine, but in the early moments of Day Two, when the anticipated shocks didn’t come and the mice eased up on freezing, the Ppp1r1b neurons showed the stronger dopamine activity. More strikingly, the mice that learned to extinguish their fear most strongly also showed the greatest dopamine signal at those neurons.

Causal connections

The final sets of experiments sought to show that dopamine is not just available and associated with fear encoding and extinction, but actually causes them. In one set, the researchers turned to optogenetics, a technology that enables scientists to activate or quiet neurons with different colors of light. Sure enough, quieting VTA dopaminergic inputs in the pBLA impaired fear extinction, while activating those inputs accelerated it. The researchers were surprised to find that activating VTA dopaminergic inputs into the aBLA could reinstate fear even without any new foot shocks, impairing fear extinction.

The other way they confirmed a causal role for dopamine in fear encoding and extinction was to manipulate the amygdala neurons’ dopamine receptors. In Ppp1r1b neurons, over-expressing dopamine receptors impaired fear recall and promoted extinction, whereas knocking the receptors down impaired fear extinction. Meanwhile in the Rspo2 cells, knocking down receptors reduced the freezing behavior.

“We showed that fear extinction requires VTA dopaminergic activity in the pBLA Ppp1r1b neurons by using optogenetic inhibition of VTA terminals and cell-type-specific knockdown of D1 receptors in these neurons,” the authors wrote.

The scientists are careful in the study to note that while they’ve identified the “teaching signal” for fear extinction learning, the broader phenomenon of fear extinction occurs brainwide, rather than in just this single circuit.

But the circuit seems to be a key node to consider as drug developers and psychiatrists work to combat anxiety and PTSD, Pignatelli di Spinazzola says.

“Fear learning and fear extinction provide a strong framework to study generalized anxiety and PTSD,” he says. “Our study investigates the underlying mechanisms suggesting multiple targets for a translational approach, such as pBLA and use of dopaminergic modulation.”

Marianna Rizzo is also a co-author of the study. Support for the research came from the RIKEN Center for Brain Science, the HHMI, the Freedom Together Foundation, and The Picower Institute.

Manipulating time with torpor

New research from the Hrvatin Lab recently published in Nature Aging indicates that inducing a hibernation-like state in mice slows down epigenetic changes that accompany aging.

Shafaq Zia | Whitehead Institute
March 7, 2025

Surviving extreme conditions in nature is no easy feat. Many species of mammals rely on special adaptations called daily torpor and hibernation to endure periods of scarcity. These states of dormancy are marked by a significant drop in body temperature, low metabolic activity, and reduced food intake—all of which help the animal conserve energy until conditions become favorable again.

The lab of Whitehead Institute Member Siniša Hrvatin studies daily torpor, which lasts several hours, and its longer counterpart, hibernation, in order to understand their effects on tissue damage, disease progression, and aging. In their latest study, published in Nature Aging on March 7, first author Lorna Jayne, Hrvatin, and colleagues show that inducing a prolonged torpor-like state in mice slows down epigenetic changes that accompany aging.

“Aging is a complex phenomenon that we’re just starting to unravel,” says Hrvatin, who is also an assistant professor of biology at Massachusetts Institute of Technology. “Although the full relationship between torpor and aging remains unclear, our findings point to decreased body temperature as the central driver of this anti-aging effect.”

Tampering with the biological clock

Aging is a universal process, but scientists have long struggled to find a reliable metric for measuring it. Traditional clocks fall short because biological age doesn’t always align with chronological age—cells and tissues in different organisms age at varying rates.

To solve this dilemma, scientists have turned to studying molecular processes that are common to aging across many species. Over the past decade, this has led to the development of epigenetic clocks: computational tools that can estimate an organism’s age by analyzing the accumulation of epigenetic marks in cells over time.

Think of epigenetic marks as tiny chemical tags that cling either to the DNA itself or to the proteins, called histones, around which the DNA is wrapped. Histones act like spools, allowing long strands of DNA to coil around them, much like thread around a bobbin. When epigenetic tags are added to histones, they can compact the DNA, preventing genetic information from being read, or loosen it, making the information more accessible. When epigenetic tags attach directly to DNA, they can alter how the proteins that “read” a gene bind to the DNA.

While it’s unclear if epigenetic marks are a cause or consequence of aging, this much is evident: these marks change over an organism’s lifespan, altering how genes are turned on or off, without modifying the underlying DNA sequence. These changes have enabled researchers to track the biological age of individual cells and tissues using dedicated epigenetic clocks.

In nature, states of stasis like hibernation and daily torpor help animals survive by conserving energy and avoiding predators. But now, emerging research in marmots and bats hints that hibernation may also slow down epigenetic aging, prompting researchers to explore whether there’s a deeper connection between prolonged bouts of torpor and longevity.

However, investigating this link has been challenging, as the mechanisms that trigger, regulate, and sustain torpor remain largely unknown. In 2020, Hrvatin and colleagues made a breakthrough by identifying neurons in a specific region of the mouse hypothalamus, known as the avMLPA, which act as core regulators of torpor.

“This is when we realized that we could leverage this system to induce torpor and explore mechanistically how the state of torpor might have beneficial effects on aging,” says Jayne. “You can imagine how difficult it is to study this in natural hibernators because of accessibility and the lack of tools to manipulate them in sophisticated ways.”

The age-old mystery

The researchers began by injecting mice with adeno-associated virus, a gene-delivery vehicle that enables scientists to introduce new genetic material into target cells. They employed this technology to instruct neurons in the mice’s avMLPA region to produce a special receptor called Gq-DREADD, which does not respond to the brain’s natural signals but can be chemically activated by a drug. When the researchers administered this drug to the mice, it bound to the Gq-DREADD receptors, activating the torpor-regulating neurons and triggering a drop in the animals’ body temperature.

However, to investigate the effects of torpor on longevity, the researchers needed to maintain these mice in a torpor-like state for days to weeks. To achieve this, the mice were continuously administered the drug through drinking water.

The mice were kept in a torpor-like state with periodic bouts of arousal for a total of nine months. The researchers measured the blood epigenetic age of these mice at the 3-, 6-, and 9-month marks using the mammalian blood epigenetic clock. By the 9-month mark, the torpor-like state had reduced blood epigenetic aging in these mice by approximately 37%, making them biologically three months younger than their control counterparts.
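That figure can be sanity-checked with simple arithmetic. The 9-month baseline below is an illustrative simplification, not the paper's exact measurement:

```python
# Back-of-the-envelope check of the reported effect size. Assume,
# purely for illustration, that control mice accumulate ~9 months of
# blood epigenetic age over the 9-month experiment.
control_aging = 9.0   # months of epigenetic aging in controls
reduction = 0.37      # ~37% slower epigenetic aging under torpor

torpor_aging = control_aging * (1 - reduction)
difference = control_aging - torpor_aging
print(round(torpor_aging, 2))  # 5.67 months of epigenetic aging
print(round(difference, 2))    # 3.33, i.e. roughly three months "younger"
```

Under this simplification, a 37% slowdown over nine months lands at about a three-month gap, matching the reported result.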

To further assess the effects of torpor on aging, the group evaluated these mice using the mouse clinical frailty index, which includes measurements like tail stiffening, gait, and spinal deformity that are commonly associated with aging. As expected, mice in the torpor-like state had a lower frailty index compared to the controls.

With the anti-aging effects of the torpor-like state established, the researchers sought to understand how each of the key factors underlying torpor—decreased body temperature, low metabolic activity, and reduced food intake—contributed to longevity.

To isolate the effects of reduced metabolic rate, the researchers induced a torpor-like state in mice while maintaining the animals’ normal body temperature. After three months, the blood epigenetic age of these mice was similar to that of the control group, suggesting that low metabolic rate alone does not slow down epigenetic aging.

Next, Hrvatin and colleagues isolated the impact of low caloric intake on blood epigenetic aging by restricting the food intake of mice in the torpor-like state, while maintaining their normal body temperature. After three months, these mice had a blood epigenetic age similar to that of the control group.

When both low metabolic rate and reduced food intake were combined, the mice still exhibited higher blood epigenetic aging after three months compared to mice in the torpor state with low body temperature. These findings, combined, led the researchers to conclude that neither low metabolic rate nor reduced caloric intake alone are sufficient to slow down blood epigenetic aging. Instead, a drop in body temperature is necessary for the anti-aging effects of torpor.

Although the exact mechanisms linking low body temperature and epigenetic aging are unclear, the team hypothesizes that it may involve the cell cycle, which regulates how cells grow and divide: lower body temperatures can potentially slow down cellular processes, including DNA replication and mitosis. This, over time, may impact cell turnover and aging. With further research, the Hrvatin Lab aims to explore this link in greater depth and shed light on the lingering mystery.

Cellular interactions help explain vascular complications due to COVID-19 virus infection

Whitehead Institute Founding Member Rudolf Jaenisch and colleagues have found that cellular interactions help explain how SARS-CoV-2, the virus that causes COVID-19, could have such significant vascular complications, including blood clots, heart attacks, and strokes.

Greta Friar | Whitehead Institute
December 31, 2024

COVID-19 is a respiratory disease primarily affecting the lungs. However, the SARS-CoV-2 virus that causes COVID-19 surprised doctors and scientists by triggering an unusually large percentage of patients to experience vascular complications – issues related to blood flow, such as blood clots, heart attacks, and strokes.

Whitehead Institute Founding Member Rudolf Jaenisch and colleagues wanted to understand how this respiratory virus could have such significant vascular effects. They used pluripotent stem cells to generate three relevant vascular and perivascular cell types—cells that surround and help maintain blood vessels—so they could closely observe the effects of SARS-CoV-2 on the cells. Instead of using existing methods to generate the cells, the researchers developed a new approach, providing them with fresh insights into the mechanisms by which the virus causes vascular problems. The researchers found that SARS-CoV-2 primarily infects perivascular cells and that signals from these infected cells are sufficient to cause dysfunction in neighboring vascular cells, even when the vascular cells are not themselves infected. In a paper published in the journal Nature Communications on December 30, Jaenisch, postdoc in his lab Alexsia Richards, Harvard University Professor and Wyss Institute for Biologically Inspired Engineering Member David Mooney, and then-postdoc in the Jaenisch and Mooney labs Andrew Khalil share their findings and present a scalable stem cell-derived model system with which to study vascular cell biology and test medical therapies.

A new problem requires a new approach

When the COVID-19 pandemic began, Richards, a virologist, quickly pivoted her focus to SARS-CoV-2. Khalil, a bioengineer, had already been working on a new approach to generate vascular cells. The researchers realized that a collaboration could provide Richards with the research tool she needed and Khalil with an important research question to which his tool could be applied.

The three cell types that Khalil’s approach generated were endothelial cells, the vascular cells that form the lining of blood vessels; and smooth muscle cells and pericytes, perivascular cells that surround blood vessels and provide them with structure and maintenance, among other functions. Khalil’s biggest innovation was to generate all three cell types in the same media—the mixture of nutrients and signaling molecules in which stem cell-derived cells are grown.

The combination of signals in the media determines the final cell type into which a stem cell will mature, so it is much easier to grow each cell type separately in specially tailored media than to find a mixture that works for all three. Typically, Richards explains, virologists will generate a desired cell type using the easiest method, which means growing each cell type and then observing the effects of viral infection on it in isolation. However, this approach can limit results in several ways. Firstly, it can make it challenging to distinguish the differences in how cell types react to a virus from the differences caused by the cells being grown in different media.

“By making these cells under identical conditions, we could see in much higher resolution the effects of the virus on these different cell populations, and that was essential in order to form a strong hypothesis of the mechanisms of vascular symptom risk and progression,” Khalil says.

Secondly, infecting isolated cell types with a virus does not accurately represent what happens in the body, where cells are in constant communication as they react to viral exposure. Indeed, Richards’ and Khalil’s work ultimately revealed that the communication between infected and uninfected cell types plays a critical role in the vascular effects of COVID-19.

“The field of virology often overlooks the importance of considering how cells influence other cells and designing models to reflect that,” Richards says. “Cells do not get infected in isolation, and the value of our model is that it allows us to observe what’s happening between cells during infection.”

Viral infection of smooth muscle cells has broader, indirect effects

When the researchers exposed their cells to SARS-CoV-2, the smooth muscle cells and pericytes became infected, the former at especially high levels, and the infection triggered strong inflammatory gene expression. The endothelial cells, by contrast, resisted infection. Endothelial cells did show some response to viral exposure, likely due to interactions with proteins on the virus’ surface. Typically, endothelial cells press tightly together to form a firm barrier that keeps blood inside of blood vessels and prevents viruses from getting out. When exposed to SARS-CoV-2, the junctions between endothelial cells appeared to weaken slightly. The cells also had increased levels of reactive oxygen species, which are damaging byproducts of certain cellular processes.

However, big changes in endothelial cells occurred only after the cells were exposed to infected smooth muscle cells. This exposure triggered high levels of inflammatory signaling within the endothelial cells and altered the expression of many genes relevant to the immune response, including genes involved in coagulation pathways, which thicken blood and so can cause blood clots and related vascular events. The junctions between endothelial cells also weakened much more significantly after exposure to infected smooth muscle cells, which in the body would allow blood leakage and viral spread. All of these changes occurred without SARS-CoV-2 ever infecting the endothelial cells.

This work shows that viral infection of smooth muscle cells, and their resulting signaling to endothelial cells, is the linchpin of the vascular damage caused by SARS-CoV-2. This would not have been apparent had the researchers not been able to observe the cells interacting with each other.

Clinical relevance of stem cell results

The effects that the researchers observed were consistent with patient data. Some of the genes whose expression changed in their stem cell-derived model had been identified as markers of high risk for vascular complications in COVID-19 patients with severe infections. Additionally, the researchers found that a later strain of SARS-CoV-2, an Omicron variant, had much weaker effects on the vascular and perivascular cells than did the original viral strain. This is consistent with the reduced levels of vascular complications seen in COVID-19 patients infected with recent strains.

Having identified smooth muscle cells as the main site of SARS-CoV-2 infection in the vascular system, the researchers next used their model system to test one drug's ability to prevent infection of smooth muscle cells. They found that the drug, N,N-dimethyl-D-erythro-sphingosine, could reduce infection of that cell type without harming smooth muscle or endothelial cells. Although preventing vascular complications of COVID-19 is not as pressing a need with current viral strains, the researchers see this experiment as proof that their stem cell model could be used for future drug development. New coronaviruses and other pathogens are frequently evolving, and when a future virus causes vascular complications, this model could be used to quickly test drugs to find potential therapies while the need is still high. The model system could also be used to answer other questions about vascular cells, how these cells interact, and how they respond to viruses.

“By integrating bioengineering strategies into the analysis of a fundamental question in viral pathology, we addressed important practical challenges in modeling human disease in culture and gained new insights into SARS-CoV-2 infection,” Mooney says.

“Our interdisciplinary approach allowed us to develop an improved stem cell model for infection of the vasculature,” says Jaenisch, who is also a professor of biology at the Massachusetts Institute of Technology. “Our lab is already applying this model to other questions of interest, and we hope that it can be a valuable tool for other researchers.”