Imperiali Lab News Brief: combining bioinformatics and biochemistry

Parsing endless possibilities

Lillian Eden | Department of Biology
December 11, 2024

New research from the Imperiali Lab in the Department of Biology at MIT combines bioinformatics and biochemistry to reveal critical players in assembling glycans, the large sugar molecules on bacterial cell surfaces responsible for behaviors such as evading immune responses and causing infections.

In most cases, single-celled organisms such as bacteria interact with their environment through complex chains of sugars known as glycans bound to lipids on their outer membranes. Glycans orchestrate biological responses and interactions, such as evading immune responses and causing infections. 

The first step in assembling most bacterial glycans is the addition of a sugar-phosphate group onto a lipid, which is catalyzed by phosphoglycosyl transferases (PGTs) on the inner membrane. This first sugar is then further built upon by other enzymes in subsequent steps in an assembly-line-like pathway. These critical biochemical processes are challenging to explore because the proteins involved in these processes are embedded in membranes, which makes them difficult to isolate and study. 

Although glycans are found in all living organisms, the sugar molecules that compose glycans are especially diverse in bacteria. There are over 30,000 known bacterial PGTs, and hundreds of sugars for them to act upon. 

Research recently published in PNAS from the Imperiali Lab in the Department of Biology at MIT uses a combination of bioinformatics and biochemistry to predict clusters of “like-minded” PGTs and verify which sugars they will use in the first step of glycan assembly. 

Defining the biochemical machinery for these assembly pathways could reveal new strategies for tackling antibiotic-resistant strains of bacteria. This comprehensive approach could also be used to develop and test inhibitors, halting the assembly pathway at this critical first step. 

Exploring Sequence Similarity

First author Theo Durand, an undergraduate student from Imperial College London who studied at MIT for a year, worked in the Imperiali Lab as part of a research placement. Durand was first tasked with determining which sugars certain PGTs would use in the first step of glycan assembly, known as the sugar substrates of the PGTs. When those initial substrate-testing experiments didn’t work, Durand turned to bioinformatics to develop predictive tools.

Strategically exploring the sugar substrates for PGTs is challenging due to the sheer number of PGTs and the diversity of bacteria, each with its own assorted set of glycans and glycoconjugates. To tackle this problem, Durand deployed a tool called a Sequence Similarity Network (SSN), part of a computational toolkit developed by the Enzyme Function Initiative. 

According to senior author Barbara Imperiali, Class of 1922 Professor of Biology and Chemistry, an SSN provides a powerful way to analyze protein sequences through comparisons of the sequences of tens of thousands of proteins. In an optimized SSN, similar proteins cluster together, and, in the case of PGTs, proteins in the same cluster are likely to share the same sugar substrate. 

For example, a previously uncharacterized PGT that appears in a cluster of PGTs whose first sugar substrate is FucNAc4N would also be predicted to use FucNAc4N. The researchers could then test that prediction to verify the accuracy of the SSN. 
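
In outline, an SSN reduces to a simple graph operation: treat each PGT sequence as a node, draw an edge between any pair whose alignment score clears a chosen threshold, and let substrate annotations propagate within each resulting cluster. The Python sketch below is a minimal illustration of that idea, not the Enzyme Function Initiative toolkit itself; the sequence identifiers, scores, and threshold are invented, and real SSNs are built from all-vs-all comparisons of tens of thousands of sequences.

```python
# Minimal sketch of sequence-similarity-network clustering (illustrative only).
# Real SSNs are built from all-vs-all comparisons of tens of thousands of
# sequences; the identifiers, scores, and threshold below are invented.
import networkx as nx

pairwise_scores = {               # alignment score for each pair of PGT sequences
    ("PGT_A", "PGT_B"): 310.0,
    ("PGT_A", "PGT_C"): 295.0,
    ("PGT_B", "PGT_C"): 320.0,
    ("PGT_D", "PGT_E"): 288.0,
    ("PGT_A", "PGT_D"): 42.0,     # weak similarity: falls below the cutoff, no edge
}
known_substrates = {"PGT_B": "FucNAc4N"}   # one experimentally verified member

THRESHOLD = 100.0                 # draw an edge only for scores above this cutoff

graph = nx.Graph()
graph.add_nodes_from({p for pair in pairwise_scores for p in pair})
for (a, b), score in pairwise_scores.items():
    if score >= THRESHOLD:
        graph.add_edge(a, b, weight=score)

# Each connected component is a cluster; propagate any known substrate annotation.
for cluster in nx.connected_components(graph):
    substrates = {known_substrates[p] for p in cluster if p in known_substrates}
    prediction = substrates.pop() if len(substrates) == 1 else "unknown"
    print(sorted(cluster), "-> predicted substrate:", prediction)
```

A cluster with no characterized member, like the second one in this sketch, is where the literature digging and biochemical verification described below come in.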

FucNAc4N is the sugar substrate for the PGT of Fusobacterium nucleatum (F. nucleatum), a bacterium that is normally only present in the oral cavity but is correlated with certain cancers and endometriosis, and Streptococcus pneumoniae, a bacterium that causes pneumonia. 

Adjusting the assay

The critical biochemical process of assembling glycans has historically been challenging to define, mainly because assembly is anchored to the interior side of the inner membrane of the bacterium. The purification process itself can be difficult, and the purified proteins don’t necessarily behave in the same manner once outside their native membrane environment.

To address this, the researchers modified a commercially available test to work with proteins still embedded in the membrane of the bacterium, thus saving them weeks of work to purify the proteins. They could then determine the substrate for the PGT by measuring whether there was activity. This first step in glycan assembly is chemically unique, and the test measures one of the reaction products. 

For PGTs whose substrate was unknown, Durand did a deep dive into the literature to find new substrates to test. FucNAc4N, the first sugar substrate for F. nucleatum, was, in fact, Durand’s favorite sugar – he found it in the literature and reached out to a former Imperiali Lab postdoc for the instructions and materials to make it. 

“I ended up down a rabbit hole where I was excited every time I found a new, weird sugar,” Durand recalls with a laugh. “These bacteria are doing a bunch of really complicated things and any tools to help us understand what is actually happening are useful.” 

Exploring inhibitors

Imperiali noted that this research both represents a huge step forward in our understanding of bacterial PGTs and their substrates and presents a pipeline for further exploration. She’s hoping to create a searchable database where other researchers can seed their own sequences into the SSN for their organisms of interest. 

This pipeline could also reveal antibiotic targets in bacteria. For example, she says, the team is using this approach to explore inhibitor development. 

The Imperiali lab worked with Karen Allen, a professor of Chemistry at Boston University, and graduate student Roxanne Siuda to test inhibitors, including ones for F. nucleatum, the bacterium correlated with certain cancers and endometriosis whose first sugar substrate is FucNAc4N. They are also hoping to obtain structures of inhibitors bound to the PGT to enable structure-guided optimization.

“We were able to, using the network, discover the substrate for a PGT, verify the substrate, use it in a screen, and test an inhibitor,” Imperiali says. “This is bioinformatics, biochemistry, and probe development all bundled together, and represents the best of functional genomics.”

Study suggests how the brain, with sleep, learns meaningful maps of spaces

Place cells are well known to encode individual locations, but new experiments and analysis indicate that stitching together a “cognitive map” of a whole environment requires a broader ensemble of cells, aided by sleep, to build a richer network over several days, according to new research from the Wilson Lab.

David Orenstein | The Picower Institute for Learning and Memory
December 10, 2024

On the first day of your vacation in a new city your explorations expose you to innumerable individual places. While the memories of these spots (like a beautiful garden on a quiet side street) feel immediately indelible, it might be days before you have enough intuition about the neighborhood to direct a newer tourist to that same site and then maybe to the café you discovered nearby. A new study in mice by MIT neuroscientists at The Picower Institute for Learning and Memory provides new evidence for how the brain forms cohesive cognitive maps of whole spaces and highlights the critical importance of sleep for the process.

Scientists have known for decades that the brain devotes neurons in a region called the hippocampus to remembering specific locations. So-called “place cells” reliably activate when an animal is at the location the neuron is tuned to remember. But more useful than having markers of specific spaces is having a mental model of how they all relate in a continuous overall geography. Though such “cognitive maps” were formally theorized in 1948, neuroscientists have remained unsure of how the brain constructs them. The new study in the December edition of Cell Reports finds that the capability may depend upon subtle but meaningful changes over days in the activity of cells that are only weakly attuned to individual locations, but that increase the robustness and refinement of the hippocampus’s encoding of the whole space. With sleep, the study’s analyses indicate, these “weakly spatial” cells increasingly enrich neural network activity in the hippocampus to link together these places into a cognitive map.

“On day 1, the brain doesn’t represent the space very well,” said lead author Wei Guo, a research scientist in the lab of senior author Matthew Wilson, Sherman Fairchild Professor in The Picower Institute and MIT’s Departments of Biology and Brain and Cognitive Sciences. “Neurons represent individual locations, but together they don’t form a map. But on day 5 they form a map. If you want a map, you need all these neurons to work together in a coordinated ensemble.”

Mice mapping mazes

To conduct the study, Guo and Wilson along with labmates Jie “Jack” Zhang and Jonathan Newman introduced mice to simple mazes of varying shapes and let them explore them freely for about half an hour a day for several days. Importantly, the mice were not directed to learn anything specific through the offer of any rewards. They just wandered. Previous studies have shown that mice naturally demonstrate “latent learning” of spaces from this kind of unrewarded experience after several days.

To understand how latent learning takes hold, Guo and his colleagues visually monitored hundreds of neurons in the CA1 area of the hippocampus by engineering cells to flash when a buildup of calcium ions made them electrically active. They not only recorded the neurons’ flashes when the mice were actively exploring, but also while they were sleeping. Wilson’s lab has shown that animals “replay” their previous journeys during sleep, essentially refining their memories by dreaming about their experiences.

Analysis of the recordings showed that the activity of the place cells developed immediately and remained strong and unchanged over several days of exploration. But this activity alone wouldn’t explain how latent learning or a cognitive map evolves over several days. So unlike in many other studies, where scientists focus solely on the strong and clear activity of place cells, Guo extended his analysis to the more subtle and mysterious activity of cells that were not so strongly spatially tuned. Using an emerging technique called “manifold learning,” he was able to discern that many of the “weakly spatial” cells gradually correlated their activity not with locations, but with activity patterns among other neurons in the network. As this was happening, Guo’s analyses showed, the network encoded a cognitive map of the maze that increasingly resembled the literal, physical space.
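
For readers unfamiliar with the term, “manifold learning” covers a family of nonlinear dimensionality-reduction methods that embed high-dimensional population activity in a low-dimensional space whose geometry can then be compared with the physical environment. The snippet below is a generic sketch of that idea using scikit-learn’s Isomap on synthetic data; it is not the analysis pipeline used in the study, and the array sizes, tuning model, and parameters are all assumptions made for illustration.

```python
# Generic manifold-learning sketch (not the study's analysis pipeline).
# Rows are time points, columns are neurons; values are synthetic activity levels.
import numpy as np
from sklearn.manifold import Isomap

rng = np.random.default_rng(0)
n_timepoints, n_neurons = 2000, 300              # assumed sizes for illustration
position = rng.uniform(0, 1, size=n_timepoints)  # 1-D "location" along a track

# Fake population activity: each neuron is weakly tuned to position, plus noise
centers = rng.uniform(0, 1, size=n_neurons)
activity = np.exp(-((position[:, None] - centers[None, :]) ** 2) / 0.05)
activity += rng.normal(scale=0.5, size=activity.shape)

# Embed the high-dimensional population activity into two dimensions
embedding = Isomap(n_neighbors=15, n_components=2).fit_transform(activity)

# If the ensemble encodes the space, the embedding should vary smoothly with
# position (the sign of an embedding axis is arbitrary, so compare magnitudes).
corr = abs(np.corrcoef(embedding[:, 0], position)[0, 1])
print(f"|correlation| between embedding axis 1 and true position: {corr:.2f}")
```

In the study, the relevant comparison was between such low-dimensional embeddings and the literal layout of the maze, which grew more similar over days of exploration.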

“Although not responding to specific locations like strongly spatial cells, weakly spatial cells specialize in responding to ‘mental locations,’ i.e., specific ensemble firing patterns of other cells,” the study authors wrote. “If a weakly spatial cell’s mental field encompasses two subsets of strongly spatial cells that encode distinct locations, this weakly spatial cell can serve as a bridge between these locations.”

In other words, the activity of the weakly spatial cells likely stitches together the individual locations represented by the place cells into a mental map.

The need for sleep

Studies by Wilson’s lab and many others have shown that memories are consolidated, refined and processed by neural activity, such as replay, that occurs during sleep and rest. Guo and Wilson’s team therefore sought to test whether sleep was necessary for the contribution of weakly spatial cells to latent learning of cognitive maps.

To do this they let some mice explore a new maze twice during the same day with a three-hour siesta in between. Some of the mice were allowed to sleep during that break and some were not. The ones that slept showed a significant refinement of their mental map, but the ones that weren’t allowed to sleep showed no such improvement. Not only did the network encoding of the map improve with sleep, but measures of the tuning of individual cells showed that sleep helped them become better attuned both to places and to patterns of network activity, so-called “mental places” or “fields.”

Mental map meaning

The “cognitive maps” the mice encoded over several days were not literal, precise maps of the mazes, Guo notes. Instead they were more like schematics. Their value is that they provide the brain with a topology that can be explored mentally, without having to be in the physical space. For instance, once you’ve formed your cognitive map of the neighborhood around your hotel, you can plan the next morning’s excursion (e.g. you could imagine grabbing a croissant at the bakery you observed a few blocks west and then picture eating it on one of those benches you noticed in the park along the river).

Indeed, Wilson hypothesized that the weakly spatial cells’ activity may be overlaying salient non-spatial information that brings additional meaning to the maps (i.e. the idea of a bakery is not spatial, even if it’s closely linked to a specific location). The study, however, included no landmarks within the mazes and did not test any specific behaviors among the mice. But now that the study has identified that weakly spatial cells contribute meaningfully to mapping, Wilson said future studies can investigate what kind of information they may be incorporating into the animals’ sense of their environments. We seem to intuitively regard the spaces we inhabit as more than just sets of discrete locations.

“In this study we focused on animals behaving naturally and demonstrated that during freely exploratory behavior and subsequent sleep, in the absence of reinforcement, substantial neural plastic changes at the ensemble level still occur,” the authors concluded. “This form of implicit and unsupervised learning constitutes a crucial facet of human learning and intelligence, warranting further in-depth investigations.”

The Freedom Together Foundation, The Picower Institute for Learning and Memory and the National Institutes of Health funded the study.

Cellular traffic congestion in chronic diseases suggests new therapeutic targets

Many chronic diseases have a common denominator that could be driving their dysfunction: reduced protein mobility, which in turn reduces protein function. A new paper from the Young Lab describes this pervasive mobility defect.

Greta Friar | Whitehead Institute
November 26, 2024

Chronic diseases like type 2 diabetes and inflammatory disorders have a huge impact on humanity. They are a leading cause of disease burden and deaths around the globe, are physically and economically taxing, and the number of people with such diseases is growing.

Treating chronic disease has proven difficult because there is not one simple cause, like a single gene mutation, that a treatment could target. At least, that’s how it has appeared to scientists. However, research from Whitehead Institute Member Richard Young and colleagues, published in the journal Cell on November 27, reveals that many chronic diseases have a common denominator that could be driving their dysfunction: reduced protein mobility. What this means is that around half of all proteins active in cells slow their movement when cells are in a chronic disease state, reducing the proteins’ functions. The researchers’ findings suggest that protein mobility may be a linchpin for decreased cellular function in chronic disease, making it a promising therapeutic target.

In this paper, Young and colleagues in his lab, including postdoc Alessandra Dall’Agnese, graduate students Shannon Moreno and Ming Zheng, and research scientist Tong Ihn Lee, describe their discovery of this common mobility defect, which they call proteolethargy; explain what causes the defect and how it leads to dysfunction in cells; and propose a new therapeutic hypothesis for treating chronic diseases.

“I’m excited about what this work could mean for patients,” says Dall’Agnese. “My hope is that this will lead to a new class of drugs that restore protein mobility, which could help people with many different diseases that all have this mechanism as a common denominator.”

“This work was a collaborative, interdisciplinary effort that brought together biologists, physicists, chemists, computer scientists and physician-scientists,” Lee says. “Combining that expertise is a strength of the Young lab. Studying the problem from different viewpoints really helped us think about how this mechanism might work and how it could change our understanding of the pathology of chronic disease.”

Commuter delays cause work stoppages in the cell

How do proteins moving more slowly through a cell lead to widespread and significant cellular dysfunction? Dall’Agnese explains that every cell is like a tiny city, with proteins as the workers who keep everything running. Proteins have to commute in dense traffic in the cell, traveling from where they are created to where they work. The faster their commute, the more work they get done. Now, imagine a city that starts experiencing traffic jams along all the roads. Stores don’t open on time, groceries are stuck in transit, meetings are postponed. Essentially all operations in the city are slowed.

The slowdown of operations in cells experiencing reduced protein mobility follows a similar progression. Normally, most proteins zip around the cell bumping into other molecules until they locate the molecule they work with or act on. The slower a protein moves, the fewer other molecules it will reach, and so the less likely it will be able to do its job. Young and colleagues found that such protein slowdowns lead to measurable reductions in the functional output of the proteins. When many proteins fail to get their jobs done in time, cells begin to experience a variety of problems—as they are known to do in chronic diseases.
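
The logic that slower movement means fewer productive encounters can be made concrete with a toy simulation: a particle taking smaller diffusive steps arrives at a fixed target less often in the same amount of time. The Python sketch below is purely illustrative; the box size, step sizes, and target radius are arbitrary choices, not parameters from the study.

```python
# Toy 2-D random walk: count distinct arrivals at a fixed target for a "normal"
# vs. a reduced step size (all numbers are arbitrary, for illustration only).
import numpy as np

def encounter_events(step_size, n_steps=300_000, box=10.0, reach=0.2, seed=1):
    rng = np.random.default_rng(seed)
    steps = rng.normal(scale=step_size, size=(n_steps, 2))
    path = np.cumsum(steps, axis=0) % box                    # positions in a periodic box
    inside = np.linalg.norm(path - box / 2, axis=1) < reach  # near the central target?
    # Count distinct arrivals: frames where the walker is inside but wasn't before.
    return int(np.sum(inside[1:] & ~inside[:-1]) + inside[0])

normal = encounter_events(step_size=0.10)
slowed = encounter_events(step_size=0.07)   # ~30% smaller steps (illustrative choice)
print(f"distinct target encounters -- normal: {normal}, slowed: {slowed}")
```

Running it shows markedly fewer distinct arrivals for the slower walker, loosely mirroring the reduced functional output the researchers measured for slowed proteins.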

Discovering the protein mobility problem

Young and colleagues first suspected that cells affected in chronic disease might have a protein mobility problem after observing changes in the behavior of the insulin receptor, a signaling protein that reacts to the presence of insulin and causes cells to take in sugar from blood. In people with diabetes, cells become less responsive to insulin — a state called insulin resistance — causing too much sugar to remain in the blood. In research published on insulin receptors in Nature Communications in 2022, Young and colleagues reported that insulin receptor mobility might be relevant to diabetes.

Knowing that many cellular functions are altered in diabetes, the researchers considered the possibility that altered protein mobility might somehow affect many proteins in cells. To test this hypothesis, they studied proteins involved in a broad range of cellular functions, including MED1, a protein involved in gene expression; HP1α, a protein involved in gene silencing; FIB1, a protein involved in production of ribosomes; and SRSF2, a protein involved in splicing of messenger RNA. They used single-molecule tracking and other methods to measure how each of those proteins moves in healthy cells and in cells in disease states. All but one of the proteins showed reduced mobility (about 20-35%) in the disease cells.
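
Single-molecule tracking typically quantifies mobility by fitting the mean-squared displacement (MSD) of individual trajectories; for simple two-dimensional diffusion, MSD(τ) ≈ 4Dτ, so a slower protein shows a shallower slope. The sketch below estimates an apparent diffusion coefficient from one simulated track; the frame interval, the true diffusion coefficient, and the simulation itself are assumptions for illustration, not the study’s analysis code.

```python
# Estimate an apparent diffusion coefficient from one simulated 2-D track.
# For pure two-dimensional Brownian motion, MSD(tau) = 4 * D * tau.
import numpy as np

rng = np.random.default_rng(3)
dt = 0.01          # assumed frame interval, seconds
D_true = 0.5       # assumed diffusion coefficient, um^2/s
n_frames = 2000

# Simulate a track: Gaussian steps with per-axis variance 2 * D * dt
track = np.cumsum(rng.normal(scale=np.sqrt(2 * D_true * dt), size=(n_frames, 2)), axis=0)

def mean_squared_displacement(track, max_lag=20):
    lags = np.arange(1, max_lag + 1)
    msd = [np.mean(np.sum((track[lag:] - track[:-lag]) ** 2, axis=1)) for lag in lags]
    return lags, np.array(msd)

lags, msd = mean_squared_displacement(track)
slope = np.polyfit(lags * dt, msd, 1)[0]      # linear fit of MSD vs. time lag
print(f"apparent D = {slope / 4:.2f} um^2/s (simulated with D = {D_true})")
```

In the disease-state cells, equivalent measurements for most of the tracked proteins came out on the order of 20 to 35 percent lower than in healthy cells.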

“I’m excited that we were able to transfer physics-based insight and methodology, which are commonly used to understand the single-molecule processes like gene transcription in normal cells, to a disease context and show that they can be used to uncover unexpected mechanisms of disease,” Zheng says. “This work shows how the random walk of proteins in cells is linked to disease pathology.”

Moreno concurs: “In school, we’re taught to consider changes in protein structure or DNA sequences when looking for causes of disease, but we’ve demonstrated that those are not the only contributing factors. If you only consider a static picture of a protein or a cell, you miss out on discovering these changes that only appear when molecules are in motion.”

Can’t commute across the cell, I’m all tied up right now

Next, the researchers needed to determine what was causing the proteins to slow down. They suspected that the defect had to do with an increase in cells of the level of reactive oxygen species (ROS), molecules that are highly prone to interfering with other molecules and their chemical reactions. Many types of chronic-disease-associated triggers, such as higher sugar or fat levels, certain toxins, and inflammatory signals, lead to an increase in ROS, also known as an increase in oxidative stress. The researchers measured the mobility of the proteins again, in cells that had high levels of ROS and were not otherwise in a disease state, and saw comparable mobility defects, suggesting that oxidative stress was to blame for the protein mobility defect.

The final part of the puzzle was why some, but not all, proteins slow down in the presence of ROS. SRSF2 was the only one of the proteins that was unaffected in the experiments, and it had one clear difference from the others: its surface did not contain any cysteines, an amino acid building block of many proteins. Cysteines are especially susceptible to interference from ROS, which cause them to bond to other cysteines. When this bonding occurs between two protein molecules, it slows them down because the two proteins cannot move through the cell as quickly as either protein alone.

About half of the proteins in our cells contain surface cysteines, so this single protein mobility defect can impact many different cellular pathways. This makes sense when one considers the diversity of dysfunctions that appear in cells of people with chronic diseases: dysfunctions in cell signaling, metabolic processes, gene expression and gene silencing, and more. All of these processes rely on the efficient functioning of proteins—including the diverse proteins studied by the researchers. Young and colleagues performed several experiments to confirm that decreased protein mobility does in fact decrease a protein’s function. For example, they found that when an insulin receptor experiences decreased mobility, it acts less efficiently on IRS1, a molecule to which it usually adds a phosphate group.

From understanding a mechanism to treating a disease

Discovering that decreased protein mobility in the presence of oxidative stress could be driving many of the symptoms of chronic disease provides opportunities to develop therapies to rescue protein mobility. In the course of their experiments, the researchers treated cells with an antioxidant drug—something that reduces ROS—called N-acetyl cysteine and saw that this partially restored protein mobility.

The researchers are pursuing a variety of follow ups to this work, including the search for drugs that safely and efficiently reduce ROS and restore protein mobility. They developed an assay that can be used to screen drugs to see if they restore protein mobility by comparing each drug’s effect on a simple biomarker with surface cysteines to one without. They are also looking into other diseases that may involve protein mobility, and are exploring the role of reduced protein mobility in aging.

“The complex biology of chronic diseases has made it challenging to come up with effective therapeutic hypotheses,” says Young, who is also a professor of biology at the Massachusetts Institute of Technology. “The discovery that diverse disease-associated stimuli all induce a common feature, proteolethargy, and that this feature could contribute to much of the dysregulation that we see in chronic disease, is something that I hope will be a real game changer for developing drugs that work across the spectrum of chronic diseases.”

KI Gallery Exhibit: Artifacts from a half century of cancer research

Celebrating 50 years of MIT's cancer research program and the individuals who have shaped its journey, the Koch Institute Gallery features 10 significant artifacts, from one of the earliest PCR machines, developed in the lab of Nobel laureate H. Robert Horvitz, to a preserved zebrafish from the lab of Nancy Hopkins, on display in the Koch Institute Public Galleries. Visit Monday through Friday, 9 a.m.-5 p.m.

Koch Institute
November 21, 2024

Throughout 2024, MIT’s Koch Institute for Integrative Cancer Research has celebrated 50 years of MIT’s cancer research program and the individuals who have shaped its journey. In honor of this milestone anniversary year, on November 19, the Koch Institute celebrated the opening of a new exhibition: Object Lessons: Celebrating 50 Years of Cancer Research at MIT in 10 Items. Object Lessons invites the public to explore significant artifacts—from one of the earliest PCR machines, developed in the lab of Nobel laureate H. Robert Horvitz, to Greta, a groundbreaking zebrafish from the lab of Professor Nancy Hopkins—in the half century of discoveries and advancements that have positioned MIT at the forefront of the fight against cancer.

50 years of innovation

The exhibition provides a glimpse into the many contributors and advancements that have defined MIT’s cancer research history since the founding of the Center for Cancer Research in 1974. When the National Cancer Act was passed in 1971, very little was understood about the biology of cancer; the act aimed to deepen that understanding and to develop better strategies for the prevention, detection, and treatment of the disease. MIT embraced this call to action, establishing a center where many leading biologists tackled cancer’s fundamental questions. Building on this foundation, the Koch Institute opened its doors in 2011, housing engineers and life scientists from many fields under one roof to accelerate progress against cancer in novel and transformative ways.

In the 13 years since, the Koch Institute’s collaborative and interdisciplinary approach to cancer research has yielded significant advances in our understanding of the underlying biology of cancer and allowed for the translation of these discoveries into meaningful patient impacts. Over 120 spin-out companies—many headquartered nearby in the Kendall Square area—have their roots in Koch Institute research, with nearly half having advanced their technologies to clinical trials or commercial applications. The Koch Institute’s collaborative approach extends beyond its labs: principal investigators often form partnerships with colleagues at world-renowned medical centers, bridging the gap between discovery and clinical impact.

Current Koch Institute Director Matthew Vander Heiden, also a practicing oncologist at the Dana-Farber Cancer Institute, is driven by patient stories.

“It is never lost on us that the work we do in the lab is important to change the reality of cancer for patients,” he says. “We are constantly motivated by the urgent need to translate our research and improve outcomes for those impacted by cancer.”

Symbols of progress

The items on display as part of Object Lessons take viewers on a journey through five decades of MIT cancer research, from the pioneering days of Salvador Luria, founding director of the Center for Cancer Research, to some of the Koch Institute’s newest investigators including Francisco Sánchez-Rivera, Eisen and Chang Career Development Professor and an assistant professor of biology, and Jessica Stark, Underwood-Prescott Career Development Professor and an assistant professor of biological engineering and chemical engineering.

Among the standout pieces is a humble yet iconic object: Salvador Luria’s ceramic mug, emblazoned with “Luria’s broth.” Lysogeny broth, often called—apocryphally—Luria Broth, is a medium for growing bacteria. Still in use today, the recipe was first published in 1951 by a research associate in Luria’s lab. The artifact, on loan from the MIT Museum, symbolizes the foundational years of the Center for Cancer Research and serves as a reminder of Luria’s influence as an early visionary. His work set the stage for a new era of biological inquiry that would shape cancer research at MIT for generations.

Visitors can explore firsthand how the Koch Institute continues to build on the legacy of its predecessors, translating decades of knowledge into new tools and therapies that have the potential to transform patient care and cancer research.

For instance, the PCR machine designed in the Horvitz Lab in the 1980s made genetic manipulation of cells easier, and gene sequencing faster and more cost-effective. At the time of its commercialization, this groundbreaking benchtop unit marked a major leap forward. In the decades since, technological advances have allowed for the visualization of DNA and biological processes at a much smaller scale, as demonstrated by the handheld BioBits® imaging device developed by Stark and on display next door to the Horvitz panel.

“We created BioBits kits to address a need for increased equity in STEM education,” Stark says. “By making hands-on biology education approachable and affordable, BioBits kits are helping inspire and empower the next generation of scientists.” 

While the exhibition showcases scientific discoveries and marvels of engineering, it also aims to underscore the human element of cancer research through personally significant items, such as a messenger bag and Seq-Well device belonging to Alex Shalek, J. W. Kieckhefer Professor in the Institute for Medical Engineering and Science and the Department of Chemistry.

Shalek investigates the molecular differences between individual cells, developing mobile RNA-sequencing devices. He could often be seen toting the bag around the Boston area, and worldwide as he perfected and shared his technology with collaborators near and far. Through his work, Shalek has helped to make single cell sequencing accessible for labs in more than 30 countries across six continents.

“The KI seamlessly brings together students, staff, clinicians, and faculty across multiple different disciplines to collaboratively derive transformative insights into cancer,” Shalek says. “To me, these sorts of partnerships are the best part about being at MIT.”

Around the corner from Shalek’s display, visitors will find an object that serves as a stark reminder of the real people impacted by Koch Institute research: Steven Keating SM ’12, PhD ’16’s 3D-printed model of his own brain tumor. Keating, who passed away in 2019, became a fierce advocate for the rights of patients to their medical data, and came to know Vander Heiden through his pursuit to become an expert on his tumor type, IDH-mutant glioma. In the years since, Vander Heiden’s work has contributed to a new therapy to treat Keating’s tumor type. In 2024, the drug, called vorasidenib, gained FDA approval, providing the first therapeutic breakthrough for Keating’s cancer in more than 20 years.

As the Koch Institute looks to the future, Object Lessons stands as a celebration of the people, the science, and the culture that have defined MIT’s first half-century of breakthroughs and contributions to the field of cancer research.

“Working in the uniquely collaborative environment of the Koch Institute and MIT, I am confident that we will continue to unlock key insights in the fight against cancer,” says Vander Heiden. “Our community is poised to embark on our next 50 years with the same passion and innovation that has carried us this far.”

Object Lessons will be on view in the Koch Institute Public Galleries. Visit Monday through Friday, 9 a.m. to 5 p.m., to see the exhibit up close.

A blueprint for better cancer immunotherapies

By examining antigen architectures, MIT researchers built a therapeutic cancer vaccine that may improve tumor response to immune checkpoint blockade treatments.

Bendta Schroeder | Koch Institute
November 25, 2024

Immune checkpoint blockade (ICB) therapies can be very effective against some cancers by helping the immune system recognize cancer cells that are masquerading as healthy cells.

T cells are built to recognize specific pathogens or cancer cells, which they identify from the short fragments of proteins presented on their surface. These fragments are often referred to as antigens. Healthy cells will not have the same short fragments, or antigens, on their surface, and thus will be spared from attack.

Even with cancer-associated antigens studding their surfaces, tumor cells can still escape attack by presenting a checkpoint protein, which is built to turn off the T cell. Immune checkpoint blockade therapies bind to these “off-switch” proteins and allow the T cell to attack.

Researchers have established that how cancer-associated antigens are distributed throughout a tumor determines how it will respond to checkpoint therapies. Tumors with the same antigen signal across most of their cells respond well, but heterogeneous tumors, with subpopulations of cells that each carry different antigens, do not. The overwhelming majority of tumors fall into this latter category and are characterized by heterogeneous antigen expression. Because the mechanisms linking antigen distribution to tumor response are poorly understood, efforts to improve ICB therapy response in heterogeneous tumors have been hindered.

In a new study, MIT researchers analyzed antigen expression patterns and associated T cell responses to better understand why patients with heterogenous tumors respond poorly to ICB therapies. In addition to identifying specific antigen architectures that determine how immune systems respond to tumors, the team developed an RNA-based vaccine that, when combined with ICB therapies, was effective at controlling tumors in mouse models of lung cancer.

Stefani Spranger, associate professor of biology and member of MIT’s Koch Institute for Integrative Cancer Research, is the senior author of the study, appearing recently in the Journal for Immunotherapy of Cancer. Other contributors include Koch Institute colleague Forest White, the Ned C. (1949) and Janet Bemis Rice Professor and professor of biological engineering at MIT, and Darrell Irvine, professor of immunology and microbiology at Scripps Research Institute and a former member of the Koch Institute.

While RNA vaccines are being evaluated in clinical trials, current practice of antigen selection is based on the predicted stability of antigens on the surface of tumor cells.

“It’s not so black-and-white,” says Spranger. “Even antigens that don’t make the numerical cut-off could be really valuable targets. Instead of just focusing on the numbers, we need to look inside the complex interplays between antigen hierarchies to uncover new and important therapeutic strategies.”

Spranger and her team created mouse models of lung cancer with a number of different and well-defined expression patterns of cancer-associated antigens in order to analyze how each antigen impacts T cell response. They created both “clonal” tumors, with the same antigen expression pattern across cells, and “subclonal” tumors that represent a heterogeneous mix of tumor cell subpopulations expressing different antigens. In each type of tumor, they tested different combinations of antigens with strong or weak binding affinity to MHC, the major histocompatibility complex proteins that display antigens on the cell surface.

The researchers found that the keys to immune response were how widely an antigen is expressed across a tumor, which other antigens are expressed at the same time, and the relative binding strength and other characteristics of the antigens expressed by the different cell populations in the tumor.

As expected, mouse models with clonal tumors were able to mount an immune response sufficient to control tumor growth when treated with ICB therapy, no matter which combinations of weak or strong antigens were present. However, the team discovered that the relative strength of antigens present resulted in dynamics of competition and synergy between T cell populations, mediated by immune recognition specialists called cross-presenting dendritic cells in tumor-draining lymph nodes. In pairings of two weak or two strong antigens, one resulting T cell population would be reduced through competition. In pairings of weak and strong antigens, overall T cell response was enhanced.

In subclonal tumors, with different cell populations emitting different antigen signals, competition rather than synergy was the rule, regardless of antigen combination. Tumors with a subclonal cell population expressing a strong antigen were well controlled under ICB treatment at first, but eventually the parts of the tumor lacking the strong antigen began to grow, evading immune attack and resisting ICB therapy.

Incorporating these insights, the researchers then designed an RNA-based vaccine to be delivered in combination with ICB treatment with the goal of strengthening immune responses suppressed by antigen-driven dynamics. Strikingly, they found that no matter the binding affinity or other characteristics of the antigen targeted, the vaccine-ICB therapy combination was able to control tumors in mouse models. The widespread availability of an antigen across tumor cells determined the vaccine’s success, even if that antigen was associated with weak immune response.

Analysis of clinical data across tumor types showed that the vaccine-ICB therapy combination may be an effective strategy for treating patients with highly heterogeneous tumors. Patterns of antigen architecture in patient tumors correlated with T cell synergy or competition in mouse models and determined responsiveness to ICB in cancer patients. In future work with the Irvine laboratory at the Scripps Research Institute, the Spranger laboratory will further optimize the vaccine with the aim of testing the therapy strategy in the clinic.

Whitehead Institute Member Sebastian Lourido receives the 2024 William Trager Award

Sebastian Lourido was awarded the 2024 William Trager Award by the American Society of Tropical Medicine and Hygiene for his pioneering use of CRISPR tools to study the biology of Toxoplasma gondii, a single-celled parasite that infects about 25% of humans.

Merrill Meadow | Whitehead Institute
November 14, 2024

The Trager Award recognizes scientists who have made substantial contributions to the study of basic parasitology through breakthroughs that have unlocked completely new areas of work.

ASTMH selected Lourido — who is also an associate professor of Biology at Massachusetts Institute of Technology and holds the Landon Clay Career Development Chair at Whitehead Institute — in recognition of his groundbreaking discoveries on the molecular biology of Toxoplasma. In particular, Lourido has been lauded for his use of cutting-edge CRISPR tools to study the fundamental biology of Toxoplasma gondii, a single-celled parasite that infects about 25 percent of humans.

“My laboratory colleagues and I are grateful for this recognition of our work, and for the wonderful opportunity it presents to more widely share the ideas and tools we have developed,” says Lourido, who will deliver a talk on his research at the ASTMH Annual Meeting in New Orleans on Nov. 15, 2024.

Research findings: Open technology platform enables new versatility for neuroscience research with more naturalistic behavior

System developed by MIT researchers, including co-author Matthew Wilson, and the Open Ephys team provides a fast, light, standardized means for combining multiple instruments with minimal hindrance of lab mouse mobility.

David Orenstein | The Picower Institute for Learning and Memory
November 13, 2024

Individual technologies for recording and controlling neural activity in the brains of research mice have each advanced rapidly but the potential of easily mixing and matching them to conduct more sophisticated experiments, all while enabling the most natural behavior possible, has been difficult to realize. To empower a new generation of neuroscience experiments, engineers and scientists at MIT and the Open Ephys cooperative have developed a new standardized, open-source hardware and software platform. They described the system, called ONIX, in a new study Nov. 11 in Nature Methods.

ONIX provides labs with a means to acquire data simultaneously from multiple popular implanted technologies (such as electrodes, microscopes and stimulation probes) while also powering and controlling those independent devices via a very thin coaxial cable and unimposing headstage. The system provides a standardized means of acquiring each instrument’s data and neatly integrating it all for efficient transmission to desktop software where scientists can then see and work with it. In the study the researchers document ONIX’s high data throughput and low latency. They also demonstrate that because the system’s headstage and cable are so physically light and resistant to twisting, mice can behave completely naturally and wear the system for days on end. In a large enclosure at MIT with a complex 3D landscape, for instance, mice wearing the system were able to nimbly scamper, climb and leap in experiments comparably to mice wearing no hardware at all.

“ONIX represents the culmination of many quantitative improvements that all come together to enable a qualitative leap in our ability to perform neural recordings in naturalistic behavior,” said corresponding author Jakob Voigts, an MIT neuroscience alumnus, co-founder of Open Ephys, and a research group leader at the Janelia Research Campus of the Howard Hughes Medical Institute. “We can now study the brain during behaviors that unfold over many hours and allow the animals to learn, to make a lot of complex decisions, and to interact with the world in ways that were previously not accessible.”

Jon Newman, a former MIT postdoc and now president of Open Ephys, and MIT postdoc Jie “Jack” Zhang led the work in the lab of co-author Matt Wilson, Sherman Fairchild Professor in The Picower Institute for Learning and Memory at MIT, together with Aarón Cuevas-López at Open Ephys. Wilson, whose lab studies neural processes underlying memory, said the idea behind developing ONIX was to develop a set of standards that would make it easy for any lab to use multiple technologies to acquire rich neural data while animals performed complex behaviors over long time periods.

“Jon’s motivation, the principle he used, was that if we need to do experiments that combined things like optogenetics, imaging, tetrode electrophysiology, and neuropixels, could we do it in a way that would not only enable experiments we were doing but also more complex experiments, involving more complex behavior, involving the integration of different recording methodologies that advances the whole community and not just one individual lab?” said Wilson, a faculty member in MIT’s Departments of Biology and Brain and Cognitive Sciences (BCS).

Open origins

As Newman and then Zhang began to develop the technology starting in 2016 with this community-minded, open-source philosophy, Wilson said, it was natural to do so in partnership with Open Ephys, an MIT-born effort, now based in Atlanta, that develops and disseminates open, standardized systems for neuroscience research. Making systems open-source provides researchers with many advantages, Voigts explained.

“Anyone can download the plans for the hardware as well as the software that make up the system,” Voigts said. “For technically well-versed neuroscientists this means that it is easier to modify aspects of the system. Open source also means that the system works with probes from many manufacturers because the connectors and standards aren’t proprietary. Most importantly, the open standards and design allow hardware and software developers to use ONIX as a starting point for completely new tools.”

Voigts compared ONIX to the USB standard people enjoy on their computers and phones. Any number of accessories can easily work with those devices because all they have to do is plug in. Similarly with ONIX, Wilson said, “You can mix and match and combine and then add new technologies without having to re-engineer the whole system.”

Lab demos

To validate the platform, the researchers conducted several experiments with mice including in Wilson’s lab and in the lab of co-author Mark Harnett, Associate Professor in the McGovern Institute for Brain Research and BCS Department at MIT (where Voigts did his postdoctoral work).

In their experiments they compared the mobility of mice implanted with electrodes when wearing ONIX (and its 0.3 mm tether cable) vs. when wearing a commonly used but substantially thicker (1.8 mm) tether cable over an 8-hour neural recording session. The mice proved to be much more mobile while wearing the lighter and thinner ONIX system, showing a broader range of exploration, freer head movement, and much faster running speeds. In a similar experiment in which mice were implanted with tetrodes in the brain’s retrosplenial cortex, they were even able to jump while wearing ONIX but did not while wearing the more imposing tether. In another experiment the researchers compared mouse mobility around the enclosure between ONIX-wearing and completely unimplanted mice. The mice explored with equal freedom (as measured by motion tracking cameras), though the ONIX mice didn’t run as fast as unimplanted mice.

In further experiments, Voigts’s team at Janelia used ONIX to record for 55 hours because the system kept its cable tangle-free over that long-duration activity.

Finally the researchers showed that ONIX could transmit recordings not only from implanted electrodes and tetrodes but also from miniscopes and neuropixels, via experiments at the Allen Institute for Brain Science. They also showed how Open Ephys’s data acquisition software Bonsai (developed by co-author Goncalo Lopes) enabled the brain activity recordings to be synchronized with behavior tracking cameras to correlate neural activity and behavior.

Voigts said he hopes the system earns widespread adoption, especially as hardware costs continue to come down.

“I hope that this system convinces others to take the plunge and record neural data in more complex animal behaviors,” he said.

In addition to the authors named above, other authors are Nicholas Miller, Takato Honda, Marie-Sophie van der Goes, Alexandra Leighton, Felipe Carvalho, Anna Lakunina, and Joshua Siegle, who co-founded Open Ephys with Voigts.

Funding for the study came from the National Institutes of Health, The Picower Institute for Learning and Memory, The JPB Foundation, the National Science Foundation, a Brain Science Foundation Research Grant Award, a Kavli-Grass-MBL Fellowship by the Kavli Foundation, the Grass Foundation, and the Marine Biological Laboratory (MBL), an Osamu Hayaishi Memorial Scholarship for Study Abroad, a Uehara Memorial Foundation Overseas Fellowship, a Japan Society for the Promotion of Science (JSPS) Overseas Fellowship, a Mathworks Graduate Fellowship, the Simons Center for the Social Brain at MIT, and the Howard Hughes Medical Institute.

A new approach to modeling complex biological systems

MIT engineers’ new model could help researchers glean insights from genomic data and other huge datasets. This is potentially critical to researchers who study any kind of complex biological system, according to senior author Douglas Lauffenburger.

Anne Trafton | MIT News
November 5, 2024

Over the past two decades, new technologies have helped scientists generate a vast amount of biological data. Large-scale experiments in genomics, transcriptomics, proteomics, and cytometry can produce enormous quantities of data from a given cellular or multicellular system.

However, making sense of this information is not always easy. This is especially true when trying to analyze complex systems such as the cascade of interactions that occur when the immune system encounters a foreign pathogen.

MIT biological engineers have now developed a new computational method for extracting useful information from these datasets. Using their new technique, they showed that they could unravel a series of interactions that determine how the immune system responds to tuberculosis vaccination and subsequent infection.

This strategy could be useful to vaccine developers and to researchers who study any kind of complex biological system, says Douglas Lauffenburger, the Ford Professor of Engineering in the departments of Biological Engineering, Biology, and Chemical Engineering.

“We’ve landed on a computational modeling framework that allows prediction of effects of perturbations in a highly complex system, including multiple scales and many different types of components,” says Lauffenburger, the senior author of the new study.

Shu Wang, a former MIT postdoc who is now an assistant professor at the University of Toronto, and Amy Myers, a research manager in the lab of University of Pittsburgh School of Medicine Professor JoAnne Flynn, are the lead authors of a new paper on the work, which appears today in the journal Cell Systems.

Modeling complex systems

When studying complex biological systems such as the immune system, scientists can extract many different types of data. Sequencing cell genomes tells them which gene variants a cell carries, while analyzing messenger RNA transcripts tells them which genes are being expressed in a given cell. Using proteomics, researchers can measure the proteins found in a cell or biological system, and cytometry allows them to quantify a myriad of cell types present.

Using computational approaches such as machine learning, scientists can use this data to train models to predict a specific output based on a given set of inputs — for example, whether a vaccine will generate a robust immune response. However, that type of modeling doesn’t reveal anything about the steps that happen in between the input and the output.

“That AI approach can be really useful for clinical medical purposes, but it’s not very useful for understanding biology, because usually you’re interested in everything that’s happening between the inputs and outputs,” Lauffenburger says. “What are the mechanisms that actually generate outputs from inputs?”

To create models that can identify the inner workings of complex biological systems, the researchers turned to a type of model known as a probabilistic graphical network. These models represent each measured variable as a node, generating maps of how each node is connected to the others.

Probabilistic graphical networks are often used for applications such as speech recognition and computer vision, but they have not been widely used in biology.

Lauffenburger’s lab has previously used this type of model to analyze intracellular signaling pathways, which required analyzing just one kind of data. To adapt this approach to analyze many datasets at once, the researchers applied a mathematical technique that can filter out any correlations between variables that are not directly affecting each other. This technique, known as graphical lasso, is an adaptation of the method often used in machine learning models to strip away results that are likely due to noise.

“With correlation-based network models generally, one of the problems that can arise is that everything seems to be influenced by everything else, so you have to figure out how to strip down to the most essential interactions,” Lauffenburger says. “Using probabilistic graphical network frameworks, one can really boil down to the things that are most likely to be direct and throw out the things that are most likely to be indirect.”
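
For the computationally inclined, the graphical lasso step can be sketched with scikit-learn: it estimates a sparse inverse-covariance (precision) matrix, and nonzero off-diagonal entries are read as candidate direct dependencies, while correlations explained by intermediaries are pruned away. The example below is generic and uses invented variables arranged in a simple chain; it is not the authors’ Cell Systems code.

```python
# Generic graphical-lasso sketch: recover direct dependencies from correlated
# measurements (synthetic chain-structured data, illustrative only).
import numpy as np
from sklearn.covariance import GraphicalLassoCV

rng = np.random.default_rng(7)
n_samples = 200    # e.g., animals x time points in a real dataset (assumed)

# Invented chain of dependencies: cytokine -> T cells -> B cells -> antibody titer
cytokine = rng.normal(size=n_samples)
t_cells  = 0.8 * cytokine + rng.normal(scale=0.5, size=n_samples)
b_cells  = 0.8 * t_cells + rng.normal(scale=0.5, size=n_samples)
antibody = 0.8 * b_cells + rng.normal(scale=0.5, size=n_samples)
X = np.column_stack([cytokine, t_cells, b_cells, antibody])
names = ["cytokine", "T cells", "B cells", "antibody"]

np.set_printoptions(precision=2, suppress=True)
print(np.corrcoef(X, rowvar=False))   # raw correlations: everything looks linked

model = GraphicalLassoCV().fit(X)     # sparsity level chosen by cross-validation
print(model.precision_)               # sparse inverse covariance: chain neighbors
                                      # keep clearly nonzero entries, while the
                                      # cytokine-antibody entry should be near zero
```

A probabilistic graphical model can then be read off the nonzero pattern of that precision matrix, which is exactly the direct-versus-indirect distinction Lauffenburger describes.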

Mechanism of vaccination

To test their modeling approach, the researchers used data from studies of a tuberculosis vaccine. This vaccine, known as BCG, is an attenuated form of Mycobacterium bovis. It is used in many countries where TB is common, but it isn’t always effective, and its protection can weaken over time.

In hopes of developing more effective TB protection, researchers have been testing whether delivering the BCG vaccine intravenously or by inhalation might provoke a better immune response than injecting it. Those studies, performed in animals, found that the vaccine did work much better when given intravenously. In the MIT study, Lauffenburger and his colleagues attempted to discover the mechanism behind this success.

The data that the researchers examined in this study included measurements of about 200 variables, including levels of cytokines, antibodies, and different types of immune cells, from about 30 animals.

The measurements were taken before vaccination, after vaccination, and after TB infection. By analyzing the data using their new modeling approach, the MIT team was able to determine the steps needed to generate a strong immune response. They showed that the vaccine stimulates a subset of T cells, which produce a cytokine that activates a set of B cells that generate antibodies targeting the bacterium.

“Almost like a roadmap or a subway map, you could find what were really the most important paths. Even though a lot of other things in the immune system were changing one way or another, they were really off the critical path and didn’t matter so much,” Lauffenburger says.

The researchers then used the model to make predictions for how a specific disruption, such as suppressing a subset of immune cells, would affect the system. The model predicted that if B cells were nearly eliminated, there would be little impact on the vaccine response, and experiments showed that prediction was correct.

This modeling approach could be used by vaccine developers to predict the effect their vaccines may have, and to make tweaks that would improve them before testing them in humans. Lauffenburger’s lab is now using the model to study the mechanism of a malaria vaccine that has been given to children in Kenya, Ghana, and Malawi over the past few years.

“The advantage of this computational approach is that it filters out many biological targets that only indirectly influence the outcome and identifies those that directly regulate the response. Then it’s possible to predict how therapeutically altering those biological targets would change the response. This is significant because it provides the basis for future vaccine and trial designs that are more data driven,” says Kathryn Miller-Jensen, a professor of biomedical engineering at Yale University, who was not involved in the study.

Lauffenburger’s lab is also using this type of modeling to study the tumor microenvironment, which contains many types of immune cells and cancerous cells, in hopes of predicting how tumors might respond to different kinds of treatment.

The research was funded by the National Institute of Allergy and Infectious Diseases.

Sauer & Davis Lab News Brief: structures of molecular woodchippers reveal mechanism for versatility

Rest in pieces: deconstructing polypeptide degradation machinery

Lillian Eden | Department of Biology
November 12, 2024

Research from the Sauer and Davis Labs in the Department of Biology at MIT shows that conformational changes contribute to the specificity of “molecular woodchippers” 

Degradation is a crucial process for maintaining protein homeostasis by culling excess or damaged proteins whose components can then be recycled. It is also a highly regulated process—for good reason. A cell could potentially waste many resources if the degradation machinery destroys proteins it shouldn’t. 

One of the major pathways for protein degradation in bacteria and eukaryotic mitochondria involves a molecular machine called ClpXP. ClpXP is made up of two components: ClpX, a star-shaped structure composed of six subunits that engages and unfolds proteins tagged for degradation, and an associated barrel-shaped enzyme, ClpP, that chemically breaks proteins into small pieces called peptides. 

ClpXP is incredibly adaptable and is often compared to a woodchipper — able to take in materials and spit out their broken-down components. Thanks to biochemical experiments, this molecular degradation machine is known to be able to break down hundreds of different proteins in the cell regardless of physical or chemical properties such as size, shape, or charge. ClpX uses energy from ATP hydrolysis to unfold proteins before they are threaded through its central channel, referred to as the axial channel, and into the degradation chamber of ClpP.

In three papers, one in PNAS and two in Nature Communications, researchers from the Department of Biology at MIT have expanded our understanding of how this molecular machinery engages with, unfolds, and degrades proteins — and how that machinery refrains, by design, from unfolding proteins not tagged for degradation. 

Alireza Ghanbarpour, until recently a postdoc in the Sauer Lab and Davis Lab and first author on all three papers, began with a simple question: given the vast repertoire of potential substrates — that is, proteins to be degraded — how is ClpXP so specific?

Ghanbarpour — now an assistant professor in the Department of Biochemistry and Molecular Biology at Washington University School of Medicine in St. Louis — found that the answer to this question lies in conformational changes in the molecular machine as it engages with an ill-fated protein. 

Reverse Engineering using Structural Insights

Ghanbarpour approached the question of ClpXP’s versatility by characterizing conformational changes of the molecular machine using a technique called cryogenic electron microscopy. In cryo-EM, sample particles are frozen in solution, and images are collected; algorithms then create 3D renderings from the 2D images.
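The reconstruction step can be illustrated with a toy two-dimensional analogue using scikit-image's tomography routines. This is only a conceptual sketch: real single-particle cryo-EM works in three dimensions and must also infer each particle's unknown orientation, neither of which this example attempts.

```python
# 2D analogue of reconstruction from projections, using scikit-image.
# Real single-particle cryo-EM must also infer each particle's unknown
# orientation; here the projection angles are known, so this is only a
# conceptual illustration of the "many 2D views -> one structure" step.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, resize

true_object = resize(shepp_logan_phantom(), (128, 128))   # stand-in "particle"

# Collect projections ("images") over many angles.
angles = np.linspace(0.0, 180.0, 120, endpoint=False)
projections = radon(true_object, theta=angles)

# Filtered back-projection recombines the projections into a reconstruction.
reconstruction = iradon(projections, theta=angles)

error = np.sqrt(np.mean((reconstruction - true_object) ** 2))
print(f"RMS error of reconstruction: {error:.4f}")
```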

“It’s really useful to generate different structures in different conditions and then put them together until you know how a machine works,” he says. “I love structural biology, and these molecular machines make fascinating targets for structural work and biochemistry. Their structural plasticity and precise functions offer exciting opportunities to understand how nature leverages enzyme conformations to generate novel functions and tightly regulate protein degradation within the cell.”

Inside the cell, these proteases do not work alone but instead work together with “adaptor” proteins, which can promote — or inhibit — degradation by ClpXP. One of the adaptor proteins that promotes degradation by ClpXP is SspB. 

In E. coli and most other bacteria, ClpXP and SspB interact with a tag called ssrA that is added to incomplete proteins when their biosynthesis on ribosomes stalls. 

The tagging process frees up the ribosome to make more proteins, but it creates a problem: incomplete proteins are prone to aggregation, which can be detrimental to cellular health and lead to disease. By interacting with the degradation tag, ClpXP and SspB help to ensure the degradation of these incomplete proteins. Understanding this process, and how it can go awry, may open therapeutic avenues in the future.

“It wasn’t clear how certain adapters were interacting with the substrate and the molecular machines during substrate delivery,” Ghanbarpour notes. “My recent structure reveals that the adapter engages with the enzyme, reaching deep into the axial channel to deliver the substrate.” 

Ghanbarpour and colleagues showed that ClpX engages with both the SspB adaptor and the ssrA degradation tag of an ill-fated protein at the same time. Surprisingly, they also found that this interaction occurs while the upper part of the axial channel through ClpX is closed — in fact, the closed channel allows ClpX to contact both the tag and the adaptor simultaneously.

According to senior author Robert Sauer, the Salvador E. Luria Professor of Biology, whose lab has worked to understand this molecular machine for more than two decades, it had been unclear whether the channel through ClpX closes in response to a substrate interaction or whether it stays closed until it opens to pass an unfolded protein down to ClpP for degradation.

Preventing Rogue Degradation

Throughout this project, Ghanbarpour was co-advised by structural biologist and Associate Professor of Biology Joey Davis and collaborated with members of the Davis Lab to better understand the conformational changes that allow these molecular machines to function. Using CryoDRGN, a cryo-EM analysis approach developed in the Davis Lab, the researchers showed that there is an equilibrium between the open and closed states of ClpXP: the channel is usually closed but is open in about 10% of the particles in their samples.
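Quantifying such an equilibrium ultimately comes down to the fraction of particles assigned to each conformational state. The short sketch below uses simulated state labels, not the study's data, to show how an open-state fraction and a confidence interval on it might be estimated once per-particle assignments are in hand.

```python
# Sketch: estimating the open/closed equilibrium from per-particle labels.
# The state assignments here are simulated; in the actual study they come
# from heterogeneity analysis of the cryo-EM particle images.
import numpy as np

rng = np.random.default_rng(0)
n_particles = 20_000
# Simulate labels: True = open channel, False = closed channel (~10% open).
labels = rng.random(n_particles) < 0.10

open_fraction = labels.mean()

# Bootstrap a 95% confidence interval on the open fraction.
boot = np.array([
    rng.choice(labels, size=n_particles, replace=True).mean()
    for _ in range(1_000)
])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"open fraction: {open_fraction:.3f}  (95% CI {lo:.3f}-{hi:.3f})")
```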

The closed state is almost identical to the conformation ClpXP assumes when it is engaged with an ssrA-tagged substrate and the SspB adaptor. 

To better understand the biological significance of this equilibrium, Ghanbarpour created a mutant of ClpXP that is always in the open position. Compared to normal ClpXP, the mutant degraded some proteins lacking obvious degradation tags faster but degraded ssrA-tagged proteins more slowly. 

According to Ghanbarpour, these results indicate that the closed channel improves ClpXP’s ability to efficiently engage tagged proteins meant to be degraded, whereas the open channel allows more “promiscuous” degradation. 
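The kinetic comparison behind “faster” and “more slowly” is typically made by fitting degradation time courses to a simple rate law. The sketch below uses synthetic data chosen only to mirror the qualitative pattern described above; the substrates, rates, and noise levels are hypothetical.

```python
# Sketch: comparing degradation kinetics of wild-type vs. always-open ClpXP.
# The time-course data below are synthetic and chosen only to mirror the
# qualitative result described in the text (slower ssrA-tagged degradation,
# faster untagged degradation for the open mutant).
import numpy as np
from scipy.optimize import curve_fit

def first_order(t, k, s0):
    """Simple exponential decay of remaining substrate."""
    return s0 * np.exp(-k * t)

t = np.linspace(0, 60, 13)                      # minutes
rng = np.random.default_rng(1)

def simulate(k_true):
    """Generate a noisy synthetic time course with a chosen true rate."""
    return first_order(t, k_true, 1.0) + rng.normal(0, 0.02, t.size)

datasets = {
    "WT,   ssrA-tagged substrate": simulate(0.12),
    "open, ssrA-tagged substrate": simulate(0.05),
    "WT,   untagged substrate":    simulate(0.01),
    "open, untagged substrate":    simulate(0.04),
}

for name, y in datasets.items():
    (k, s0), _ = curve_fit(first_order, t, y, p0=(0.05, 1.0))
    print(f"{name}: fitted rate k = {k:.3f} per min")
```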

Pausing the Process

The next question Ghanbarpour wanted to answer was what this molecular machine looks like while engaged with a protein it is attempting to unfold. To do that, he created a substrate in which a highly stable protein is attached to the degradation tag; the tag is initially pulled into ClpX, but the stable protein then dramatically slows unfolding and degradation.

In the structures where the degradation process stalls, Ghanbarpour found that the degradation tag was pulled far into the molecular machine—through ClpX and into ClpP—and the folded protein part of the substrate was pulled tightly against the axial channel of ClpX. 

The opening of the axial channel, called the axial pore, is made up of looping protein structures called RKH loops. These flexible loops were found to play roles both in recognizing the ssrA degradation tag and in how substrates or the SspB adaptor interact with or are pulled against the channel during degradation. 

The flexibility of these RKH loops allows ClpX to interact with a large number of different proteins and adaptors, and these results clarify some previous biochemical and mutational studies of interactions between the substrate and ClpXP.

Although Ghanbarpour’s recent work focused on just one adaptor and degradation tag, he noted there are many more targets — ClpXP is something akin to a Swiss army knife for breaking down polypeptide chains. 

The way those other substrates interact with ClpXP could differ from the structures solved with the SspB adaptor and ssrA tag. It also stands to reason that the way ClpXP reacts to each substrate may be unique. For example, given that ClpX is occasionally in an open state, some substrates may engage with ClpXP only while it’s in an open conformation. 

In his new position at Washington University, Ghanbarpour intends to continue exploring how ClpXP and other molecular machines locate their target substrates and interact with adaptors, shedding light on how cells regulate protein degradation and maintain protein homeostasis.

The structures Ghanbarpour solved involved free-floating protein degradation machinery, but membrane-bound degradation machinery also exists. The membrane-bound version’s structure and conformational adaptations potentially differ from the structures Ghanbarpour found in his previous three papers. Indeed, in a recent preprint, Ghanbarpour worked on the cryo-EM structure of a nautilus shell-shaped protein assembly that seems to control membrane-bound degradation machinery. This assembly plays a critical role in regulating protein degradation within the bacterial inner membrane.

“The function of these proteases goes beyond simply degrading damaged proteins. They also target transcription factors, regulatory proteins, and proteins that don’t exist in normal conditions,” he says. “My new lab is particularly interested in understanding how cells use these proteases and their accessory adaptors, both under normal and stress conditions, to reshape the proteome and support recovery from cellular distress.”

A cell protector collaborates with a killer

New research from the Horvitz Lab reveals what it takes for a protein that is best known for protecting cells against death to take on the opposite role.

Jennifer Michalowski | McGovern Institute
November 1, 2024

From early development to old age, cell death is a part of life. Without enough of a critical type of cell death known as apoptosis, animals wind up with too many cells, which can set the stage for cancer or autoimmune disease. But careful control is essential, because when apoptosis eliminates the wrong cells, the effects can be just as dire, helping to drive many kinds of neurodegenerative disease.

By studying the microscopic roundworm Caenorhabditis elegans—which was honored with its fourth Nobel Prize last month—scientists at MIT’s McGovern Institute have begun to unravel a longstanding mystery about the factors that control apoptosis: how a protein capable of preventing programmed cell death can also promote it. Their study, led by McGovern Investigator Robert Horvitz and reported October 9, 2024, in the journal Science Advances, sheds light on the process of cell death in both health and disease.

“These findings, by graduate student Nolan Tucker and former graduate student, now MIT faculty colleague, Peter Reddien, have revealed that a protein interaction long thought to block apoptosis in C. elegans, likely instead has the opposite effect,” says Horvitz, who shared the 2002 Nobel Prize for discovering and characterizing the genes controlling cell death in C. elegans.

Mechanisms of cell death

Horvitz, Tucker, Reddien and colleagues have provided foundational insights in the field of apoptosis by using C. elegans to analyze the mechanisms that drive apoptosis as well as the mechanisms that determine how cells ensure apoptosis happens when and where it should. Unlike humans and other mammals, which depend on dozens of proteins to control apoptosis, these worms use just a few. And when things go awry, it’s easy to tell: when there’s not enough apoptosis, researchers can see that there are too many cells inside the worms’ translucent bodies. And when there’s too much, the worms lack certain biological functions; in more extreme cases, they cannot reproduce or they die during embryonic development.

Work in the Horvitz lab defined the roles of many of the genes and proteins that control apoptosis in worms. These regulators proved to have counterparts in human cells, and for that reason studies of worms have helped reveal how human cells govern cell death and pointed toward potential targets for treating disease.

A protein’s dual role

Three of C. elegans’ primary regulators of apoptosis actively promote cell death, whereas just one, CED-9, reins in the apoptosis-promoting proteins to keep cells alive. As early as the 1990s, however, Horvitz and colleagues recognized that CED-9 was not exclusively a protector of cells. Their experiments indicated that the protector protein also plays a role in promoting cell death. But while researchers thought they knew how CED-9 protected against apoptosis, its pro-apoptotic role was more puzzling.

CED-9’s dual role means that mutations in the gene that encodes it can impact apoptosis in multiple ways. Most ced-9 mutations interfere with the protein’s ability to protect against cell death and result in excess cell death. Conversely, mutations that abnormally activate ced-9 cause too little cell death, just like mutations that inactivate any of the three killer genes.

An atypical ced-9 mutation, identified by Reddien when he was a PhD student in Horvitz’s lab, hinted at how CED-9 promotes cell death. That mutation altered the part of the CED-9 protein that interacts with the protein CED-4, which is pro-apoptotic. Because the mutation specifically reduced apoptosis, it suggested that CED-9 might need to interact with CED-4 to promote cell death.

The idea was particularly intriguing because researchers had long thought that CED-9’s interaction with CED-4 had exactly the opposite effect: In the canonical model, CED-9 anchors CED-4 to cells’ mitochondria, sequestering the CED-4 killer protein and preventing it from associating with and activating another key killer, the CED-3 protein —thereby preventing apoptosis.

To test the hypothesis that CED-9’s interactions with the killer CED-4 protein enhance apoptosis, the team needed more evidence. So graduate student Nolan Tucker used CRISPR gene editing tools to create more worms with mutations in CED-9, each one targeting a different spot in the CED-4-binding region. Then he examined the worms. “What I saw with this particular class of mutations was extra cells and viability,” he says—clear signs that the altered CED-9 was still protecting against cell death, but could no longer promote it. “Those observations strongly supported the hypothesis that the ability to bind CED-4 is needed for the pro-apoptotic function of CED-9,” Tucker explains. Their observations also suggested that, contrary to earlier thinking, CED-9 doesn’t need to bind with CED-4 to protect against apoptosis.

When he looked inside the cells of the mutant worms, Tucker found additional evidence that these mutations prevented CED-9 from interacting with CED-4. When both CED-9 and CED-4 are intact, CED-4 appears associated with cells’ mitochondria. But in the presence of these mutations, CED-4 was instead at the edge of the cell nucleus. CED-9’s ability to tether CED-4 to mitochondria appeared to be necessary for promoting apoptosis, not for protecting against it.

Looking ahead

While the team’s findings begin to explain a long-unanswered question about one of the primary regulators of apoptosis, they raise new ones, as well. “I think that this main pathway of apoptosis has been seen by a lot of people as more or less settled science. Our findings should change that view,” Tucker says.

The researchers see important parallels between their findings from this study of worms and what’s known about cell death pathways in mammals. The mammalian counterpart to CED-9 is a protein called BCL-2, mutations in which can lead to cancer. BCL-2, like CED-9, can both promote and protect against apoptosis. As with CED-9, the pro-apoptotic function of BCL-2 has been mysterious. In mammals, too, mitochondria play a key role in activating apoptosis. The Horvitz lab’s discovery opens opportunities to better understand how apoptosis is regulated not only in worms but also in humans, and how dysregulation of apoptosis in humans can lead to such disorders as cancer, autoimmune disease and neurodegeneration.