Alumni Profile: Desmond Edwards, SB ’22

An interest in translating medicine for a wider audience

School of Science
February 6, 2025

Growing up hearing both English and Patois in rural Jamaica, Desmond Edwards always had an interest in understanding other languages, so he studied French in high school and minored in it at MIT. As a child with persistent illnesses, he was frustrated that doctors couldn’t explain the “how” and “why” of what was happening in his body. “I wanted to understand how an entity so small that we can’t even see it with most microscopes is able to get into a massively intricate human body and completely shut it down in a matter of days,” he says.

Edwards, now an MIT graduate and a PhD candidate in microbiology and immunology at Stanford University—with a deferred MD admission in hand as well—feels closer to those answers. The financial support he received at MIT from the Class of 1975 Scholarship Fund, he says, was one major reason that he chose MIT.

Support for research and discovery

I took a three-week Independent Activities Period boot camp designed to expose first-years with little or no research background to basic molecular biology and microbiology techniques. We had guidance from the professor and teaching assistants, but it was up to us what path we took. That intellectual freedom was part of what made me fall in love with academic research. The lecturer, Mandana Sassanfar, made it her personal mission to connect interested students to Undergraduate Research Opportunities Program placements, which is how I found myself in Professor Rebecca Lamason’s lab.

At the end of my first year, I debated whether to prioritize my academic research projects or leave for a higher-paying summer internship. My lab helped me apply for the Peter J. Eloranta Summer Undergraduate Research Fellowship, which provided funding that allowed me to stay for the summer, and I ended up staying in the lab for the rest of my time at MIT. One paper I coauthored (about developing new genetic tools to control pathogenic bacteria’s gene expression) was published this year.

French connections

French is one of the working languages of many global health programs, and being able to read documents in their original language has been helpful because many diseases that I care about impact Francophone countries like those in sub-Saharan and West Africa. In one French class, we had to analyze an original primary historical text, so I was able to look at an outbreak of plague in the 18th century and compare that era’s public health response with ours to Covid-19. My MIT French classes have been useful in some very cool ways that I did not anticipate.

Translating medicine for the masses

When I go home and talk about my research, I often adapt folk stories, analogies, and relatable everyday situations to get points across since there might not be exact Patois words or phrases to directly convey what I’m describing. Taking these scientific concepts and breaking them all into bite-size pieces is important for the general American public too. I want to lead a scientific career that not only advances our understanding and treatment of infectious diseases, but also positively impacts policy, education, and outreach. Right now, this looks like a combination of being an academic/medical professor and eventually leading the Centers for Disease Control and Prevention.

Kingdoms collide as bacteria and cells form captivating connections

Studying the pathogen R. parkeri, researchers discovered the first evidence of extensive and stable interkingdom contacts between a pathogen and a eukaryotic organelle.

Lillian Eden | Department of Biology
January 24, 2025

In biology textbooks, the endoplasmic reticulum is often portrayed as a distinct, compact organelle near the nucleus, and is commonly known to be responsible for protein trafficking and secretion. In reality, the ER is vast and dynamic, spread throughout the cell and able to establish contact and communication with and between other organelles. These membrane contacts regulate processes as diverse as fat metabolism, sugar metabolism, and immune responses.

Exploring how pathogens manipulate and hijack essential processes to promote their own life cycles can reveal much about fundamental cellular functions and provide insight into viable treatment options for understudied pathogens.

New research from the Lamason Lab in the Department of Biology at MIT recently published in the Journal of Cell Biology has shown that Rickettsia parkeri, a bacterial pathogen that lives freely in the cytosol, can interact in an extensive and stable way with the rough endoplasmic reticulum, forming previously unseen contacts with the organelle.

It’s the first known example of a direct interkingdom contact site between an intracellular bacterial pathogen and a eukaryotic membrane.

The Lamason Lab studies R. parkeri as a model for infection of the more virulent Rickettsia rickettsii. R. rickettsii, carried and transmitted by ticks, causes Rocky Mountain Spotted Fever. Left untreated, the infection can cause symptoms as severe as organ failure and death.

Rickettsia is difficult to study because it is an obligate pathogen, meaning it can only live and reproduce inside living cells, much like a virus. Researchers must get creative to parse out fundamental questions and molecular players in the R. parkeri life cycle, and much remains unclear about how R. parkeri spreads.

Detour to the junction

First author Yamilex Acevedo-Sánchez, a BSG-MSRP-Bio program alum and a graduate student at the time, stumbled across the ER and R. parkeri interactions while trying to observe Rickettsia reaching a cell junction.

The current model for Rickettsia infection involves R. parkeri spreading from cell to cell by traveling to specialized contact sites between cells, where it is engulfed by the neighboring cell. Listeria monocytogenes, which the Lamason Lab also studies, uses actin tails to forcefully propel itself into a neighboring cell. By contrast, R. parkeri can form an actin tail, but loses it before reaching the cell junction. Somehow, R. parkeri is still able to spread to neighboring cells.

After an MIT seminar about the ER’s lesser-known functions, Acevedo-Sánchez developed a cell line to observe whether Rickettsia might be spreading to neighboring cells by hitching a ride on the ER to reach the cell junction.

Instead, she saw an unexpectedly high percentage of R. parkeri surrounded and enveloped by the ER, at a distance of about 55 nanometers. This distance is significant because the membrane contacts that mediate interorganelle communication in eukaryotic cells span gaps of 10 to 80 nanometers. The researchers ruled out the possibility that what they saw was an immune response, and the sections of the ER interacting with the R. parkeri were still connected to the wider network of the ER.
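The distance criterion described above can be sketched in a few lines of code. This is purely illustrative: the 10–80 nanometer window comes from the article, but the measured gap values and the idea of classifying gaps programmatically are hypothetical, not the study’s actual analysis.

```python
# Toy sketch: count a putative membrane contact site when the measured gap
# between a bacterium and the ER falls within the 10-80 nm range typical of
# interorganelle contacts. The gap measurements below are made up.

CONTACT_RANGE_NM = (10.0, 80.0)

def is_contact(gap_nm, window=CONTACT_RANGE_NM):
    """Return True if the membrane gap lies within the contact-site window."""
    lo, hi = window
    return lo <= gap_nm <= hi

# Hypothetical gaps (in nanometers) measured from images.
gaps = [55.0, 5.0, 120.0, 70.0]
contacts = [g for g in gaps if is_contact(g)]
```

Under this criterion, the 55 nm separation reported in the study falls comfortably inside the contact-site window.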

“I’m of the mind that if you want to learn new biology, just look at cells,” Acevedo-Sánchez says. “Manipulating the organelle that establishes contact with other organelles could be a great way for a pathogen to gain control during infection.”

The stable connections were unexpected because the ER constantly breaks and reforms its contacts, which typically last only seconds or minutes, so seeing the ER stably associate around the bacteria was surprising. Because R. parkeri is a cytosolic pathogen that exists freely in the cytosol of the cells it infects, it was also unexpected to see it surrounded by a membrane at all.

Small margins

Acevedo-Sánchez collaborated with the Center for Nanoscale Systems at Harvard University to view her initial observations at higher resolution using focused ion beam scanning electron microscopy (FIB-SEM). In FIB-SEM, a focused ion beam repeatedly shaves a thin layer off a block of sample cells, and a high-resolution image is taken of each newly exposed face. The result of this process is a stack of images.

From there, Acevedo-Sánchez annotated regions of the images — such as the mitochondria, Rickettsia, or the ER — and ORS Dragonfly, a machine-learning program, used those annotations to sort through the thousand or so images and classify those structures. That information was then used to create 3D models of the samples.
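The reconstruction step — turning a stack of labeled slices into per-structure 3D measurements — can be sketched as follows. This is a toy illustration only: the label values, voxel size, and tiny grids are invented, and the real workflow used trained machine-learning segmentation in Dragonfly rather than anything this simple.

```python
# Toy sketch of FIB-SEM post-processing: each slice in the stack is a 2D grid
# of labels (0 = background, 1 = ER, 2 = Rickettsia), and stacking labeled
# slices yields a 3D volume whose per-label voxel counts convert to physical
# volumes. The 8 nm voxel size and the labels here are hypothetical.

VOXEL_NM = 8.0  # assumed isotropic voxel edge length, in nanometers

def label_volumes(stack, voxel_nm=VOXEL_NM):
    """Count voxels per label across a stack of 2D label grids and
    convert the counts to volumes in cubic nanometers."""
    counts = {}
    for slice_ in stack:
        for row in slice_:
            for label in row:
                counts[label] = counts.get(label, 0) + 1
    voxel_volume = voxel_nm ** 3
    return {label: n * voxel_volume for label, n in counts.items()}

# Two tiny 2x2 slices: label 2 (Rickettsia) occupies 3 voxels in total.
stack = [
    [[0, 2],
     [2, 1]],
    [[0, 2],
     [1, 1]],
]
volumes = label_volumes(stack)
```

Real segmentations run over thousands of slices and millions of voxels, but the bookkeeping — label, count, convert by voxel volume — is the same.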

Acevedo-Sánchez noted that less than 5 percent of R. parkeri formed connections with the ER — but traits present in only a small fraction of bacteria are known to be critical for R. parkeri infection. R. parkeri can exist in two states: motile, with an actin tail, and nonmotile, without it. In mutants unable to form actin tails, R. parkeri are unable to progress to adjacent cells — but in nonmutants, the percentage of R. parkeri that have tails starts at about 2 percent in early infection and never exceeds 15 percent at the height of it.

The ER only interacts with nonmotile R. parkeri, and those interactions increased 25-fold in mutants that couldn’t form tails.

Creating connections

Co-authors Acevedo-Sánchez, Patrick Woida, and Caroline Anderson also investigated possible ways the connections with the ER are mediated. VAP proteins, which mediate ER interactions with other organelles, are known to be co-opted by other pathogens during infection.

During infection by R. parkeri, VAP proteins were recruited to the bacteria; when VAP proteins were knocked out, the frequency of interactions between R. parkeri and the ER decreased, indicating R. parkeri may be taking advantage of these cellular mechanisms for its own purposes during infection.

Although Acevedo-Sánchez now works as a senior scientist at AbbVie, the Lamason Lab is continuing the work of exploring the molecular players that may be involved, how these interactions are mediated, and whether the contacts affect the host or bacteria’s life cycle.

Senior author and associate professor of biology Rebecca Lamason noted that these potential interactions are particularly interesting because bacteria and mitochondria are thought to have evolved from a common ancestor. The Lamason Lab has been exploring whether R. parkeri could form the same membrane contacts that mitochondria do, although they haven’t proven that yet. So far, R. parkeri is the only cytosolic pathogen that has been observed behaving this way.

“It’s not just bacteria accidentally bumping into the ER. These interactions are extremely stable. The ER is clearly extensively wrapping around the bacterium, and is still connected to the ER network,” Lamason says. “It seems like it has a purpose — what that purpose is remains a mystery.”

Alumni Profile: Matthew Dolan, SB ’81

From Bench to Bedside and Beyond

Lillian Eden | Department of Biology
January 16, 2025

Matthew Dolan, SB ’81, worked in the U.S. and abroad during a fascinating time in the field of immunology and virology.

In medical school, Matthew Dolan, SB ’81, briefly considered specializing in orthopedic surgery because of the materials-science nature of the work — but he soon realized that he didn’t have the innate skills it required.

“I’ll be honest with you — I can’t parallel park,” he jokes. “You can consider a lot of things, but if you find the things that you’re good at and that excite you, you can hopefully move forward with those.” 

Dolan certainly has, tackling problems from bench to bedside and beyond. Both in the U.S. and abroad through the Air Force, Dolan has emerged as a leader in immunology and virology, and has served as Director of the Defense Institute for Medical Operations. He’s worked on everything from foodborne illnesses and Ebola to biological weapons and COVID-19, and has even been a guest speaker on NPR’s Science Friday.

“This is fun and interesting, and I believe that, and I work hard to convey that — and it’s contagious,” he says. “You can affect people with that excitement.” 

Pieces of the Puzzle

Dolan fondly recalls his years at MIT, and is still in touch with many of the “brilliant” and “interesting” friends he made while in Cambridge. 

He notes that the challenges that were the most rewarding in his career were also the ones that MIT had uniquely prepared him for. Dolan, a Course 7 major, naturally took many classes outside of Biology as part of his undergraduate studies: organic chemistry was foundational for understanding toxicology while studying chemical weapons, while outbreaks of pathogens like Legionella, which causes pneumonia and can spread through water systems such as ice machines and air conditioners, must be solved at the interface between public health and ecology.

Matthew Dolan stateside with his German Shepherd Sophie. Photo courtesy of Matthew Dolan.

“I learned that learning can be a high-intensity experience,” Dolan recalls. “You can be aggressive in your learning; you can learn and excel in a wide variety of things and gather up all the knowledge and knowledgeable people to work together towards solutions.”

Dolan, for example, worked in the Amazon Basin in Peru on a public health crisis of a sharp rise in childhood mortality due to malaria. The cause was a few degrees removed from the immediate problem: human agriculture had affected the Amazon’s tributaries, leading to still and stagnant water where before there had been rushing streams and rivers. This change in the environment allowed a certain mosquito species of “avid human biters” to thrive.  

“It can be helpful and important for some people to have a really comprehensive and contextual view of scientific problems and biological problems,” he says. “It’s very rewarding to put the pieces in a puzzle like that together.” 

Choosing To Serve

Dolan says a key to finding meaning in his work, especially during difficult times, is a sentiment from Alsatian polymath and Nobel Peace Prize winner Albert Schweitzer: “The only ones among you who will be really happy are those who will have sought and found how to serve.”

One of Dolan’s early formative experiences was working in the heart of the HIV/AIDS epidemic, at a time when there was no effective treatment. No matter how hard he worked, the patients would still die. 

“Failure is not an option — unless you have to fail. You can’t let the failures destroy you,” he says. “There are a lot of other battles out there, and it’s self-indulgent to ignore them and focus on your woe.” 

Lasting Impacts

Dolan couldn’t pick a favorite country, but notes that he’s always impressed seeing how people value the chance to excel with science and medicine when offered resources and respect. Ultimately, everyone he’s worked with, no matter their differences, was committed to solving problems and improving lives. 

Dolan worked in Russia after the Berlin Wall fell, on HIV/AIDS in Moscow and tuberculosis in the Russian Far East. Although relations with Russia are currently tense, to say the least, Dolan remains optimistic for a brighter future.

“People that were staunch adversaries can go on to do well together,” he says. “Sometimes, peace leads to partnership. Remembering that it was once possible gives me great hope.” 

Dolan understands that the most lasting impact he has had is, likely, teaching: time marches on, and discoveries can be lost to history, but teaching and training people continues and propagates. In addition to guiding the next generation of healthcare specialists, Dolan also developed programs in laboratory biosafety and biosecurity with the State Department and the Defense Department, and taught those programs around the world. 

“Working in prevention gives you the chance to take care of process problems before they become people problems — patient care problems,” he says. “I have been so impressed with the courageous and giving people that have worked with me.” 

Cellular interactions help explain vascular complications due to COVID-19 virus infection

Whitehead Institute Founding Member Rudolf Jaenisch and colleagues have found that cellular interactions help explain how SARS-CoV-2, the virus that causes COVID-19, could have such significant vascular complications, including blood clots, heart attacks, and strokes.

Greta Friar | Whitehead Institute
December 31, 2024

COVID-19 is a respiratory disease primarily affecting the lungs. However, the SARS-CoV-2 virus that causes COVID-19 surprised doctors and scientists by triggering an unusually large percentage of patients to experience vascular complications – issues related to blood flow, such as blood clots, heart attacks, and strokes.

Whitehead Institute Founding Member Rudolf Jaenisch and colleagues wanted to understand how this respiratory virus could have such significant vascular effects. They used pluripotent stem cells to generate three relevant vascular and perivascular cell types—cells that surround and help maintain blood vessels—so they could closely observe the effects of SARS-CoV-2 on the cells. Instead of using existing methods to generate the cells, the researchers developed a new approach, providing them with fresh insights into the mechanisms by which the virus causes vascular problems.

The researchers found that SARS-CoV-2 primarily infects perivascular cells, and that signals from these infected cells are sufficient to cause dysfunction in neighboring vascular cells, even when the vascular cells are not themselves infected.

In a paper published in the journal Nature Communications on December 30, Jaenisch; Alexsia Richards, a postdoc in his lab; David Mooney, a Harvard University professor and Wyss Institute for Biologically Inspired Engineering member; and Andrew Khalil, then a postdoc in the Jaenisch and Mooney labs, share their findings and present a scalable stem-cell-derived model system with which to study vascular cell biology and test medical therapies.

A new problem requires a new approach

When the COVID-19 pandemic began, Richards, a virologist, quickly pivoted her focus to SARS-CoV-2. Khalil, a bioengineer, had already been working on a new approach to generate vascular cells. The researchers realized that a collaboration could provide Richards with the research tool she needed and Khalil with an important research question to which his tool could be applied.

The three cell types that Khalil’s approach generated were endothelial cells, the vascular cells that form the lining of blood vessels; and smooth muscle cells and pericytes, perivascular cells that surround blood vessels and provide them with structure and maintenance, among other functions. Khalil’s biggest innovation was to generate all three cell types in the same media—the mixture of nutrients and signaling molecules in which stem cell-derived cells are grown.

The combination of signals in the media determines the final cell type into which a stem cell will mature, so it is much easier to grow each cell type separately in specially tailored media than to find a mixture that works for all three. Typically, Richards explains, virologists will generate a desired cell type using the easiest method, which means growing each cell type and then observing the effects of viral infection on it in isolation. However, this approach can limit results in several ways. Firstly, it can make it challenging to distinguish the differences in how cell types react to a virus from the differences caused by the cells being grown in different media.

“By making these cells under identical conditions, we could see in much higher resolution the effects of the virus on these different cell populations, and that was essential in order to form a strong hypothesis of the mechanisms of vascular symptom risk and progression,” Khalil says.

Secondly, infecting isolated cell types with a virus does not accurately represent what happens in the body, where cells are in constant communication as they react to viral exposure. Indeed, Richards’ and Khalil’s work ultimately revealed that the communication between infected and uninfected cell types plays a critical role in the vascular effects of COVID-19.

“The field of virology often overlooks the importance of considering how cells influence other cells and designing models to reflect that,” Richards says. “Cells do not get infected in isolation, and the value of our model is that it allows us to observe what’s happening between cells during infection.”

Viral infection of smooth muscle cells has broader, indirect effects

When the researchers exposed their cells to SARS-CoV-2, the smooth muscle cells and pericytes became infected (the former at especially high levels), and the infection triggered strong inflammatory gene expression; the endothelial cells, however, resisted infection. Endothelial cells did show some response to viral exposure, likely due to interactions with proteins on the virus’ surface. Typically, endothelial cells press tightly together to form a firm barrier that keeps blood inside of blood vessels and prevents viruses from getting out. When exposed to SARS-CoV-2, the junctions between endothelial cells appeared to weaken slightly. The cells also had increased levels of reactive oxygen species, which are damaging byproducts of certain cellular processes.

However, big changes in endothelial cells only occurred after the cells were exposed to infected smooth muscle cells. This triggered high levels of inflammatory signaling within the endothelial cells. It led to changes in the expression of many genes relevant to immune response. Some of the genes affected were involved in coagulation pathways, which thicken blood and so can cause blood clots and related vascular events. The junctions between endothelial cells experienced much more significant weakening after exposure to infected smooth muscle cells, which would lead to blood leakage and viral spread. All of these changes occurred without SARS-CoV-2 ever infecting the endothelial cells.

This work shows that viral infection of smooth muscle cells, and their resultant signaling to endothelial cells, is the lynchpin in the vascular damage caused by SARS-CoV-2. This would not have been apparent if the researchers had not been able to observe the cells interacting with each other.

Clinical relevance of stem cell results

The effects that the researchers observed were consistent with patient data. Some of the genes whose expression changed in their stem cell-derived model had been identified as markers of high risk for vascular complications in COVID-19 patients with severe infections. Additionally, the researchers found that a later strain of SARS-CoV-2, an Omicron variant, had much weaker effects on the vascular and perivascular cells than did the original viral strain. This is consistent with the reduced levels of vascular complications seen in COVID-19 patients infected with recent strains.

Having identified smooth muscle cells as the main site of SARS-CoV-2 infection in the vascular system, the researchers next used their model system to test one drug’s ability to prevent infection of smooth muscle cells. They found that the drug, N,N-dimethyl-D-erythro-sphingosine, could reduce infection of the cell type without harming smooth muscle or endothelial cells. Although preventing vascular complications of COVID-19 is not as pressing a need with current viral strains, the researchers see this experiment as proof that their stem cell model could be used for future drug development. New coronaviruses and other pathogens are frequently evolving, and when a future virus causes vascular complications, this model could be used to quickly test drugs to find potential therapies while the need is still high. The model system could also be used to answer other questions about vascular cells, how these cells interact, and how they respond to viruses.

“By integrating bioengineering strategies into the analysis of a fundamental question in viral pathology, we addressed important practical challenges in modeling human disease in culture and gained new insights into SARS-CoV-2 infection,” Mooney says.

“Our interdisciplinary approach allowed us to develop an improved stem cell model for infection of the vasculature,” says Jaenisch, who is also a professor of biology at the Massachusetts Institute of Technology. “Our lab is already applying this model to other questions of interest, and we hope that it can be a valuable tool for other researchers.”

An abundant phytoplankton feeds a global network of marine microbes

New findings illuminate how Prochlorococcus’ nightly “cross-feeding” plays a role in regulating the ocean’s capacity to cycle and store carbon.

Jennifer Chu | MIT News
January 3, 2025

One of the hardest-working organisms in the ocean is the tiny, emerald-tinged Prochlorococcus marinus. These single-celled “picoplankton,” which are smaller than a human red blood cell, can be found in staggering numbers throughout the ocean’s surface waters, making Prochlorococcus the most abundant photosynthesizing organism on the planet. (Collectively, Prochlorococcus fix as much carbon as all the crops on land.) Scientists continue to find new ways that the little green microbe is involved in the ocean’s cycling and storage of carbon.

Now, MIT scientists have discovered a new ocean-regulating ability in the small but mighty microbes: cross-feeding of DNA building blocks. In a study appearing today in Science Advances, the team reports that Prochlorococcus shed these extra compounds into their surroundings, where they are then “cross-fed,” or taken up by other ocean organisms, which use them as nutrients, as sources of energy, or to regulate their metabolism. Prochlorococcus’ rejects, then, are other microbes’ resources.

What’s more, this cross-feeding occurs on a regular cycle: Prochlorococcus tend to shed their molecular baggage at night, when enterprising microbes quickly consume the cast-offs. For a microbe called SAR11, the most abundant bacteria in the ocean, the researchers found that the nighttime snack acts as a relaxant of sorts, prompting the bacteria to slow down their metabolism and effectively recharge for the next day.

Through this cross-feeding interaction, Prochlorococcus could be helping many microbial communities to grow sustainably, simply by giving away what it doesn’t need. And they’re doing so in a way that could set the daily rhythms of microbes around the world.

“The relationship between the two most abundant groups of microbes in ocean ecosystems has intrigued oceanographers for years,” says co-author and MIT Institute Professor Sallie “Penny” Chisholm, who played a role in the discovery of Prochlorococcus in 1986. “Now we have a glimpse of the finely tuned choreography that contributes to their growth and stability across vast regions of the oceans.”

Given that Prochlorococcus and SAR11 suffuse the surface oceans, the team suspects that the exchange of molecules from one to the other could amount to one of the major cross-feeding relationships in the ocean, making it an important regulator of the ocean carbon cycle.

“By looking at the details and diversity of cross-feeding processes, we can start to unearth important forces that are shaping the carbon cycle,” says the study’s lead author, Rogier Braakman, a research scientist in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS).

Other MIT co-authors include Brandon Satinsky, Tyler O’Keefe, Shane Hogle, Jamie Becker, Robert Li, Keven Dooley, and Aldo Arellano, along with Krista Longnecker, Melissa Soule, and Elizabeth Kujawinski of Woods Hole Oceanographic Institution (WHOI).

Spotting castaways

Cross-feeding occurs throughout the microbial world, though the process has mainly been studied in close-knit communities. In the human gut, for instance, microbes are in close proximity and can easily exchange and benefit from shared resources.

By comparison, Prochlorococcus are free-floating microbes that are regularly tossed and mixed through the ocean’s surface layers. While scientists assume that the plankton are involved in some amount of cross-feeding, exactly how this occurs, and who would benefit, have historically been challenging to probe; any stuff that Prochlorococcus cast away would have vanishingly low concentrations, and be exceedingly difficult to measure.

But in work published in 2023, Braakman teamed up with scientists at WHOI, who pioneered ways to measure small organic compounds in seawater. In the lab, they grew various strains of Prochlorococcus under different conditions and characterized what the microbes released. They found that among the major “exudates,” or released molecules, were purines and pyridines, which are molecular building blocks of DNA. The molecules also happen to be nitrogen-rich — a fact that puzzled the team. Prochlorococcus are mainly found in ocean regions that are low in nitrogen, so it was assumed they’d want to retain any and all nitrogen-containing compounds they can. Why, then, were they instead throwing such compounds away?

Global symphony

In their new study, the researchers took a deep dive into the details of Prochlorococcus’ cross-feeding and how it influences various types of ocean microbes.

They set out to study how Prochlorococcus use purine and pyridine in the first place, before expelling the compounds into their surroundings. They compared published genomes of the microbes, looking for genes that encode purine and pyridine metabolism. Tracing the genes forward through the genomes, the team found that once the compounds are produced, they are used to make DNA and replicate the microbes’ genome. Any leftover purine and pyridine is recycled and used again, though a fraction of the stuff is ultimately released into the environment. Prochlorococcus appear to make the most of the compounds, then cast off what they can’t.

The team also looked to gene expression data and found that genes involved in recycling purine and pyridine peak several hours after the recognized peak in genome replication that occurs at dusk. The question then was: What could be benefiting from this nightly shedding?

For this, the team looked at the genomes of more than 300 heterotrophic microbes — organisms that consume organic carbon rather than making it themselves through photosynthesis. They suspected that such carbon-feeders could be likely consumers of Prochlorococcus’ organic rejects. They found that most of the heterotrophs contained genes for taking up either purines or pyridines, or in some cases both, suggesting microbes have evolved along different paths in terms of how they cross-feed.

The group zeroed in on one purine-preferring microbe, SAR11, as it is the most abundant heterotrophic microbe in the ocean. When they then compared the genes across different strains of SAR11, they found that various types use purines for different purposes, from simply taking them up and using them intact to breaking them down for their energy, carbon, or nitrogen. What could explain the diversity in how the microbes were using Prochlorococcus’ cast-offs?

It turns out the local environment plays a big role. Braakman and his collaborators performed a metagenome analysis in which they compared the collectively sequenced genomes of all microbes in over 600 seawater samples from around the world, focusing on SAR11 bacteria. Metagenome sequences were collected alongside measurements of various environmental conditions and geographic locations in which they are found. This analysis showed that the bacteria gobble up purine for its nitrogen when the nitrogen in seawater is low, and for its carbon or energy when nitrogen is in surplus — revealing the selective pressures shaping these communities in different ocean regimes.
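The logic of that metagenome comparison — binning samples by ambient nitrogen and asking which purine-utilization genes dominate in each regime — can be illustrated with a small sketch. Everything here is invented for illustration: the sample records, gene-set names, and the 0.5 µM nitrogen cutoff are hypothetical and are not the study’s actual data or pipeline.

```python
# Illustrative sketch: classify SAR11-containing seawater samples by ambient
# nitrogen and tally which purine-utilization gene sets appear in each regime.
# Field names, gene categories, and the threshold below are hypothetical.

LOW_N_THRESHOLD_UM = 0.5  # assumed cutoff for "nitrogen-poor" water, in µM

samples = [
    {"site": "A", "nitrogen_uM": 0.1, "purine_genes": {"N_salvage"}},
    {"site": "B", "nitrogen_uM": 0.2, "purine_genes": {"N_salvage"}},
    {"site": "C", "nitrogen_uM": 2.0, "purine_genes": {"C_catabolism"}},
]

def gene_counts_by_regime(samples, threshold=LOW_N_THRESHOLD_UM):
    """Return {regime: {gene_set: count}} for low- vs. high-nitrogen samples."""
    tallies = {"low_N": {}, "high_N": {}}
    for s in samples:
        regime = "low_N" if s["nitrogen_uM"] < threshold else "high_N"
        for gene in s["purine_genes"]:
            tallies[regime][gene] = tallies[regime].get(gene, 0) + 1
    return tallies

tallies = gene_counts_by_regime(samples)
```

In this toy dataset, nitrogen-salvage genes dominate the low-nitrogen samples and carbon-catabolism genes the high-nitrogen ones — the same qualitative pattern the study describes across ocean regimes.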

“The work here suggests that microbes in the ocean have developed relationships that advance their growth potential in ways we don’t expect,” says co-author Kujawinski.

Finally, the team carried out a simple experiment in the lab, to see if they could directly observe a mechanism by which purine acts on SAR11. They grew the bacteria in cultures, exposed them to various concentrations of purine, and unexpectedly found that it caused them to slow down their normal metabolic activities and even growth. However, when the researchers put these same cells under environmentally stressful conditions, the cells continued to grow strong and healthy, as if the metabolic pausing triggered by purines had primed them for growth and helped them avoid the effects of the stress.

“When you think about the ocean, where you see this daily pulse of purines being released by Prochlorococcus, this provides a daily inhibition signal that could be causing a pause in SAR11 metabolism, so that the next day when the sun comes out, they are primed and ready,” Braakman says. “So we think Prochlorococcus is acting as a conductor in the daily symphony of ocean metabolism, and cross-feeding is creating a global synchronization among all these microbial cells.”

This work was supported, in part, by the Simons Foundation and the National Science Foundation.

From Molecules to Memory

On a biological foundation of ions and proteins, the brain forms, stores, and retrieves memories to inform intelligent behavior.

Noah Daly | Department of Biology
December 23, 2024

Whenever you go out to a restaurant to celebrate, your brain retrieves memories while forming new ones. You notice the room is elegant, that you’re surrounded by people you love, having meaningful conversations, and doing it all with good manners. Encoding these precious moments (and not barking at your waiter, expecting dessert before your appetizer), you rely heavily on plasticity, the ability of neurons to change the strength and quantity of their connections in response to new information or activity. The very existence of memory and our ability to retrieve it to guide our intelligent behavior are hypothesized to be movements of a neuroplastic symphony, manifested through chemical processes occurring across vast, interconnected networks of neurons.

During infancy, brain connectivity grows exponentially, rapidly increasing the number of synapses between neurons, some of which are then pruned back to select the most salient for optimal performance. This exuberant growth followed by experience-dependent optimization lays a foundation of connections to produce a functional brain, but the action doesn’t cease there. Faced with a lifetime of encountering and integrating new experiences, the brain will continue to produce and edit connections throughout adulthood, decreasing or increasing their strength to ensure that new information can be encoded.

There are a thousand times more connections in the brain than stars in the Milky Way galaxy. Neuroscientists have spent more than a century exploring that vastness for evidence of the biology of memory. In the last 30 years, advancements in microscopy, genetic sequencing and manipulation, and machine learning technologies have enabled researchers, including four MIT Professors of Biology working in The Picower Institute for Learning and Memory – Elly Nedivi, Troy Littleton, Matthew Wilson, and Susumu Tonegawa – to help refine and redefine our understanding of how plasticity works in the brain, what exactly memories are, how they are formed, consolidated, and even changed to suit our needs as we navigate an uncertain world.

Circuits and Synapses: Our Information Superhighway

Neuroscientists hypothesize that how memories come to be depends on how neurons are connected and how they can rewire these connections in response to new experiences and information. This connectivity occurs at the junction between two neurons, called a synapse. When a neuron wants to pass on a signal, it will release chemical messengers called neurotransmitters into the synaptic cleft from the end of a long protrusion called the axon, often called the “pre-synaptic” area.

These neurotransmitters, whose release is triggered by electrical impulses called action potentials, can bind to specialized receptors on the root-like structures of the receiving neuron, known as dendrites (the “post-synaptic” area). Dendrites are covered with receptors that are either excitatory or inhibitory, meaning they are capable of increasing or decreasing the post-synaptic neuron’s chance of firing its own action potential and carrying a message further.

Not long ago, the scientific consensus was that the brain’s circuitry became hardwired in adulthood. However, a completely fixed system does not lend itself to incorporating new information.

“While the brain doesn’t make any new neurons, it constantly adds and subtracts connections between those neurons to optimize our most basic functions,” explains Nedivi. Unused synapses are pruned away to make room for more regularly used ones. Nedivi has pioneered techniques of two-photon microscopy to examine the plasticity of synapses on axons and dendrites in vivid, three-dimensional detail in living, behaving, and learning animals.

But how does the brain determine which synapses to strengthen and which to prune? “There are three ways to do this,” Littleton explains. “One way is to make the presynaptic side release more neurotransmitters to instigate a bigger response to the same behavioral stimulus. Another is to have the postsynaptic cell respond more strongly. This is often accomplished by adding glutamate receptors to the dendritic spine so that the same signal is detected at a higher level, essentially turning the radio volume up or down.” (Glutamate, one of the most prevalent neurotransmitters in the brain, is our main excitatory messenger and can be found in every region of our neural network.)
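Littleton’s first two mechanisms can be captured in a deliberately simple sketch: if the expected synaptic response is the product of presynaptic release and postsynaptic receptor count, either factor can double the “volume.” This is a toy illustration with arbitrary units and invented numbers, not a biophysical model from the lab.

```python
# Toy illustration (arbitrary units): the synaptic "volume" scales with both
# the presynaptic and the postsynaptic side, so either can turn it up or down.
def synaptic_response(release_prob, n_receptors, unit_response=1.0):
    """Expected postsynaptic response: release probability times receptor
    count times the response contributed per activated receptor."""
    return release_prob * n_receptors * unit_response

baseline       = synaptic_response(0.25, 100)  # arbitrary starting point
more_release   = synaptic_response(0.50, 100)  # presynaptic strengthening
more_receptors = synaptic_response(0.25, 200)  # postsynaptic strengthening
```

Doubling release probability and doubling receptor count produce the same doubled response in this caricature, which is the sense in which the two sides of the synapse offer independent knobs on the same signal.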

Littleton’s lab studies how neurons can turn that radio volume up or down by changing presynaptic as well as postsynaptic output. Characterizing many of the dozens of proteins involved helped Littleton discover, in 2005, for instance, how signals from the post-synaptic area can make some pre-synaptic signals stronger and more active than others. “Our interest is really understanding how the building blocks of this critical connection between neurons work, so we study Drosophila, the simple fruit fly, as a model system to address these questions. We usually take genetic approaches where we can break the system by knocking out a gene or overexpressing it, which allows us to figure out precisely what the protein is doing.”

In general, the release of neurotransmitters can make it more or less likely the receiving cell will continue the line of communication through activation of voltage-gated channels that initiate action potentials. When these action potentials arrive at presynaptic terminals, they can trigger that neuron to release its own neurotransmitters to influence downstream partners. The conversion of electrical signals to chemical transmitters requires presynaptic calcium channels that form pores in the cell membrane that act as a switch, telling the cell to pass along the message in full, reduce the volume, or change the tune completely. By altering calcium channel function, which can be done using a host of neuromodulators or clinically relevant drugs, synaptic function can be tuned up or down to change communication between neurons.

The third mechanism, adding new synapses, has been one of the focal points of Nedivi’s research. Nedivi models this in the visual cortex, labeling and tracking cells in lab mice exposed to different visual experiences that stimulate plasticity.

In a 2016 study, Nedivi showed that the distribution of excitatory and inhibitory synaptic sites on dendrites fluctuates rapidly, with the number of inhibitory sites disappearing and reappearing in the course of a single day. The action, she explains, is in the spines that protrude from dendrites along their length and house post-synaptic areas.

“We found that some spines which were previously thought to have only excitatory synapses are actually dually innervated, meaning they have both excitatory and inhibitory synapses,” Nedivi says. “The excitatory synapses are always stable, and yet on the same spine, about 70% of the inhibitory synapses are dynamic, meaning they can come and go. It’s as if the excitatory synapses on the dually innervated spines are hard-wired, but their activity can be attenuated by the presence of an inhibitory synapse that can gate their activity.” Thus, Nedivi found that the number of inhibitory synapses, which make up roughly 15% of the synaptic density of the brain as a whole, plays an outsized role in managing the passage of signals that lead to the formation of memory.

“We didn’t start out thinking about it this way, but the inhibitory circuitry is so much more dynamic,” she says. “That’s where the plasticity is.”

Inside Engrams: Memory Storage & Recall

A brain that has made many connections and can continually edit them to process information is well set up for its neurons to work together to form a memory. Understanding the mystery of how it does this excited Susumu Tonegawa, a molecular biologist who won the Nobel Prize for his prior work in immunology.

“More than 100 years ago, it was theorized that, for the brain to form a biological basis for storing information, neurons form localized groupings called engrams,” Tonegawa explains. Whenever an experience exposes the brain to new information, synapses among ensembles of neurons undergo persistent chemical and physical changes to form an engram.

Engram cells can be reactivated and modified physically or chemically by a new learning experience. Repeating stimuli present during a prior learning experience (or at least some part of it) also allows the brain to retrieve some of that information.

In 1992, Tonegawa’s lab was the first to show that knocking out a gene for the synaptic protein alpha-CaMKII could disrupt memory formation, helping to establish molecular biology as a tool to understand how memories are encoded. The lab has made numerous contributions on that front since then.

By 2012, neuroscience approaches had advanced to the point where Tonegawa and colleagues could directly test for the existence of engrams. In a study in Nature, Tonegawa’s lab reported that directly activating a subset of neurons involved in the formation of memory–an engram–was sufficient to induce the behavioral expression of that memory. They pinpointed cells involved in forming a memory (a moment of fear instilled in a mouse by giving its foot a little shock) by tracking the timely expression of the protein c-fos in neurons in the hippocampus. They then labeled these cells using specialized ion channels that activate the neurons when exposed to light. After observing what cells were activated during the formation of a fear memory, the researchers traced the synaptic circuits linking them.

It turned out that they only needed to optically activate the neurons involved in the memory of the footshock to trigger the mouse to freeze (just like it does when returned to the fearful scene), which proved those cells were sufficient to elicit the memory. Later, Tonegawa and his team also found that when this memory forms, it forms simultaneously in the cortex and the basolateral amygdala, where the brain forms emotional associations. This discovery contradicted the standard theory of memory consolidation, where memories form in the hippocampus before migrating to the cortex for retrieval later.

Tonegawa has also found key distinctions between memory storage and recall. In 2017, he and colleagues induced a form of amnesia in mice by disrupting their ability to make proteins needed for strengthening synapses. The lab found that engrams could still be reactivated artificially, instigating the freezing behavior, even though they could not be retrieved anymore through natural recall cues. They dubbed these no-longer naturally retrievable memories “silent engrams.” The research showed that while synapse strengthening was needed to recall a memory, the mere pattern of connectivity in the engram was enough to store it.

While memories stored in silent engrams can still be recalled, they require stronger-than-normal stimuli to be activated. “This is caused in part by the lower density of dendritic spines on neurons that participate in silent engrams,” Tonegawa says. Notably, Tonegawa sees applications of this finding in studies of Alzheimer’s disease. While working with a mouse model that presents with the early stages of the disease, Tonegawa’s lab could stimulate silent engrams to help the mice retrieve memories.

Making memory useful

Our neural circuitry is far from a hard drive or a scrapbook. Instead, the brain actively evaluates the information stored in our memories to build models of the world and then make modifications to better utilize our accumulated knowledge in intelligent behavior.

Processing memory includes making structural and chemical changes throughout life. These changes require dedicated downtime, such as sleep or waking states of rest. To hit replay on essential events and simulate how they might be replicated in the future, we need to power down and let the mind work. These so-called “offline states,” and the processes of memory refinement and prediction they enable, fascinate Matt Wilson. Wilson has spent the last several decades examining the ways different regions of the brain communicate with one another during various states of consciousness to learn, retrieve, and augment memories to serve an animal’s intelligent behavior.

“An organism that has successfully evolved an adaptive intelligent system already knows how to respond to new situations,” Wilson says. “They might refine their behavior, but the fact that they had adaptive behavior in the first place suggests that they have to have embedded some kind of a model of expectation that is good enough to get by with. When we experience something for the first time, we make refinements to the model–we learn–and then what we retain from that is what we think of as memory. So the question becomes, how do we refine those models based on experiences?”

Wilson’s fascination with resting states began during his postdoctoral research at the University of Arizona, where he noticed a sleeping lab rat was producing the same electrical activity in its brain as it did while running through a maze. Since then, he has shown that different offline states, including different states of sleep, represent different kinds of offline functions, such as replaying experiences or simulating them. In 2002, Wilson’s work with slow-wave sleep showed the important role the hippocampus plays in spatial learning. Using electrophysiology, in which probes are inserted directly into the brain tissue, Wilson found that the sequential firing of the same hippocampal neurons activated while the rat sought pieces of chocolate on either end of a linear track occurred 20 times faster while the rat was in slow-wave sleep.

In 2006, Wilson co-authored a study in Nature that showed mice can retrace their steps after completing a maze. Using electrophysiological recordings of the activity of many individual neurons, Wilson showed that the mice replayed the memory of each turn they took in reverse, doing so multiple times whenever they had an opportunity to rest between trials. These replays manifested as ripples in electrical activity that occur during slow-wave sleep.

“REM sleep, on the other hand, can produce novel recapitulation of action-based states, where long sequences and movement information are also repeated,” Wilson says (e.g., when your dog is moving its legs during sleep, it could be producing a full-fledged simulation of running). Three years after his initial replay study, Wilson found that mice can initiate replay from any point in the sequence of turns in the maze and can do so forward or in reverse.

“Memory is not just about storing my experience,” Wilson explains. “It’s about making modifications in an existing adaptive model, one that’s been developed based on prior experience. In the case of A.I.s such as large language models [like ChatGPT], you just dump everything in there. For biology, it’s all about the experience being folded into the evolutionary operating system, governed by developmental rules. In a sense, you can put this complexity into the machine, but you just can’t train an animal up de novo; there has to be something that allows it to work through these developmental mechanisms.”

The property of the brain that many neuroscientists believe enables this versatile, flexible, and adaptive approach to storing, recalling, and using memory is its plasticity. Because the brain’s machinery is molecular, it is constantly renewable and rewireable, allowing us to incorporate new experiences even as we apply prior experiences. Because we’ve had many dinners in many restaurants, we can navigate the familiar experience while appreciating the novelty of a celebration. We can look into the future, imagining similarly rewarding moments that have yet to come, and game out how we might get there. The marvels of memory allow us to see much of this information in real-time, and scientists at MIT continue to learn how this molecular system guides our behavior.

Imperiali Lab News Brief: combining bioinformatics and biochemistry

Parsing endless possibilities

Lillian Eden | Department of Biology
December 11, 2024

New research from the Imperiali Lab in the Department of Biology at MIT combines bioinformatics and biochemistry to reveal critical players in assembling glycans, the large sugar molecules on bacterial cell surfaces responsible for behaviors such as evading immune responses and causing infections.

In most cases, single-celled organisms such as bacteria interact with their environment through complex chains of sugars known as glycans bound to lipids on their outer membranes. Glycans orchestrate biological responses and interactions, such as evading immune responses and causing infections. 

The first step in assembling most bacterial glycans is the addition of a sugar-phosphate group onto a lipid, which is catalyzed by phosphoglycosyl transferases (PGTs) on the inner membrane. This first sugar is then further built upon by other enzymes in subsequent steps in an assembly-line-like pathway. These critical biochemical processes are challenging to explore because the proteins involved in these processes are embedded in membranes, which makes them difficult to isolate and study. 

Although glycans are found in all living organisms, the sugar molecules that compose glycans are especially diverse in bacteria. There are over 30,000 known bacterial PGTs, and hundreds of sugars for them to act upon. 

Research recently published in PNAS from the Imperiali Lab in the Department of Biology at MIT uses a combination of bioinformatics and biochemistry to predict clusters of “like-minded” PGTs and verify which sugars they will use in the first step of glycan assembly. 

Defining the biochemical machinery for these assembly pathways could reveal new strategies for tackling antibiotic-resistant strains of bacteria. This comprehensive approach could also be used to develop and test inhibitors, halting the assembly pathway at this critical first step. 

Exploring Sequence Similarity

First author Theo Durand, an undergraduate student from Imperial College London who studied at MIT for a year, worked in the Imperiali Lab as part of a research placement. Durand was first tasked with determining which sugars some PGTs would use in the first step of glycan assembly, known as the sugar substrates of the PGTs. When those initial substrate-testing experiments didn’t work, Durand turned to the power of bioinformatics to develop predictive tools.

Strategically exploring the sugar substrates for PGTs is challenging due to the sheer number of PGTs and the diversity of bacteria, each with its own assorted set of glycans and glycoconjugates. To tackle this problem, Durand deployed a tool called a Sequence Similarity Network (SSN), part of a computational toolkit developed by the Enzyme Function Initiative. 

According to senior author Barbara Imperiali, Class of 1922 Professor of Biology and Chemistry, an SSN provides a powerful way to analyze protein sequences through comparisons of the sequences of tens of thousands of proteins. In an optimized SSN, similar proteins cluster together, and, in the case of PGTs, proteins in the same cluster are likely to share the same sugar substrate. 
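The clustering idea behind an SSN can be sketched with a toy example: treat each protein as a node, connect pairs whose similarity clears a threshold, and let connected components define clusters in which labeled members lend their substrate to unlabeled ones. The sketch below uses k-mer Jaccard similarity and invented sequences, names, and substrate assignments (`pgtA`, `FucNAc4N`, etc.) purely for illustration; real SSNs, such as those produced with the Enzyme Function Initiative toolkit, are built from alignment-based scores over tens of thousands of sequences.

```python
# Minimal sketch (hypothetical data, stdlib only) of the SSN idea.
from itertools import combinations

def kmer_set(seq, k=3):
    """Crude stand-in for a real alignment score: the set of k-mers."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def similarity(a, b, k=3):
    """Jaccard similarity of k-mer sets (a real SSN uses BLAST-like scores)."""
    sa, sb = kmer_set(a, k), kmer_set(b, k)
    return len(sa & sb) / len(sa | sb)

def cluster(seqs, threshold=0.5):
    """Union-find over the thresholded similarity graph; returns clusters."""
    parent = {name: name for name in seqs}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x
    for a, b in combinations(seqs, 2):
        if similarity(seqs[a], seqs[b]) >= threshold:
            parent[find(a)] = find(b)
    groups = {}
    for name in seqs:
        groups.setdefault(find(name), []).append(name)
    return list(groups.values())

def predict_substrate(clusters, known):
    """Unlabeled proteins inherit the substrate of their cluster-mates
    whenever the labeled members of the cluster agree on one substrate."""
    predictions = {}
    for group in clusters:
        labels = {known[n] for n in group if n in known}
        if len(labels) == 1:
            label = labels.pop()
            for n in group:
                predictions.setdefault(n, label)
    return predictions

# Invented toy sequences: pgtA and pgtB are near-identical, pgtC differs.
seqs = {
    "pgtA": "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ",
    "pgtB": "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVA",
    "pgtC": "MLNWQPRRGGSDDEEPLKVNHTTYYACMILQWE",
}
clusters = cluster(seqs, threshold=0.5)
preds = predict_substrate(clusters, {"pgtA": "FucNAc4N"})
```

In this toy network, `pgtB` lands in the same cluster as the labeled `pgtA` and so is predicted to use FucNAc4N, while `pgtC` falls in its own cluster and gets no prediction — the same “guilt by association” logic, minus the scale and rigor, that the real SSN analysis applies before the wet-lab verification described below.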

For example, a previously uncharacterized PGT that appears in a cluster of PGTs whose first sugar substrate is FucNAc4N would also be predicted to use FucNAc4N. The researchers could then test that prediction to verify the accuracy of the SSN. 

FucNAc4N is the sugar substrate for the PGT of Fusobacterium nucleatum (F. nucleatum), a bacterium that is normally only present in the oral cavity but is correlated with certain cancers and endometriosis, and Streptococcus pneumoniae, a bacterium that causes pneumonia. 

Adjusting the assay

The critical biochemical process of assembling glycans has historically been challenging to define, mainly because assembly is anchored to the interior side of the inner membrane of the bacterium. The purification process itself can be difficult, and the purified proteins don’t necessarily behave in the same manner once outside their native membrane environment.

To address this, the researchers modified a commercially available test to work with proteins still embedded in the membrane of the bacterium, thus saving them weeks of work to purify the proteins. They could then determine the substrate for the PGT by measuring whether there was activity. This first step in glycan assembly is chemically unique, and the test measures one of the reaction products. 

For PGTs whose substrate was unknown, Durand did a deep dive into the literature to find new substrates to test. FucNAc4N, the first sugar substrate for F. nucleatum, was, in fact, Durand’s favorite sugar – he found it in the literature and reached out to a former Imperiali Lab postdoc for the instructions and materials to make it. 

“I ended up down a rabbit hole where I was excited every time I found a new, weird sugar,” Durand recalls with a laugh. “These bacteria are doing a bunch of really complicated things and any tools to help us understand what is actually happening is useful.” 

Exploring inhibitors

Imperiali noted that this research both represents a huge step forward in our understanding of bacterial PGTs and their substrates and presents a pipeline for further exploration. She’s hoping to create a searchable database where other researchers can seed their own sequences into the SSN for their organisms of interest. 

This pipeline could also reveal antibiotic targets in bacteria. For example, she says, the team is using this approach to explore inhibitor development. 

The Imperiali lab worked with Karen Allen, a professor of Chemistry at Boston University, and graduate student Roxanne Siuda to test inhibitors, including ones for F. nucleatum, the bacterium correlated with certain cancers and endometriosis whose first sugar substrate is FucNAc4N. They are also hoping to obtain structures of inhibitors bound to the PGT to enable structure-guided optimization.

“We were able to, using the network, discover the substrate for a PGT, verify the substrate, use it in a screen, and test an inhibitor,” Imperiali says. “This is bioinformatics, biochemistry, and probe development all bundled together, and represents the best of functional genomics.”

Introducing MIT HEALS, a life sciences initiative to address pressing health challenges

The MIT Health and Life Sciences Collaborative will bring together researchers from across the Institute to deliver health care solutions at scale.

Anne Trafton | MIT News
December 10, 2024

At MIT, collaboration between researchers working in the life sciences and engineering is a frequent occurrence. Under a new initiative launched last week, the Institute plans to strengthen and expand those collaborations to take on some of the most pressing health challenges facing the world.

The new MIT Health and Life Sciences Collaborative, or MIT HEALS, will bring together researchers from all over the Institute to find new solutions to challenges in health care. HEALS will draw on MIT’s strengths in life sciences and other fields, including artificial intelligence and chemical and biological engineering, to accelerate progress in improving patient care.

“As a source of new knowledge, of new tools and new cures, and of the innovators and the innovations that will shape the future of biomedicine and health care, there is just no place like MIT,” MIT President Sally Kornbluth said at a launch event last Wednesday in Kresge Auditorium. “Our goal with MIT HEALS is to help inspire, accelerate, and deliver solutions, at scale, to some of society’s most urgent and intractable health challenges.”

The launch event served as a day-long review of MIT’s historical impact in the life sciences and a preview of what it hopes to accomplish in the future.

“The talent assembled here has produced some truly towering accomplishments. But also — and, I believe, more importantly — you represent a deep well of creative potential for even greater impact,” Kornbluth said.

Massachusetts Governor Maura Healey, who addressed the filled auditorium, spoke of her excitement about the new initiative, emphasizing that “MIT’s leadership and the work that you do are more important than ever.”

“One of the things as governor that I really appreciate is the opportunity to see so many of our state’s accomplished scientists and bright minds come together, work together, and forge a new commitment to improving human life,” Healey said. “It’s even more exciting when you think about this convening to think about all the amazing cures and treatments and discoveries that will result from it. I’m proud to say, and I really believe this, this is something that could only happen in Massachusetts. There’s no place that has the ecosystem that we have here, and we must fight hard to always protect that and to nurture that.”

A history of impact

MIT has a long history of pioneering new fields in the life sciences, as MIT Institute Professor Phillip Sharp noted in his keynote address. Fifty years ago, MIT’s Center for Cancer Research was born, headed by Salvador Luria, a molecular biologist and a 1975 Nobel laureate.

That center helped to lead the revolutions in molecular biology, and later recombinant DNA technology, which have had significant impacts on human health. Research by MIT Professor Robert Weinberg and others identifying cancer genes has led to the development of targeted drugs for cancer, including Herceptin and Gleevec.

In 2007, the Center for Cancer Research evolved into the Koch Institute for Integrative Cancer Research, whose faculty members are divided evenly between the School of Science and the School of Engineering, and where interdisciplinary collaboration is now the norm.

While MIT has long been a pioneer in this kind of collaborative health research, over the past several years, MIT’s visiting committees reported that there was potential to further enhance those collaborations, according to Nergis Mavalvala, dean of MIT’s School of Science.

“One of the very strong themes that emerged was that there’s an enormous hunger among our colleagues to collaborate more. And not just within their disciplines and within their departments, but across departmental boundaries, across school boundaries, and even with the hospitals and the biotech sector,” Mavalvala told MIT News.

To explore whether MIT could be doing more to encourage interdisciplinary research in the life sciences, Mavalvala and Anantha Chandrakasan, dean of the School of Engineering and MIT’s chief innovation and strategy officer, appointed a faculty committee called VITALS (Vision to Integrate, Translate and Advance Life Sciences).

That committee was co-chaired by Tyler Jacks, the David H. Koch Professor of Biology at MIT and a member and former director of the Koch Institute, and Kristala Jones Prather, head of MIT’s Department of Chemical Engineering.

“We surveyed the faculty, and for many people, the sense was that they could do more if there were improved mechanisms for interaction and collaboration. Not that those don’t exist — everybody knows that we have a highly collaborative environment at MIT, but that we could do even more if we had some additional infrastructure in place to facilitate bringing people together, and perhaps providing funding to initiate collaborative projects,” Jacks said before last week’s launch.

These efforts will build on and expand existing collaborative structures. MIT is already home to a number of institutes that promote collaboration across disciplines, including not only the Koch Institute but also the McGovern Institute for Brain Research, the Picower Institute for Learning and Memory, and the Institute for Medical Engineering and Science.

“We have some great examples of crosscutting work around MIT, but there’s still more opportunity to bring together faculty and researchers across the Institute,” Chandrakasan said before the launch event. “While there are these great individual pieces, we can amplify those while creating new collaborations.”

Supporting science

In her opening remarks on Wednesday, Kornbluth announced several new programs designed to support researchers in the life sciences and help promote connections between faculty at MIT, surrounding institutions and hospitals, and companies in the Kendall Square area.

“A crucial part of MIT HEALS will be finding ways to support, mentor, connect, and foster community for the very best minds, at every stage of their careers,” she said.

With funding provided by Noubar Afeyan PhD ’87, an executive member of the MIT Corporation and founder and CEO of Flagship Pioneering, MIT HEALS will offer fellowships for graduate students interested in exploring new directions in the life sciences.

Another key component of MIT HEALS will be the new Hood Pediatric Innovation Hub, which will focus on development of medical treatments specifically for children. This program, established with a gift from the Charles H. Hood Foundation, will be led by Elazer Edelman, a cardiologist and the Edward J. Poitras Professor in Medical Engineering and Science at MIT.

“Currently, the major market incentives are for medical innovations intended for adults — because that’s where the money is. As a result, children are all too often treated with medical devices and therapies that don’t meet their needs, because they’re simply scaled-down versions of the adult models,” Kornbluth said.

As another tool to help promising research projects get off the ground, MIT HEALS will include a grant program known as the MIT-MGB Seed Program. This program, which will fund joint research projects between MIT and Massachusetts General Hospital/Brigham and Women’s Hospital, is being launched with support from Analog Devices, to establish the Analog Devices, Inc. Fund for Health and Life Sciences.

Additionally, the Biswas Family Foundation is providing funding for postdoctoral fellows, who will receive four-year appointments to pursue collaborative health sciences research. The details of the fellows program will be announced in spring 2025.

“One of the things we have learned through experience is that when we do collaborative work that is cross-disciplinary, the people who are actually crossing disciplinary boundaries and going into multiple labs are students and postdocs,” Mavalvala said prior to the launch event. “The trainees, the younger generation, are much more nimble, moving between labs, learning new techniques and integrating new ideas.”

Revolutions

Discussions following the release of the VITALS committee report identified seven potential research areas where new research could have a big impact: AI and life science, low-cost diagnostics, neuroscience and mental health, environmental life science, food and agriculture, the future of public health and health care, and women’s health. However, Chandrakasan noted that research within HEALS will not be limited to those topics.

“We want this to be a very bottom-up process,” he told MIT News. “While there will be a few areas like AI and life sciences that we will absolutely prioritize, there will be plenty of room for us to be surprised on those innovative, forward-looking directions, and we hope to be surprised.”

At the launch event, faculty members from departments across MIT shared their work during panels that focused on the biosphere, brains, health care, immunology, entrepreneurship, artificial intelligence, translation, and collaboration. The program, which was developed by Amy Keating, head of the Department of Biology, and Katharina Ribbeck, the Andrew and Erna Viterbi Professor of Biological Engineering, also included a spoken-word performance by Victory Yinka-Banjo, an MIT senior majoring in computer science and molecular biology.

In her performance, called “Systems,” Yinka-Banjo urged the audience to “zoom out,” look at systems in their entirety, and pursue collective action.

“To be at MIT is to contribute to an era of infinite impact. It is to look beyond the microscope, zooming out to embrace the grander scope. To be at MIT is to latch onto hope so that in spite of a global pandemic, we fight and we cope. We fight with science and policy across clinics, academia, and industry for the betterment of our planet, for our rights, for our health,” she said.

In a panel titled “Revolutions,” Douglas Lauffenburger, the Ford Professor of Engineering and one of the founders of MIT’s Department of Biological Engineering, noted that engineers have been innovating in medicine since the 1950s, producing critical advances such as kidney dialysis, prosthetic limbs, and sophisticated medical imaging techniques.

MIT launched its program in biological engineering in 1998, and it became a full-fledged department in 2005. The department was founded on the idea of creating new approaches to studying biology, and new potential treatments, building on the advances then being made in molecular biology and genomics.

“Those two revolutions laid the foundation for a brand new kind of engineering that was not possible before them,” Lauffenburger said.

During that panel, Jacks and Ruth Lehmann, director of the Whitehead Institute for Biomedical Research, outlined several interdisciplinary projects underway at the Koch Institute and the Whitehead Institute. Those projects include using AI to analyze mammogram images and detect cancer earlier, engineering drought-resistant plants, and using CRISPR to identify genes involved in toxoplasmosis infection.

These examples illustrate the potential impact that can occur when “basic science meets translational science,” Lehmann said.

“I’m really looking forward to HEALS further enlarging the interactions that we have, and I think the possibilities for science, both at a mechanistic level and understanding the complexities of health and the planet, are really great,” she said.

The importance of teamwork

To bring together faculty and students with common interests and help spur new collaborations, HEALS plans to host workshops on different health-related topics. A faculty committee is now searching for a director for HEALS, who will coordinate these efforts.

Another important goal of the HEALS initiative, which was the focus of the day’s final panel discussion, is enhancing partnerships with Boston-area hospitals and biotech companies.

“There are many, many different forms of collaboration,” said Anne Klibanski, president and CEO of Mass General Brigham. “Part of it is the people. You bring the people together. Part of it is the ideas. But I have found certainly in our system, the way to get the best and the brightest people working together is to give them a problem to solve. You give them a problem to solve, and that’s where you get the energy, the passion, and the talent working together.”

Robert Langer, the David H. Koch Institute Professor at MIT and a member of the Koch Institute, noted the importance of tackling fundamental challenges without knowing exactly where they will lead. Langer, trained as a chemical engineer, began working in biomedical research in the 1970s, when most of his engineering classmates were going into jobs in the oil industry.

At the time, he worked with Judah Folkman at Boston Children’s Hospital on the idea of developing drugs that would starve tumors by cutting off their blood supply. “It took many, many years before those would [reach patients],” he says. “It took Genentech doing great work, building on some of the things we did that would lead to Avastin and many other drugs.”

Langer has spent much of his career developing novel strategies for delivering molecules, including messenger RNA, into cells. In 2010, he and Afeyan co-founded Moderna to further develop mRNA technology, which was eventually incorporated into mRNA vaccines for Covid.

“The important thing is to try to figure out what the applications are, which is a team effort,” Langer said. “Certainly when we published those papers in 1976, we had obviously no idea that messenger RNA would be important, that Covid would even exist. And so really it ends up being a team effort over the years.”

Study suggests how the brain, with sleep, learns meaningful maps of spaces

Place cells are well known to encode individual locations, but new experiments and analysis indicate that stitching together a “cognitive map” of a whole environment requires a broader ensemble of cells, aided by sleep, to build a richer network over several days, according to new research from the Wilson Lab.

David Orenstein | The Picower Institute for Learning and Memory
December 10, 2024

On the first day of your vacation in a new city, your explorations expose you to innumerable individual places. While the memories of these spots (like a beautiful garden on a quiet side street) feel immediately indelible, it might be days before you have enough intuition about the neighborhood to direct a newer tourist to that same site and then maybe to the café you discovered nearby. A new study in mice by MIT neuroscientists at The Picower Institute for Learning and Memory provides new evidence for how the brain forms cohesive cognitive maps of whole spaces and highlights the critical importance of sleep for the process.

Scientists have known for decades that the brain devotes neurons in a region called the hippocampus to remembering specific locations. So-called “place cells” reliably activate when an animal is at the location the neuron is tuned to remember. But more useful than having markers of specific spaces is having a mental model of how they all relate in a continuous overall geography. Though such “cognitive maps” were formally theorized in 1948, neuroscientists have remained unsure of how the brain constructs them. The new study in the December edition of Cell Reports finds that the capability may depend upon subtle but meaningful changes over days in the activity of cells that are only weakly attuned to individual locations, but that increase the robustness and refinement of the hippocampus’s encoding of the whole space. With sleep, the study’s analyses indicate, these “weakly spatial” cells increasingly enrich neural network activity in the hippocampus to link together these places into a cognitive map.

“On day 1, the brain doesn’t represent the space very well,” said lead author Wei Guo, a research scientist in the lab of senior author Matthew Wilson, Sherman Fairchild Professor in The Picower Institute and MIT’s Departments of Biology and Brain and Cognitive Sciences. “Neurons represent individual locations, but together they don’t form a map. But on day 5 they form a map. If you want a map, you need all these neurons to work together in a coordinated ensemble.”

Mice mapping mazes

To conduct the study, Guo and Wilson, along with labmates Jie “Jack” Zhang and Jonathan Newman, introduced mice to simple mazes of varying shapes and let them explore freely for about half an hour a day for several days. Importantly, the mice were not directed to learn anything specific through the offer of any rewards. They just wandered. Previous studies have shown that mice naturally demonstrate “latent learning” of spaces from this kind of unrewarded experience after several days.

To understand how latent learning takes hold, Guo and his colleagues visually monitored hundreds of neurons in the CA1 area of the hippocampus by engineering cells to flash when a buildup of calcium ions made them electrically active. They not only recorded the neurons’ flashes when the mice were actively exploring, but also while they were sleeping. Wilson’s lab has shown that animals “replay” their previous journeys during sleep, essentially refining their memories by dreaming about their experiences.

Analysis of the recordings showed that the activity of the place cells developed immediately and remained strong and unchanged over several days of exploration. But this activity alone couldn’t explain how latent learning or a cognitive map evolves over several days. So unlike in many other studies, where scientists focus solely on the strong and clear activity of place cells, Guo extended his analysis to the more subtle and mysterious activity of cells that were not so strongly spatially tuned. Using an emerging technique called “manifold learning,” he was able to discern that many of the “weakly spatial” cells gradually correlated their activity not with locations, but with activity patterns among other neurons in the network. As this was happening, Guo’s analyses showed, the network encoded a cognitive map of the maze that increasingly resembled the literal, physical space.
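Manifold learning looks for low-dimensional structure hidden in high-dimensional population activity. As a loose illustration of the general idea (a hypothetical toy example, not the study’s actual pipeline), the sketch below applies classical multidimensional scaling, one simple embedding technique, to simulated tuning-curve activity and recovers the layout of a toy circular track from the activity alone:

```python
import numpy as np

# Hypothetical toy data: 40 neurons with smooth tuning to positions on a
# circular track, sampled at 60 positions. The population vectors then
# trace out a ring in 40-dimensional activity space.
positions = np.linspace(0, 2 * np.pi, 60, endpoint=False)
centers = np.linspace(0, 2 * np.pi, 40, endpoint=False)
activity = np.exp(np.cos(positions[:, None] - centers[None, :]) / 0.3)

# Classical multidimensional scaling: double-center the squared pairwise
# distance matrix and keep the top two eigenvectors as 2D coordinates.
diffs = activity[:, None, :] - activity[None, :, :]
D2 = (diffs ** 2).sum(axis=-1)          # squared distances between population vectors
n = D2.shape[0]
J = np.eye(n) - np.ones((n, n)) / n     # centering matrix
B = -0.5 * J @ D2 @ J
evals, evecs = np.linalg.eigh(B)
top = np.argsort(evals)[::-1][:2]       # two largest eigenvalues
embedding = evecs[:, top] * np.sqrt(np.clip(evals[top], 0, None))

# Nearby track positions land near each other in the 2D embedding,
# recovering the track's circular topology without using position data.
```

In the real analysis the embedding is computed from recorded activity, and the question is how closely its geometry comes to match the physical maze as learning proceeds.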

“Although not responding to specific locations like strongly spatial cells, weakly spatial cells specialize in responding to ‘mental locations,’ i.e., specific ensemble firing patterns of other cells,” the study authors wrote. “If a weakly spatial cell’s mental field encompasses two subsets of strongly spatial cells that encode distinct locations, this weakly spatial cell can serve as a bridge between these locations.”

In other words, the activity of the weakly spatial cells likely stitches together the individual locations represented by the place cells into a mental map.
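The “bridge” idea can be pictured as a graph problem. In the hypothetical sketch below (illustrative only, with made-up cell names), place cells that co-fire at each of two distinct locations form separate islands of a co-firing graph until a weakly spatial cell that fires with both groups links them into one connected map:

```python
from collections import defaultdict

# Hypothetical co-firing graph: nodes are cells, and an edge means the
# two cells fired together. Place cells for location 1 ("p1a", "p1b")
# and location 2 ("p2a", "p2b") form separate groups; "W" stands in for
# a weakly spatial cell whose "mental field" spans both groups.
edges_without_w = [("p1a", "p1b"), ("p2a", "p2b")]
edges_with_w = edges_without_w + [("W", "p1a"), ("W", "p2a")]

def n_components(edges):
    """Count connected components of the co-firing graph via DFS."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    seen, count = set(), 0
    for start in adj:
        if start in seen:
            continue
        count += 1
        stack = [start]
        while stack:
            node = stack.pop()
            if node not in seen:
                seen.add(node)
                stack.extend(adj[node] - seen)
    return count

print(n_components(edges_without_w))  # 2: the locations are separate islands
print(n_components(edges_with_w))     # 1: W bridges them into one map
```

The toy graph only illustrates the connectivity argument; the study’s evidence comes from how ensemble firing patterns change over days of exploration and sleep.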

The need for sleep

Studies by Wilson’s lab and many others have shown that memories are consolidated, refined and processed by neural activity, such as replay, that occurs during sleep and rest. Guo and Wilson’s team therefore sought to test whether sleep was necessary for the contribution of weakly spatial cells to latent learning of cognitive maps.

To do this, they let some mice explore a new maze twice during the same day with a three-hour siesta in between. Some of the mice were allowed to sleep during the break, and some were not. The mice that slept showed a significant refinement of their mental map, but the ones that weren’t allowed to sleep showed no such improvement. And not only did the network encoding of the map improve with sleep: measures of the tuning of individual cells showed that sleep helped them become better attuned both to places and to patterns of network activity, so-called “mental places” or “fields.”

Mental map meaning

The “cognitive maps” the mice encoded over several days were not literal, precise maps of the mazes, Guo notes. Instead they were more like schematics. Their value is that they provide the brain with a topology that can be explored mentally, without having to be in the physical space. For instance, once you’ve formed your cognitive map of the neighborhood around your hotel, you can plan the next morning’s excursion (e.g. you could imagine grabbing a croissant at the bakery you observed a few blocks west and then picture eating it on one of those benches you noticed in the park along the river).

Indeed, Wilson hypothesized that the weakly spatial cells’ activity may be overlaying salient non-spatial information that brings additional meaning to the maps (i.e. the idea of a bakery is not spatial, even if it’s closely linked to a specific location). The study, however, included no landmarks within the mazes and did not test any specific behaviors among the mice. But now that the study has identified that weakly spatial cells contribute meaningfully to mapping, Wilson said future studies can investigate what kind of information they may be incorporating into the animals’ sense of their environments. We seem to intuitively regard the spaces we inhabit as more than just sets of discrete locations.

“In this study we focused on animals behaving naturally and demonstrated that during freely exploratory behavior and subsequent sleep, in the absence of reinforcement, substantial neural plastic changes at the ensemble level still occur,” the authors concluded. “This form of implicit and unsupervised learning constitutes a crucial facet of human learning and intelligence, warranting further in-depth investigations.”

The Freedom Together Foundation, The Picower Institute for Learning and Memory and the National Institutes of Health funded the study.

Cellular traffic congestion in chronic diseases suggests new therapeutic targets

Many chronic diseases have a common denominator that could be driving their dysfunction: reduced protein mobility, which in turn reduces protein function. A new paper from the Young Lab describes this pervasive mobility defect.

Greta Friar | Whitehead Institute
November 26, 2024

Chronic diseases like type 2 diabetes and inflammatory disorders have a huge impact on humanity. They are a leading cause of disease burden and deaths around the globe, are physically and economically taxing, and the number of people with such diseases is growing.

Treating chronic disease has proven difficult because there is not one simple cause, like a single gene mutation, that a treatment could target. At least, that’s how it has appeared to scientists. However, research from Whitehead Institute Member Richard Young and colleagues, published in the journal Cell on November 27, reveals that many chronic diseases have a common denominator that could be driving their dysfunction: reduced protein mobility. What this means is that around half of all proteins active in cells slow their movement when cells are in a chronic disease state, reducing the proteins’ functions. The researchers’ findings suggest that protein mobility may be a linchpin for decreased cellular function in chronic disease, making it a promising therapeutic target.

In this paper, Young and colleagues in his lab, including postdoc Alessandra Dall’Agnese, graduate students Shannon Moreno and Ming Zheng, and research scientist Tong Ihn Lee, describe their discovery of this common mobility defect, which they call proteolethargy; explain what causes the defect and how it leads to dysfunction in cells; and propose a new therapeutic hypothesis for treating chronic diseases.

“I’m excited about what this work could mean for patients,” says Dall’Agnese. “My hope is that this will lead to a new class of drugs that restore protein mobility, which could help people with many different diseases that all have this mechanism as a common denominator.”

“This work was a collaborative, interdisciplinary effort that brought together biologists, physicists, chemists, computer scientists and physician-scientists,” Lee says. “Combining that expertise is a strength of the Young lab. Studying the problem from different viewpoints really helped us think about how this mechanism might work and how it could change our understanding of the pathology of chronic disease.”

Commuter delays cause work stoppages in the cell

How do proteins moving more slowly through a cell lead to widespread and significant cellular dysfunction? Dall’Agnese explains that every cell is like a tiny city, with proteins as the workers who keep everything running. Proteins have to commute in dense traffic in the cell, traveling from where they are created to where they work. The faster their commute, the more work they get done. Now, imagine a city that starts experiencing traffic jams along all the roads. Stores don’t open on time, groceries are stuck in transit, meetings are postponed. Essentially all operations in the city are slowed.

The slowdown of operations in cells experiencing reduced protein mobility follows a similar progression. Normally, most proteins zip around the cell, bumping into other molecules until they locate the molecule they work with or act on. The slower a protein moves, the fewer other molecules it will reach, and so the less likely it will be able to do its job. Young and colleagues found that such protein slowdowns lead to measurable reductions in the functional output of the proteins. When many proteins fail to get their jobs done in time, cells begin to experience a variety of problems—as they are known to do in chronic diseases.

Discovering the protein mobility problem

Young and colleagues first suspected that cells affected in chronic disease might have a protein mobility problem after observing changes in the behavior of the insulin receptor, a signaling protein that reacts to the presence of insulin and causes cells to take in sugar from blood. In people with diabetes, cells become less responsive to insulin — a state called insulin resistance — causing too much sugar to remain in the blood. In research published on insulin receptors in Nature Communications in 2022, Young and colleagues reported that insulin receptor mobility might be relevant to diabetes.

Knowing that many cellular functions are altered in diabetes, the researchers considered the possibility that altered protein mobility might somehow affect many proteins in cells. To test this hypothesis, they studied proteins involved in a broad range of cellular functions, including MED1, a protein involved in gene expression; HP1α, a protein involved in gene silencing; FIB1, a protein involved in production of ribosomes; and SRSF2, a protein involved in splicing of messenger RNA. They used single-molecule tracking and other methods to measure how each of those proteins moves in healthy cells and in cells in disease states. All but one of the proteins showed reduced mobility (about 20-35%) in the disease cells.
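Single-molecule tracking typically infers mobility by fitting the mean squared displacement (MSD) of many trajectories; for two-dimensional Brownian motion, MSD(t) = 4Dt, where D is the diffusion coefficient. As a rough, hypothetical sketch of that readout (not the authors’ analysis code), the simulation below recovers a roughly 30 percent mobility drop from simulated tracks:

```python
import numpy as np

rng = np.random.default_rng(1)

def estimate_D(sigma, n_tracks=2000, n_steps=50):
    """Simulate 2D Brownian tracks with per-axis step size `sigma` and
    estimate the diffusion coefficient from MSD(t) = 4 D t."""
    steps = rng.normal(0.0, sigma, size=(n_tracks, n_steps, 2))
    paths = steps.cumsum(axis=1)                     # particle positions over time
    msd = (paths ** 2).sum(axis=-1).mean(axis=0)     # MSD averaged over tracks
    t = np.arange(1, n_steps + 1)
    slope = (msd * t).sum() / (t * t).sum()          # least-squares fit through origin
    return slope / 4.0

# "Healthy" vs. "disease" conditions: the disease step size is scaled so
# the true diffusion coefficient is 30 percent lower (a made-up figure
# chosen to sit inside the reported 20-35 percent range).
D_healthy = estimate_D(sigma=1.0)
D_disease = estimate_D(sigma=np.sqrt(0.7))
ratio = D_disease / D_healthy   # recovered ratio, close to 0.7
```

The real measurements track fluorescently labeled molecules in live cells, but the fitting logic is the same: slower diffusion shows up as a shallower MSD curve.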

“I’m excited that we were able to transfer physics-based insight and methodology, which are commonly used to understand the single-molecule processes like gene transcription in normal cells, to a disease context and show that they can be used to uncover unexpected mechanisms of disease,” Zheng says. “This work shows how the random walk of proteins in cells is linked to disease pathology.”

Moreno concurs: “In school, we’re taught to consider changes in protein structure or DNA sequences when looking for causes of disease, but we’ve demonstrated that those are not the only contributing factors. If you only consider a static picture of a protein or a cell, you miss out on discovering these changes that only appear when molecules are in motion.”

Can’t commute across the cell, I’m all tied up right now

Next, the researchers needed to determine what was causing the proteins to slow down. They suspected that the defect had to do with an increase in cellular levels of reactive oxygen species (ROS), molecules that are highly prone to interfering with other molecules and their chemical reactions. Many types of chronic-disease-associated triggers, such as higher sugar or fat levels, certain toxins, and inflammatory signals, lead to an increase in ROS, also known as an increase in oxidative stress. The researchers measured the mobility of the proteins again, in cells that had high levels of ROS and were not otherwise in a disease state, and saw comparable mobility defects, suggesting that oxidative stress was to blame for the protein mobility defect.

The final part of the puzzle was why some, but not all, proteins slow down in the presence of ROS. SRSF2 was the only one of the proteins that was unaffected in the experiments, and it had one clear difference from the others: its surface did not contain any cysteines, an amino acid building block of many proteins. Cysteines are especially susceptible to interference from ROS, which cause them to bond to other cysteines. When this bonding occurs between two protein molecules, it slows them down because the two proteins cannot move through the cell as quickly as either protein alone.

About half of the proteins in our cells contain surface cysteines, so this single protein mobility defect can impact many different cellular pathways. This makes sense when one considers the diversity of dysfunctions that appear in cells of people with chronic diseases: dysfunctions in cell signaling, metabolic processes, gene expression and gene silencing, and more. All of these processes rely on the efficient functioning of proteins—including the diverse proteins studied by the researchers. Young and colleagues performed several experiments to confirm that decreased protein mobility does in fact decrease a protein’s function. For example, they found that when an insulin receptor experiences decreased mobility, it acts less efficiently on IRS1, a molecule to which it usually adds a phosphate group.

From understanding a mechanism to treating a disease

Discovering that decreased protein mobility in the presence of oxidative stress could be driving many of the symptoms of chronic disease provides opportunities to develop therapies to rescue protein mobility. In the course of their experiments, the researchers treated cells with an antioxidant drug—something that reduces ROS—called N-acetyl cysteine and saw that this partially restored protein mobility.

The researchers are pursuing a variety of follow-ups to this work, including the search for drugs that safely and efficiently reduce ROS and restore protein mobility. They developed an assay that can be used to screen drugs to see if they restore protein mobility by comparing each drug’s effect on a simple biomarker with surface cysteines to one without. They are also looking into other diseases that may involve protein mobility, and are exploring the role of reduced protein mobility in aging.

“The complex biology of chronic diseases has made it challenging to come up with effective therapeutic hypotheses,” says Young, who is also a professor of biology at the Massachusetts Institute of Technology. “The discovery that diverse disease-associated stimuli all induce a common feature, proteolethargy, and that this feature could contribute to much of the dysregulation that we see in chronic disease, is something that I hope will be a real game changer for developing drugs that work across the spectrum of chronic diseases.”