3 Questions: Sara Prescott on the brain-body connection

New MIT faculty member investigates how sensory input from within the body controls mammalian physiology and behavior.

Lillian Eden | Department of Biology | Picower Institute for Learning and Memory
May 17, 2023

Many of our body’s most important functions occur without our conscious knowledge, such as digestion, heartbeat, and breathing. These vital functions depend on the signals generated by the “interoceptive nervous system,” which enables the brain to monitor our internal organs and trigger responses that sometimes save our lives. One second you are breathing normally as you eat your salad and the next, when a vinegar-soaked crouton enters your throat, you are coughing or swallowing to protect and clear your airway. We know our bodies are sensitive to cues like irritants, but we still have a lot to learn about how the interoceptive system works to meet our physiological needs, keep organs safe and healthy, and affect our behavior. We can also learn how chronic insults may lead to organ dysfunction and use what we learn to create therapeutic interventions.

Focusing on the airway, Sara Prescott, a new faculty member in the Department of Biology and investigator in The Picower Institute for Learning and Memory, seeks to understand the ways our nervous systems detect and respond to stimuli in health and disease. Here, she describes her work.

Q: Why is understanding the peripheral nervous system important, and what parts of your background are you drawing on for your current research?

A: The lab focuses on really trying to explore the body-brain connection.

People often think that our mind exists in a vacuum, but in reality, our nervous system is heavily integrated with the rest of the body. Those neural interfaces are important both for taking information from our body or environment and turning it into an internal representation of the world and, in reverse, for processing that information and enacting changes throughout the body. That includes things like autonomic reflexes and basic functions of the body such as breathing, blood-gas regulation, digestion, and heart rate.

I’ve integrated both my graduate training and postdoctoral training into thinking about biology across multiple scales.

Graduate school for me was quite focused on deep molecular mechanism questions, particularly gene regulation, so I feel like that has been very useful for me in my general approach to neuroscience because I take a very molecular angle to all of this.

It also showed me the power of in vitro models as reductionist tools to explore fundamental aspects of cell biology. During my postdoc, I focused on larger, emergent phenotypes. We were able to manipulate specific circuits and see very impressive behavioral responses in animals. You could stimulate about 100 neurons in a mouse and see that their breathing would just stop until you removed the stimulation, and then the breathing would return to normal.

Both of those experiences inform how we approach a problem in my research. We need to understand how these circuits work, not just their connectivity at the anatomical level but what is driving their changes in sensitivity over time, the receptor expression programs that affect how they sense and signal, how these circuits emerge during development, and their gene expression.

There are still so many foundational questions that haven’t been answered that there’s enough to do in the mouse for quite some time.

Q: How are you specifically looking into interoceptive biology at MIT?

A: Our flagship system is the mammalian airway. We use a mouse model and modern molecular neuroscience tools to manipulate various neural pathways and observe what the effects are on respiratory function and animal health.

Neuroscience and mouse work have a reputation for being a little challenging and intense, but I think this is also where we can ask really important questions that are useful for our everyday lives — and the only place where we can fully recapitulate the complexity of nervous system signaling all the way down to our organs, back to our brain, and back to our organs.

It’s a very fun place to do science with lots of open questions.

One of the core discoveries from my postdoctoral work centered on the vagus nerve as a major body-to-brain conduit, as it innervates our lungs, heart, and gastrointestinal tract. We found that there were about 40 different subtypes of sensory neurons within this small nerve, which is really a remarkable amount of diversity and reflects the massive sensory space within the body. About a dozen of those vagal neuron subtypes project to the airways.

We identified a rare neuron type specifically responsible for triggering protective responses, like coughing when water or acid entered the airway. We also discovered a separate population of neurons that make us feel and act sick when we get a flu infection. The field now knows what four to five vagal populations of neurons are actually sensing in the airways, but the remaining populations are still a mystery to us; we don’t know what those populations of sensory neurons are detecting, what their anatomy is, and what reflex effects those neurons are evoking.

Looking ahead, there are many exciting directions for the interoceptive biology field. For example, there’s been a lot of focus on characterizing the circuits underlying acute motor reflexes, like rapid responses to visceral stimuli on the timescale of minutes to hours. But we don’t have a lot of information about what happens when these circuits are activated over long periods of time. For example, respiratory tract infections often last for weeks or longer. We know that the airways undergo changes in composition when they’re exposed to different types of infection or stress to better accommodate future threats. One of the hypotheses we’re testing is that chronically activating neural circuits may drive changes in organ composition. We have this idea, which we’re calling reflexive remodeling: neurons may be communicating with stem cells and progenitor cells in the periphery to drive adaptive remodeling responses.

We have the genetic, molecular, and circuit scale tools to explore this phenomenon in mice. In parallel, we’re also setting up some in vitro models of the mouse airway mucosa to expedite receptor screening and to explore basic mechanisms of neuron-epithelium cross-talk. We hope this will inform our understanding of how the airway surface senses and responds to different types of irritants or damage.

Q: This all sounds fascinating. Where does it lead?

A: Human health has been my north star for a long time and I’ve taken a long, wandering path to find particular areas where I can scratch whatever intellectual itch that I have.

I originally thought I would be a doctor and then realized that I felt like I could have a more lasting impact by discovering fundamental truths about how our bodies work. I think there are a number of chronic diseases in which autonomic imbalance is actually a huge clinical component of the disorder.

We have a lot of interest in some of these very common airway remodeling diseases, like chronic obstructive pulmonary disease — COPD — asthma, and potentially lung cancer. We want to ask questions like how autonomic circuits are altered in disease contexts, and when neurons actually drive features of disease.

Perhaps this research will help us come up with better molecular, cellular, or tissue engineering approaches to improve the outcomes for a variety of autonomic diseases.

It’s very easy for me to imagine how one day, not too far from now, we can turn these findings into something actionable for human health.

3 Questions: Brady Weissbourd on a new model of nervous system form, function, and evolution

Developing a new neuroscience model is no small feat. New faculty member Brady Weissbourd has risen to the challenge in order to study nervous system evolution, development, regeneration, and function.

Lillian Eden | Department of Biology
April 26, 2023

How does animal behavior emerge from networks of connected neurons? How are these incredible nervous systems and behaviors actually generated by evolution? Are there principles shared by all nervous systems, or is evolution constantly innovating? What did the first nervous system look like that gave rise to the incredible diversity of life that we see around us?

Combining the study of animal behavior with studies of nervous system form, function, and evolution, Brady Weissbourd, a new faculty member in the Department of Biology and investigator in The Picower Institute for Learning and Memory, uses the tiny, transparent jellyfish Clytia hemisphaerica, a new neuroscience model.

Q: In your work, you developed a new model organism for neuroscience research, the transparent jellyfish Clytia hemisphaerica. How do these jellyfish answer questions about neuroscience, the nervous system, and evolution in ways that other models cannot?

A: First, I believe in the importance of more broadly understanding the natural world and diversifying the organisms that we deeply study. One reason is to find experimentally tractable organisms to identify generalizable biological principles – for example, we understand the basis of how neurons “fire” from studies of the squid giant axon. Another reason is that transformative breakthroughs have come from identifying evolutionary innovations that already exist in nature – for example, green fluorescent protein (GFP, from jellyfish) or CRISPR (from bacteria). In both ways, this jellyfish is a valuable complement to existing models.

I have always been interested in the intersection of two types of problems: how nervous systems generate our behaviors; and how these incredible systems were actually created by evolution.

On the systems neuroscience side, ever since working on the serotonin system during my PhD I have been fascinated by the problem of how animals control all of their behaviors simultaneously in a flexible and context-dependent manner, and how behavioral choices depend not just on incoming stimuli but on how those stimuli interact with constantly changing states of the nervous system and body. These are extremely complex and difficult problems, with the particular challenge of interactions across scales, from chemical signaling and dynamic cell biology to neural networks and behavior.

To address these questions, I wanted to move into a model organism with exceptional experimental tractability.

There have been exciting breakthroughs in imaging techniques for neuroscience, including these incredible ways in which we can actually watch and manipulate neuronal activity in a living animal. So, the first thing I wanted was a small and transparent organism that would allow for this kind of optical approach. These jellyfish are a few millimeters in diameter and perfectly transparent, with interesting behaviors but relatively compact nervous systems. They have thousands of neurons where we have billions, which also puts them at a nice intermediate complexity compared to other transparent models that are widely used – for example, C. elegans have 302 neurons and larval zebrafish have something like 100,000 in the brain alone. These features will allow us to look at the activity of the whole nervous system in behaving animals to try to understand how that activity gives rise to behaviors and how that activity itself arises from networks of neurons.

On the evolution side of our work, we are interested in the origins of nervous systems, what the first nervous systems looked like, and broadly what the options are for how nervous systems are organized and functioning: to what extent there are principles versus interesting and potentially useful innovations, and if there are principles, whether those are optimal or somehow constrained by evolution. Our last common ancestor with jellyfish and their relatives (the cnidarians) was something similar to the first nervous system, so by comparing what we find in cnidarians with work in other models we can make inferences about the origins and early evolution of nervous systems. As we further explore these highly divergent animals, we are also finding exciting evolutionary innovations: specifically, they have incredible capabilities for regenerating their nervous systems. In the future, it will be exciting to better understand how these neural networks are organized to allow for such robustness.

Q: What work is required to develop a new organism as a model, and why did you choose this particular species of jellyfish?

A: If you’re choosing a new animal model, it’s not just about whether it has the right features for the questions you want to ask, but also whether it technically lets you do the right experiments. The model we’re using was first developed by a research group in France, who spent many years doing the really hard work of figuring out how to culture the whole life cycle in the lab, injecting eggs, and developing other key resources. For me, the big question was whether we’d be able to use the genetic tools that I was describing earlier for looking at neural activity. Working closely with collaborators in France, our first step was figuring out how to insert things into the jellyfish genome. If we couldn’t figure that out, I was going to switch back to working with mice. It took us about two years of troubleshooting, but now we can routinely generate genetically modified jellyfish in the lab.

Switching to a new animal model is tough – I have a mouse neuroscience background and joined a postdoc lab that used mice and flies; I was the only person working with jellyfish, and I had no experience with them. For example, building an aquaculture system and figuring out how to keep jellyfish healthy is not trivial, particularly now that we’re trying to do genetics. One of my goals is now to optimize and simplify this whole process so that when other labs want to start working with jellyfish we have a simple aquaculture platform to get them started, even if they have no experience.

In addition to the fact that these things are tiny and transparent, the main reason that we chose this particular species is because it has an amazing life cycle that makes it an exciting laboratory animal.

They have separate sexes that spawn daily with the fertilized eggs developing into larvae that then metamorphose into polyps. We grow these polyps on microscope slides, where they form colonies that are thought to be immortal. These colonies are then constantly releasing jellyfish, which are all genetically identical “clones” that can be used for experiments. That means that once you create a genetically modified strain, like a transgenic line or a knockout, you can keep it forever as a polyp colony – and since the animals are so small, we can culture them in large numbers in the lab.

There’s still a huge amount of foundational work to do, like characterizing their behavioral repertoire and nervous system organization. It’s shocking how little we know about the basics of jellyfish biology – particularly considering that they kill more people per year than sharks and stingrays combined – and the more we look into it the more questions there are.

Q: What drew you to a faculty position at MIT?

A: I wanted to be in a department that does fundamental research, is enthusiastic about basic science, is open-minded, and is very diverse in what people work on and think about. My goal is also to be able to ultimately link mechanisms at the molecular and cellular level to organismal behavior, which is something that MIT Biology is particularly strong at doing. It’s been an exciting first few months! MIT Biology is such an amazing place to do science and it’s been wonderful how enthusiastic and supportive everyone in the department has been.

I was additionally drawn to MIT by the broader community and have already found it so easy to start collaborations with people in neuroscience, engineering, and math. I’m also thrilled to have recently become a member of The Picower Institute for Learning and Memory, which further enables these collaborations in a way that I believe will be transformational for the work in my lab.

It’s a new lab. It’s a new organism. There isn’t a huge, well-established field that is taking these approaches. There’s so much we don’t know, and so much that we have to establish from scratch. My goal is for my lab to have a sense of adventure and fun, and I’m really excited to be doing that here in MIT Biology.

Sparse, small, but diverse neural connections help make perception reliable, efficient

First detailed mapping and modeling of thalamus inputs onto visual cortex neurons show brain leverages “wisdom of the crowd” to process sensory information.

David Orenstein | Picower Institute for Learning and Memory
February 2, 2023

The brain’s cerebral cortex produces perception based on the sensory information it’s fed through a region called the thalamus.

“How the thalamus communicates with the cortex is a fundamental feature of how the brain interprets the world,” says Elly Nedivi, the William R. and Linda R. Young Professor in The Picower Institute for Learning and Memory at MIT. Despite the importance of thalamic input to the cortex, neuroscientists have struggled to understand how it works so well given the relative paucity of observed connections, or “synapses,” between the two regions.

To help close this knowledge gap, Nedivi assembled a collaboration within and beyond MIT to apply several innovative methods. In a new study described in Nature Neuroscience, the team reports that thalamic inputs into superficial layers of the cortex are not only rare, but also surprisingly weak, and quite diverse in their distribution patterns. Despite this, they are reliable and efficient representatives of information in the aggregate, and their diversity is what underlies these advantages.

Essentially, by meticulously mapping every thalamic synapse on 15 neurons in layer 2/3 of the visual cortex in mice and then modeling how that input affected each neuron’s processing of visual information, the team found that wide variations in the number and arrangement of thalamic synapses made them differentially sensitive to visual stimulus features. While individual neurons therefore couldn’t reliably interpret all aspects of the stimulus, a small population of them could together reliably and efficiently assemble the overall picture.

“It seems this heterogeneity is not a bug; it’s a feature that provides not only a cost benefit, but also confers flexibility and robustness to perturbation,” says Nedivi, corresponding author of the study and a member of MIT’s faculty in the departments of Biology and Brain and Cognitive Sciences.

Aygul Balcioglu, a research scientist in Nedivi’s lab who led the work, adds that the research has created a way for neuroscientists to track all the many individual inputs a cell receives as that input is happening.

“Thousands of information inputs pour into a single brain cell. The brain cell then interprets all that information before it communicates its own response to the next brain cell,” Balcioglu says. “What is new, and we feel exciting, is we can now reliably describe the identity and the characteristics of those inputs, as different inputs and characteristics convey different information to a given brain cell. Our techniques give us the ability to describe in living animals where in the structure of the single cell what kind of information gets incorporated. This was not possible until now.”

“MAP”ping and modeling

Nedivi and Balcioglu’s team chose layer 2/3 of the cortex because this layer is where there is relatively high flexibility, or “plasticity,” even in the adult brain. Yet, thalamic innervation there has rarely been characterized. Moreover, Nedivi says, even though the model organism for the study was mice, those layers are the ones that have thickened the most over the course of evolution, and therefore play especially important roles in the human cortex.

Precisely mapping all the thalamic innervation onto entire neurons in living, perceiving mice is so daunting it’s never been done.

To get started, the team used a technique established in Nedivi’s lab that enables observing whole cortical neurons under a two-photon microscope using three different color tags in the same cell simultaneously. In this case, they used one of the colors to label thalamic inputs contacting the labeled cortical neurons. Wherever the color of those thalamic inputs overlapped with the color labeling excitatory synapses on the cortical neurons, the overlap revealed the location of putative thalamic inputs onto the cortical neurons.

Two-photon microscopes offer deep looks into living tissues, but their resolution is not sufficient to confirm that the overlapping labels are indeed synaptic contacts. To confirm their first indications of thalamic inputs, the team turned to a technique called MAP invented in the Picower Institute lab of MIT chemical engineering Associate Professor Kwanghun Chung. MAP physically enlarges tissue in the lab, effectively increasing the resolution of standard microscopes. Rebecca Gillani, a postdoc in the Nedivi lab, with help from Taeyun Ku, a Chung Lab postdoc, was able to combine the new labeling and MAP to definitively resolve, count, map, and even measure the size of all thalamic-cortical synapses onto entire neurons.

The analysis revealed that the thalamic inputs were rather small (typically presumed to also be weak and maybe temporary), and accounted for between 2 and 10 percent of the excitatory synapses on individual visual cortex neurons. The variance in thalamic synapse numbers was not just at a cellular level, but also across different “dendrite” branches of individual cells, accounting for anywhere between zero and nearly half the synapses on a given branch.

“Wisdom of the crowd”

These facts presented Nedivi’s team with a conundrum. If the thalamic inputs were weak, sparse, and widely varying, not only across neurons but even across each neuron’s dendrites, then how good could they be for reliable information transfer?

To help solve the riddle, Nedivi turned to colleague Idan Segev, a professor at Hebrew University in Jerusalem specializing in computational neuroscience. Segev and his student Michael Doron used the Nedivi lab’s detailed anatomical measurements and physiological information from the Allen Brain Atlas to create a biophysically faithful model of the cortical neurons.

Segev’s model showed that when the cells were fed visual information (the simulated signals of watching a grating go past the eyes) their electrical responses varied based on how their thalamic input varied. Some cells perked up more than others in response to different aspects of the visual information, such as contrast or shape, but no single cell revealed much about the overall picture. But with about 20 cells together, the whole visual input could be decoded from their combined activity — a so-called “wisdom of the crowd.”

Notably, Segev compared the performance of cells with the weak, sparse, and varying input akin to what Nedivi’s lab measured, to the performance of a group of cells that all acted like the best single cell of the lot. Up to about 5,000 total synapses, the “best” cell group delivered more informative results, but after that level the small, weak, and diverse group actually performed better. In the race to represent the total visual input with at least 90 percent accuracy, the small, weak, and diverse group reached that level with about 6,700 synapses, while the “best” cell group needed more than 7,900.
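The intuition behind this comparison can be sketched as a toy simulation. This is not the study’s biophysical model: the linear “cells,” the noise level, the feature count, and the least-squares readout below are all illustrative assumptions, chosen only to show why a population with diverse tuning can decode a multi-feature stimulus better than an equal-sized group of copies of the single best cell.

```python
import numpy as np

rng = np.random.default_rng(0)

def population_decode_error(W, stimuli, noise_sd=0.5, n_trials=200):
    """Mean decoding error when stimuli s are read out from noisy
    linear responses r = W @ s + noise via a least-squares decoder."""
    errs = []
    for s in stimuli:
        # Each row of r is one trial's population response.
        r = W @ s + rng.normal(0.0, noise_sd, size=(n_trials, W.shape[0]))
        s_hat = r @ np.linalg.pinv(W).T  # least-squares stimulus estimate
        errs.append(np.mean(np.linalg.norm(s_hat - s, axis=1)))
    return float(np.mean(errs))

n_cells, n_features = 20, 2
stimuli = rng.normal(size=(10, n_features))  # e.g. contrast and orientation

# Heterogeneous population: each cell weights the two features differently.
W_diverse = rng.normal(size=(n_cells, n_features))

# Homogeneous population: every cell is a copy of one "best" cell,
# so the group is blind to the feature direction that cell ignores.
best_cell = np.array([[1.5, 0.1]])
W_same = np.repeat(best_cell, n_cells, axis=0)

err_diverse = population_decode_error(W_diverse, stimuli)
err_same = population_decode_error(W_same, stimuli)
print(err_diverse < err_same)  # prints True: diverse tuning decodes better
```

In this sketch the homogeneous group averages away noise along one stimulus dimension but can never recover the other, while the diverse group spans both dimensions, echoing the study’s finding that heterogeneity buys accuracy per synapse.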

“Thus heterogeneity imparts a cost reduction in terms of the number of synapses required for accurate readout of visual features,” the authors wrote.

Nedivi says the study raises tantalizing implications regarding how thalamic input into the cortex works. One, she says, is that given the small size of thalamic synapses they are likely to exhibit significant “plasticity.” Another is that the surprising benefit of diversity may be a general feature, not just a special case for visual input in layer 2/3. Further studies, however, are needed to know for sure.

In addition to Nedivi, Balcioglu, Gillani, Ku, Chung, Segev and Doron, other authors are Kendyll Burnell and Alev Erisir.

The National Eye Institute of the National Institutes of Health, the Office of Naval Research, and the JPB Foundation funded the study.

Providing new pathways for neuroscience research and education

Payton Dupuis finds new scientific interests and career opportunities through MIT summer research program in biology.

Leah Campbell | School of Science
September 29, 2022

Payton Dupuis’s interest in biology research began where it does for many future scientists — witnessing a relative struggling with an incurable medical condition. For Dupuis, that family member was her uncle, who suffered from complications from diabetes. Dupuis, a senior at Montana State University, says that diabetes is prominent on the Flathead Reservation in Montana, where she grew up, and witnessing the impacts of the disease inspired her to pursue a career in scientific research. Since then, that passion has taken Dupuis around the country to participate in various summer research programs in the biomedical sciences.

Most recently, she was a participant in the Bernard S. and Sophie G. Gould MIT Summer Research Program in Biology (BSG-MSRP-Bio). The program, offered by the departments of Biology and Brain and Cognitive Sciences, is designed to encourage students from underrepresented groups to attend graduate school and pursue careers in science research. More than 85 percent of participants have subsequently enrolled in highly ranked graduate programs, many of them returning to MIT, as Dupuis herself is considering.

Her journey from witnessing the impacts of her uncle’s diabetes to considering graduate school at MIT was made possible only by Dupuis’s love of science and her ability to “find a positive,” as she says, in every experience.

As a high-schooler, Dupuis made her first trip to the Northeast, participating in the Summer Academy of Math and Sciences at Carnegie Mellon University. For Dupuis, who hadn’t even taken calculus yet, the experience was a welcome challenge. “That definitely made me work hard,” she laughs, comparing herself to other program participants. “But I proved to myself, not for anyone else, that I belonged in that program.”

In addition to being a confidence booster, the Carnegie Mellon program also gave Dupuis her first taste of scientific research working in a biomedical lab on tissue regeneration. She was excited about the possibilities of growing new organs — such as the insulin-producing pancreas that could help regulate her uncle’s diabetes — outside of the body. Dupuis was officially hooked on biology.

Her experience that summer encouraged Dupuis to major in chemical engineering, which she saw as a good pipeline into biomedical research. Unfortunately, the chemical engineering curriculum at Montana State wasn’t what she expected, focusing less on the human body and more on the oil industry. In that context, her ability to see a silver lining served Dupuis well.

“That wasn’t really what I wanted, but it was still interesting because there were ways that I could apply it to the body,” she explains. “Like fluid mechanics — instead of water flowing through a pipe, I was thinking about blood flowing through veins.”

Dupuis adds that the chemical engineering program also gave her problem-solving skills that have been valuable as she’s undertaken biology-focused summer programs to help refine her interests. One summer, she worked in the chemistry department at Montana State, getting hands-on experience in a wet lab. “I didn’t really know any of the chemistry behind what I was doing,” she admits, “but I fell in love with it.” Another summer, she participated in the Tufts Building Diversity in Biomedical Sciences program, exploring the genetic side of research through a project on bone development in mice.

In 2020, a mentor at the local tribal college connected Dupuis with Keith Henry, an associate professor of biomedical sciences at the University of North Dakota. With Henry, Dupuis looked for new binding sites for the neurotransmitter serotonin that could help minimize the side effects that come with long-term use of selective serotonin reuptake inhibitors (SSRIs), the most common class of antidepressants. That summer was Dupuis’s first exposure to brain research, and her first experience modeling biological processes with computers. She loved it. In fact, as soon as she returned to Montana State, Dupuis enrolled as a computer science minor.

Because of the minor, Dupuis needed an extra year to graduate, which left her one more summer for a research program. Her older sister had previously participated in the general MSRP program at MIT, so it was a no-brainer for Dupuis to apply for the biology-specific program.

This summer, Dupuis was placed in the lab of Troy Littleton, the Menicon Professor in Neuroscience at The Picower Institute for Learning and Memory. “I definitely fell in love with the lab,” she says. With Littleton, Dupuis completed a project looking at complexin, a protein that can both inhibit and facilitate the release of neurotransmitters like serotonin. It’s also essential for the fusion of synaptic vesicles, the parts of neurons that store and release neurotransmitters.

A number of human neurological diseases have been linked to a deficiency in complexin, although Dupuis says that scientists are still figuring out what the protein does and how it works.

To that end, Dupuis focused this summer on fruit flies, which have two different types of complexin — humans, in comparison, have four. Using gene editing, she designed an experiment comparing fruit flies possessing various amounts of different subtypes of the protein. There was a positive control group, which was untouched; a negative control group, which had no complexin; and two experimental groups, each with one of the subtypes removed. Using fluorescent staining, Dupuis compared how neurons lit up in each group of flies, illuminating how altering the amount of complexin changed how the flies released neurotransmitters and formed new synaptic connections.

After touching on so many different areas of biological research through summer programs, Dupuis says that researching neuronal activity in fruit flies this summer was the perfect fit intellectually, and a formative experience as a researcher.

“I’ve definitely learned how to take an experiment and make it my own and figure out what works best for me, but still produces the results we need,” she says.

As for what’s next, Dupuis says her experience at MIT has sold her on pursuing graduate work in brain sciences. “Boston is really where I want to be and eventually work, with all the biotech and biopharma companies around,” she says. One of the perks of the MSRP-Bio program is professional development opportunities. Though Dupuis had always been interested in industry, she says she appreciated attending career panels this summer that demystified what that career path really looks like and what it takes to get there.

Perhaps the most important aspect of the program for Dupuis, though, was the confidence it provided as she continues to navigate the world of biomedical research. She intends to take that back with her to Montana State to encourage classmates to seek out similar summer opportunities.

“There’s so many people that I know would be a great researcher and love science, but they just don’t either know about it or think they can get it,” she says. “All I’d say is, you just got to apply. You just have to put yourself out there.”

Brandon (Brady) Weissbourd

Education

  • Graduate: PhD, 2016, Stanford University
  • Undergraduate: BA, 2009, Human Evolutionary Biology, Harvard University

Research Summary

We use the tiny, transparent jellyfish, Clytia hemisphaerica, to ask questions at the interface of nervous system evolution, development, regeneration, and function. Our foundation is in systems neuroscience, where we use genetic and optical techniques to examine how behavior arises from the activity of networks of neurons. Building from this work, we investigate how the Clytia nervous system is so robust, both to the constant integration of newborn neurons and following large-scale injury. Lastly, we use Clytia’s evolutionary position to study principles of nervous system evolution and make inferences about the ultimate origins of nervous systems.

Awards

  • Searle Scholar Award, 2024
  • Klingenstein-Simons Fellowship Award in Neuroscience, 2023
  • Pathway to Independence Award (K99/R00), National Institute of Neurological Disorders and Stroke, 2020
  • Life Sciences Research Foundation Fellow, 2017

New findings reveal how neurons build and maintain their capacity to communicate

Nerve cells regulate and routinely refresh the collection of calcium channels that enable them to send messages across circuit connections.

David Orenstein | Picower Institute for Learning and Memory
July 21, 2022

The nervous system works because neurons communicate across connections called synapses. They “talk” when calcium ions flow through channels into “active zones” that are loaded with vesicles carrying molecular messages. The electrically charged calcium causes vesicles to “fuse” to the outer membrane of presynaptic neurons, releasing their communicative chemical cargo to the postsynaptic cell. In a new study, scientists at The Picower Institute for Learning and Memory at MIT provide several revelations about how neurons set up and sustain this vital infrastructure.

“Calcium channels are the major determinant of calcium influx, which then triggers vesicle fusion, so it is a critical component of the engine on the presynaptic side that converts electrical signals to chemical synaptic transmission,” says Troy Littleton, senior author of the new study in eLife and Menicon Professor of Neuroscience in MIT’s departments of Biology and Brain and Cognitive Sciences. “How they accumulate at active zones was really unclear. Our study reveals clues into how active zones accumulate and regulate the abundance of calcium channels.”

Neuroscientists have wanted these clues. One reason is that understanding this process can help reveal how neurons change how they communicate, an ability called “plasticity” that underlies learning and memory and other important brain functions. Another is that drugs such as gabapentin, which treat conditions as diverse as epilepsy, anxiety, and nerve pain, bind a protein called alpha2delta that is closely associated with calcium channels. By revealing more about alpha2delta’s exact function, the study better explains what those treatments affect.

“Modulation of the function of presynaptic calcium channels is known to have very important clinical effects,” Littleton says. “Understanding the baseline of how these channels are regulated is really important.”

MIT postdoc Karen Cunningham led the study, which was her doctoral thesis work in Littleton’s lab. Using the model system of fruit fly motor neurons, she employed a wide variety of techniques and experiments to show for the first time the step-by-step process that accounts for the distribution and upkeep of calcium channels at active zones.

A cap on Cac

Cunningham’s first question was whether calcium channels are necessary for active zones to develop in larvae. The fly calcium channel gene (called “cacophony,” or Cac) is so important, flies literally can’t live without it. So rather than knocking out Cac across the fly, Cunningham used a technique to knock it out in just one population of neurons. By doing so, she was able to show that even without Cac, active zones grow and mature normally.

Using another technique that artificially prolongs the larval stage of the fly, she was also able to see that, given extra time, the active zone will continue to build up its structure with a protein called BRP, but that Cac accumulation ceases after the normal six days. Cunningham also found that moderate increases or decreases in the supply of available Cac in the neuron did not affect how much Cac ended up at each active zone. Even more curiously, she found that while Cac amount did scale with each active zone’s size, it barely budged if she took away a lot of the BRP in the active zone. Indeed, for each active zone, the neuron seemed to enforce a consistent cap on the amount of Cac present.

“It was revealing that the neuron had very different rules for the structural proteins at the active zone like BRP that continued to accumulate over time, versus the calcium channel that was tightly regulated and had its abundance capped,” Cunningham says.

Regular refresh

The findings showed there must be factors other than Cac supply or changes in BRP that regulate Cac levels so tightly. Cunningham turned to alpha2delta. When she genetically manipulated how much of that was expressed, she found that alpha2delta levels directly determined how much Cac accumulated at active zones.

In further experiments, Cunningham was also able to show that alpha2delta’s ability to maintain Cac levels depended on the neuron’s overall Cac supply. That finding suggested that rather than controlling Cac amount at active zones by stabilizing it, alpha2delta likely functioned upstream, during Cac trafficking, to supply and resupply Cac to active zones.

Cunningham used two different techniques to watch that resupply happen, producing measurements of its extent and its timing. She chose a moment after a few days of development to image active zones and measure Cac abundance to ascertain the landscape. Then she bleached out that Cac fluorescence to erase it. After 24 hours, she visualized Cac fluorescence anew to highlight only the new Cac that was delivered to active zones over that 24 hours. She saw that over that day there was Cac delivery across virtually all active zones, but that one day’s work was indeed only a fraction compared to what had built up over several days before. Moreover, she could see that the larger active zones accrued more Cac than smaller ones. And in flies with mutated alpha2delta, there was very little new Cac delivery at all.

Since Cac channels were indeed constantly being resupplied, Cunningham next wanted to know at what pace Cac channels are removed from active zones. To determine that, she tagged the Cac protein with a photoconvertible protein called Maple, which allowed her to change its color with a flash of light at a time of her choosing. That way she could first see how much Cac accumulated by a certain time (shown in green) and then flash the light to turn that Cac red. When she checked back five days later, about 30 percent of the red Cac had been replaced with new green Cac, suggesting 30 percent turnover. When she reduced Cac delivery levels by mutating alpha2delta or reducing Cac biosynthesis, Cac turnover stopped. That means a significant amount of Cac is turned over each day at active zones and that the turnover is prompted by new Cac delivery.
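The photoconversion readout described above boils down to a simple ratio: after the flash, old channels fluoresce red and newly delivered ones green, so the green share of the total estimates turnover. Here is a minimal sketch of that arithmetic (a hypothetical illustration only, not the study’s analysis code; the function name and intensity values are invented):

```python
# Hypothetical illustration of estimating channel turnover from
# photoconversion data. At time zero, all Cac-Maple is switched from
# green to red; channels delivered afterward fluoresce green.

def turnover_fraction(red_intensity: float, green_intensity: float) -> float:
    """Fraction of the active-zone Cac pool replaced since photoconversion."""
    total = red_intensity + green_intensity
    if total == 0:
        raise ValueError("no signal measured")
    return green_intensity / total

# Values chosen to match the reported result: after five days,
# roughly 30 percent of the pool is new (green).
frac = turnover_fraction(red_intensity=70.0, green_intensity=30.0)
assert abs(frac - 0.30) < 1e-9
```

The same ratio computed per active zone would also reveal whether larger zones turn over their channels faster or slower than smaller ones.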

Littleton says his lab is eager to build on these results. Now that the rules of calcium channel abundance and replenishment are clear, he wants to know how they differ when neurons undergo plasticity — for instance, when new incoming information requires neurons to adjust their communication to scale up or down synaptic communication. He says he is also eager to track individual calcium channels as they are made in the cell body and then move down the neural axon to the active zones, and he wants to determine what other genes may affect Cac abundance.

In addition to Cunningham and Littleton, the paper’s other authors are Chad Sauvola and Sara Tavana.

The National Institutes of Health and the JPB Foundation provided support for the research.

Opioids and the brain: new insights through epigenetics
Greta Friar | Whitehead Institute
April 18, 2022

Drug overdose, mostly from opioid use, is the leading cause of accidental death in the United States. Prior studies of twins have revealed that genetics play a key role in opioid use disorder. Researchers know that a mixture of genetic and environmental risk factors contribute to heritability of the disorder, but identifying the specific risk factors is challenging. Opioid use disorder is complex, so instead of one or a few genes causing the disorder, there may be many contributing factors that can combine in different ways. Researchers want to understand which genes contribute to opioid use disorder because this will lead to a better understanding of its underlying biology and could help identify people who will be most at risk if exposed to opioids, enabling researchers, health care providers, and social services to develop strategies for prevention, treatment, and support.

The usual approach for finding genes associated with disease risk is to do a genome wide association study, which compares the genetics of many people to identify patterns in different gene versions occurring in association with a disease. This approach is being used to look at opioid use disorder, but requires many more patient samples than are currently available to reach clear conclusions. Researchers from multiple research universities and institutes, including Whitehead Institute Member Olivia Corradin and her former PhD advisor, Case Western Reserve University Professor Peter Scacheri; as well as Icahn School of Medicine Professor Schahram Akbarian; Eric O. Johnson, a distinguished fellow at RTI International; Dr. Kiran C. Patel College of Allopathic Medicine at Nova Southeastern University Professor Deborah C. Mash; and Richard Sallari of Axiotl, Inc., developed a shortcut for identifying genes that are associated with opioid use disorder and may contribute to it using only a small number of patient samples. Genome wide studies may require hundreds of thousands of samples, but this new method, described in their research published in the journal Molecular Psychiatry on March 17, uses only around 100 samples—51 cases and 51 controls—to home in on five candidate genes.

“With this work, we think we’re only seeing the tip of the iceberg of the complex, diverse factors contributing to opioid overdose,” says Corradin, who is also an assistant professor of biology at the Massachusetts Institute of Technology. “However, we hope our findings can help prioritize genes for further study, to speed up the identification of risk markers and possible therapeutic targets.”

In order to learn more about the underlying biology of opioid use disorder, the researchers analyzed brain tissue samples from people who had died of opioid overdoses and compared them with samples from people with no known opioid use history who died of other accidental causes. They specifically looked at neurons from the dorsolateral prefrontal cortex, an area of the brain known to play important roles in addiction. Instead of analyzing the genes in these cells directly, the researchers instead looked at the regulators of the genes’ activity, and searched for changes in these regulators that could point them to genes of interest.

To identify a gene, first map its community

Genes have DNA regions, often close to the gene, that can ratchet up and down the gene’s expression, or the strength of its activity in certain cells. Researchers have only recently been able to map the three-dimensional organization of DNA in a cell well enough to identify all of the regulators that are close to and acting upon target genes. Corradin and her collaborators call a gene’s collection of close regulatory elements its “plexus.” Their approach finds genes of interest by searching for patterns of variation across each gene’s entire plexus, which can be easier to spot with a small sample size.

The patterns that the researchers look for in a plexus are epigenetic changes: differences in the chemical tags that affect regulatory DNA and, in turn, modify the expression of the regulators’ target gene. In this case, the researchers looked at a type of epigenetic tag called H3K27 acetylation, which is linked to increases in the activity of regulatory regions. They found nearly 400 locations in the DNA that consistently had less H3K27 acetylation in the brains of people who died of opioid overdose, which would lower activity of target genes. They also identified under-acetylated DNA locations that were often specific to individuals rather than uniform across all opioid overdose cases. The researchers then looked at how many of those locations belonged to regulatory elements in the same plexus. Surprisingly, these individual-specific changes often occurred within the same gene’s plexus. A gene whose plexus had been heavily affected as a collective was flagged as a possible contributor to opioid use disorder.
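The convergence logic above, tallying individual-specific changes by the plexus they fall into rather than site by site, can be sketched in a few lines. This is a hypothetical illustration, not the paper’s actual pipeline; the site names, gene assignments, and counting scheme are invented:

```python
# Hypothetical sketch of the convergence idea: each case's epigenetically
# altered sites are mapped to the gene plexus they belong to, so a gene can
# be flagged even when no single site is altered in every case.

from collections import Counter

def plexus_hits(changed_sites_per_case: list, site_to_plexus: dict) -> Counter:
    """Count, per gene plexus, how many cases carry a change in any member site."""
    hits = Counter()
    for case_sites in changed_sites_per_case:
        plexi = {site_to_plexus[s] for s in case_sites if s in site_to_plexus}
        hits.update(plexi)  # each case counts a plexus at most once
    return hits

site_map = {"enh1": "ASTN2", "enh2": "ASTN2", "enh3": "KCNMA1"}
cases = [{"enh1"}, {"enh2"}, {"enh2", "enh3"}]
counts = plexus_hits(cases, site_map)
# ASTN2's plexus is hit in all three cases, each via a different site
```

The point of grouping by plexus is exactly this: three cases altered at three different enhancers still converge on one gene.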

“We know that the factors that contribute to opioid use disorder are numerous, and that it’s an extremely complex disease that by definition is going to be extremely heterogeneous,” Scacheri says. “The idea was to figure out an approach that embraces that heterogeneity, and then try to spot the themes within it.”

Using this approach, the researchers identified five candidate genes: ASTN2, KCNMA1, DUSP4, GABBR2, and ENOX1. One of the genes, ASTN2, is related to pain tolerance, while KCNMA1, DUSP4, and GABBR2 are active in signaling pathways that have been linked more broadly to addiction. Follow-up experiments can confirm whether these genes contribute to opioid use disorder.

The five genes and their plexi are also involved in the heritability of generalized anxiety disorder, metrics of tolerance for risk-taking, and educational attainment. Heritability of these traits and opioid use disorder have previously been found to coincide, and people with opioid use disorder often also have generalized anxiety. Furthermore, heritability of these traits and opioid use disorder all have been associated with early childhood adversity. These connections suggest the possibility that early childhood adversity could be contributing to the epigenetic changes observed by the researchers in the brains of people who died of opioid overdose—a useful hypothesis for further research.

The researchers hope that these results will provide some insights into the genetics and neurobiology of opioid use disorder. They are interested in moving their research forward in several ways: they would like to see if they can identify more candidate genes by increasing their sample number, examine different parts of the brain and different cell types, and further analyze the genes already identified. They also hope that their results demonstrate the potency of their approach, which was able to discern useful patterns and identify candidate genes from the neurons of only 51 cases.

“We’re trying a different approach here that relies on this idea of convergence and leverages our understanding of the three-dimensional architecture of DNA, and I hope this approach will be applied to further our understanding of all sorts of complex diseases,” Scacheri says.

A single memory is stored across many connected brain regions

Innovative brain-wide mapping study shows that “engrams,” the ensembles of neurons encoding a memory, are widely distributed, including among regions not previously realized to be involved in memory.

Picower Institute
April 12, 2022

A new study by scientists at The Picower Institute for Learning and Memory at MIT provides the most comprehensive and rigorous evidence yet that the mammalian brain stores a single memory across a widely distributed, functionally connected complex spanning many brain regions, rather than in just one or even a few places.

Memory pioneer Richard Semon had predicted such a “unified engram complex” more than a century ago, but achieving the new study’s affirmation of his hypothesis required the application of several technologies developed only recently. In the study, the team identified and ranked dozens of areas that were not previously known to be involved in memory and showed that memory recall becomes more behaviorally powerful when multiple memory-storing regions are reactivated, rather than just one.

“When talking about memory storage we all usually talk about the hippocampus or the cortex,” said co-lead and co-corresponding author Dheeraj Roy. He began the research while a graduate student in the RIKEN-MIT Laboratory for Neural Circuit Genetics at The Picower Institute led by senior author Susumu Tonegawa, Picower Professor in the Departments of Biology and Brain and Cognitive Sciences. “This study reflects the most comprehensive description of memory encoding cells, or memory ‘engrams,’ distributed across the brain, not just in the well-known memory regions. It basically provides the first rank-ordered list for high-probability engram regions. This list should lead to many future studies, which we are excited about, both in our labs and by other groups.”

In addition to Roy, who is now a McGovern Fellow at the Broad Institute of MIT and Harvard and in the lab of MIT neuroscience Professor Guoping Feng, the study’s other lead authors are Young-Gyun Park, Minyoung Kim, Ying Zhang and Sachie Ogawa.

Mapping Memory

The team was able to map regions participating in an engram complex by conducting an unbiased analysis of more than 247 brain regions in mice who were taken from their home cage to another cage where they felt a small but memorable electrical zap. In one group of mice their neurons were engineered to become fluorescent when they expressed a gene required for memory encoding. In another group, cells activated by naturally recalling the zap memory (e.g. when the mice returned to the scene of the zap) were fluorescently labeled instead. Cells that were activated by memory encoding or by recall could therefore readily be seen under a microscope after the brains were preserved and optically cleared using a technology called SHIELD, developed by co-corresponding author Kwanghun Chung, Associate Professor in The Picower Institute, the Institute for Medical Engineering & Science and the Department of Chemical Engineering. By using a computer to count fluorescing cells in each sample, the team produced brain-wide maps of regions with apparently significant memory encoding or recall activity.

The maps highlighted many regions expected to participate in memory but also many that were not. To help factor out regions that might have been activated by activity unrelated to the zap memory, the team compared what they saw in zap-encoding or zap-recalling mice to what they saw in the brains of controls who were simply left in their home cage. This allowed them to calculate an “engram index” to rank order 117 brain regions with a significant likelihood of being involved in the memory engram complex. They deepened the analysis by engineering new mice in which neurons involved in both memory encoding and in recall could be doubly labeled, thereby revealing which cells exhibited overlap of those activities.
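The counting-and-ranking step described above can be caricatured in a few lines: score each region by how much its labeled-cell count in memory-labeled mice exceeds the home-cage baseline, then sort. This is a hypothetical sketch only; the study’s actual engram index is a statistical measure computed from the imaging data, and the region names and scoring formula here are invented:

```python
# Hypothetical sketch of an "engram index"-style ranking: regions whose
# fluorescent-cell counts in memory-labeled mice most exceed the home-cage
# control baseline rank highest.

def engram_ranking(memory_counts: dict, control_counts: dict) -> list:
    scores = {}
    for region, mem in memory_counts.items():
        ctrl = control_counts.get(region, 0)
        # enrichment over baseline; +1 avoids division by zero
        scores[region] = (mem - ctrl) / (ctrl + 1)
    return sorted(scores, key=scores.get, reverse=True)

ranked = engram_ranking(
    {"CA1": 120, "BLA": 95, "motor_cortex": 12},
    {"CA1": 20, "BLA": 15, "motor_cortex": 10},
)
# The known memory regions CA1 and BLA rank above motor cortex here
```

Subtracting the home-cage baseline is the key move: it filters out regions whose cells light up from ordinary activity unrelated to the zap memory.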

To really be an engram cell, the authors noted, a neuron should be activated both in encoding and recall.

“These experiments not only revealed significant engram reactivation in known hippocampal and amygdala regions, but also showed reactivation in many thalamic, cortical, midbrain and brainstem structures,” the authors wrote. “Importantly when we compared the brain regions identified by the engram index analysis with these reactivated regions, we observed that ~60 percent of the regions were consistent between analyses.”

Memory manipulations

Having ranked regions significantly likely to be involved in the engram complex, the team engaged in several manipulations to directly test their predictions and to determine how engram complex regions might work together.

For instance, they engineered mice such that cells activated by memory encoding would also become controllable with flashes of light (a technique called “optogenetics”). The researchers then applied light flashes to select brain regions from their engram index list to see if stimulating those would artificially reproduce the fear memory behavior of freezing in place, even when mice were placed in a “neutral” cage where the zap had not occurred.

“Strikingly, all these brain regions induced robust memory recall when they were optogenetically stimulated,” the researchers observed. Moreover, stimulating areas that their analysis suggested were insignificant to zap memory indeed produced no freezing behavior.

The team then demonstrated how different regions within an engram complex connect. They chose two well-known memory regions, CA1 of the hippocampus and the basolateral amygdala (BLA), and optogenetically activated engram cells there to induce memory recall behavior in a neutral cage. They found that stimulating those regions produced memory recall activity in specific “downstream” areas identified as being probable members of the engram complex. Meanwhile, optogenetically inhibiting natural zap memory recall in CA1 or the BLA (i.e. when mice were placed back in the cage where they experienced the zap) led to reduced activity in downstream engram complex areas compared to what they measured in mice with unhindered natural recall.

Further experiments showed that optogenetic reactivations of engram complex neurons followed similar patterns as those observed in natural memory recall. So having established that natural memory encoding and recall appear to occur across a wide engram complex, the team decided to test whether reactivating multiple regions would improve memory recall compared to reactivating just one. After all, prior experiments have shown that activating just one engram area does not produce recall as vividly as natural recall. This time the team used a chemical means to stimulate different engram complex regions and when they did, they found that indeed stimulating up to three involved regions simultaneously produced more robust freezing behavior than stimulating just one or two.

Meaning of distributed storage

Roy said that by storing a single memory across such a widespread complex the brain might be making memory more efficient and resilient.

“Different memory engrams may allow us to recreate memories more efficiently when we are trying to remember a previous event (and similarly for the initial encoding where different engrams may contribute different information from the original experience),” he said. “Secondly, in disease states, if a few regions are impaired, distributed memories would allow us to remember previous events and in some ways be more robust against regional damages.”

In the long term that second idea might suggest a clinical strategy for dealing with memory impairment: “If some memory impairments are because of hippocampal or cortical dysfunction, could we target understudied engram cells in other regions and could such a manipulation restore some memory functions?”

That’s just one of many new questions researchers can ask now that the study has revealed a listing of where to look for at least one kind of memory in the mammalian brain.

The paper’s other authors are Nicholas DiNapoli, Xinyi Gu, Jae Cho, Heejin Choi, Lee Kamentsky, Jared Martin, Olivia Mosto and Tomomi Aida.

Funding sources included the JPB Foundation, the RIKEN Center for Brain Science, the Howard Hughes Medical Institute, a Warren Alpert Distinguished Scholar Award, the National Institutes of Health, the Burroughs Wellcome Fund, the Searle Scholars Program, a Packard Award in Science and Engineering, a NARSAD Young Investigator Award, the McKnight Foundation Technology Award, the NCSOFT Cultural Foundation, and the Institute for Basic Science.

The model remodeler

A Picower Institute primer on ‘plasticity,’ the brain’s amazing ability to constantly adapt to and learn from experience

Picower Institute
March 17, 2022

Muscles and bones strengthen with exercise and the immune system ‘learns’ from vaccines or infections, but none of those changes match the versatility and flexibility your central nervous system shows in adapting to the world. The brain is a model remodeler. If it weren’t, you wouldn’t have learned how to read this and you wouldn’t remember it anyway.

The brain’s ability to change its cells, their circuit connections, and even its broader architectures in response to experience and activity, for instance to learn new rules and store memories, is called “plasticity.” The phenomenon explains how the brand-new brain of an infant can emerge from a womb and make increasingly refined sense of whatever arbitrary world it encounters – ranging from tuning its visual perception in the early months to getting an A in eighth-grade French. Plasticity becomes subtler during adulthood, but it never stops. It occurs via so many different mechanisms and at so many different scales and rates, it’s… mind-bending.

Plasticity’s indispensable role in allowing the brain to incorporate experience has made understanding exactly how it works – and what the mental health ramifications are when it doesn’t – the inspiration and research focus of several Picower Institute professors (and hundreds of colleagues). This site uses the term so often in reports on both fundamental neuroscience and on disorders such as autism, it seemed high time to provide a primer. So here goes.

Beginning in the 1980s and 1990s, advances in neuroanatomy, genetics, molecular biology and imaging made it possible to not only observe, but even experimentally manipulate mechanisms of how the brain changes at scales including the individual connections between neurons, called synapses; across groups of synapses on each neuron; and in whole neural circuits. The potential to discover tangible physical mechanisms of these changes proved irresistible to Picower Institute scientists such as Mark Bear, Troy Littleton, Elly Nedivi and Mriganka Sur.

Bear got hooked by experiments in which by temporarily covering one eye of a young animal, scientists could weaken the eye’s connections to the brain just as their visual circuitry was still developing. Such “monocular deprivation” produced profound changes in brain anatomy and neuronal electrical activity as neurons rewired circuits to support the unobstructed eye rather than the one with weakened activity. 

“There was this enormous effect of experience on the physiology of the brain and a very clear anatomical basis for that,” Bear said. “It was pretty exhilarating.”

Littleton became inspired during graduate and medical school by new ways to identify genes whose protein products formed the components of synapses. To understand how synapses work was to understand how neurons communicate and therefore how the brain functions.

“Once we were able to think about the proteins that are required to make the whole engine work, we could figure out how you might rev it up and down to encode changes in the way the system might be working to increase or decrease information flow as a function of behavioral change,” Littleton said.

Built to rebuild

So what is the lay of the land for plasticity? Start with a neuron. Though there are thousands of types, a typical neuron will extend a vine-like axon to forge synapses on the root-like dendrites of other neurons. These dendrites may host thousands of synapses. Whenever neurons connect, they form circuits that can relay information across the brain via electrical and chemical signals. Most synapses are meant to increase the electrical excitement of the receiving neuron so that it will eventually pass a signal along, but other synapses modulate that process by inhibiting activity.

Hundreds of proteins are involved in building and operating every synapse, both on the “pre-synaptic” (axonal) side and the “post-synaptic” (dendritic) side of the connection. Some of these proteins contribute to the synapse’s structure. Some on the pre-synaptic side coordinate the release of chemicals called neurotransmitters from blobs called vesicles, while some on the postsynaptic side form or manage the receptors that receive those messages. Neurotransmitters may compel the receiving neuron to take in more ions (hence building up electric charge), but synapses aren’t just passive relay stations of current. They adjust in innumerable ways according to changing conditions, such as the amount of communication activity the host cells are experiencing. Across many synapses the pace and amount of neurotransmitter signaling can be frequently changed by either the presynaptic or postsynaptic side. And sometimes, especially early in life, synapses will appear or disappear altogether.

Moreover, plasticity doesn’t just occur at the level of the single synapse. Combinations of synapses along a section of dendrite can all change in coordination so that the way a neuron works within a circuit is altered. These numerous dimensions of plasticity help to explain how the brain can quickly and efficiently accomplish the physical implementation of something as complex as learning and memory, Nedivi said.

“You might think that when you learn something new it has nothing to do with individual synapses,” Nedivi said. “But in fact, the way that things like this happen is that individual synapses can change in strength or can be added and removed, and then it also matters which synapses, and how many synapses, and how they are organized on the dendrites, and how those changes are integrated and summated on the cell. These parameters will alter the cell’s response properties within its circuit and that affects how the circuit works and how it affects behavior.”

A 2018 study in Sur’s lab illustrated learning occurring at a neural circuit level. His lab trained mice on a task where they had to take a physical action based on a visual cue (e.g. drivers know that “green means go”). As mice played the game, the scientists monitored neural circuits in a region called the posterior parietal cortex where the brain converts vision into action. There, ensembles of neurons increased activity specifically in response to the “go” cue. When the researchers then changed the game’s rules (i.e. “red means go”) the circuits switched to only respond to the new go cue. Plasticity had occurred en masse to implement learning.

Many mechanisms 

To carry out that rewiring, synapses can change in many ways. Littleton’s studies of synaptic protein components have revealed many examples of how they make plasticity happen. Working in the instructive model of the fruit fly, his lab is constantly making new findings that illustrate how changes in protein composition can modulate synaptic strength.

For instance, in a 2020 study his lab showed that synaptotagmin 7 limits neurotransmitter release by regulating the speed with which the supply of neurotransmitter-carrying vesicles becomes replenished. By manipulating expression of the protein’s gene, his lab was able to crank neurotransmitter release, and therefore synaptic strength, up or down like a radio volume dial. 

Other recent studies revealed how proteins influence the diversity of neural plasticity. At the synapses flies use to control muscles, “phasic” neurons release quick, big bursts of the neurotransmitter glutamate, while tonic ones steadily release a low amount. In 2020 Littleton’s lab showed that when phasic neurons are disrupted, tonic neurons will plastically step up glutamate release, but phasic ones don’t return the favor when tonic ones are hindered. Then last year, his team showed that a major difference between the two neurons was their levels of a protein called tomosyn, which turns out to restrict glutamate release. Tonic ones have a lot but phasic ones have very little. Tonic neurons therefore can vary their glutamate release by reducing tomosyn expression, while phasic neurons lack that flexibility.

Nedivi, too, looks at how neurons use their genes and the proteins they encode to implement plasticity. She tracks “structural plasticity” in the living mouse brain, where synapses don’t just strengthen or weaken, but come and go completely. She’s found that even in adult animal brains, inhibitory synapses will transiently appear or disappear to regulate the influence of more permanent excitatory synapses.

Nedivi has revealed how experience can make excitatory synapses permanent. After discovering that mice lacking a synaptic protein called CPG15 were slow learners, Nedivi hypothesized that it was because the protein helped cement circuit connections that implement learning. To test that, her lab exposed normal mice and others lacking CPG15 to stretches of time in the light, when they could gain visual experience, and the dark, when there was no visual experience. Using special microscopes to literally watch fledgling synapses come and go in response, they could compare protein levels in those synapses in normal mice and the ones without CPG15. They found that CPG15 helped experience make synapses stick around because, upon exposure to increased activity, CPG15 recruited a structural protein called PSD95 to solidify the synapses. That explained why CPG15-lacking mice don’t learn as well: they lack that mechanism for experience and activity to stabilize their circuit connections.

Another Sur Lab study in 2018 helped to show how multiple synapses sometimes change in concert to implement plasticity. Focusing on a visual cortex neuron whose job was to respond to locations within a mouse’s field of view, his team purposely changed which location it preferred by manipulating “spike-timing dependent plasticity.” Essentially, right after they placed a visual stimulus in a new location (rather than the neuron’s preferred one), they artificially excited the neuron. The reinforcement of this precisely timed excitation strengthened the synapse that received input about the new location. After about 100 repetitions, the neuron changed its preference to the new location. Not only did the corresponding synapse strengthen, but the researchers also saw a compensatory weakening among neighboring synapses (orchestrated by a protein called Arc). In this way, the neuron learned a new role and shifted the strength of several synapses along a dendrite to ensure that new focus.
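The pairing protocol can be caricatured with a generic spike-timing-dependent plasticity (STDP) rule. The sketch below is a textbook-style toy model with made-up numbers, not the Sur lab’s actual analysis: repeated, well-timed potentiation of one synapse, plus a normalization step that weakens its neighbors (the compensatory role the study attributes to Arc), is enough to shift a model neuron’s preference.

```python
import numpy as np

def stdp_pairing(weights, target, dt_ms, n_pairings=100,
                 a_plus=0.02, tau_ms=20.0):
    """Caricature of spike-timing-dependent plasticity.

    Each pairing potentiates the `target` synapse by an amount that
    decays with the pre-before-post interval `dt_ms`; the total
    synaptic weight is then renormalized, so neighboring synapses
    weaken in compensation.
    """
    w = weights.astype(float)
    total = w.sum()
    for _ in range(n_pairings):
        w[target] += a_plus * np.exp(-dt_ms / tau_ms)  # timed potentiation
        w *= total / w.sum()                           # compensatory weakening
    return w

# Five synapses, each carrying input from one visual-field location;
# synapse 2 is initially the strongest (the neuron's preferred spot).
w0 = np.array([0.5, 0.6, 1.0, 0.6, 0.5])
w = stdp_pairing(w0, target=4, dt_ms=10.0)
print(np.argmax(w0), np.argmax(w))  # preference shifts from 2 to 4
```

After roughly 100 pairings, as in the experiment, the repeatedly paired synapse becomes the strongest while the others weaken, so the model neuron’s “preferred location” moves to the new input.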

Lest one think that plasticity is all about synapses or even dendrites, Nedivi has helped to show that it isn’t. For instance, her research has shown that amid monocular deprivation, inhibitory neurons go so far as to pare down their axons to enable circuit rewiring to occur. In 2020 her lab collaborated with Harvard scientists to show that to respond to changes in visual experience, some neurons will even adjust how well they insulate their axons with a fatty sheathing called myelin that speeds electrical conduction. The study added strong evidence that myelination also contributes to the brain’s adaptation to changing experience.

It’s not clear why the brain has evolved so many different ways to effect change (these examples are but a small sampling), but Nedivi points out a couple of advantages: robustness and versatility.

“Whenever you see what seems to you like redundancy it usually means it’s a really important process. You can’t afford to have just one way of doing it,” she said. “Also having multiple ways of doing things gives you more precision and flexibility and the ability to work over multiple time scales, too.”

Insights into illness

Another way to appreciate the importance of plasticity is to recognize its central role in neurodevelopmental diseases and conditions. Through their fundamental research into plasticity mechanisms, Bear, Littleton, Nedivi and Sur have all discovered how pivotal they are to breakdowns in brain health.

Beginning in the early 1990s, Bear led pioneering experiments showing that, by multiple means, post-synaptic sensitivity can decline when receptors receive only weak input, a form of plasticity called long-term depression (LTD). LTD explained how monocular deprivation weakens an occluded eye’s connections to the brain. Unfortunately, this occurs naturally in millions of children with visual impairment, resulting in a developmental vision disorder called amblyopia. But Bear’s research on plasticity, including mechanisms of LTD, has also revealed that plasticity itself is plastic (he calls this “metaplasticity”). That insight has allowed his lab to develop a potential new treatment: by completely but temporarily suspending all input to the affected eye (anesthetizing the retina), the threshold for strengthening versus weakening can be lowered, so that when input resumes it strengthens the eye’s connections anew.
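Bear’s notion of metaplasticity echoes the classic Bienenstock-Cooper-Munro (BCM) learning rule, which he helped formulate: whether a synapse strengthens or weakens depends on whether postsynaptic activity crosses a threshold that itself slides with recent activity. The following is a minimal numerical sketch of that idea (textbook form with illustrative parameter values, not the lab’s actual model):

```python
def bcm_update(w, x, theta, lr=0.1):
    """One step of the BCM rule: postsynaptic activity y = w*x
    potentiates the synapse when y > theta and depresses it when
    y < theta (theta is the strengthening/weakening threshold)."""
    y = w * x
    return w + lr * y * (y - theta) * x

def slide_threshold(theta, y, tau=0.5):
    """The threshold slides toward the recent average of y**2, so a
    stretch of low activity lowers it -- the 'metaplasticity' step."""
    return theta + tau * (y * y - theta)

w, theta = 1.0, 1.5
x = 1.0  # modest input from the weak eye: y = 1.0 < theta -> depression
w_dep = bcm_update(w, x, theta)   # weight decreases

# Temporarily silence all input (the anesthetized retina):
# with y = 0, theta slides down toward zero.
for _ in range(10):
    theta = slide_threshold(theta, y=0.0)

w_pot = bcm_update(w, x, theta)   # the same input now potentiates
print(w_dep < w < w_pot)  # True
```

In this toy version, the identical modest input depresses the synapse before the silent period and strengthens it afterward, which is the logic behind restoring input to a temporarily silenced eye.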

Bear’s investigations of a specific form of LTD have also led to key discoveries about Fragile X syndrome, a genetic cause of autism and intellectual disability. He found that LTD can occur when stimulation of metabotropic glutamate receptor 5 (mGluR5) causes proteins to be synthesized at the dendrite, reducing post-synaptic sensitivity. A protein called FMRP is supposed to be a brake on this synthesis, but mutation of the FMR1 gene in Fragile X causes loss of FMRP. That can exaggerate LTD in the hippocampus, a brain region crucial for memory and cognition. The insight has allowed Bear to advance drugs to clinical trials that inhibit mGluR5 activity to compensate for FMRP loss.

Littleton, too, has produced insight into autism by studying the consequences of mutation in the gene Shank3, which encodes a protein that helps to build developing synapses on the post-synaptic side. In a 2016 paper his team reported multiple problems in synapses when Shank was knocked out in fruit flies. Receptors for a key form of molecular signaling from the presynaptic side called Wnt failed to be internalized by the postsynaptic cell, meaning they could not influence the transcription of genes that promote maturation of the synapse as they normally would. A consequence of disrupted synaptic maturation is that a developing brain would struggle to complete the connections needed to efficiently encode experience and that may explain some of the cognitive and behavioral outcomes in Shank-associated autism. To set the stage for potential drug development, Littleton’s lab was able to demonstrate ways to bypass Wnt signaling that rescued synaptic development.

By studying plasticity proteins, Sur’s lab, too, has discovered a potential way to help people with Rett syndrome, a severe autism-like disorder. The disease is caused by mutations in the gene MECP2. Sur’s lab showed that MECP2’s contribution to synaptic maturation comes via a protein called IGF1 that is reduced among people with Rett. That insight allowed them to show that treating Rett-model mice with an IGF1 peptide or with IGF1 itself corrected many defects of MECP2 mutation. Both treatment forms have advanced to clinical trials. Late last year, IGF1 peptide was shown to be effective in a comprehensive phase 3 trial for Rett syndrome and is progressing toward FDA approval as the first-ever mechanism-based treatment for a neurodevelopmental disorder, Sur said.

Nedivi’s plasticity studies, meanwhile, have yielded new insights into bipolar disorder. During years of fundamental studies, Nedivi discovered CPG2, a protein expressed in response to neural activity that helps regulate the number of glutamate receptors at excitatory synapses. The gene encoding CPG2 was recently identified as a risk gene for bipolar disorder. In a 2019 study, her lab found that people with bipolar disorder indeed had reduced levels of CPG2 because of variations in the SYNE1 gene. When they introduced these variants into rat neurons, they found that the variants either reduced CPG2’s ability to localize to the dendritic “spines” that house excitatory synapses or decreased the proper cycling of glutamate receptors within synapses.

The brain’s ever-changing nature makes it both wonderful and perhaps vulnerable. Both to understand the brain and to heal it, neuroscientists will eagerly continue studying its plasticity for a long time to come.