Sparse, small, but diverse neural connections help make perception reliable, efficient

First detailed mapping and modeling of thalamus inputs onto visual cortex neurons show brain leverages “wisdom of the crowd” to process sensory information.

David Orenstein | Picower Institute for Learning and Memory
February 2, 2023

The brain’s cerebral cortex produces perception based on the sensory information it’s fed through a region called the thalamus.

“How the thalamus communicates with the cortex is a fundamental feature of how the brain interprets the world,” says Elly Nedivi, the William R. and Linda R. Young Professor in The Picower Institute for Learning and Memory at MIT. Despite the importance of thalamic input to the cortex, neuroscientists have struggled to understand how it works so well given the relative paucity of observed connections, or “synapses,” between the two regions.

To help close this knowledge gap, Nedivi assembled a collaboration within and beyond MIT to apply several innovative methods. In a new study described in Nature Neuroscience, the team reports that thalamic inputs into superficial layers of the cortex are not only rare, but also surprisingly weak, and quite diverse in their distribution patterns. Despite this, they are reliable and efficient representatives of information in the aggregate, and their diversity is what underlies these advantages.

Essentially, by meticulously mapping every thalamic synapse on 15 neurons in layer 2/3 of the visual cortex in mice and then modeling how that input affected each neuron’s processing of visual information, the team found that wide variations in the number and arrangement of thalamic synapses made them differentially sensitive to visual stimulus features. While individual neurons therefore couldn’t reliably interpret all aspects of the stimulus, a small population of them could together reliably and efficiently assemble the overall picture.

“It seems this heterogeneity is not a bug; it’s a feature that provides not only a cost benefit, but also confers flexibility and robustness to perturbation,” says Nedivi, corresponding author of the study and a member of MIT’s faculty in the departments of Biology and Brain and Cognitive Sciences.

Aygul Balcioglu, the research scientist in Nedivi’s lab who led the work, adds that the research has created a way for neuroscientists to track the many individual inputs a cell receives as they are happening.

“Thousands of information inputs pour into a single brain cell. The brain cell then interprets all that information before it communicates its own response to the next brain cell,” Balcioglu says. “What is new, and what we feel is exciting, is that we can now reliably describe the identity and the characteristics of those inputs, as different inputs and characteristics convey different information to a given brain cell. Our techniques give us the ability to describe, in living animals, where in the structure of a single cell each kind of information gets incorporated. This was not possible until now.”

“MAP”ping and modeling

Nedivi and Balcioglu’s team chose layer 2/3 of the cortex because this layer is where there is relatively high flexibility, or “plasticity,” even in the adult brain. Yet, thalamic innervation there has rarely been characterized. Moreover, Nedivi says, even though the model organism for the study was mice, those layers are the ones that have thickened the most over the course of evolution, and therefore play especially important roles in the human cortex.

Precisely mapping all the thalamic innervation onto entire neurons in living, perceiving mice is so daunting it’s never been done.

To get started, the team used a technique established in Nedivi’s lab for observing whole cortical neurons under a two-photon microscope using three different color tags in the same cell simultaneously. In this case, one of the colors was used to label thalamic inputs contacting the labeled cortical neurons. Wherever the color of those thalamic inputs overlapped with the color labeling excitatory synapses on the cortical neurons, it revealed the location of a putative thalamic input onto the cortical neuron.

Two-photon microscopes offer deep looks into living tissues, but their resolution is not sufficient to confirm that the overlapping labels are indeed synaptic contacts. To confirm their first indications of thalamic inputs, the team turned to a technique called MAP invented in the Picower Institute lab of MIT chemical engineering Associate Professor Kwanghun Chung. MAP physically enlarges tissue in the lab, effectively increasing the resolution of standard microscopes. Rebecca Gillani, a postdoc in the Nedivi lab, with help from Taeyun Ku, a Chung Lab postdoc, was able to combine the new labeling and MAP to definitively resolve, count, map, and even measure the size of all thalamic-cortical synapses onto entire neurons.

The analysis revealed that the thalamic inputs were rather small (typically presumed to also be weak and maybe temporary), and accounted for between 2 and 10 percent of the excitatory synapses on individual visual cortex neurons. The variance in thalamic synapse numbers was not just at a cellular level, but also across different “dendrite” branches of individual cells, accounting for anywhere between zero and nearly half the synapses on a given branch.

“Wisdom of the crowd”

These facts presented Nedivi’s team with a conundrum. If the thalamic inputs were weak, sparse, and widely varying, not only across neurons but even across each neuron’s dendrites, then how good could they be for reliable information transfer?

To help solve the riddle, Nedivi turned to colleague Idan Segev, a professor at Hebrew University in Jerusalem specializing in computational neuroscience. Segev and his student Michael Doron used the Nedivi lab’s detailed anatomical measurements and physiological information from the Allen Brain Atlas to create a biophysically faithful model of the cortical neurons.

Segev’s model showed that when the cells were fed visual information (the simulated signals of watching a grating go past the eyes) their electrical responses varied based on how their thalamic input varied. Some cells perked up more than others in response to different aspects of the visual information, such as contrast or shape, but no single cell revealed much about the overall picture. But with about 20 cells together, the whole visual input could be decoded from their combined activity — a so-called “wisdom of the crowd.”

Notably, Segev compared the performance of cells with the weak, sparse, and varying input akin to what Nedivi’s lab measured, to the performance of a group of cells that all acted like the best single cell of the lot. Up to about 5,000 total synapses, the “best” cell group delivered more informative results, but after that level the small, weak, and diverse group actually performed better. In the race to represent the total visual input with at least 90 percent accuracy, the small, weak, and diverse group reached that level with about 6,700 synapses, while the “best” cell group needed more than 7,900.
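The intuition behind this population advantage can be sketched with a toy simulation. This is our own simplification, not the study's biophysical model: we assume cosine tuning curves and a nearest-template decoder, standing in for the detailed simulations. A population of 20 identically tuned "best" neurons cannot tell apart stimuli that look alike to that one tuning curve, while 20 neurons whose tuning preferences are spread out decode every stimulus.

```python
# Toy illustration (not the paper's actual model): why a diverse population
# can decode a stimulus that no single tuning profile resolves.
import math, random

random.seed(0)
STIMULI = [k * math.pi / 4 for k in range(8)]   # 8 grating orientations

def response(pref, stim, noise=0.05):
    """Noisy cosine-tuned response of one neuron."""
    return math.cos(stim - pref) + random.gauss(0, noise)

def decode_accuracy(prefs, trials=200):
    """Nearest-template decoding of the stimulus from population responses."""
    templates = {s: [math.cos(s - p) for p in prefs] for s in STIMULI}
    correct = 0
    for _ in range(trials):
        s_true = random.choice(STIMULI)
        pop = [response(p, s_true) for p in prefs]
        s_hat = min(STIMULI, key=lambda s: sum((a - b) ** 2
                    for a, b in zip(pop, templates[s])))
        correct += (s_hat == s_true)
    return correct / trials

# Homogeneous: 20 copies of the single "best" tuning curve.
# Cosine symmetry makes several stimulus pairs indistinguishable.
homog = decode_accuracy([0.0] * 20)
# Heterogeneous: 20 neurons whose preferences tile the orientation space.
heter = decode_accuracy([k * math.pi / 10 for k in range(20)])
print(f"homogeneous: {homog:.2f}  heterogeneous: {heter:.2f}")
```

In this cartoon the homogeneous group tops out well below perfect accuracy while the diverse group decodes nearly everything, echoing (loosely) why the heterogeneous population needed fewer synapses to reach the 90 percent mark.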

“Thus heterogeneity imparts a cost reduction in terms of the number of synapses required for accurate readout of visual features,” the authors wrote.

Nedivi says the study raises tantalizing implications regarding how thalamic input into the cortex works. One, she says, is that given the small size of thalamic synapses they are likely to exhibit significant “plasticity.” Another is that the surprising benefit of diversity may be a general feature, not just a special case for visual input in layer 2/3. Further studies, however, are needed to know for sure.

In addition to Nedivi, Balcioglu, Gillani, Ku, Chung, Segev and Doron, other authors are Kendyll Burnell and Alev Erisir.

The National Eye Institute of the National Institutes of Health, the Office of Naval Research, and the JPB Foundation funded the study.

Providing new pathways for neuroscience research and education

Payton Dupuis finds new scientific interests and career opportunities through MIT summer research program in biology.

Leah Campbell | School of Science
September 29, 2022

Payton Dupuis’s interest in biology research began where it does for many future scientists — witnessing a relative struggling with an incurable medical condition. For Dupuis, that family member was her uncle, who suffered from complications from diabetes. Dupuis, a senior at Montana State University, says that diabetes is prominent on the Flathead Reservation in Montana, where she grew up, and witnessing the impacts of the disease inspired her to pursue a career in scientific research. Since then, that passion has taken Dupuis around the country to participate in various summer research programs in the biomedical sciences.

Most recently, she was a participant in the Bernard S. and Sophie G. Gould MIT Summer Research Program in Biology (BSG-MSRP-Bio). The program, offered by the departments of Biology and Brain and Cognitive Sciences, is designed to encourage students from underrepresented groups to attend graduate school and pursue careers in science research. More than 85 percent of participants have subsequently enrolled in highly ranked graduate programs, many of them returning to MIT, just as Dupuis is considering.

Her journey from witnessing the impacts of her uncle’s diabetes to considering graduate school at MIT was made possible only by Dupuis’s love of science and her ability to “find a positive,” as she says, in every experience.

As a high-schooler, Dupuis made her first trip to the Northeast, participating in the Summer Academy of Math and Sciences at Carnegie Mellon University. For Dupuis, who hadn’t even taken calculus yet, the experience was a welcome challenge. “That definitely made me work hard,” she laughs, comparing herself to other program participants. “But I proved to myself, not for anyone else, that I belonged in that program.”

In addition to being a confidence booster, the Carnegie Mellon program also gave Dupuis her first taste of scientific research working in a biomedical lab on tissue regeneration. She was excited about the possibilities of growing new organs — such as the insulin-producing pancreas that could help regulate her uncle’s diabetes — outside of the body. Dupuis was officially hooked on biology.

Her experience that summer encouraged Dupuis to major in chemical engineering, seeing it as a good pipeline into biomedical research. Unfortunately, the chemical engineering curriculum at Montana State wasn’t what she expected, focusing less on the human body and more on the oil industry. In that context, her ability to see a silver lining served Dupuis well.

“That wasn’t really what I wanted, but it was still interesting because there were ways that I could apply it to the body,” she explains. “Like fluid mechanics — instead of water flowing through a pipe, I was thinking about blood flowing through veins.”

Dupuis adds that the chemical engineering program also gave her problem-solving skills that have been valuable as she’s undertaken biology-focused summer programs to help refine her interests. One summer, she worked in the chemistry department at Montana State, getting hands-on experience in a wet lab. “I didn’t really know any of the chemistry behind what I was doing,” she admits, “but I fell in love with it.” Another summer, she participated in the Tufts Building Diversity in Biomedical Sciences program, exploring the genetic side of research through a project on bone development in mice.

In 2020, a mentor at the local tribal college connected Dupuis with Keith Henry, an associate professor of biomedical sciences at the University of North Dakota. With Henry, Dupuis looked for new binding sites for the neurotransmitter serotonin that could help minimize the side effects that come with long-term use of selective serotonin reuptake inhibitors (SSRIs), the most common class of antidepressants. That summer was Dupuis’s first exposure to brain research, and her first experience modeling biological processes with computers. She loved it. In fact, as soon as she returned to Montana State, Dupuis enrolled as a computer science minor.

Because of the minor, Dupuis needs an extra year to graduate, which left her one more summer for a research program. Her older sister had previously participated in the general MSRP program at MIT, so it was a no-brainer for Dupuis to apply for the biology-specific program.

This summer, Dupuis was placed in the lab of Troy Littleton, the Menicon Professor in Neuroscience at The Picower Institute for Learning and Memory. “I definitely fell in love with the lab,” she says. With Littleton, Dupuis completed a project looking at complexin, a protein that can both inhibit and facilitate the release of neurotransmitters like serotonin. It’s also essential for the fusion of synaptic vesicles, the parts of neurons that store and release neurotransmitters.

A number of human neurological diseases have been linked to a deficiency in complexin, although Dupuis says that scientists are still figuring out what the protein does and how it works.

To that end, Dupuis focused this summer on fruit flies, which have two different types of complexin — humans, in comparison, have four. Using gene editing, she designed an experiment comparing fruit flies possessing various amounts of different subtypes of the protein. There was the positive control group, which was untouched; the negative control group that had no complexin; and two experimental groups, each with one of the subtypes removed. Using fluorescent staining, Dupuis compared how neurons lit up in each group of flies, illuminating how altering the amount of complexin changed how the flies released neurotransmitters and formed new synaptic connections.

After touching on so many different areas of biological research through summer programs, Dupuis says that researching neuronal activity in fruit flies this summer was the perfect fit intellectually, and a formative experience as a researcher.

“I’ve definitely learned how to take an experiment and make it my own and figure out what works best for me, but still produces the results we need,” she says.

As for what’s next, Dupuis says her experience at MIT has sold her on pursuing graduate work in brain sciences. “Boston is really where I want to be and eventually work, with all the biotech and biopharma companies around,” she says. One of the perks of the MSRP-Bio program is professional development opportunities. Though Dupuis had always been interested in industry, she says she appreciated attending career panels this summer that demystified what that career path really looks like and what it takes to get there.

Perhaps the most important aspect of the program for Dupuis, though, was the confidence it provided as she continues to navigate the world of biomedical research. She intends to take that back with her to Montana State to encourage classmates to seek out similar summer opportunities.

“There’s so many people that I know would be a great researcher and love science, but they just don’t either know about it or think they can get it,” she says. “All I’d say is, you just got to apply. You just have to put yourself out there.”

Brandon (Brady) Weissbourd

Education

  • Graduate: PhD, 2016, Stanford University
  • Undergraduate: BA, 2009, Human Evolutionary Biology, Harvard University

Research Summary

We use the tiny, transparent jellyfish, Clytia hemisphaerica, to ask questions at the interface of nervous system evolution, development, regeneration, and function. Our foundation is in systems neuroscience, where we use genetic and optical techniques to examine how behavior arises from the activity of networks of neurons. Building from this work, we investigate how the Clytia nervous system is so robust, both to the constant integration of newborn neurons and following large-scale injury. Lastly, we use Clytia’s evolutionary position to study principles of nervous system evolution and make inferences about the ultimate origins of nervous systems.

Awards

  • Searle Scholar Award, 2024
  • Klingenstein-Simons Fellowship Award in Neuroscience, 2023
  • Pathway to Independence Award (K99/R00), National Institute of Neurological Disorders and Stroke, 2020
  • Life Sciences Research Foundation Fellow, 2017

New findings reveal how neurons build and maintain their capacity to communicate

Nerve cells regulate and routinely refresh the collection of calcium channels that enable them to send messages across circuit connections.

David Orenstein | Picower Institute for Learning and Memory
July 21, 2022

The nervous system works because neurons communicate across connections called synapses. They “talk” when calcium ions flow through channels into “active zones” that are loaded with vesicles carrying molecular messages. The electrically charged calcium causes vesicles to “fuse” to the outer membrane of presynaptic neurons, releasing their communicative chemical cargo to the postsynaptic cell. In a new study, scientists at The Picower Institute for Learning and Memory at MIT provide several revelations about how neurons set up and sustain this vital infrastructure.

“Calcium channels are the major determinant of calcium influx, which then triggers vesicle fusion, so it is a critical component of the engine on the presynaptic side that converts electrical signals to chemical synaptic transmission,” says Troy Littleton, senior author of the new study in eLife and Menicon Professor of Neuroscience in MIT’s departments of Biology and Brain and Cognitive Sciences. “How they accumulate at active zones was really unclear. Our study reveals clues into how active zones accumulate and regulate the abundance of calcium channels.”

Neuroscientists have wanted these clues. One reason is that understanding this process can help reveal how neurons change how they communicate, an ability called “plasticity” that underlies learning and memory and other important brain functions. Another is that drugs such as gabapentin, which treats conditions as diverse as epilepsy, anxiety, and nerve pain, binds a protein called alpha2delta that is closely associated with calcium channels. By revealing more about alpha2delta’s exact function, the study better explains what those treatments affect.

“Modulation of the function of presynaptic calcium channels is known to have very important clinical effects,” Littleton says. “Understanding the baseline of how these channels are regulated is really important.”

MIT postdoc Karen Cunningham led the study, which was her doctoral thesis work in Littleton’s lab. Using the model system of fruit fly motor neurons, she employed a wide variety of techniques and experiments to show for the first time the step-by-step process that accounts for the distribution and upkeep of calcium channels at active zones.

A cap on Cac

Cunningham’s first question was whether calcium channels are necessary for active zones to develop in larvae. The fly calcium channel gene (called “cacophony,” or Cac) is so important that flies literally can’t live without it. So rather than knocking out Cac across the fly, Cunningham used a technique to knock it out in just one population of neurons. By doing so, she was able to show that even without Cac, active zones grow and mature normally.

Using another technique that artificially prolongs the larval stage of the fly, she was also able to see that, given extra time, the active zone continues to build up its structure with a protein called BRP, but that Cac accumulation ceases after the normal six days. Cunningham also found that moderate increases or decreases in the supply of available Cac in the neuron did not affect how much Cac ended up at each active zone. Even more curious, she found that while Cac amount did scale with each active zone’s size, it barely budged if she took away a lot of the BRP in the active zone. Indeed, for each active zone, the neuron seemed to enforce a consistent cap on the amount of Cac present.
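The contrast between the two accumulation rules can be caricatured in a few lines. This is a toy model of our own; the rates, the cap value, and the six-day plateau are invented stand-ins for the measurements, not numbers from the paper.

```python
# Cartoon of the two accumulation rules the experiments suggest
# (our own toy model; all numbers are illustrative):
# structural BRP keeps accumulating when larval life is prolonged,
# while Cac plateaus at a per-active-zone cap and is insensitive
# to moderate changes in supply.
def brp_level(days, rate=10.0):
    """BRP keeps growing with developmental time."""
    return rate * days

def cac_level(days, supply=1.0, cap=60.0, rate=10.0):
    """Cac accumulates only until it hits the active zone's cap."""
    return min(cap, supply * rate * days)

print(brp_level(6), brp_level(10))     # BRP: more time, more protein
print(cac_level(6), cac_level(10))     # Cac: stalls after ~6 days
print(cac_level(6, supply=1.5))        # extra supply doesn't raise the cap
```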

“It was revealing that the neuron had very different rules for the structural proteins at the active zone like BRP that continued to accumulate over time, versus the calcium channel that was tightly regulated and had its abundance capped,” Cunningham says.

Regular refresh

The findings showed there must be factors other than Cac supply or changes in BRP that regulate Cac levels so tightly. Cunningham turned to alpha2delta. When she genetically manipulated how much of that was expressed, she found that alpha2delta levels directly determined how much Cac accumulated at active zones.

In further experiments, Cunningham was also able to show that alpha2delta’s ability to maintain Cac levels depended on the neuron’s overall Cac supply. That finding suggested that rather than controlling Cac amount at active zones by stabilizing it, alpha2delta likely functioned upstream, during Cac trafficking, to supply and resupply Cac to active zones.

Cunningham used two different techniques to watch that resupply happen, producing measurements of its extent and its timing. She chose a moment after a few days of development to image active zones and measure Cac abundance to ascertain the landscape. Then she bleached out that Cac fluorescence to erase it. After 24 hours, she visualized Cac fluorescence anew to highlight only the new Cac that was delivered to active zones over that 24 hours. She saw that over that day there was Cac delivery across virtually all active zones, but that one day’s work was indeed only a fraction compared to what had built up over several days before. Moreover, she could see that the larger active zones accrued more Cac than smaller ones. And in flies with mutated alpha2delta, there was very little new Cac delivery at all.

If Cac channels were indeed constantly being resupplied, Cunningham wanted to know at what pace Cac channels are removed from active zones. To determine that, she used a staining technology with a photoconvertible protein called Maple tagged to the Cac protein, which allowed her to change the color with a flash of light at the time of her choosing. That way she could first see how much Cac accumulated by a certain time (shown in green) and then flash the light to turn that Cac red. When she checked back five days later, about 30 percent of the red Cac had been replaced with new green Cac, suggesting 30 percent turnover. When she reduced Cac delivery levels by mutating alpha2delta or reducing Cac biosynthesis, Cac turnover stopped. That means a significant amount of Cac is turned over each day at active zones and that the turnover is prompted by new Cac delivery.
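A quick back-of-the-envelope reading of that 30 percent figure: if turnover behaves as a first-order process (our own assumption, not a claim from the paper), the replacement of 30 percent of tagged channels over five days implies a daily rate constant and an effective half-life for the channel pool.

```python
# Our own arithmetic, not from the paper: treat the old (red) Cac pool
# as decaying exponentially, with 30% replaced after 5 days.
import math

replaced_fraction = 0.30                     # red Cac replaced by green
days = 5.0
remaining = 1.0 - replaced_fraction          # old channels still present
k = -math.log(remaining) / days              # per-day turnover rate
half_life = math.log(2) / k                  # days until half the pool is new

print(f"rate ~ {k:.3f}/day, half-life ~ {half_life:.1f} days")
# ~0.071/day, i.e. roughly 7% of channels exchanged daily under this model
```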

Littleton says his lab is eager to build on these results. Now that the rules of calcium channel abundance and replenishment are clear, he wants to know how they differ when neurons undergo plasticity — for instance, when new incoming information requires neurons to adjust their communication to scale up or down synaptic communication. He says he is also eager to track individual calcium channels as they are made in the cell body and then move down the neural axon to the active zones, and he wants to determine what other genes may affect Cac abundance.

In addition to Cunningham and Littleton, the paper’s other authors are Chad Sauvola and Sara Tavana.

The National Institutes of Health and the JPB Foundation provided support for the research.

Opioids and the brain: new insights through epigenetics

Greta Friar | Whitehead Institute
April 18, 2022

Drug overdose, mostly from opioid use, is the leading cause of accidental death in the United States. Prior studies of twins have revealed that genetics play a key role in opioid use disorder. Researchers know that a mixture of genetic and environmental risk factors contribute to heritability of the disorder, but identifying the specific risk factors is challenging. Opioid use disorder is complex, so instead of one or a few genes causing the disorder, there may be many contributing factors that can combine in different ways. Researchers want to understand which genes contribute to opioid use disorder because this will lead to a better understanding of its underlying biology and could help identify people who will be most at risk if exposed to opioids, enabling researchers, health care providers, and social services to develop strategies for prevention, treatment, and support.

The usual approach for finding genes associated with disease risk is a genome-wide association study, which compares the genetics of many people to identify patterns of gene variants occurring in association with a disease. This approach is being used to study opioid use disorder, but it requires many more patient samples than are currently available to reach clear conclusions. Researchers from multiple research universities and institutes developed a shortcut for identifying genes that are associated with opioid use disorder, and may contribute to it, using only a small number of patient samples. The team included Whitehead Institute Member Olivia Corradin and her former PhD advisor, Case Western Reserve University Professor Peter Scacheri, along with Icahn School of Medicine Professor Schahram Akbarian; Eric O. Johnson, a distinguished fellow at RTI International; Deborah C. Mash, a professor at the Dr. Kiran C. Patel College of Allopathic Medicine at Nova Southeastern University; and Richard Sallari of Axiotl, Inc. Genome-wide studies may require hundreds of thousands of samples, but this new method, described in their research published in the journal Molecular Psychiatry on March 17, uses only around 100 samples—51 cases and 51 controls—to home in on five candidate genes.

“With this work, we think we’re only seeing the tip of the iceberg of the complex, diverse factors contributing to opioid overdose,” says Corradin, who is also an assistant professor of biology at the Massachusetts Institute of Technology. “However, we hope our findings can help prioritize genes for further study, to speed up the identification of risk markers and possible therapeutic targets.”

In order to learn more about the underlying biology of opioid use disorder, the researchers analyzed brain tissue samples from people who had died of opioid overdoses and compared them with samples from people with no known opioid use history who died of other accidental causes. They specifically looked at neurons from the dorsolateral prefrontal cortex, an area of the brain known to play important roles in addiction. Instead of analyzing the genes in these cells directly, the researchers instead looked at the regulators of the genes’ activity, and searched for changes in these regulators that could point them to genes of interest.

To identify a gene, first map its community

Genes have DNA regions, often close to the gene, that can ratchet up and down the gene’s expression, or the strength of its activity in certain cells. Researchers have only recently been able to map the three-dimensional organization of DNA in a cell well enough to identify all of the regulators that are close to and acting upon target genes. Corradin and her collaborators call a gene’s collection of close regulatory elements its “plexus.” Their approach finds genes of interest by searching for patterns of variation across each gene’s entire plexus, which can be easier to spot with a small sample size.

The patterns that the researchers look for in a plexus are epigenetic changes: differences in the chemical tags that affect regulatory DNA and in turn, modify the expression of the regulators’ target gene. In this case, the researchers looked at a type of epigenetic tag called H3K27 acetylation, which is linked to increases in the activity of regulatory regions. They found nearly 400 locations in the DNA that consistently had less H3K27 acetylation in the brains of people who died of opioid overdose, which would lower activity of target genes. They also identified under-acetylated DNA locations that were often specific to individuals rather than uniform across all opioid overdose cases. The researchers then looked at how many of those locations belonged to regulatory elements in the same plexus. Surprisingly, these individual-specific changes often occurred within the same gene’s plexus. A gene whose plexus had been heavily affected as a collective was flagged as a possible contributor to opioid use disorder.
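The core of the plexus analysis can be sketched as a grouping-and-counting step. This is a minimal sketch of the convergence idea only; the element and gene names, the per-case hit lists, and the threshold below are all invented for illustration, and the published analysis is far more involved.

```python
# Minimal sketch of "plexus convergence": individual-specific
# hypoacetylated regulatory elements are grouped by the gene whose
# plexus contains them, and genes hit across many individuals are
# flagged as candidates. All names and numbers are hypothetical.
from collections import defaultdict

# Hypothetical map: regulatory element -> gene whose plexus contains it
plexus_of = {"e1": "GENE_A", "e2": "GENE_A", "e3": "GENE_A",
             "e4": "GENE_B", "e5": "GENE_C"}

# Hypothetical hypoacetylated elements found in each overdose case;
# note each case hits a *different* element of GENE_A's plexus
hits_per_case = {"case1": {"e1"}, "case2": {"e2"}, "case3": {"e3", "e5"}}

# Count how many distinct cases converge on each gene's plexus
cases_per_gene = defaultdict(set)
for case, elements in hits_per_case.items():
    for e in elements:
        cases_per_gene[plexus_of[e]].add(case)

candidates = {g for g, cases in cases_per_gene.items() if len(cases) >= 3}
print(candidates)  # only GENE_A's plexus is hit by all three cases
```

The point of the grouping is exactly the one Scacheri describes below: no single regulatory element recurs across individuals, but the gene they converge on does.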

“We know that the factors that contribute to opioid use disorder are numerous, and that it’s an extremely complex disease that by definition is going to be extremely heterogeneous,” Scacheri says. “The idea was to figure out an approach that embraces that heterogeneity, and then try to spot the themes within it.”

Using this approach, the researchers identified five candidate genes: ASTN2, KCNMA1, DUSP4, GABBR2, and ENOX1. One of the genes, ASTN2, is related to pain tolerance, while KCNMA1, DUSP4, and GABBR2 are active in signaling pathways that have been linked more broadly to addiction. Follow-up experiments can confirm whether these genes contribute to opioid use disorder.

The five genes and their plexi are also involved in the heritability of generalized anxiety disorder, metrics of tolerance for risk-taking, and educational attainment. Heritability of these traits and opioid use disorder have previously been found to coincide, and people with opioid use disorder often also have generalized anxiety. Furthermore, heritability of these traits and opioid use disorder all have been associated with early childhood adversity. These connections suggest the possibility that early childhood adversity could be contributing to the epigenetic changes observed by the researchers in the brains of people who died of opioid overdose—a useful hypothesis for further research.

The researchers hope that these results will provide some insights into the genetics and neurobiology of opioid use disorder. They are interested in moving their research forward in several ways: they would like to see if they can identify more candidate genes by increasing their sample number, examine different parts of the brain and different cell types, and further analyze the genes already identified. They also hope that their results demonstrate the potency of their approach, which was able to discern useful patterns and identify candidate genes from the neurons of only 51 cases.

“We’re trying a different approach here that relies on this idea of convergence and leverages our understanding of the three-dimensional architecture of DNA, and I hope this approach will be applied to further our understanding of all sorts of complex diseases,” Scacheri says.

A single memory is stored across many connected brain regions

Innovative brain-wide mapping study shows that “engrams,” the ensembles of neurons encoding a memory, are widely distributed, including among regions not previously known to be involved in memory.

Picower Institute
April 12, 2022

A new study by scientists at The Picower Institute for Learning and Memory at MIT provides the most comprehensive and rigorous evidence yet that the mammalian brain stores a single memory across a widely distributed, functionally connected complex spanning many brain regions, rather than in just one or even a few places.

Memory pioneer Richard Semon had predicted such a “unified engram complex” more than a century ago, but achieving the new study’s affirmation of his hypothesis required the application of several technologies developed only recently. In the study, the team identified and ranked dozens of areas that were not previously known to be involved in memory and showed that memory recall becomes more behaviorally powerful when multiple memory-storing regions are reactivated, rather than just one.

“When talking about memory storage we all usually talk about the hippocampus or the cortex,” said co-lead and co-corresponding author Dheeraj Roy. He began the research while a graduate student in the RIKEN-MIT Laboratory for Neural Circuit Genetics at The Picower Institute led by senior author Susumu Tonegawa, Picower Professor in the Departments of Biology and Brain and Cognitive Sciences. “This study reflects the most comprehensive description of memory encoding cells, or memory ‘engrams,’ distributed across the brain, not just in the well-known memory regions. It basically provides the first rank-ordered list for high-probability engram regions. This list should lead to many future studies, which we are excited about, both in our labs and by other groups.”

In addition to Roy, who is now a McGovern Fellow at the Broad Institute of MIT and Harvard and in the lab of MIT neuroscience Professor Guoping Feng, the study’s other lead authors are Young-Gyun Park, Minyoung Kim, Ying Zhang and Sachie Ogawa.

Mapping Memory

The team was able to map regions participating in an engram complex by conducting an unbiased analysis of more than 247 brain regions in mice that were taken from their home cage to another cage where they felt a small but memorable electrical zap. In one group of mice, neurons were engineered to become fluorescent when they expressed a gene required for memory encoding. In another group, cells activated by naturally recalling the zap memory (e.g. when the mice returned to the scene of the zap) were fluorescently labeled instead. Cells that were activated by memory encoding or by recall could therefore readily be seen under a microscope after the brains were preserved and optically cleared using a technology called SHIELD, developed by co-corresponding author Kwanghun Chung, Associate Professor in The Picower Institute, the Institute for Medical Engineering & Science and the Department of Chemical Engineering. By using a computer to count fluorescing cells in each sample, the team produced brain-wide maps of regions with apparently significant memory encoding or recall activity.

The maps highlighted many regions expected to participate in memory, but also many that were not. To help factor out regions that might have been activated by activity unrelated to the zap memory, the team compared what they saw in zap-encoding or zap-recalling mice to what they saw in the brains of controls that were simply left in their home cage. This allowed them to calculate an “engram index” to rank-order 117 brain regions with a significant likelihood of being involved in the memory engram complex. They deepened the analysis by engineering new mice in which neurons involved both in memory encoding and in recall could be doubly labeled, thereby revealing which cells exhibited overlap of those activities.
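
The engram-index logic amounts to comparing activated-cell counts in memory-group mice against home-cage controls and rank-ordering regions by the result. The article does not give the paper’s actual formula, so the sketch below is purely illustrative: the function name, the fold-change-style index, and the example counts are all invented for demonstration.

```python
import numpy as np

def engram_index(encoding_counts, recall_counts, control_counts):
    """Score each region by how much its activated-cell counts in
    encoding and recall mice exceed those of home-cage controls.
    (Hypothetical index; the study's exact formula is not given here.)"""
    enc = np.asarray(encoding_counts, dtype=float)
    rec = np.asarray(recall_counts, dtype=float)
    ctl = np.asarray(control_counts, dtype=float) + 1e-9  # guard against zero counts
    # One plausible choice: mean fold-change over controls across both conditions.
    return (enc / ctl + rec / ctl) / 2.0

# Invented cell counts for three regions.
regions = ["BLA", "CA1", "home_cage_active_region"]
idx = engram_index([90, 120, 60], [85, 110, 55], [30, 40, 50])
ranking = [r for _, r in sorted(zip(idx, regions), reverse=True)]
```

Here the third, invented region is active even in controls, so its index stays low and it falls to the bottom of the ranking, mirroring how the comparison to home-cage mice factors out activity unrelated to the zap memory.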

To really be an engram cell, the authors noted, a neuron should be activated both in encoding and recall.

“These experiments not only revealed significant engram reactivation in known hippocampal and amygdala regions, but also showed reactivation in many thalamic, cortical, midbrain and brainstem structures,” the authors wrote. “Importantly when we compared the brain regions identified by the engram index analysis with these reactivated regions, we observed that ~60 percent of the regions were consistent between analyses.”

Memory manipulations

Having ranked regions significantly likely to be involved in the engram complex, the team engaged in several manipulations to directly test their predictions and to determine how engram complex regions might work together.

For instance, they engineered mice such that cells activated by memory encoding would also become controllable with flashes of light (a technique called “optogenetics”). The researchers then applied light flashes to select brain regions from their engram index list to see if stimulating those would artificially reproduce the fear memory behavior of freezing in place, even when mice were placed in a “neutral” cage where the zap had not occurred.

“Strikingly, all these brain regions induced robust memory recall when they were optogenetically stimulated,” the researchers observed. Moreover, stimulating areas that their analysis suggested were insignificant to zap memory indeed produced no freezing behavior.

The team then demonstrated how different regions within an engram complex connect. They chose two well-known memory regions, CA1 of the hippocampus and the basolateral amygdala (BLA), and optogenetically activated engram cells there to induce memory recall behavior in a neutral cage. They found that stimulating those regions produced memory recall activity in specific “downstream” areas identified as being probable members of the engram complex. Meanwhile, optogenetically inhibiting natural zap memory recall in CA1 or the BLA (i.e. when mice were placed back in the cage where they experienced the zap) led to reduced activity in downstream engram complex areas compared to what they measured in mice with unhindered natural recall.

Further experiments showed that optogenetic reactivations of engram complex neurons followed patterns similar to those observed during natural memory recall. So, having established that natural memory encoding and recall appear to occur across a wide engram complex, the team decided to test whether reactivating multiple regions would improve memory recall compared to reactivating just one. After all, prior experiments have shown that activating just one engram area does not produce recall as vividly as natural recall. This time the team used a chemical means to stimulate different engram complex regions, and found that stimulating up to three involved regions simultaneously indeed produced more robust freezing behavior than stimulating just one or two.

Meaning of distributed storage

Roy said that by storing a single memory across such a widespread complex the brain might be making memory more efficient and resilient.

“Different memory engrams may allow us to recreate memories more efficiently when we are trying to remember a previous event (and similarly for the initial encoding where different engrams may contribute different information from the original experience),” he said. “Secondly, in disease states, if a few regions are impaired, distributed memories would allow us to remember previous events and in some ways be more robust against regional damages.”

In the long term that second idea might suggest a clinical strategy for dealing with memory impairment: “If some memory impairments are because of hippocampal or cortical dysfunction, could we target understudied engram cells in other regions and could such a manipulation restore some memory functions?”

That’s just one of many new questions researchers can ask now that the study has revealed a listing of where to look for at least one kind of memory in the mammalian brain.

The paper’s other authors are Nicholas DiNapoli, Xinyi Gu, Jae Cho, Heejin Choi, Lee Kamentsky, Jared Martin, Olivia Mosto and Tomomi Aida.

Funding sources included the JPB Foundation, the RIKEN Center for Brain Science, the Howard Hughes Medical Institute, a Warren Alpert Distinguished Scholar Award, the National Institutes of Health, the Burroughs Wellcome Fund, the Searle Scholars Program, a Packard Award in Science and Engineering, a NARSAD Young Investigator Award, the McKnight Foundation Technology Award, the NCSOFT Cultural Foundation, and the Institute for Basic Science.

The model remodeler

A Picower Institute primer on ‘plasticity,’ the brain’s amazing ability to constantly adapt to and learn from experience

Picower Institute
March 17, 2022

Muscles and bones strengthen with exercise and the immune system ‘learns’ from vaccines or infections, but none of those changes match the versatility and flexibility your central nervous system shows in adapting to the world. The brain is a model remodeler. If it weren’t, you wouldn’t have learned how to read this and you wouldn’t remember it anyway.

The brain’s ability to change its cells, their circuit connections, and even its broader architectures in response to experience and activity, for instance to learn new rules and store memories, is called “plasticity.” The phenomenon explains how the brand-new brain of an infant can emerge from a womb and make increasingly refined sense of whatever arbitrary world it encounters – ranging from tuning its visual perception in the early months to getting an A in eighth-grade French. Plasticity becomes subtler during adulthood, but it never stops. It occurs via so many different mechanisms and at so many different scales and rates, it’s… mind-bending.

Plasticity’s indispensable role in allowing the brain to incorporate experience has made understanding exactly how it works – and what the mental health ramifications are when it doesn’t – the inspiration and research focus of several Picower Institute professors (and hundreds of colleagues). This site uses the term so often in reports both on fundamental neuroscience and on disorders such as autism that it seemed high time to provide a primer. So here goes.

Beginning in the 1980s and 1990s, advances in neuroanatomy, genetics, molecular biology and imaging made it possible to not only observe, but even experimentally manipulate mechanisms of how the brain changes at scales including the individual connections between neurons, called synapses; across groups of synapses on each neuron; and in whole neural circuits. The potential to discover tangible physical mechanisms of these changes proved irresistible to Picower Institute scientists such as Mark Bear, Troy Littleton, Elly Nedivi and Mriganka Sur.

Bear got hooked by experiments in which, by temporarily covering one eye of a young animal, scientists could weaken that eye’s connections to the brain just as its visual circuitry was still developing. Such “monocular deprivation” produced profound changes in brain anatomy and neuronal electrical activity as neurons rewired circuits to support the unobstructed eye rather than the one with weakened activity.

“There was this enormous effect of experience on the physiology of the brain and a very clear anatomical basis for that,” Bear said. “It was pretty exhilarating.”

Littleton became inspired during graduate and medical school by new ways to identify genes whose protein products formed the components of synapses. To understand how synapses work was to understand how neurons communicate and therefore how the brain functions.

“Once we were able to think about the proteins that are required to make the whole engine work, we could figure out how you might rev it up and down to encode changes in the way the system might be working to increase or decrease information flow as a function of behavioral change,” Littleton said.

Built to rebuild

So what is the lay of the land for plasticity? Start with a neuron. Though there are thousands of types, a typical neuron will extend a vine-like axon to forge synapses on the root-like dendrites of other neurons. These dendrites may host thousands of synapses. Whenever neurons connect, they form circuits that can relay information across the brain via electrical and chemical signals. Most synapses are meant to increase the electrical excitement of the receiving neuron so that it will eventually pass a signal along, but other synapses modulate that process by inhibiting activity.

Hundreds of proteins are involved in building and operating every synapse, both on the “pre-synaptic” (axonal) side and the “post-synaptic” (dendritic) side of the connection. Some of these proteins contribute to the synapse’s structure. Some on the pre-synaptic side coordinate the release of chemicals called neurotransmitters from blobs called vesicles, while some on the postsynaptic side form or manage the receptors that receive those messages. Neurotransmitters may compel the receiving neuron to take in more ions (hence building up electric charge), but synapses aren’t just passive relay stations of current. They adjust in innumerable ways according to changing conditions, such as the amount of communication activity the host cells are experiencing. Across many synapses the pace and amount of neurotransmitter signaling can be frequently changed by either the presynaptic or postsynaptic side. And sometimes, especially early in life, synapses will appear or disappear altogether.

Moreover, plasticity doesn’t just occur at the level of the single synapse. Combinations of synapses along a section of dendrite can all change in coordination so that the way a neuron works within a circuit is altered. These numerous dimensions of plasticity help to explain how the brain can quickly and efficiently accomplish the physical implementation of something as complex as learning and memory, Nedivi said.

“You might think that when you learn something new it has nothing to do with individual synapses,” Nedivi said. “But in fact, the way that things like this happen is that individual synapses can change in strength or can be added and removed, and then it also matters which synapses, and how many synapses, and how they are organized on the dendrites, and how those changes are integrated and summated on the cell. These parameters will alter the cell’s response properties within its circuit and that affects how the circuit works and how it affects behavior.”

A 2018 study in Sur’s lab illustrated learning occurring at a neural circuit level. His lab trained mice on a task where they had to take a physical action based on a visual cue (e.g. drivers know that “green means go”). As mice played the game, the scientists monitored neural circuits in a region called the posterior parietal cortex where the brain converts vision into action. There, ensembles of neurons increased activity specifically in response to the “go” cue. When the researchers then changed the game’s rules (i.e. “red means go”) the circuits switched to only respond to the new go cue. Plasticity had occurred en masse to implement learning.

Many mechanisms 

To carry out that rewiring, synapses can change in many ways. Littleton’s studies of synaptic protein components have revealed many examples of how they make plasticity happen. Working in the instructive model of the fruit fly, his lab is constantly making new findings that illustrate how changes in protein composition can modulate synaptic strength.

For instance, in a 2020 study his lab showed that synaptotagmin 7 limits neurotransmitter release by regulating the speed with which the supply of neurotransmitter-carrying vesicles becomes replenished. By manipulating expression of the protein’s gene, his lab was able to crank neurotransmitter release, and therefore synaptic strength, up or down like a radio volume dial. 

Other recent studies revealed how proteins influence the diversity of neural plasticity. At the synapses flies use to control muscles, “phasic” neurons release quick, big bursts of the neurotransmitter glutamate, while tonic ones steadily release a low amount. In 2020 Littleton’s lab showed that when phasic neurons are disrupted, tonic neurons will plastically step up glutamate release, but phasic ones don’t return the favor when tonic ones are hindered. Then last year, his team showed that a major difference between the two neurons was their levels of a protein called tomosyn, which turns out to restrict glutamate release. Tonic neurons have a lot, but phasic ones have very little. Tonic neurons therefore can vary their glutamate release by reducing tomosyn expression, while phasic neurons lack that flexibility.

Nedivi, too, looks at how neurons use their genes and the proteins they encode to implement plasticity. She tracks “structural plasticity” in the living mouse brain, where synapses don’t just strengthen or weaken, but come and go completely. She’s found that even in adult animal brains, inhibitory synapses will transiently appear or disappear to regulate the influence of more permanent excitatory synapses.

Nedivi has revealed how experience can make excitatory synapses permanent. After discovering that mice lacking a synaptic protein called CPG15 were slow learners, Nedivi hypothesized that it was because the protein helped cement circuit connections that implement learning. To test that, her lab exposed normal mice and others lacking CPG15 to stretches of time in the light, when they could gain visual experience, and the dark, where there was no visual experience. Using special microscopes to literally watch fledgling synapses come and go in response, they could compare protein levels in those synapses in normal mice and the ones without CPG15. They found that CPG15 helped experience make synapses stick around because upon exposure to increased activity, CPG15 recruited a structural protein called PSD95 to solidify the synapses. That explained why CPG15-lacking mice don’t learn as well: they lack that mechanism for experience and activity to stabilize their circuit connections. 

Another Sur Lab study in 2018 helped to show how multiple synapses sometimes change in concert to implement plasticity. Focusing on a visual cortex neuron whose job was to respond to locations within a mouse’s field of view, his team purposely changed which location it preferred by manipulating “spike-timing-dependent plasticity.” Essentially, right after they put a visual stimulus in a new location (rather than the neuron’s preferred one), they artificially excited the neuron. The reinforcement of this specifically timed excitation strengthened the synapse that received input about the new location. After about 100 repetitions, the neuron changed its preference to the new location. Not only did the corresponding synapse strengthen, but the researchers also saw a compensatory weakening among neighboring synapses (orchestrated by a protein called Arc). In this way, the neuron learned a new role and shifted the strength of several synapses along a dendrite to ensure that new focus.
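
Spike-timing-dependent plasticity of the kind manipulated in that experiment is often summarized with a textbook pair-based rule: a synapse potentiates when the presynaptic input arrives just before the postsynaptic spike, and depresses in the reverse order. The sketch below is that generic rule, not the Sur lab’s model; the constants are conventional illustrative values.

```python
import math

def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012, tau_ms=20.0):
    """Weight change for one pre/post spike pairing.
    dt_ms = t_post - t_pre: positive (pre leads post) potentiates,
    negative (post leads pre) depresses."""
    if dt_ms > 0:
        return a_plus * math.exp(-dt_ms / tau_ms)
    return -a_minus * math.exp(dt_ms / tau_ms)

w = 0.5
for _ in range(100):   # ~100 pairings with the pre spike leading by 5 ms,
    w += stdp_dw(5.0)  # echoing the repetitions described above
# After repeated pre-before-post pairings the weight has grown, while a
# post-before-pre pairing (stdp_dw(-5.0)) would instead weaken the synapse.
```

The asymmetry of the rule is the point: the same two spikes strengthen or weaken the connection depending purely on their relative timing.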

Lest one think that plasticity is all about synapses or even dendrites, Nedivi has helped to show that it isn’t. For instance, her research has shown that amid monocular deprivation, inhibitory neurons go so far as to pare down their axons to enable circuit rewiring to occur. In 2020 her lab collaborated with Harvard scientists to show that to respond to changes in visual experience, some neurons will even adjust how well they insulate their axons with a fatty sheathing called myelin that promotes electrical conductance. The study added strong evidence that myelination also contributes to the brain’s adaptation to changing experience.

It’s not clear why the brain has evolved so many different ways to effect change (these examples are but a small sampling), but Nedivi points out a couple of advantages: robustness and versatility.

“Whenever you see what seems to you like redundancy it usually means it’s a really important process. You can’t afford to have just one way of doing it,” she said. “Also having multiple ways of doing things gives you more precision and flexibility and the ability to work over multiple time scales, too.”

Insights into illness

Another way to appreciate the importance of plasticity is to recognize its central role in neurodevelopmental diseases and conditions. Through their fundamental research into plasticity mechanisms, Bear, Littleton, Nedivi and Sur have all discovered how pivotal they are to breakdowns in brain health.

Beginning in the early 1990s, Bear led pioneering experiments showing that, by multiple means, post-synaptic sensitivity could decline when receptors received only weak input, a form of plasticity called long-term depression (LTD). LTD explained how monocular deprivation weakens an occluded eye’s connections to the brain. Unfortunately, this occurs naturally in millions of children with visual impairment, resulting in a developmental vision disorder called amblyopia. But Bear’s research on plasticity, including mechanisms of LTD, has also revealed that plasticity itself is plastic (he calls this “metaplasticity”). That insight has allowed his lab to develop a potential new treatment: by completely but temporarily suspending all input to the affected eye by anesthetizing the retina, the threshold for strengthening versus weakening can be lowered such that when input resumes, it triggers restorative strengthening of the eye’s connections.

Bear’s investigations of a specific form of LTD have also led to key discoveries about Fragile X syndrome, a genetic cause of autism and intellectual disability. He found that LTD can occur when stimulation of metabotropic glutamate receptor 5 (mGluR5) causes proteins to be synthesized at the dendrite, reducing post-synaptic sensitivity. A protein called FMRP is supposed to be a brake on this synthesis but mutation of the FMR1 gene in Fragile X causes loss of FMRP. That can exaggerate LTD in the hippocampus, a brain region crucial for memory and cognition. The insight has allowed Bear to advance drugs to clinical trials that inhibit mGluR5 activity to compensate for FMRP loss.

Littleton, too, has produced insight into autism by studying the consequences of mutation in the gene Shank3, which encodes a protein that helps to build developing synapses on the post-synaptic side. In a 2016 paper his team reported multiple problems in synapses when Shank was knocked out in fruit flies. Receptors for a key form of molecular signaling from the presynaptic side called Wnt failed to be internalized by the postsynaptic cell, meaning they could not influence the transcription of genes that promote maturation of the synapse as they normally would. A consequence of disrupted synaptic maturation is that a developing brain would struggle to complete the connections needed to efficiently encode experience and that may explain some of the cognitive and behavioral outcomes in Shank-associated autism. To set the stage for potential drug development, Littleton’s lab was able to demonstrate ways to bypass Wnt signaling that rescued synaptic development.

By studying plasticity proteins Sur’s lab, too, has discovered a potential way to help people with Rett syndrome, a severe autism-like disorder. The disease is caused by mutations in the gene MECP2. Sur’s lab showed that MECP2’s contribution to synaptic maturation comes via a protein called IGF1 that is reduced among people with Rett. That insight allowed them to show that treating Rett-model mice with extra IGF1 peptide or IGF1 corrected many defects of MECP2 mutation. Both treatment forms have advanced to clinical trials. Late last year IGF1 peptide was shown to be effective in a comprehensive phase 3 trial for Rett syndrome and is progressing toward FDA approval as the first-ever mechanism-based treatment for a neurodevelopmental disorder, Sur said. 

Nedivi’s plasticity studies, meanwhile, have yielded new insights into bipolar disorder. During years of fundamental studies, Nedivi discovered CPG2, a protein expressed in response to neural activity that helps regulate the number of glutamate receptors at excitatory synapses. The gene encoding CPG2 was recently identified as a risk gene for bipolar disorder. In a 2019 study her lab found that people with bipolar disorder indeed had reduced levels of CPG2 because of variations in the SYNE1 gene. When they expressed these variants in rat neurons, they found the variants reduced the ability of CPG2 to localize in the dendritic “spines” that house excitatory synapses or decreased the proper cycling of glutamate receptors within synapses.

The brain’s ever-changing nature makes it both wonderful and perhaps vulnerable. Both to understand it and heal it, neuroscientists will eagerly continue studying its plasticity for a long time to come.

‘What Were You Thinking?’

How brain circuits integrate many sources of context to flexibly guide behavior

Picower Institute
September 29, 2021

Mating is instinctual for a mouse but sometimes, for instance when his potential partner smells sick, a male mouse will keep away. When Mark Hyman Jr. Career Development Associate Professor Gloria Choi and colleagues published a study in Nature in April revealing how this primal form of social distancing occurs, they provided an exquisite (and timely) example of how brain circuits factor context into behaviors, making them adaptive and appropriate even when they are innate, or “hardwired.” 

When the odor of illness enters the mouse’s nose, that stimulates neurons in its vomeronasal organ to send an electrical signal through a nerve to the brain’s olfactory bulb. Cells there, Choi’s team discovered, relay the signal on to neurons in a region called the cortical amygdala that govern the mating instinct. Finally, completing the health-preserving circuit that will inhibit the mating instinct, those neurons pass on the message to brethren in the neighboring medial amygdalar nucleus. In so doing, this sequence feeds a sensory context, the female’s ill odor, into a circuit to override the default context of an internal state, the instinct to mate. The researchers even showed that by artificially stimulating cortical amygdala neurons they could prevent a mouse from mating with a healthy partner and by artificially silencing those same cells they could make a mouse mate with an ill-smelling one.

As you can learn below, the brain has much greater flexibility in how it operates than the electrical circuits that power your house or even the chips that drive your cell phone. But fundamentally it is the routing of electrical signals from neuron to neuron that forms the basis not only for how we behave, but also how we match behavior appropriately to the circumstances we encounter, Choi said.

“The closest component to behaviors and internal states, and changes in those, are still believed to be neurons and circuits,” she said.

Understanding how brain circuits produce behavior is an exciting area of neuroscience research, including in many Picower Institute labs. Their studies are helping to elucidate how the brain’s anatomy is arranged to process information, and how the many dimensions of flexibility that the central nervous system overlays upon that infrastructure can integrate context to guide appropriate behavior. Context, after all, comes from many sources in many forms—from the senses, like scents and sounds and sights; from internal states, like mating drive or hunger or sleepiness; and even from time and place and from what we’ve learned and remember.

So what were you thinking when you did “this” instead of “that”? You were thinking about the context and relying on your brain’s ability to account for it.

Chemical control

The popular “circuit” metaphor makes it easy to think of neurons as merely switches and wires that pass electrical transmissions from one point to another. And indeed they do that, although instead of being screwed and soldered to metal contacts, they use molecules called neurotransmitters to send signals across tiny junctions called synapses. But if that were all that was going on, the brain would be pretty static, and it is anything but. Many members of the Institute’s faculty study how learning occurs and memories are formed when the brain changes its synapses to create or edit circuit connections, but none of that is strictly necessary for existing circuits to flexibly control behaviors that we’ve already learned or that are innate. The brain has other ways to flexibly change how it operates. Choi’s team, for instance, found that the behavioral change of inhibiting mating could not occur without the cortical amygdala neurons also sending a chemical, thyrotropin-releasing hormone (TRH), to the medial amygdalar nucleus neurons.

In the lab of Lister Brothers Associate Professor Steven Flavell, researchers study how internal states and behaviors emerge and change using a worm so simple that its complete, invariant “wiring diagram” has been mapped out for decades. Yet even in C. elegans, with its exact total of 302 neurons, scientists are still discovering how the animal adapts its actions to survive and thrive in a world of ever-changing contexts.

“Since 1986, that wiring diagram has been staring at researchers,” Flavell quipped. “Many of the small circuits embedded in the wiring diagram have been closely studied, while others haven’t. But a key question that we are trying to answer is how does the whole system work. How are these circuits coupled together to give rise to so-called ‘brain states’?”

In several studies Flavell has shown how a small number of neurons encode contexts and then signal that those circumstances are afoot by releasing chemicals called “neuromodulators” to many other neurons, giving rise to a brain state. Just as TRH may be doing in the circuit Choi uncovered, neuromodulators such as serotonin and dopamine, which are also ubiquitous in humans, add an extra dimension of tuning that can change, or “modulate,” how hardwired circuits process information and output behaviors, Flavell said. Neuromodulators can make neurons more or less electrically excitable given the same degree of input, Flavell explained. They can also make transmission at individual synapses more or less effective.

“The physical connections are like a roadmap, but the way that traffic is actually flowing on the road, the way that neurons are coupled to each other, is dynamic and changes with the animal’s context,” Flavell said. Neuromodulators are one way to make that happen.
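The gain-like effect Flavell describes can be caricatured in a few lines of code: the same synaptic input produces or fails to produce a spike depending on a multiplicative modulatory term. This is an illustrative toy model, not code from any of the studies; the function, weights and threshold are all invented.

```python
def response(inputs, weights, gain=1.0, threshold=1.0):
    """Return True if modulated summed synaptic drive crosses threshold.
    'gain' stands in for a neuromodulator scaling the neuron's excitability."""
    drive = gain * sum(w * x for w, x in zip(weights, inputs))
    return drive >= threshold

inputs, weights = [1.0, 0.5], [0.6, 0.9]
fires_baseline = response(inputs, weights, gain=1.0)  # drive 1.05 crosses threshold
fires_damped = response(inputs, weights, gain=0.5)    # drive 0.525 falls short
```

Identical wiring, different traffic: only the modulatory gain changed between the two calls, which is the distinction Flavell draws between the fixed roadmap and the flow along it.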

For instance, in a 2019 paper in Cell, Flavell’s lab showed how a hungry worm knows to slow down and savor a patch of yummy bacteria when it finds one. A single neuron called NSM extends a little tendril called a neurite into the worm’s pharynx. Equipped with bacterial sensors (that turn out to also be present in the human intestine), the neurite detects when the worm has started to ingest and mash up its food. NSM releases serotonin, which finds its way to many of the neurons in the worm’s brain that control locomotion. Upon sensing the serotonin, they hit the brakes.

In a more recent study posted on bioRxiv, the lab takes its investigation of neuromodulators even further. The study characterizes exactly how serotonin release from NSM modulates the activity of specific neurons in the C. elegans brain. In addition, Flavell’s group found that a neuron called AIA integrates information from sensory neurons about the smell of food. NSM can help determine what AIA does with that information, depending on whether it detects that the worm is eating or not. If it is, the smell of food (detected by AIA) reinforces that the worm should stick around to continue dining, a state maintained with serotonin. If the worm isn’t eating, food smells signal that the animal should go exploring to find the source of that enticing odor. AIA, in that case, can instead trigger neurons that produce a different neuromodulator, called PDF, that causes the worm to start roaming (toward the food odor). Even in the simple circuitry of C. elegans, context changes how neurons interact, giving the animal flexibility to process sensory information.

That neurons capable of emitting neuromodulators can exert far-flung influence over behavior is illustrated by research in Newton Professor Mriganka Sur’s lab, too. There, Sur’s team focuses on a deeply situated, tiny brain region called the locus coeruleus (LC), which happens to supply most of the brain’s norepinephrine. Classically, neuroscientists have regarded norepinephrine from the LC as increasing the brain’s internal state of general arousal, but recent research in the Sur lab suggests it has profound, context-dependent effects on learning and behavior.

For instance, members of the lab have trained mice to expect a reward if they push a lever after hearing a high-pitched tone; the mice also receive an unexpected and irritating puff of air if they mistakenly press the lever after a low-pitched tone. By varying the loudness of the tones, the researchers can also vary the certainty the mice have about what tone they heard. Sur’s lab has found that the louder a high-pitched tone, the more norepinephrine a mouse will send to the motor cortex, which plans movement, before pushing the lever – as if greater certainty prompts it more strongly to push the lever. 

Once the lever has been pushed and the mouse gets its feedback of reward or air puff, LC neurons producing norepinephrine then act to fine-tune learning by calling attention to any surprising feedback, Sur’s team has seen. For instance, if the tone was high pitched and faint, but the mouse took the risk to push the lever, the neurons will send a burst of norepinephrine to the prefrontal cortex to note that pleasant surprise. The biggest post-push surge of the neuromodulator, however, occurs when the mouse guesses wrong: that norepinephrine release to the prefrontal cortex appears to signal that the adverse result must be noted. Sure enough, Sur said, the team has seen that the mouse’s performance typically improves after making an error. The LC’s neuromodulatory actions may contribute to that behavioral improvement, though more research is needed to prove it.

Sur’s is not the only research in The Picower Institute showing that the LC communicates with the prefrontal cortex to improve task performance, though. Last November in the Proceedings of the National Academy of Sciences, Picower Professor Susumu Tonegawa’s lab showed that LC norepinephrine neurons connect via distinct circuits to two different parts of the prefrontal cortex to endow mice with both the ability to curb impulses (i.e., to not “jump the gun” when waiting to perform tasks) and the ability to ignore distractions, such as false cues.

Rhythms among regions

Much as the Sur and Tonegawa labs have been investigating the LC, Fairchild Professor Matt Wilson’s lab studies how a different region, the lateral septum (LS), appears to be a key hub for integrating contexts such as location, motion and memories of reward into behaviors such as navigation. As rats learn to find and return to the location of a reward in a maze, the lab’s extensive measurements of electrical activity among LS neurons show that those cells take in and process crucial contextual input from many other regions. The LS then appears to package that context to help direct the rat’s navigational plans and actions.

Over the past two years, Wilson and former graduate student Hannah Wirtshafter have published papers in Current Biology and in eLife showing that populations of LS neurons distinctively encode place information coming from the hippocampus, reward information coming from the ventral tegmental area and speed and acceleration information coming from the brainstem. The encoding is apparent in changes in the timing and rate at which the neurons “fire,” or electrically activate, in these different contexts. Some LS neurons, for example, become especially active specifically when the rat nears the reward location. In a new article published in Neuroscience and Biobehavioral Reviews in July, Wilson and Wirtshafter combined their observations with those of other labs to propose that the lateral septum packages all this contextual information into an “integrated movement value signal.”

“The lateral septum has a ton of different inputs,” Wirtshafter said. “What could the animal be doing with place-related firing that’s reward modulated and then velocity and acceleration? The answer, we think, based on where the LS outputs to, is that it is sending a signal about the context and whatever reward is part of that context. It includes what movement needs to be done and whether that movement is worth it in that context.”

While there are ample signs in the research that neuromodulators such as dopamine help the LS communicate about contexts like the feeling of reward, the studies also highlight the key role of another mechanism of flexibility: brain rhythms. Also known as brain waves or oscillations, these rhythms arise from the coordinated fluctuation of electrical activity among neurons that are working in concert. They allow neurons in brain regions to broadcast information and neurons in other regions to tune into those broadcasts, so that they can work together to perform a function, Wilson said.  

“These brain dynamics ensure that whoever is sending the information and whoever is receiving the information are doing it at the same time,” Wilson said.

In fact, Picower Professor Earl Miller, who has published numerous studies on how brain rhythms guide the flow of information across the many regions of the brain’s cortex, uses much the same kind of traffic analogy in talking about the function of rhythms that Flavell uses when talking about neuromodulators. Much as those chemicals can, oscillations also flexibly direct the flow of information on the network of “roads” that physical circuit connections create. The traffic metaphor perhaps combines well with the broadcasting one: Just like drivers who tune into a radio traffic report can decide to take an alternate route when they hear about an accident ahead, neurons in a brain region may act differently when they tune into new contextual information coming in from another brain region.

Wilson and Wirtshafter’s research, for example, demonstrates that lateral septum neurons tune into the hippocampus’s broadcast of location information via a specific “theta” frequency of brain waves. In particular, movement through a place is represented by the phase of the theta wave (its peak or trough) at which neurons spike.

“In the hippocampus, the phase at which a cell fires during theta can communicate information about the current, prospective, or retrospective spatial location,” Wilson and Wirtshafter wrote in their article. “For instance, …firing of individual hippocampus place cells begins on a particular phase of theta rhythm and progressively shifts forward as the animal moves through the place field.”
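The phase coding Wilson and Wirtshafter describe can be illustrated with a toy model (a sketch for intuition only, not the authors’ analysis; the function name and numbers are invented for illustration): as the animal advances through a place cell’s field, the theta phase at which the cell fires shifts progressively.

```python
def firing_phase(position, field_start=0.0, field_end=1.0,
                 entry_phase=360.0, exit_phase=0.0):
    """Toy model of theta phase precession: the theta phase (in degrees)
    at which a place cell fires shifts linearly as the animal moves
    across its place field. Parameter values are illustrative assumptions."""
    frac = (position - field_start) / (field_end - field_start)
    frac = min(max(frac, 0.0), 1.0)  # clamp positions outside the field
    return entry_phase + frac * (exit_phase - entry_phase)

# As the animal traverses the field, firing occurs at progressively
# earlier theta phases -- so phase itself carries location information.
phases = [firing_phase(p / 10) for p in range(11)]
```

In this sketch, a downstream reader such as the lateral septum could in principle recover where the animal is within the field from firing phase alone, independent of firing rate.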

So maybe you are not a mouse deciding whether to mate or a rat rooting through a maze for a treat, but you are a person who has stayed out late at a friend’s house. Your internal state is that you are tired. You could head out on the long drive home to the reward of your clean, warm bed, or you could sleep on your friend’s notably mustier couch and explain it to your spouse the next morning. Then you remember, from the drive to your friend’s place earlier, that there was an all-night rest stop along the highway where you could get coffee. Whether you take the wheel or your friend’s offer of the couch will depend on how a combination of neuromodulators and rhythms routes information along circuits through key brain regions to integrate all this context—your internal state of tiredness, the memory of where that rest stop was, and the reward of your bed (or the punishment of an angry spouse who might ask, “What were you thinking?”). Your brain gives you all the flexibility you need.

Sara Prescott

Education

  • PhD, 2016, Stanford University School of Medicine
  • BA, 2008, Molecular Biology, Princeton University

Research Summary

Our bodies are tuned to detect and respond to cues from the outside world and from within through exquisite collaborations between cells. For example, the cells lining our airways communicate with sensory neurons in response to chemical and mechanical signals, and evoke key reflexes such as coughing. This cellular collaboration protects our airways from damage and stabilizes breathing, but can become dysregulated in disease. Despite their vital importance to human health, fundamental questions about how sensory transduction is accomplished at these sites remain unanswered. We use the mammalian airways as a model system to investigate how physiological insults are detected, encoded, and addressed at essential barrier tissues — with the ultimate goal of providing new ways to treat autonomic dysfunction.

Awards

  • Warren Alpert Distinguished Scholars Award, 2021
  • Life Sciences Research Foundation Fellowship, 2018