On a biological foundation of ions and proteins, the brain forms, stores, and retrieves memories to inform intelligent behavior.
Noah Daly | Department of Biology
December 23, 2024
Whenever you go out to a restaurant to celebrate, your brain retrieves memories while forming new ones. You notice the room is elegant, that you’re surrounded by people you love, having meaningful conversations, and doing it all with good manners. Encoding these precious moments (and not barking at your waiter or expecting dessert before your appetizer), you rely heavily on plasticity, the ability of neurons to change the strength and quantity of their connections in response to new information or activity. The very existence of memory, and our ability to retrieve it to guide our intelligent behavior, are hypothesized to be movements of a neuroplastic symphony, manifested through chemical processes occurring across vast, interconnected networks of neurons.
During infancy, brain connectivity grows exponentially, rapidly increasing the number of synapses between neurons, some of which are then pruned back to select the most salient for optimal performance. This exuberant growth followed by experience-dependent optimization lays a foundation of connections to produce a functional brain, but the action doesn’t cease there. Faced with a lifetime of encountering and integrating new experiences, the brain will continue to produce and edit connections throughout adulthood, decreasing or increasing their strength to ensure that new information can be encoded.
There are a thousand times more connections in the brain than stars in the Milky Way galaxy. Neuroscientists have spent more than a century exploring that vastness for evidence of the biology of memory. In the last 30 years, advances in microscopy, genetic sequencing and manipulation, and machine learning have enabled researchers, including four MIT Professors of Biology working in The Picower Institute for Learning and Memory (Elly Nedivi, Troy Littleton, Matthew Wilson, and Susumu Tonegawa), to refine and redefine our understanding of how plasticity works in the brain: what exactly memories are, and how they are formed, consolidated, and even changed to suit our needs as we navigate an uncertain world.
Circuits and Synapses: Our Information Superhighway
Neuroscientists hypothesize that how memories come to be depends on how neurons are connected and how they can rewire these connections in response to new experiences and information. This connectivity occurs at the junction between two neurons, called a synapse. When a neuron wants to pass on a signal, it releases chemical messengers called neurotransmitters into the synaptic cleft from the end of a long protrusion called the axon, often called the “pre-synaptic” area.
These neurotransmitters, whose release is triggered by electrical impulses called action potentials, can bind to specialized receptors on the root-like structures of the receiving neuron, known as dendrites (the “post-synaptic” area). Dendrites are covered with receptors that are either excitatory or inhibitory, meaning they can increase or decrease the post-synaptic neuron’s chance of firing its own action potential and carrying a message further.
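For readers who like to think in code, that push and pull can be captured in a toy simulation. The sketch below (in Python, with arbitrary values; it is an illustration, not a model from any of the labs discussed here) treats the receiving neuron as a leaky accumulator that fires an action potential whenever excitatory inputs outpace inhibitory ones enough to cross a threshold:

```python
import random

# Toy sketch of synaptic integration: a post-synaptic neuron sums
# excitatory (+) and inhibitory (-) inputs and fires once its
# membrane potential crosses a threshold. All numbers are arbitrary.

THRESHOLD = 1.0   # firing threshold (arbitrary units)
LEAK = 0.9        # the potential decays between time steps

def simulate(p_excitatory, p_inhibitory, steps=1000):
    """Count spikes given per-step probabilities of E and I inputs."""
    potential, spikes = 0.0, 0
    for _ in range(steps):
        potential *= LEAK
        if random.random() < p_excitatory:
            potential += 0.3          # excitatory input nudges the cell up
        if random.random() < p_inhibitory:
            potential -= 0.3          # inhibitory input nudges it down
        if potential >= THRESHOLD:    # threshold crossed: fire, then reset
            spikes += 1
            potential = 0.0
    return spikes

print(simulate(0.5, 0.1))  # mostly excitation: the cell fires often
print(simulate(0.5, 0.5))  # balanced inputs: far fewer spikes
```

Shift the balance toward inhibition and the message stops here; shift it toward excitation and it travels on.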
Not long ago, the scientific consensus was that the brain’s circuitry became hardwired in adulthood. However, a completely fixed system does not lend itself to incorporating new information.
“While the brain doesn’t make any new neurons, it constantly adds and subtracts connections between those neurons to optimize our most basic functions,” explains Nedivi. Unused synapses are pruned away to make room for more regularly used ones. Nedivi has pioneered techniques of two-photon microscopy to examine the plasticity of synapses on axons and dendrites in vivid, three-dimensional detail in living, behaving, and learning animals.
But how does the brain determine which synapses to strengthen and which to prune? “There are three ways to do this,” Littleton explains. “One way is to make the presynaptic side release more neurotransmitters to instigate a bigger response to the same behavioral stimulus. Another is to have the postsynaptic cell respond more strongly. This is often accomplished by adding glutamate receptors to the dendritic spine so that the same signal is detected at a higher level, essentially turning the radio volume up or down.” (Glutamate, one of the most prevalent neurotransmitters in the brain, is our main excitatory messenger and can be found in every region of our neural network.)
Littleton’s lab studies how neurons can turn that radio volume up or down by changing presynaptic as well as postsynaptic output. Characterizing many of the dozens of proteins involved helped Littleton discover in 2005, for instance, how signals from the post-synaptic area can make some pre-synaptic signals stronger and more active than others. “Our interest is really understanding how the building blocks of this critical connection between neurons work, so we study Drosophila, the simple fruit fly, as a model system to address these questions. We usually take genetic approaches where we can break the system by knocking out a gene or overexpressing it, which allows us to figure out precisely what the protein is doing.”
In general, the release of neurotransmitters can make it more or less likely that the receiving cell will continue the line of communication through activation of voltage-gated channels that initiate action potentials. When these action potentials arrive at presynaptic terminals, they can trigger that neuron to release its own neurotransmitters to influence downstream partners. The conversion of electrical signals into chemical transmission requires presynaptic calcium channels, pores in the cell membrane that act as a switch, telling the cell to pass along the message in full, reduce the volume, or change the tune completely. By altering calcium channel function, which can be done with a host of neuromodulators or clinically relevant drugs, synaptic function can be tuned up or down to change communication between neurons.
The third mechanism, adding new synapses, has been one of the focal points of Nedivi’s research. Nedivi models this in the visual cortex, labeling and tracking cells in lab mice exposed to different visual experiences that stimulate plasticity.
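A back-of-the-envelope model ties the three mechanisms together. In the Python sketch below (a toy illustration with invented numbers and invented parameter names, not anything measured in the Littleton or Nedivi labs), the strength of a connection is simply the product of pre-synaptic release, post-synaptic receptor count, and the number of synapses, so turning any one knob changes the “volume”:

```python
# Toy model of the three knobs described above. The function and its
# parameters are invented for illustration; the units are arbitrary.

def synaptic_drive(release_prob, n_receptors, n_synapses, quantal_size=0.1):
    """Expected post-synaptic response per pre-synaptic spike.

    release_prob : chance a terminal releases transmitter (set in part
                   by calcium channel function, the "switch" above)
    n_receptors  : glutamate receptors waiting on the dendritic spine
    n_synapses   : number of contacts between the two neurons
    quantal_size : response per receptor per release event
    """
    return n_synapses * release_prob * n_receptors * quantal_size

baseline = synaptic_drive(0.3, 10, 5)
# Three different ways to "turn the radio volume up":
more_release   = synaptic_drive(0.6, 10, 5)   # pre-synaptic change
more_receptors = synaptic_drive(0.3, 20, 5)   # post-synaptic change
more_synapses  = synaptic_drive(0.3, 10, 10)  # structural change
print(baseline, more_release, more_receptors, more_synapses)
# 1.5 3.0 3.0 3.0 -- each knob alone doubles the response here
```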
In a 2016 study, Nedivi showed that the distribution of excitatory and inhibitory synaptic sites on dendrites fluctuates rapidly, with inhibitory sites disappearing and reappearing in the course of a single day. The action, she explains, is in the spines that protrude from dendrites along their length and house post-synaptic areas.
“We found that some spines which were previously thought to have only excitatory synapses are actually dually innervated, meaning they have both excitatory and inhibitory synapses,” Nedivi says. “The excitatory synapses are always stable, and yet on the same spine, about 70% of the inhibitory synapses are dynamic, meaning they can come and go. It’s as if the excitatory synapses on the dually innervated spines are hard-wired, but their activity can be attenuated by the presence of an inhibitory synapse that can gate their activity.” Thus, Nedivi found that inhibitory synapses, which make up roughly 15% of the synaptic density of the brain as a whole, play an outsized role in managing the passage of signals that lead to the formation of memory.
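The arrangement is easy to sketch. In the toy Python below, the constants are invented; only the 70% figure echoes the finding. A spine’s stable excitatory synapse transmits at full strength except when a transient inhibitory synapse is present to gate it down:

```python
import random

# Illustrative sketch of a dually innervated spine: a stable
# excitatory synapse whose output is gated by an inhibitory synapse
# that comes and goes from day to day. Constants are invented.

EXC_DRIVE = 1.0        # the excitatory synapse is stable ("hard-wired")
INHIB_GATE = 0.3       # fraction of drive that passes when gated
P_INHIB_PRESENT = 0.7  # chance the dynamic inhibitory synapse is there

def spine_output(inhibitory_present):
    """Excitatory drive, attenuated when the inhibitory synapse exists."""
    return EXC_DRIVE * (INHIB_GATE if inhibitory_present else 1.0)

# Over a week, the same "hard-wired" spine transmits differently as
# its inhibitory partner disappears and reappears:
week = [spine_output(random.random() < P_INHIB_PRESENT) for _ in range(7)]
print(week)  # e.g. [0.3, 1.0, 0.3, 0.3, 1.0, 0.3, 0.3]
```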
“We didn’t start out thinking about it this way, but the inhibitory circuitry is so much more dynamic,” she says. “That’s where the plasticity is.”
Inside Engrams: Memory Storage & Recall
A brain that has made many connections and can continually edit them to process information is well set up for its neurons to work together to form a memory. Understanding the mystery of how it does this excited Susumu Tonegawa, a molecular biologist who won the Nobel Prize for his prior work in immunology.
“More than 100 years ago, it was theorized that, for the brain to form a biological basis for storing information, neurons form localized groupings called engrams,” Tonegawa explains. Whenever an experience exposes the brain to new information, synapses among ensembles of neurons undergo persistent chemical and physical changes to form an engram.
Engram cells can be reactivated and modified physically or chemically by a new learning experience. Repeating stimuli present during a prior learning experience (or at least some part of it) also allows the brain to retrieve some of that information.
In 1992, Tonegawa’s lab was the first to show that knocking out the gene for the synaptic protein alpha-CaMKII could disrupt memory formation, helping to establish molecular biology as a tool for understanding how memories are encoded. The lab has made numerous contributions on that front since then.
By 2012, neuroscience approaches had advanced to the point where Tonegawa and colleagues could directly test for the existence of engrams. In a study in Nature, Tonegawa’s lab reported that directly activating a subset of neurons involved in the formation of a memory, an engram, was sufficient to induce the behavioral expression of that memory. They pinpointed the cells involved in forming a memory (a moment of fear instilled in a mouse by giving its foot a little shock) by tracking the activity-driven expression of the protein c-fos in neurons in the hippocampus. They then labeled these cells using specialized ion channels that activate the neurons when exposed to light. After observing which cells were activated during the formation of a fear memory, the researchers traced the synaptic circuits linking them.
It turned out that they only needed to optically activate the neurons involved in the memory of the footshock to trigger the mouse to freeze (just like it does when returned to the fearful scene), which proved those cells were sufficient to elicit the memory. Later, Tonegawa and his team also found that when this memory forms, it forms simultaneously in the cortex and the basolateral amygdala, where the brain forms emotional associations. This discovery contradicted the standard theory of memory consolidation, where memories form in the hippocampus before migrating to the cortex for retrieval later.
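Stripped of its molecular machinery, the logic of the experiment is simple enough to write down. The Python below is a schematic of the tag-and-reactivate design; the cell names, the overlap threshold, and both functions are invented for illustration:

```python
# Schematic of the engram experiment: cells active during learning
# express c-fos and receive a light-sensitive tag; later, optically
# firing just those cells reproduces the fear behavior. The data
# structures and threshold here are invented, not the study's.

def tag_engram(cells_active_during_learning):
    """c-fos-driven labeling: cells active during learning get tagged."""
    return set(cells_active_during_learning)

def freezes(stimulated_cells, engram, threshold=0.5):
    """Recall occurs if enough of the engram is reactivated at once."""
    overlap = len(stimulated_cells & engram) / len(engram)
    return overlap >= threshold

engram = tag_engram({"cell_12", "cell_47", "cell_80", "cell_93"})

# Light on the tagged cells elicits freezing, even in a safe cage:
print(freezes({"cell_12", "cell_47", "cell_80"}, engram))  # True
# Light on unrelated cells does not:
print(freezes({"cell_3", "cell_5"}, engram))               # False
```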
Tonegawa has also found key distinctions between memory storage and recall. In 2017, he and colleagues induced a form of amnesia in mice by disrupting their ability to make proteins needed for strengthening synapses. The lab found that engrams could still be reactivated artificially, instigating the freezing behavior, even though they could not be retrieved anymore through natural recall cues. They dubbed these no-longer naturally retrievable memories “silent engrams.” The research showed that while synapse strengthening was needed to recall a memory, the mere pattern of connectivity in the engram was enough to store it.
While recalling memories stored in silent engrams is possible, they require stronger-than-normal stimuli to be activated. “This is caused in part by the lower density of dendritic spines on neurons that participate in silent engrams,” Tonegawa says. Notably, Tonegawa sees applications of this finding in studies of Alzheimer’s disease. Working with a mouse model that presents with the early stages of the disease, Tonegawa’s lab could stimulate silent engrams to help the mice retrieve memories.
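One way to summarize the storage-versus-retrieval distinction is as a threshold problem, as in the toy sketch below (all parameters invented): the memory itself is the surviving pattern of connectivity, but natural cues can reactivate it only if the synapses carrying them are strong enough, while direct stimulation supplies enough drive to cross the threshold anyway:

```python
# Toy sketch of a "silent engram". Storage is the wiring pattern;
# natural recall additionally needs strengthened synapses. All
# numbers are invented for illustration.

ENGRAM_WIRING_INTACT = True  # the connectivity pattern survives amnesia
SYNAPTIC_WEIGHT = 0.2        # left weak when protein synthesis is blocked
NATURAL_CUE = 0.5            # drive supplied by ordinary recall cues
DIRECT_STIMULATION = 5.0     # artificial activation is far stronger
RECALL_THRESHOLD = 1.0

def recalled(drive):
    """The engram reactivates if drive through its synapses crosses threshold."""
    return ENGRAM_WIRING_INTACT and drive * SYNAPTIC_WEIGHT >= RECALL_THRESHOLD

print(recalled(NATURAL_CUE))         # False: the engram is "silent"
print(recalled(DIRECT_STIMULATION))  # True: artificial activation works
```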
Making Memory Useful
Our neural circuitry is far from a hard drive or a scrapbook. Instead, the brain actively evaluates the information stored in our memories to build models of the world and then make modifications to better utilize our accumulated knowledge in intelligent behavior.
Processing memory involves making structural and chemical changes throughout life. This requires dedicated downtime, such as sleep or waking states of rest: to hit replay on essential events and simulate how they might be replicated in the future, we need to power down and let the mind work. These so-called “offline states,” and the processes of memory refinement and prediction they enable, fascinate Matt Wilson, who has spent the last several decades examining the ways different regions of the brain communicate with one another during various states of consciousness to learn, retrieve, and augment memories in the service of an animal’s intelligent behavior.
“An organism that has successfully evolved an adaptive intelligent system already knows how to respond to new situations,” Wilson says. “They might refine their behavior, but the fact that they had adaptive behavior in the first place suggests that they have to have embedded some kind of a model of expectation that is good enough to get by with. When we experience something for the first time, we make refinements to the model, we learn, and then what we retain from that is what we think of as memory. So the question becomes, how do we refine those models based on experiences?”
Wilson’s fascination with resting states began during his postdoctoral research at the University of Arizona, where he noticed that a sleeping lab rat was producing the same electrical activity in its brain as it did while running through a maze. Since then, he has shown that different offline states, including different stages of sleep, serve different offline functions, such as replaying experiences or simulating them. In 2002, Wilson’s work on slow-wave sleep showed the important role the hippocampus plays in spatial learning. Using electrophysiology, in which probes are inserted directly into the brain tissue of a living animal, Wilson found that sequences of hippocampal neurons that fired while a rat sought pieces of chocolate at either end of a linear track fired again, about 20 times faster, while the rat was in slow-wave sleep.
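The time compression itself is just arithmetic, as the short sketch below shows (the cell names and firing times are invented; only the roughly 20-fold factor comes from the work described above):

```python
# Sketch of ~20x time compression during sleep replay: the same
# cells fire in the same order, on a much faster clock.

COMPRESSION = 20  # replay runs roughly 20 times faster than behavior

# Times (seconds) at which place cells fired as the rat ran the track:
awake_sequence = [("cell_A", 0.0), ("cell_B", 1.0), ("cell_C", 2.0)]

replayed = [(cell, t / COMPRESSION) for cell, t in awake_sequence]
print(replayed)  # same order, 50 ms spacing instead of 1 s
```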
In 2006, Wilson co-authored a study in Nature showing that rats can retrace their steps after completing a maze. Using electrophysiological recordings of the activity of many individual neurons, Wilson showed that the animals replayed the memory of each turn they took in reverse, doing so multiple times whenever they had an opportunity to rest between trials. These replays manifested as ripples in electrical activity like those that occur during slow-wave sleep.
“REM sleep, on the other hand, can produce novel recapitulation of action-based states, where long sequences and movement information are also repeated,” Wilson says. (When your dog moves its legs during sleep, for example, it could be producing a full-fledged simulation of running.) Three years after his initial replay study, Wilson found that the animals can initiate replay from any point in the sequence of turns in the maze, and can do so forward or in reverse.
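That flexibility also fits in a few lines. In the companion sketch below (cells again invented), replay can begin at any point in the stored sequence and run in either direction:

```python
# Sketch of flexible replay: it can start at any point in the stored
# sequence and run forward or in reverse. Cell names are invented.

track_sequence = ["cell_A", "cell_B", "cell_C", "cell_D"]

def replay(sequence, start, reverse=False):
    """Replay from an arbitrary starting cell, in either direction."""
    i = sequence.index(start)
    return sequence[i::-1] if reverse else sequence[i:]

print(replay(track_sequence, "cell_B"))                # ['cell_B', 'cell_C', 'cell_D']
print(replay(track_sequence, "cell_C", reverse=True))  # ['cell_C', 'cell_B', 'cell_A']
```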
“Memory is not just about storing my experience,” Wilson explains. “It’s about making modifications in an existing adaptive model, one that’s been developed based on prior experience. In the case of A.I.s such as large language models [like ChatGPT], you just dump everything in there. For biology, it’s all about the experience being folded into the evolutionary operating system, governed by developmental rules. In a sense, you can put this complexity into the machine, but you just can’t train an animal up de novo; there has to be something that allows it to work through these developmental mechanisms.”
The property of the brain that many neuroscientists believe enables this versatile, flexible, and adaptive approach to storing, recalling, and using memory is its plasticity. Because the brain’s machinery is molecular, it is constantly renewable and rewireable, allowing us to incorporate new experiences even as we apply prior ones. Because we’ve had many dinners in many restaurants, we can navigate the familiar experience while appreciating the novelty of a celebration. We can look into the future, imagining similarly rewarding moments that have yet to come, and game out how we might get there. The marvels of memory allow us to draw on much of this information in real time, and scientists at MIT continue to learn how this molecular system guides our behavior.