Pioneering a deeper understanding of metabolism
Merrill Meadow | Whitehead Institute
March 23, 2022

Metabolism is the sum of life-sustaining chemical reactions occurring in cells and across whole organisms. The human genome codes for thousands of metabolic enzymes, and specific metabolic pathways play significant roles in many biological processes—from breaking down food to release energy, to normal proliferation and differentiation of cells, to pathologies underlying diabetes, cancer, and other diseases.

For decades, Whitehead Institute researchers have helped both to clarify how metabolism works in healthy states and to identify how metabolic processes gone awry contribute to diseases. Among Whitehead Founding Member Harvey Lodish’s wide-ranging accomplishments, for example, is the identification of genes and proteins involved in the development of insulin resistance and stress responses in fat cells. His lab explored the hormones controlling fatty acid and glucose metabolism, broadening understanding of obesity and type 2 diabetes. In 1995, the lab cloned adiponectin, a hormone made exclusively by fat cells. A long series of studies has shown that adiponectin causes muscle to burn fatty acids faster – so they are not stored as fat – and increases the metabolism of the sugar glucose. More recently the lab identified and characterized types of RNAs that are specifically expressed in fat cells – including a microRNA unique to brown fat, which burns rather than stores fatty acids. In addition, former Member David Sabatini’s discovery of the mTOR protein, and his subsequent work elaborating the many ways in which the mTOR pathway affects cell function, has proven fundamental to understanding the relationship between metabolism and an array of diseases.

Today, Institute researchers continue to pioneer a deeper understanding of how metabolic processes contribute to health and disease – with long-term implications that could range from new treatments for obesity and type 2 diabetes to methods for slowing the aging process. Here are a few examples of Whitehead Institute scientists’ creative and pioneering work in the field of metabolism.

Understanding hibernation and torpor 

Research inspiration comes in many forms. For example, Whitehead Institute Member Siniša Hrvatin – who joined the faculty in January 2022 from Harvard Medical School (HMS) – was inspired to pursue his current research by science-fiction tales about suspended animation for long-term space travel. And during graduate school, he realized that the ability of some mammals to enter a state of greatly reduced metabolism – such as occurs in hibernation – was a mild but real-world form of suspended animation.

Hrvatin’s doctoral research in Doug Melton’s lab at Harvard University focused primarily on stem cell biology. But his subsequent postdoctoral research positions at Massachusetts Institute of Technology (MIT) and HMS enabled him to begin exploring the mechanisms and impact of reduced metabolic states in mammals. The timing was serendipitous, too, because he was able to use the growing array of genetic tools that were becoming available – and create some new tools of his own as well.

“To survive extreme environments, many animals have evolved the ability to profoundly decrease metabolic rate and body temperature and enter states of dormancy, such as hibernation and torpor,” Hrvatin says. Hibernating animals enter repeated states of significantly reduced metabolic activity, each lasting days to weeks. By comparison, daily torpor is shorter, with animals entering repeated periods of lower-than-normal metabolic activity lasting several hours.

Hrvatin’s lab studies the mysteries of how animals and their cells initiate, regulate, and survive these adaptations. “Our long-term goal is to determine if these adaptations can be harnessed to create therapeutic applications for humans.” He and his team are focusing on three broad questions regarding the mechanisms underlying torpor in mice and hibernation in hamsters.

First: How do the animals’ brains initiate and regulate the metabolic changes involved in torpor? During his postdoctoral research, Hrvatin published details of his discovery of neurons involved in the regulation of mouse torpor. “Now we are investigating how these torpor-regulating neurons receive information about the body’s energy state,” he explains, “and studying how these neurons then drive a decrease of metabolic rate and body temperature throughout the body.”

Second: How do individual cells – and their genomes – adapt to extreme or changing environments; and how do these adaptations differ between types of organisms?

“Cells from hibernating organisms ranging from rodents to bears have evolved the ability to survive extreme cold temperatures for many weeks to months,” Hrvatin notes. “We are using genetic screens to identify species-specific mechanisms of tolerance to extreme cold. Then we will explore whether these mechanisms can be induced in non-hibernating organisms – potentially to provide health benefits.”

Third: Can we deliberately and specifically slow down tissue damage, disease progression, and aging in cells and whole organisms by inducing torpor or hibernation – or facets of those states? It has long been known that hibernating animals live longer than closely related non-hibernators; that cancer cells do not replicate during hibernation; and that cold can help protect neurons from the effects of loss of oxygen. However, the cellular mechanisms underlying these phenomena remain largely unknown. Hrvatin’s lab will induce a long-term hibernation-like state in mice and natural hibernation in hamsters, and study how those states affect processes such as tissue repair, cancer progression, and aging.

“In the lab, if you take many human cell types and put them in a cold environment they die, but cells from hibernators survive,” Hrvatin notes. “We’re fascinated by the cellular processes underlying those survival capacities. As a starting place, we are using novel CRISPR screening approaches to help us identify the genomic mechanisms involved.”

And then? “Ultimately, we hope to take on the biggest question: Is it possible to transfer some of those survival abilities to humans?”

Solving a mitochondrial conundrum

When Whitehead Institute postdoctoral researcher Jessica Spinelli was studying cancer metabolism in graduate school, she became interested in what seemed to be a scientific paradox regarding mitochondria, the cell’s energy-producing organelles: Mitochondria are believed to be important for tumor growth; but they generally need oxygen to function, and substantial portions of tumors have very low oxygen levels. Pursuing research in the lab of former Whitehead Institute Member David Sabatini, Spinelli sought to understand how those facts fit together and whether mitochondria could somehow adapt to function with limited oxygen levels.

Recently, Spinelli and colleagues published an answer to the conundrum – one that could inform research into medical conditions including ischemia, diabetes, and cancer. In a Science paper for which Spinelli was first author, the team demonstrated that when cells are deprived of oxygen, a molecule called fumarate can serve as a substitute and enable mitochondria to continue functioning.

As Spinelli explains, humans need oxygen molecules for the cellular respiration process that takes place in our cells’ mitochondria. In this process – called the electron transport chain – electrons are passed along in a sort of cellular relay race that, ultimately, allows the cell to create the energy needed to perform its vital functions. Usually, oxygen is necessary to keep that process operating.

Using mass spectrometry to measure the quantities of molecules produced through cellular respiration in varied conditions, Spinelli and the team found that cells deprived of oxygen had a high level of succinate molecules, which form when electrons are added to a molecule called fumarate. “From this, we hypothesized that the accumulation of succinate in low-oxygen environments is caused by mitochondria using fumarate as a substitute for oxygen’s role in the electron transport chain,” Spinelli explains. “That could explain how mitochondria function with relatively little oxygen.” The next step was to test that hypothesis in mice, and those studies provided several interesting findings: Only mitochondria in kidney, liver, and brain tissues could use fumarate in the electron transport chain. And even in normal conditions, mitochondria in these tissues used both fumarate and oxygen to function – shifting to rely more heavily on fumarate when oxygen was reduced. In contrast, heart and skeletal muscle mitochondria made minimal use of fumarate and did not function well with limited oxygen.

“We foresee some exciting work ahead, learning exactly how this process is regulated in different tissues,” Spinelli says, “and, especially, in solid tumor cancers, where oxygen levels vary between regions.”

Seeking a more accurate model of diabetes

Max Friesen, a postdoctoral researcher in the lab of Whitehead Institute Founding Member Rudolf Jaenisch, studies the role of cell metabolism in type 2 diabetes (T2D). An increasingly prevalent disease that affects millions of people around the world, T2D is hard to study in the lab. This has made it very challenging for scientists to detail the cellular mechanisms through which it develops – and therefore to create effective therapeutics.

“It has always been very hard to model T2D, because metabolism differs greatly between species,” Friesen says. “That fact leads to complications when we use animal models to study this disease. Mice, for example, have much higher metabolism and faster heart rates than humans. As a result, researchers have developed many approaches that cure diabetes in mice but that fail in humans.” Nor do most in vitro culture systems—cells in a dish—effectively recapitulate the disease.

But, building on Jaenisch’s pioneering success in developing disease models derived from human stem cells, Friesen is working to create a much more accurate in vitro system for studying diabetes. His goal is to make human stem cell-derived tissues that function as they would in the human body, closely recapitulating what happens when an individual develops diabetes. Currently, Friesen is differentiating human stem cells into metabolic tissues such as liver and adipocytes (fat). He has improved current differentiation protocols by adapting these cells to a culture medium that is much closer to the environment they see in the human body. Serendipitously, the process also makes the cells responsive to insulin at levels that are present in the human bloodstream. “This serves as a great model of a healthy cell that we can then turn into a disease model by exposing the cell to diabetic hyperinsulinemia,” Friesen says.

These advances should enable him to gain a better understanding of how metabolic pathways – such as the insulin signaling pathway – function in a diabetic model versus a healthy control model. “My hope is that our new models will enable us to figure out how dietary insulin resistance develops, and then identify a therapeutic intervention that blocks that disease-causing process,” he explains. “It would be fantastic to help alleviate this growing global health burden.”

Seeing the whole person

Alumna Margaret ‘Mo’ Okobi ’16 prepares for a career combining medicine and public health research to improve mental health care services in vulnerable communities

Leah Campbell | School of Science
March 22, 2022

With only one year left of medical school at Harvard, Mo Okobi ’16 took a leave of absence. She didn’t want to take a break as much as a “step back” — a moment to reassess and reframe what she was learning in school within the bigger picture of the U.S. healthcare system. During that year, Okobi earned her master’s in public health from the Harvard School of Public Health.

“It’s so important to keep a broader view of the trends in medicine,” she says, “to look at what medications and therapies we’re using and what is actually working for patients.”

When she applied to the MPH program, Okobi couldn’t have known that her year-long leave to study public health would overlap with a pandemic. Needless to say, it was good timing. Okobi’s joint training in public health and medicine has also uniquely positioned her for a career combining clinical practice and research around chronic mental illness.

Though she says she never expected to find herself on this path, Okobi’s experiences at MIT shaped her approach to medicine and her commitment to providing quality mental healthcare to underserved communities.

Okobi was interested in many aspects of medicine when she enrolled at MIT. She majored in biology, with a minor in chemistry, to explore her broad interests in STEM. She joined MIT Emergency Medical Services, a student-run, volunteer ambulance service, for the same reason.

Being a certified EMT, though, she says, was one of the most formative aspects of her MIT education. She describes it as an “intensive introduction to medicine,” cementing her excitement about her future as a doctor. It was also one of her first exposures to mental health care, and Okobi believes that mental health emergencies made up a large proportion, if not a majority, of their calls.

“MIT EMS was really my first opportunity to actually work with patients very closely and see them in their moments of need,” she says.

While deciding to become a doctor was easy, Okobi’s journey to research wasn’t so smooth. She participated in one official undergraduate research experience at MIT, admitting, “I kind of hated it.” Working in a basic science biology lab, Okobi realized that the bench wasn’t for her.

Fortunately, her mentor, Hazel Sive, a former MIT professor of biology and now Dean of the College of Science at Northeastern University, took the time to talk with Okobi and figure out where she’d thrive. Sive, a South African, connected Okobi with the MIT Africa Program, through which she spent a summer in Johannesburg with the South African National Health Laboratory Services.

That experience, which she describes as a “pivotal framing moment,” helped Okobi understand how a career combining clinical practice and research might look. She worked with scientists studying HIV transmission from mother to child, assessing the quality of testing and resource gaps across different provinces. Near the end of her internship, Okobi was able to go into an HIV clinic and do antibody testing.

“I’m looking at these numbers. I’m making all these graphs and writing this paper, but I’m also seeing the people,” she says. “I loved the clinical work. I loved meeting people and knowing their stories.”

At graduation, Okobi was recognized as a Ronald E. McNair Scholar, an award that goes every year to Black undergraduates who have excelled academically and contributed to the experience of students on campus from underrepresented groups. The award was established in honor of McNair, PhD ’77, an accomplished astronaut who received his doctorate in physics from MIT and tragically perished onboard the Challenger space shuttle in 1986.

Having found a passion for applied, public health research, Okobi spent a year before medical school at a healthcare data analytics startup called Aetion. Aetion uses data from hospitals and insurance companies to analyze healthcare trends like medication usage and clinical outcomes. For a self-described “numbers nerd” like Okobi, it was a great way to learn how healthcare studies are designed and see the big picture behind clinical decisions.

It was her second year of medical school, though, that focused Okobi’s health interests around psychiatry for marginalized populations. During her clinical rotations, she worked at Cambridge Health Alliance (CHA), a public, community-centered hospital. At CHA, she served many non-English speakers and MassHealth recipients and was able to do rotations in outpatient, inpatient, and emergency psychiatric settings.

“I really got to see it from all angles,” Okobi says of her psychiatry rotations. “I loved the practice of creating space for people to talk about their lives… It’s about the medicine and mental illness, but it’s also about seeing the whole person.”

As for what’s next, Okobi will be heading west for her residency in psychiatry at the University of California San Francisco.

Her work with CHA convinced Okobi to apply for residencies in psychiatry. She’s primarily interested in acute care settings, particularly for those with severe, chronic mental illnesses like schizophrenia. For her, one of the joys of inpatient care is working closely with patients on a daily basis, to, as she describes it, “try to leave a positive, lasting impression on those first encountering psychiatric care.”

Yet, armed with her degree in public health, she’s also committed to combining her clinical practice with ongoing research. At Harvard, she organized a mental health survey for medical and dental students in her class to improve access to mental health resources. Since 2020, she’s been participating in research on psychiatric emergency care utilization with the Boston Emergency Services Team. In 2020, she returned to Aetion as a consultant with their new FDA-backed COVID-19 research group, to lead epidemiological studies examining COVID-19 risk factors.

Looking back, Okobi knows that it may seem like she had a clear professional plan from the get-go. But she stresses that that wasn’t her experience at all, and current students should understand that things have a way of working out if you’re open to trying things.

“My path was very much like ping pong, never knowing where I’m going next,” she says. “The uncertainty never really ends, but I take refuge in those moments of serendipity, when I find something or someone that excites me and challenges me to be a better version of myself.”

The model remodeler

A Picower Institute primer on ‘plasticity,’ the brain’s amazing ability to constantly adapt to and learn from experience

Picower Institute
March 17, 2022

Muscles and bones strengthen with exercise and the immune system ‘learns’ from vaccines or infections, but none of those changes match the versatility and flexibility your central nervous system shows in adapting to the world. The brain is a model remodeler. If it weren’t, you wouldn’t have learned how to read this and you wouldn’t remember it anyway.

The brain’s ability to change its cells, their circuit connections, and even its broader architectures in response to experience and activity, for instance to learn new rules and store memories, is called “plasticity.” The phenomenon explains how the brand-new brain of an infant can emerge from a womb and make increasingly refined sense of whatever arbitrary world it encounters – ranging from tuning its visual perception in the early months to getting an A in eighth-grade French. Plasticity becomes subtler during adulthood, but it never stops. It occurs via so many different mechanisms and at so many different scales and rates, it’s… mind-bending.

Plasticity’s indispensable role in allowing the brain to incorporate experience has made understanding exactly how it works – and what the mental health ramifications are when it doesn’t – the inspiration and research focus of several Picower Institute professors (and hundreds of colleagues). This site uses the term so often, in reports on both fundamental neuroscience and disorders such as autism, that it seemed high time to provide a primer. So here goes.

Beginning in the 1980s and 1990s, advances in neuroanatomy, genetics, molecular biology and imaging made it possible to not only observe, but even experimentally manipulate mechanisms of how the brain changes at scales including the individual connections between neurons, called synapses; across groups of synapses on each neuron; and in whole neural circuits. The potential to discover tangible physical mechanisms of these changes proved irresistible to Picower Institute scientists such as Mark Bear, Troy Littleton, Elly Nedivi, and Mriganka Sur.

Bear got hooked by experiments in which scientists, by temporarily covering one eye of a young animal, could weaken that eye’s connections to the brain while the animal’s visual circuitry was still developing. Such “monocular deprivation” produced profound changes in brain anatomy and neuronal electrical activity as neurons rewired circuits to support the unobstructed eye rather than the one with weakened activity.

“There was this enormous effect of experience on the physiology of the brain and a very clear anatomical basis for that,” Bear said. “It was pretty exhilarating.”

Littleton became inspired during graduate and medical school by new ways to identify genes whose protein products formed the components of synapses. To understand how synapses work was to understand how neurons communicate and therefore how the brain functions.

“Once we were able to think about the proteins that are required to make the whole engine work, we could figure out how you might rev it up and down to encode changes in the way the system might be working to increase or decrease information flow as a function of behavioral change,” Littleton said.

Built to rebuild

So what is the lay of the land for plasticity? Start with a neuron. Though there are thousands of types, a typical neuron will extend a vine-like axon to forge synapses on the root-like dendrites of other neurons. These dendrites may host thousands of synapses. Whenever neurons connect, they form circuits that can relay information across the brain via electrical and chemical signals. Most synapses are meant to increase the electrical excitement of the receiving neuron so that it will eventually pass a signal along, but other synapses modulate that process by inhibiting activity.

Hundreds of proteins are involved in building and operating every synapse, both on the “pre-synaptic” (axonal) side and the “post-synaptic” (dendritic) side of the connection. Some of these proteins contribute to the synapse’s structure. Some on the pre-synaptic side coordinate the release of chemicals called neurotransmitters from blobs called vesicles, while some on the postsynaptic side form or manage the receptors that receive those messages. Neurotransmitters may compel the receiving neuron to take in more ions (hence building up electric charge), but synapses aren’t just passive relay stations of current. They adjust in innumerable ways according to changing conditions, such as the amount of communication activity the host cells are experiencing. Across many synapses the pace and amount of neurotransmitter signaling can be frequently changed by either the presynaptic or postsynaptic side. And sometimes, especially early in life, synapses will appear or disappear altogether.

Moreover, plasticity doesn’t just occur at the level of the single synapse. Combinations of synapses along a section of dendrite can all change in coordination so that the way a neuron works within a circuit is altered. These numerous dimensions of plasticity help to explain how the brain can quickly and efficiently accomplish the physical implementation of something as complex as learning and memory, Nedivi said.

“You might think that when you learn something new it has nothing to do with individual synapses,” Nedivi said. “But in fact, the way that things like this happen is that individual synapses can change in strength or can be added and removed, and then it also matters which synapses, and how many synapses, and how they are organized on the dendrites, and how those changes are integrated and summated on the cell. These parameters will alter the cell’s response properties within its circuit and that affects how the circuit works and how it affects behavior.”

A 2018 study in Sur’s lab illustrated learning occurring at a neural circuit level. His lab trained mice on a task where they had to take a physical action based on a visual cue (e.g. drivers know that “green means go”). As mice played the game, the scientists monitored neural circuits in a region called the posterior parietal cortex where the brain converts vision into action. There, ensembles of neurons increased activity specifically in response to the “go” cue. When the researchers then changed the game’s rules (i.e. “red means go”) the circuits switched to only respond to the new go cue. Plasticity had occurred en masse to implement learning.

Many mechanisms 

To carry out that rewiring, synapses can change in many ways. Littleton’s studies of synaptic protein components have revealed many examples of how they make plasticity happen. Working in the instructive model of the fruit fly, his lab is constantly making new findings that illustrate how changes in protein composition can modulate synaptic strength.

For instance, in a 2020 study his lab showed that synaptotagmin 7 limits neurotransmitter release by regulating the speed with which the supply of neurotransmitter-carrying vesicles becomes replenished. By manipulating expression of the protein’s gene, his lab was able to crank neurotransmitter release, and therefore synaptic strength, up or down like a radio volume dial. 

Other recent studies revealed how proteins influence the diversity of neural plasticity. At the synapses flies use to control muscles, “phasic” neurons release quick, big bursts of the neurotransmitter glutamate, while tonic ones steadily release a low amount. In 2020 Littleton’s lab showed that when phasic neurons are disrupted, tonic neurons will plastically step up glutamate release, but phasic ones don’t return the favor when tonic ones are hindered. Then last year, his team showed that a major difference between the two neurons was their levels of a protein called tomosyn, which turns out to restrict glutamate release. Tonic ones have a lot but phasic ones have very little. Tonic neurons therefore can vary their glutamate release by reducing tomosyn expression, while phasic neurons lack that flexibility.

Nedivi, too, looks at how neurons use their genes and the proteins they encode to implement plasticity. She tracks “structural plasticity” in the living mouse brain, where synapses don’t just strengthen or weaken, but come and go completely. She’s found that even in adult animal brains, inhibitory synapses will transiently appear or disappear to regulate the influence of more permanent excitatory synapses.

Nedivi has revealed how experience can make excitatory synapses permanent. After discovering that mice lacking a synaptic protein called CPG15 were slow learners, Nedivi hypothesized that it was because the protein helped cement circuit connections that implement learning. To test that, her lab exposed normal mice and others lacking CPG15 to stretches of time in the light, when they could gain visual experience, and the dark, where there was no visual experience. Using special microscopes to literally watch fledgling synapses come and go in response, they could compare protein levels in those synapses in normal mice and in mice lacking CPG15. They found that CPG15 helped experience make synapses stick around because upon exposure to increased activity, CPG15 recruited a structural protein called PSD95 to solidify the synapses. That explained why CPG15-lacking mice don’t learn as well: they lack that mechanism for experience and activity to stabilize their circuit connections.

Another Sur Lab study in 2018 helped to show how multiple synapses sometimes change in concert to implement plasticity. Focusing on a visual cortex neuron whose job was to respond to locations within a mouse’s field of view, his team purposely changed which location it preferred by manipulating “spike-timing dependent plasticity.” Essentially right after they put a visual stimulus in a new location (rather than the neuron’s preferred one), they artificially excited the neuron. The reinforcement of this specifically timed excitement strengthened the synapse that received input about the new location. After about 100 repetitions, the neuron changed its preference to the new location. Not only did the corresponding synapse strengthen, but also the researchers saw a compensatory weakening among neighboring synapses (orchestrated by a protein called Arc). In this way, the neuron learned a new role and shifted the strength of several synapses along a dendrite to ensure that new focus.
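The spike-timing dependent plasticity manipulation described above follows a well-known computational rule: a synapse strengthens when its input repeatedly arrives just before the neuron fires, and weakens when the order is reversed. As a rough illustration (the rule's exponential form is standard in computational neuroscience, but all parameters here are invented, not taken from the Sur lab study), a toy Python model shows how ~100 correctly timed pairings can saturate one synapse's strength while a mistimed neighbor weakens:

```python
# Toy pair-based STDP rule: the weight change depends on the timing gap
# between a presynaptic spike and the postsynaptic spike it pairs with.
# Learning rates and the time constant are illustrative only.
import math

A_PLUS, A_MINUS = 0.01, 0.012   # potentiation / depression amplitudes
TAU = 20.0                      # decay time constant, ms

def stdp_dw(dt_ms):
    """Weight change for one spike pairing.
    dt_ms = t_post - t_pre: positive (pre before post) potentiates,
    negative (post before pre) depresses."""
    if dt_ms >= 0:
        return A_PLUS * math.exp(-dt_ms / TAU)
    return -A_MINUS * math.exp(dt_ms / TAU)

# Simulate 100 pairings in which the "new location" synapse fires
# 5 ms before the neuron, while a neighboring synapse fires 5 ms after.
w_new, w_old = 0.5, 0.5          # starting weights, bounded in [0, 1]
for _ in range(100):
    w_new = min(1.0, w_new + stdp_dw(+5.0))  # pre leads post: strengthen
    w_old = max(0.0, w_old + stdp_dw(-5.0))  # post leads pre: weaken

print(w_new, w_old)  # the favored synapse saturates; the neighbor decays
```

The compensatory weakening of neighboring synapses reported in the study (the Arc-dependent effect) is crudely mirrored here by the depression branch of the rule.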

Lest one think that plasticity is all about synapses or even dendrites, Nedivi has helped to show that it isn’t. For instance, her research has shown that amid monocular deprivation, inhibitory neurons go so far as to pare down their axons to enable circuit rewiring to occur. In 2020 her lab collaborated with Harvard scientists to show that to respond to changes in visual experience, some neurons will even adjust how well they insulate their axons with a fatty sheathing called myelin that promotes electrical conductance. The study added strong evidence that myelination also contributes to the brain’s adaptation to changing experience.

It’s not clear why the brain has evolved so many different ways to effect change (these examples are but a small sampling) but Nedivi points out a couple of advantages: robustness and versatility.

“Whenever you see what seems to you like redundancy it usually means it’s a really important process. You can’t afford to have just one way of doing it,” she said. “Also having multiple ways of doing things gives you more precision and flexibility and the ability to work over multiple time scales, too.”

Insights into illness

Another way to appreciate the importance of plasticity is to recognize its central role in neurodevelopmental diseases and conditions. Through their fundamental research into plasticity mechanisms, Bear, Littleton, Nedivi and Sur have all discovered how pivotal they are to breakdowns in brain health.

Beginning in the early 1990s, Bear led pioneering experiments showing that, by multiple means, post-synaptic sensitivity could decline when receptors received only weak input – a form of plasticity called long-term depression (LTD). LTD explained how monocular deprivation weakens an occluded eye’s connections to the brain. Unfortunately, this occurs naturally in millions of children with visual impairment, resulting in a developmental vision disorder called amblyopia. But Bear’s research on plasticity, including mechanisms of LTD, has also revealed that plasticity itself is plastic (he calls that “metaplasticity”). That insight has allowed his lab to develop a potential new treatment: by completely but temporarily suspending all input to the affected eye – anesthetizing the retina – the threshold for strengthening versus weakening can be lowered, so that when input resumes it triggers restorative strengthening of the connection.
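The sliding-threshold idea behind metaplasticity can be sketched in a few lines. In BCM-style models of plasticity (a standard framework in the field; the specific numbers below are illustrative and not drawn from Bear's work), postsynaptic activity above a modification threshold strengthens a synapse and activity below it weakens the synapse – and the threshold itself adapts to the recent history of activity:

```python
# BCM-style sketch of metaplasticity. Activity above the modification
# threshold strengthens a synapse; activity below it causes LTD. The
# threshold slides with recent average activity, so silencing input
# lowers it - and the same weak input that once caused weakening can
# drive strengthening when input resumes. All numbers are illustrative.

def dw(activity, theta, lr=0.1):
    """BCM-like weight change: the sign flips at the threshold theta."""
    return lr * activity * (activity - theta)

theta_normal = 1.0      # threshold set by a history of normal activity
theta_after_rest = 0.2  # lowered threshold after input is silenced

weak_input = 0.5        # activity driven by the deprived eye's weak input

change_before = dw(weak_input, theta_normal)     # below threshold: LTD
change_after = dw(weak_input, theta_after_rest)  # above threshold: LTP

print(change_before < 0, change_after > 0)
```

This captures, in miniature, why temporarily suspending all input could convert a weakening regime into a strengthening one once input returns.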

Bear’s investigations of a specific form of LTD have also led to key discoveries about Fragile X syndrome, a genetic cause of autism and intellectual disability. He found that LTD can occur when stimulation of metabotropic glutamate receptor 5 (mGluR5) causes proteins to be synthesized at the dendrite, reducing post-synaptic sensitivity. A protein called FMRP is supposed to be a brake on this synthesis, but mutation of the FMR1 gene in Fragile X causes loss of FMRP. That can exaggerate LTD in the hippocampus, a brain region crucial for memory and cognition. The insight has allowed Bear to advance drugs to clinical trials that inhibit mGluR5 activity to compensate for FMRP loss.

Littleton, too, has produced insight into autism by studying the consequences of mutation in the gene Shank3, which encodes a protein that helps to build developing synapses on the post-synaptic side. In a 2016 paper his team reported multiple problems in synapses when Shank was knocked out in fruit flies. Receptors for a key form of molecular signaling from the presynaptic side called Wnt failed to be internalized by the postsynaptic cell, meaning they could not influence the transcription of genes that promote maturation of the synapse as they normally would. A consequence of disrupted synaptic maturation is that a developing brain would struggle to complete the connections needed to efficiently encode experience and that may explain some of the cognitive and behavioral outcomes in Shank-associated autism. To set the stage for potential drug development, Littleton’s lab was able to demonstrate ways to bypass Wnt signaling that rescued synaptic development.

By studying plasticity proteins, Sur’s lab, too, has discovered a potential way to help people with Rett syndrome, a severe autism-like disorder. The disease is caused by mutations in the gene MECP2. Sur’s lab showed that MECP2’s contribution to synaptic maturation comes via a protein called IGF1, which is reduced in people with Rett. That insight allowed them to show that treating Rett-model mice with extra IGF1, delivered either as a peptide fragment or as the full-length protein, corrected many defects of MECP2 mutation. Both treatment forms have advanced to clinical trials. Late last year the IGF1 peptide was shown to be effective in a comprehensive phase 3 trial for Rett syndrome and is progressing toward FDA approval as the first-ever mechanism-based treatment for a neurodevelopmental disorder, Sur said.

Nedivi’s plasticity studies, meanwhile, have yielded new insights into bipolar disorder. During years of fundamental studies, Nedivi discovered CPG2, a protein expressed in response to neural activity that helps regulate the number of glutamate receptors at excitatory synapses. The gene encoding CPG2 was recently identified as a risk gene for bipolar disorder. In a 2019 study her lab found that people with bipolar disorder indeed had reduced levels of CPG2 because of variations in the SYNE1 gene. When they expressed these variants in rat neurons, they found the variants reduced CPG2’s ability to localize to the dendritic “spines” that house excitatory synapses, or decreased the proper cycling of glutamate receptors within synapses.

The brain’s ever-changing nature makes it both wonderful and perhaps vulnerable. Both to understand it and heal it, neuroscientists will eagerly continue studying its plasticity for a long time to come.

An ‘oracle’ for predicting the evolution of gene regulation

Researchers created a mathematical framework to examine the genome and detect signatures of natural selection, deciphering the evolutionary past and future of non-coding DNA.

Raleigh McElvery
March 9, 2022

Despite the sheer number of genes that each human cell contains, these so-called “coding” DNA sequences comprise just 1% of our entire genome. The remaining 99% is made up of “non-coding” DNA — which, unlike coding DNA, does not carry the instructions to build proteins.

One vital function of this non-coding DNA, also called “regulatory” DNA, is to help turn genes on and off, controlling how much (if any) of a protein is made. Over time, as cells replicate their DNA to grow and divide, mutations often crop up in these non-coding regions — sometimes tweaking their function and changing the way they control gene expression. Many of these mutations are trivial, and some are even beneficial. Occasionally, though, they can be associated with increased risk of common diseases, such as type 2 diabetes, or more life-threatening ones, including cancer.

To better understand the repercussions of such mutations, researchers have been hard at work on mathematical maps that allow them to look at an organism’s genome, predict which genes will be expressed, and determine how that expression will affect the organism’s observable traits. These maps, called fitness landscapes, were conceptualized roughly a century ago to understand how genetic makeup influences one common measure of organismal fitness in particular: reproductive success. Early fitness landscapes were very simple, often focusing on a limited number of mutations. Much richer data sets are now available, but researchers still require additional tools to characterize and visualize such complex data. This ability would not only facilitate a better understanding of how individual genes have evolved over time, but would also help to predict what sequence and expression changes might occur in the future.

In a new study published on March 9 in Nature, a team of scientists has developed a framework for studying the fitness landscapes of regulatory DNA. They created a neural network model that, when trained on hundreds of millions of experimental measurements, was capable of predicting how changes to these non-coding sequences in yeast affected gene expression. They also devised a unique way of representing the landscapes in two dimensions, making it easy to understand the past and forecast the future evolution of non-coding sequences in organisms beyond yeast — and even design custom gene expression patterns for gene therapies and industrial applications.

“We now have an ‘oracle’ that can be queried to ask: What if we tried all possible mutations of this sequence? Or, what new sequence should we design to give us a desired expression?” says Aviv Regev, a professor of biology at MIT (on leave), core member of the Broad Institute of Harvard and MIT (on leave), head of Genentech Research and Early Development, and the study’s senior author. “Scientists can now use the model for their own evolutionary question or scenario, and for other problems like making sequences that control gene expression in desired ways. I am also excited about the possibilities for machine learning researchers interested in interpretability; they can ask their questions in reverse, to better understand the underlying biology.”

Prior to this study, many researchers had simply trained their models on known mutations (or slight variations thereof) that exist in nature. However, Regev’s team wanted to go a step further by creating their own unbiased models capable of predicting an organism’s fitness and gene expression based on any possible DNA sequence — even sequences they’d never seen before. This would also enable researchers to use such models to engineer cells for pharmaceutical purposes, including new treatments for cancer and autoimmune disorders.

To accomplish this goal, Eeshit Dhaval Vaishnav, a graduate student at MIT and co-first author, Carl de Boer, now an assistant professor at the University of British Columbia, and their colleagues created a neural network model to predict gene expression. They trained it on a dataset generated by inserting millions of totally random non-coding DNA sequences into yeast, and observing how each random sequence affected gene expression. They focused on a particular subset of non-coding DNA sequences called promoters, which serve as binding sites for proteins that can switch nearby genes on or off.
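The core idea, learning a map from raw promoter sequence to an expression value, can be illustrated with a small sketch. Everything specific here is invented for illustration: the 20-bp sequences, the GC-content "expression" target, and the ridge-regression stand-in for the team's actual deep neural network; only the one-hot encoding of DNA is standard practice.

```python
import numpy as np

BASES = "ACGT"

def one_hot(seq: str) -> np.ndarray:
    """Encode a DNA sequence as a (len, 4) one-hot matrix."""
    m = np.zeros((len(seq), 4))
    for i, base in enumerate(seq):
        m[i, BASES.index(base)] = 1.0
    return m

# Toy training set: random sequences paired with a made-up
# "expression" value (here, simply GC content).
rng = np.random.default_rng(0)
L = 20  # toy promoter length; the real experiments used longer sequences
seqs = ["".join(rng.choice(list(BASES), L)) for _ in range(500)]
y = np.array([(s.count("G") + s.count("C")) / L for s in seqs])

# Ridge regression on flattened one-hot features stands in for
# the deep neural network used in the study.
X = np.stack([one_hot(s).ravel() for s in seqs])
w = np.linalg.solve(X.T @ X + 1e-3 * np.eye(X.shape[1]), X.T @ y)

def predict(seq: str) -> float:
    """Predicted 'expression' for any 20-bp sequence, seen or unseen."""
    return float(one_hot(seq).ravel() @ w)
```

The real model differs enormously in scale and architecture, but the pipeline shape is the same: encode sequence, fit a predictor, then query arbitrary new sequences that were never measured in the lab.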

“This work highlights what possibilities open up when we design new kinds of experiments to generate the right data to train models,” Regev says. “In the broader sense, I believe these kinds of approaches will be important for many problems — like understanding genetic variants in regulatory regions that confer disease risk in the human genome, but also for predicting the impact of combinations of mutations, or designing new molecules.”

Regev, Vaishnav, de Boer, and their coauthors went on to test their model’s predictive abilities in a variety of ways, in order to show how it could help demystify the evolutionary past — and possible future — of certain promoters. “Creating an accurate model was certainly an accomplishment, but, to me, it was really just a starting point,” Vaishnav explains.

First, to determine whether their model could help with synthetic biology applications like producing antibiotics, enzymes, and food, the researchers practiced using it to design promoters that could generate desired expression levels for any gene of interest. They then scoured other scientific papers to identify fundamental evolutionary questions, in order to see if their model could help answer them. The team even went so far as to feed their model a real-world population data set from one existing study, which contained genetic information from yeast strains around the world. In doing so, they were able to delineate thousands of years of past selection pressures that sculpted the genomes of today’s yeast.

But, in order to create a powerful tool that could probe any genome, the researchers knew they’d need to find a way to forecast the evolution of non-coding sequences even without such a comprehensive population data set. To address this goal, Vaishnav and his colleagues devised a computational technique that allowed them to plot the predictions from their framework onto a two-dimensional graph. This helped them show, in a remarkably simple manner, how any non-coding DNA sequence would affect gene expression and fitness, without needing to conduct any time-consuming experiments at the lab bench.
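One plausible flavor of such a two-dimensional summary (a guess at the idea, not the paper's published method) is to assign each sequence a coordinate pair: its predicted expression, and its average sensitivity to single-base mutations. A sketch, using a toy stand-in for the trained model:

```python
BASES = "ACGT"

def predict(seq: str) -> float:
    # Toy stand-in for a trained model: score = GC fraction.
    return (seq.count("G") + seq.count("C")) / len(seq)

def landscape_coords(seq: str) -> tuple[float, float]:
    """Map a sequence to a 2D point:
    (predicted expression, mean |change| over all single-base mutants)."""
    base_score = predict(seq)
    deltas = []
    for i, current in enumerate(seq):
        for mutant_base in BASES:
            if mutant_base != current:
                mutant = seq[:i] + mutant_base + seq[i + 1:]
                deltas.append(abs(predict(mutant) - base_score))
    return base_score, sum(deltas) / len(deltas)
```

Plotting many sequences by such coordinates would separate, for example, robust high-expression promoters from fragile ones, the kind of evolutionary property a landscape visualization needs to capture.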

“One of the unsolved problems in fitness landscapes was that we didn’t have an approach for visualizing them in a way that meaningfully captured the evolutionary properties of sequences,” Vaishnav explains. “I really wanted to find a way to fill that gap, and contribute to the longstanding vision of creating a complete fitness landscape.”

Martin Taylor, a professor of genetics at the University of Edinburgh’s Medical Research Council Human Genetics Unit who was not involved in the research, says the study shows that artificial intelligence can not only predict the effect of regulatory DNA changes, but also reveal the underlying principles that govern millions of years of evolution.

Despite the fact that the model was trained on just a fraction of yeast regulatory DNA in a few growth conditions, he’s impressed that it’s capable of making such useful predictions about the evolution of gene regulation in mammals.

“There are obvious near-term applications, such as the custom design of regulatory DNA for yeast in brewing, baking, and biotechnology,” he explains. “But extensions of this work could also help identify disease mutations in human regulatory DNA that are currently difficult to find and largely overlooked in the clinic. This work suggests there is a bright future for AI models of gene regulation trained on richer, more complex, and more diverse data sets.”

Even before the study was formally published, Vaishnav began receiving queries from other researchers hoping to use the model to devise non-coding DNA sequences for use in gene therapies.

“People have been studying regulatory evolution and fitness landscapes for decades now,” Vaishnav says. “I think our framework will go a long way in answering fundamental, open questions about the evolution and evolvability of gene regulatory DNA — and even help us design biological sequences for exciting new applications.”

Whitehead Institute director Ruth Lehmann receives the 2022 Gruber Genetics Prize
Whitehead Institute
February 24, 2022

Whitehead Institute Director Ruth Lehmann has been awarded the 2022 Gruber Genetics Prize – one of the most prestigious recognitions in the field of genetics – along with fellow developmental biologists James Priess of the Fred Hutchinson Cancer Research Center and Geraldine Seydoux of the Johns Hopkins University School of Medicine.

The Prize was awarded for the trio’s independent, pioneering discoveries on the molecular mechanisms underlying the earliest stages of embryonic development. In announcing the award, the Gruber Foundation explained that, taken together, the scientists’ work has transformed the field of germ cell biology, uncovering answers to one of the most fundamental questions in genetics: how germ cells – the precursors of eggs and sperm – faithfully transmit genetic information across generations.

“As a result of their curiosity, innovation, and remarkable insights, each of these phenomenal scientists has played a pivotal role in unlocking the molecular mysteries of early embryonic development,” says Eric Olson, professor at UT Southwestern and member of the Gruber Prize selection advisory board. “It’s not an overstatement to say that their genetic findings regarding germ cells have helped to revolutionize modern developmental biology.”

“I am extraordinarily grateful to the Gruber Foundation for selecting me as a recipient of the Gruber Prize in Genetics,” says Lehmann, who is also a professor of biology at the Massachusetts Institute of Technology. “It is particularly delightful to share this award with James Priess and Geraldine Seydoux, who are wonderfully insightful and creative scientists.

“I am also thrilled to be in the company of two Whitehead Institute colleagues who have received the Gruber Prize: Founding Member Rudolf Jaenisch, who won the inaugural Prize in 2001; and Founding Member and former Institute director Gerald Fink, who won it in 2010.”

Working primarily with the fruit fly Drosophila melanogaster, Lehmann made landmark discoveries regarding the composition, assembly and function of germplasm within the embryo. Her research has contributed to the first genetic framework for the specification of germ cell fate in any organism. She also helped uncover how oocyte mitochondria avoid transmitting mutations within their small genomes to offspring and how they associate with germplasm and primordial germ cells. Priess and Seydoux used a different model organism—the nematode Caenorhabditis elegans—in their research.

The Gruber Foundation established and awarded its first Genetics Prize in 2001. It was the world’s first major international prize devoted specifically to achievements in the realm of genetics research – and remains one of the most prestigious prizes in the field. It is awarded under the guidance of an international advisory board of distinguished scientists.

Advocating for vaccine equity

Postdoc Dig Bijay Mahat became a cancer researcher to improve healthcare in Nepal, but the COVID-19 pandemic exposed additional resource disparities.

Raleigh McElvery
February 17, 2022

When Dig Bijay Mahat arrived at MIT in 2017 to begin his postdoctoral studies, he had one very clear goal: to become an expert in cancer research and diagnostics so he could improve healthcare in Nepal, where he was born. In 2020, when the COVID-19 pandemic laid bare additional resource disparities around the world, his goal did not waver. But it did expand to fill a more immediate need — helping Nepal find the best way to navigate widespread COVID testing requirements and vaccine rollouts.

Mahat was born in the western region of Nepal, where his family has owned a large swath of land for generations. Before Mahat was born, his grandfather passed away unexpectedly. As the eldest son, Mahat’s father assumed responsibility for his five siblings at the age of 21, and so missed his chance to pursue the education he’d envisioned. Perhaps because of this, he made it his mission to give Mahat the education he never received. However, no school was quite good enough, and he shuffled Mahat between nine different institutions before the age of 18.

While his father wished him all the success and prestige that would come with pursuing a medical career, Mahat had other plans. Toward the end of high school, he became captivated by songwriting, and even secretly used his school tuition money one semester to record an album. “It was a disastrous flop,” he now recalls with a smile.

Although his foray into the music industry provides comic relief today, at the time Mahat was dismayed to be back on the medical track. However, he did convince his father to let him go to the US for college. He ended up at Towson University in Maryland, living with his aunt and uncle and delivering pizzas to support his nuclear family back in Nepal. Some weeks, he clocked over 100 hours of deliveries.

As a molecular biology, biochemistry, and bioinformatics major, he took every research opportunity he could get, and became enthralled by breast cancer research. Shortly thereafter, his mother was diagnosed with the same disease, which further strengthened his conviction to learn as much as he could in the US, and return to Nepal to help as many patients as he could.

“The state of cancer diagnostics is very poor in Nepal,” he explains. Patient biopsies must be sent to other countries such as India — a costly practice at the mercy of politics and travel restrictions. “The least we can do is become self-sufficient and provide these vital molecular diagnostics tools to our own people,” Mahat says.

He went on to earn his PhD in molecular biology and genetics from Cornell University, and by the fall of 2017 he had secured his dream job: a postdoctoral position in the lab of MIT Professor of Biology Susan Lindquist. Mahat had spent much of his time at Cornell studying a protein known as heat shock factor 1, and Lindquist had conducted seminal work showing that this same protein enables healthy cells to suddenly turn into cancer cells. Just as he had finalized his new apartment lease and was preparing to start his new job, Lindquist wrote from the hospital to tell him she had late-stage ovarian cancer, and suggested he complete his postdoctoral studies elsewhere.

Gutted, he scrambled to find another position, and built up the courage to contact MIT professor, Koch Institute member, and Nobel laureate Phil Sharp. Mahat put together a formal research proposal and presented it to Sharp. A few days later, he became the lab’s newest member.

“From the beginning, the things that struck me about Phil were his humility, his attention to experimental detail, and his inexplicable reservoir of insight,” Mahat says. “If I could carry even just some of that same humility with me for the rest of my life, I would be a good human being.”

In 2018, Mahat and Sharp filed a patent with the potential to revolutionize disease diagnostics. Widely-available single-cell sequencing technologies reveal the subset of RNAs inside a cell that build proteins. But Mahat and his colleagues found a way to take a snapshot of all the RNA inside a single cell that is being transcribed from DNA — including RNAs that will never become proteins. Because many ailments arise from mutations in the “non-coding” DNA that gives rise to this “non-coding” RNA, the researchers hope their new method will help expose the function of non-coding variants in diseases like diabetes, autoimmune disorders, neurological diseases, and cancer.

Mahat was still immersed in this research in early 2020 when the COVID-19 pandemic began to escalate. As case numbers soared around the world, it became clear to him that the wealth of COVID testing resources available on MIT’s campus — and throughout the US in general — dwarfed the means available to his family back in Nepal. Polymerase chain reaction (PCR) testing remains the most popular and accurate means to detect the virus in patient samples. While PCR machines are quite common in molecular biology labs across the US, the entire country of Nepal owned just a few at the start of the pandemic, according to Mahat.

“Digbijay was focused intensely on developing our novel single-cell technology when he became aware of Nepal’s challenges to control the COVID-19 pandemic,” Sharp recalls. “While continuing his research in the lab, he spent several months contacting leaders in pharmaceutical companies in the US and leaders in public health in Nepal to help arrange access to vaccines and rapid tests.”

Mahat was already in contact with the Nepali Ministry of Health and Population regarding the state of the country’s cancer diagnostics, and so the government called on him to advise their COVID testing efforts. Given the high cost and limited availability of PCR machines and reagents, Mahat began discussions with MIT spinoff Sherlock Biosciences, in order to bring an alternative testing technology to Nepal. These COVID tests, which were developed at the Broad Institute of MIT and Harvard, use a CRISPR-based detection system — rather than PCR — to detect the SARS-CoV-2 virus that causes COVID-19, making them cheaper and more readily available. Sherlock Biosciences ultimately donated $100,000-worth of testing kits, supplemented by an additional $100,000 grant from the Open Philanthropy Project to help purchase the equipment necessary to implement the tests. In December of 2020, Mahat and his wife Rupa Shah flew to Nepal to set up a testing center using these new resources.

Although this required Mahat to briefly pause his MIT research, Sharp was supportive of these extracurricular pursuits. “We are very proud of Jay’s effective work benefiting the people of Nepal,” Sharp says.

Around the same time, Mahat reached out to Institute professor and Moderna co-founder Robert Langer to help initiate vaccine talks with the Nepali government. Through Sharp’s contacts, Mahat was also able to connect the government with Johnson & Johnson. In addition, Mahat, Sharp, and Emeritus Professor Uttam RajBhandary wrote a letter to MIT president Rafael Reif, who joined other university leadership in urging the Biden administration to donate vaccines to low-income countries.

Nepal ultimately received its COVID-19 vaccines through the COVAX program, co-led by the Coalition for Epidemic Preparedness Innovations, GAVI Alliance, and the World Health Organization. Today, the country has begun administering boosters. There were also some funds left over from the Open Philanthropy Project grant, which went toward sending Nepal several thousand PCR kits designed to distinguish between the delta and omicron variants. Professor Tyler Jacks, the Koch Institute director at that time, also connected Mahat with the company Thermo Fisher Scientific to secure additional PCR reagents.

Roshan Pokhrel, the Secretary of Nepal’s Ministry of Health and Population, met Mahat prior to the pandemic, and relied on his expertise to begin establishing Nepal’s National Cancer Institute (NCI) in 2020. “It was his cooperation and coordination that helped us set up NCI,” Pokhrel says. “Mr. Mahat’s continuous support during the first two waves of our COVID-19 vaccine distribution was also highly appreciated. During the recent omicron outbreak, his support in our public laboratory helped us to monitor the variant.”

Bhagawan Koirala, chairman of the Nepal Medical Council, participated in the vaccine talks that Mahat organized between Nepal’s Ministry of Health and Johnson & Johnson. Koirala says he was impressed by Mahat’s exceptional credentials and his modesty, as well as his desire to promote cancer research and diagnostics. As the chairman of the Kathmandu Institute of Child Health, Koirala hopes to engage Mahat’s expertise in the future to help advance pediatric cancer research in Nepal.

“We have spoken extensively about the policies regarding cancer diagnostics in Nepal,” Koirala says. “Dr. Mahat and I are eager to work with the government to introduce policies that will help develop local diagnostic capacity and discourage sending patient samples out of the country. This will save costs, ensure patient privacy, and improve quality of care and research.”

These days, Mahat is nothing short of a local celebrity in Nepal. Though his current focus is vaccine equity, his ultimate goal is still to work with individuals like Koirala and Pokhrel to bring cancer treatment resources to the country. He envisions not only setting up his own research center there, but also inspiring young people to pursue careers in research. “Before me, no one in my entire village had pursued a scientific career, so if I could motivate even a few young kids to follow that path, it would be a win for me.”

But, he adds, he’s not ready to leave MIT just yet; he still has more to learn. “I feel privileged and honored to be part of this compassionate community,” he says. “I’m also proud — proud that we’ve been able to come together in this time of need.”

Sometimes science takes a village
Greta Friar | Whitehead Institute
February 17, 2022

Alexandra Navarro, a graduate student in Whitehead Institute Member Iain Cheeseman’s lab, was studying the gene for CENPR, a protein related to cell division—the Cheeseman lab’s research focus—when she came across something interesting: another molecule hidden in CENPR’s genetic code. The hidden molecule is a peptide only 37 amino acids long, too small to show up in most surveys of the cell. It gets created only when the genetic code for CENPR is translated from an offset start and stopping place—essentially, when a cell reads the instructions for making CENPR in a different way.

The Cheeseman lab has become very interested in these sorts of hidden molecules, which they have found lurking in a number of other molecules’ genetic codes. Navarro began studying the peptide as a side project during slow periods in her main research on cell division proteins.

However, as her research on the peptide progressed, Navarro eventually found herself unsure of how to proceed. CENPR belongs in the centromere, a part of the cell necessary for cell division, but the alternative peptide ends up in the Golgi, a structure that helps to modify molecules and prep them for delivery to different destinations. In other words, the peptide had nothing to do with the part of the cell that Navarro and Cheeseman typically study.

Usually when Navarro comes across something outside of her area of expertise, she will consult with her lab mates, others in Whitehead Institute, or nearby collaborators. However, none of her usual collaborators’ research focuses on the Golgi, so this time Cheeseman suggested that Navarro share what they had found and ask for input from as wide a circle of researchers as possible—on the internet.

Often, researchers guard their work in progress carefully, reluctant to share it lest they be scooped, which means someone else publishes a paper on the same topic first. In the competitive world of academic research, where publishing papers is a key part of getting jobs, tenure, and future funding, the specter of scooping can loom large. But science is also an inherently collaborative practice, with scientists contributing droplets of discovery to a shared pool of knowledge, so that new findings can be built upon what came before.

Cheeseman is a board member of ASAPbio (Accelerating Science and Publication in biology), a nonprofit that promotes open communication, the use of preprints, and transparent peer review in the life sciences. Researchers like Cheeseman believe that if science adopts more transparent and collaborative practices, such as more frequently and widely sharing research in progress, this will benefit both the people involved and the quality of the science, and will speed up the search for discoveries with the potential to positively impact humankind. But how helpful are such “open science” practices in reality? Navarro and Cheeseman had the perfect opportunity to find out.

The power of preprints

Navarro and Cheeseman wrote up what they knew so far–they had found a hidden peptide that localizes exclusively to the Golgi, and it stays there throughout the cell cycle–as a “preprint in progress,” an incomplete draft of a paper that acknowledges there is more to come. In December 2020, they posted the preprint in progress to bioRxiv, a website that serves as a repository for biology preprints, or papers that have not yet been published. The site was inspired by arXiv, a similar repository launched in 1991 to provide free and easy access to research in math, physics, computer science, and similar fields. arXiv has become a central hub for research in these fields, with an average of 10,000 to 15,000 submissions and 30 million downloads per month. The biology fields were slower to create such a hub: bioRxiv launched in 2013. In December 2021, it received around 3,000 submissions and 2.3 million downloads.

Navarro and Cheeseman’s decision to post a preprint in progress to bioRxiv is not common practice, but many researchers have started posting preprints that resemble the final paper closer to publication. Some journals even require it. This type of early sharing has many benefits: contrary to the fear that sharing research before publication will lead to scooping, it allows researchers to stake a claim sooner by making their work public record pre-publication. Preprints enable researchers to show off their most current work during the narrow windows of the academic job cycle. This can be particularly crucial for early career researchers whose biggest project to date—such as graduate thesis work—is still in publication limbo. Preprints also allow new ideas and knowledge to get out into the world sooner, the better to inspire other researchers. Throughout biology’s history, findings that seemed minor at first have provided the key insight for someone else’s major discovery. The sooner research is shared, the sooner it can be built upon to develop important advances, like new medicines or a better model of how a disease spreads.

Navarro and Cheeseman weren’t expecting their discovery to have that kind of major impact, but they knew the peptide could be useful to researchers studying the Golgi. The peptide is small and doesn’t disrupt any functions in the cell. Researchers can attach fluorescent proteins to it that make the Golgi glow in imaging. These traits make the peptide a useful potential tool. Since Navarro and Cheeseman posted the preprint, multiple researchers have reached out about using the peptide.

However, the main goal of posting a preprint in progress, as opposed to a polished preprint, is to ask for input to further the research. The morning after the researchers put their preprint on bioRxiv, Cheeseman shared it on Twitter and asked for feedback. Other researchers soon shared the tweet further, and responses started flooding in. Some researchers simply commented that they found the project interesting, which was reassuring for Cheeseman and Navarro.

“It was nice to see that we weren’t the only ones who thought this thing we found was really cool,” Navarro says. “It gave me a lot of motivation to keep moving on with this project.”

Then, some researchers had specific questions and ideas. The topic that seemed of greatest interest was how the peptide ends up at the Golgi, followed by where exactly in the Golgi it ends up. Researchers suggested online tools that might help predict answers to these questions. They proposed different mechanisms that might be involved.

Navarro used these suggestions to design a new series of experiments to better characterize how the peptide associates with the Golgi. She found that the peptide attaches to the Golgi’s outer-facing membrane. She also began working out which of the peptide’s 37 amino acids were necessary for Golgi localization, and was able to narrow in on a 14-amino acid sequence within the peptide that was sufficient for this localization.

Her next question was what specific mechanisms were driving the peptide’s Golgi localization. Navarro had a good lead for one mechanism: the evidence and outside input suggested that after the peptide was created, it likely underwent a modification that gave it a sticky tag to anchor it to the Golgi. What would be the best experiments to confirm this mechanism and determine the other mechanisms involved? Navarro and Cheeseman decided it was time to check back in with the crowd online.

Narrowing in on answers

Navarro and Cheeseman updated their preprint with their new findings, and invited further feedback. This time, they had a specific ask: how to test whether the peptide has the modification they suspected. They received suggestions: a probe, an inhibitor. They also received some unexpected feedback that took them in a new direction. Harmit Malik, professor and associate director of the basic sciences division at Fred Hutch, studies the evolutionary changes that occur in genes. Malik found the peptide interesting enough to dig into its evolutionary history across primates. He emailed Cheeseman and Navarro his findings. Versions of the peptide existed in many primates, and some of the variations between species affected where the peptide ended up. This was a rich new vein of inquiry for Navarro to follow in order to pinpoint exactly which parts of the sequence were necessary for Golgi localization, and the researchers might never have come across it if they had not sought input online.

Guided by the latest set of suggestions, Navarro resumed work on the project. She found evidence that the peptide does undergo the suspected modification. She winnowed down to a 10-amino acid sequence within the peptide that appears to be the minimal sequence necessary for this type of Golgi localization. Navarro and Cheeseman rewrote the paper, adding the discovery of a minimal Golgi targeting sequence—basically a postal code that marks a molecule’s destination as the Golgi. They posted a third version of the preprint in September 2021. This time, Cheeseman did not ask Twitter for feedback: the paper may undergo more changes, but it now contains a complete research story.

The changing face of science

Based on their experience, would Cheeseman and Navarro recommend sharing preprints in progress? The answer is a resounding yes—if the project is a good fit. Both agree that for projects like this, where the subject is outside the expertise of a researcher’s usual circle of collaborators, asking the wider scientific community for help can be extremely valuable.

“I often share my research with other people at Whitehead Institute, and other cell division researchers at conferences, but this process allowed me to share it with people who work in different scientific areas, with whom I would not normally engage,” Navarro says.

Cheeseman hopes that sharing hubs like bioRxiv will develop ways for even larger and more diverse groups of scientists to connect.

If researchers are hesitant to use an open science approach, Cheeseman and Navarro recommend testing the waters by starting with a lower-stakes project. In this case, Navarro’s Golgi paper was a side project, something of personal interest but not integral to her career. Having had a positive experience using an open approach on this project, Cheeseman and Navarro agree they would be comfortable using such an approach again in the future.

“I wouldn’t suggest sharing a preprint in progress for every paper, but I think constructive opportunities are more plentiful than researchers may realize,” Cheeseman says.

In general, Cheeseman thinks, the biology field needs to re-envision how its science gets shared.

“The idea that one size fits all, that everything needs to be a multi-figure paper in a high impact journal, is just not compatible with the way that people do research,” Cheeseman says. “We need to get flexible and explore and value scholarship in every form.”

As for the peptide paper? Regardless of where it ends up, Cheeseman and Navarro consider their open science experiment a success. By sharing their research and asking for input, they gained insights, research tools, and points of view that took the project from a curious finding to a rich understanding of the mechanisms behind Golgi localization. Their early realization that the peptide functions outside of their region of expertise could have been a dead end. But by being open about what they were working on and what sort of guidance they needed, the researchers were able to overcome that hurdle and decode their mystery peptide, with a little help from the wider scientific community.

Whitehead Institute Member Pulin Li named an Allen Distinguished Investigator
Merrill Meadow | Whitehead Institute
February 9, 2022

Whitehead Institute Member Pulin Li has been selected by The Paul G. Allen Frontiers Group to be an Allen Distinguished Investigator. The Allen Distinguished Investigator program backs creative, early-stage research projects in biology and medical research that would not otherwise be supported by traditional research funding programs. Each Allen Distinguished Investigator award provides three years of research funding.

Li, who is also an assistant professor of biology and the Eugene Bell Career Development Professor of Tissue Engineering at Massachusetts Institute of Technology, studies how circuits of genes within individual cells enable multicellular functions and phenomena such as the patterns of varied cell types that comprise a tissue. Her lab combines approaches from synthetic biology, developmental biology, biophysics, and systems biology to quantitatively understand how cells communicate to produce those phenomena. The work could lead to ways to program stem cells to form tissues for regenerative medicine.

“I am very grateful for this generous support,” Li says. “The Frontiers Group’s commitment to early-stage investigations is welcomed by scientists who are trying to open new paths to discovery.”

Li’s project seeks to advance the field of synthetic developmental biology through improving the process researchers use to create small groups of cells that develop certain functions of organs. Known as organoids, these tissues enable researchers to learn more about how organs develop and function in both healthy and diseased states; and they could be used for rapid and accurate preclinical drug testing.

“All organs in our body are ecosystems of different cell types that constantly talk to each other and regulate each other’s fates, and the challenge researchers face is creating organoids that reflect this multifaceted interaction,” Li explains. “Organoids that include a more complex and complete suite of tissues may prove to function more like real organs. In the project supported by the Allen Distinguished Investigator award, my lab seeks to improve the development of organoids by introducing a type of supportive tissue known as the stroma.”

Most organs are made of epithelial cells juxtaposed with the stroma’s connective tissue. Within the stroma, mesenchymal cells help to orchestrate tissue formation and the spatial organization of other cell types. The versatile function of mesenchymal cells critically depends on their extraordinary capability to produce an array of molecules that can stimulate other cell types.

As a result, each population of mesenchymal cells has distinct capability to support the development of other cell types, control organ shapes, respond to tissue injury, and regulate inflammation.

“Despite the important function of mesenchymal cells,” Li says, “they are mostly missing in the organoids that researchers have thus far developed. Our goal is to engineer diverse populations of human mesenchymal cells and reconstitute their spatial relationship and communication with other cell types in the stroma.

“Ultimately, we believe, this synthetically engineered stroma will help unleash the full potential of organoids as useful tools for studying organ formation and physiology.”

The Paul G. Allen Frontiers Group was founded in 2016 by the late philanthropist Paul G. Allen to explore the landscape of bioscience and to identify and foster ideas that will change the world. Its Allen Distinguished Investigators program advances frontier explorations with exceptional creativity and potential impact.

New high-throughput method greatly expands view of how mutations impact cells

Broad scientists have developed a new approach for studying the functional effects of the millions of mutations associated with cancer and other diseases

Tom Ulrich | Broad Institute
January 27, 2022

There are millions of mutations and other genetic variations in cancer. Understanding which of these mutations are impactful tumor “drivers” rather than innocuous “passengers,” and what each driver does to the cancer cell, however, has been a challenging undertaking. Many studies rely on bespoke, time-consuming, gene-specific approaches that provide one-dimensional views into a given mutation’s broader functional impacts. Alternatively, computational predictions can provide functional insights, but those findings must then be confirmed through experiments.

Now, in a report published in Nature Biotechnology, a research team at the Broad Institute of MIT and Harvard has unveiled a massive-scale, high-resolution method for functionally assessing large numbers of protein-coding mutations simultaneously, one that returns rich phenotypic information and which could potentially be used to study any mutation in any gene in cancer and perhaps other diseases. Their results, gained through proof-of-concept experiments with cancer cell lines, also show that individual mutations can have a spectrum of effects not only on their impacted genes but also on molecular pathways and cell state as a whole, and add nuance to the long-accepted practice of dividing cancer mutations into so-called “drivers” and “passengers.”

“When you look at the genetic data from patients’ tumors, you see that the majority of cancer-associated mutations are actually quite rare, which means we have few insights into what these mutations do,” said Jesse Boehm of the Broad’s Cancer Program, who was co-senior author of the study with Aviv Regev, a Broad core institute member now at Genentech, a member of the Roche Group. “For cancer precision medicine to become a reality, we need a firm understanding of the function of each mutation, but a major challenge has been defining an experimental approach that could be implemented in the lab at the scale required. This new method may be the tool we need.”

The new method, called single-cell expression-based variant impact phenotyping (sc-eVIP), builds on Perturb-seq — an approach developed in 2016 by Regev and colleagues for manipulating genes and exploring the consequences of those manipulations using high-throughput single-cell RNA sequencing —  and eVIP, a method also developed in 2016 by Boehm and colleagues for profiling cancer variants at low scale using RNA measurements. While Perturb-seq assays originally relied on CRISPR to introduce mutations into cells, the sc-eVIP team adopted an overexpression-based approach, engineering DNA-barcoded gene constructs for each mutation of interest and introducing them into pools of cells in such a way that the cells expressed the mutated genes at higher-than-normal levels.

By then recording each perturbed cell’s expression profile using single cell RNA sequencing, the team could both identify which mutation a given cell carried (based on the constructs’ unique barcodes) and examine the mutation’s broader impact on the cell’s overall expression state. This approach provides a highly detailed view of a mutation’s impact on a variety of molecular pathways and circuits, and does not need to be adapted for each new gene studied.

“In a sense, we’re using the cell as a biosensor,” said Oana Ursu, a postdoctoral fellow in the Regev lab, formerly within the Broad’s Klarman Cell Observatory and now at Genentech, and co-first author of the study with JT Neal, a senior group leader in the Broad’s Cancer Program. “By looking at the expression changes that take place when we overexpress a mutated gene, we can learn whether it has a meaningful impact. But also, we can compare and categorize variants based on the changes they trigger, and look for patterns in the biology they affect.”

“Most of the technologies developed for interpreting coding variants up to now have been very scalable, but have had relatively simple readouts like cell viability or maybe looked at a single trait. Their information content has been low, and it takes a lot of work to optimize them,” said Neal. “With sc-eVIP, we’ve engineered a comprehensive approach that’s high throughput and information-rich, which could be a real boon for large-scale variant-to-function studies.”

To test sc-eVIP’s potential, the team chose to study TP53 — the most commonly mutated gene in cancer — and KRAS — which encodes a key oncogene responsible for abnormal growth of many cancers. Neal, Ursu, and their collaborators generated constructs containing 200 known TP53 and KRAS mutations (including cancer-associated mutations and control mutations known to leave gene function unaffected), introduced them into 300,000 lung cancer cells, and captured each cell’s individual expression profile. Based on those profiles, the team categorized each mutation as either “wildtype-like” (that is, effectively functionally indistinguishable from the unmutated gene) or “putatively impactful,” from there further defining mutations based on whether they reduced or enhanced the gene’s function.

The profiles also revealed each mutation’s broader impact on cell state, based on how the activity of a variety of pathways changed across single cells. For instance, the sc-eVIP data revealed KRAS mutations that fall along a continuum in how they impact cell state at the population level, from having no impact to influencing subtle shifts in cellular abundances to causing outright activation or repression of key pathways in a majority of cells. These findings suggest that different mutations within the same gene can influence cell state along a spectrum of impact.

“The cancer community has long embraced a binary conceptual framework of ‘driver’ mutations, ones that promote cancer development and progression, versus ‘passenger’ mutations, which are completely inert and just happened to arise along the way,” Boehm noted. “These initial findings suggest that biologically those categories are likely overly simplistic, that there’s actually a continuum of functional impact from inert to completely tumorigenic.”

While the team focused on cancer-associated genes and mutations for this study, they noted that sc-eVIP is gene-agnostic, highly scalable, and that using single cell RNA sequencing as a readout offers an efficient and generalizable approach to producing rich phenotypic data. They also calculated that it should be possible to thoroughly characterize most mutations with only 20 to a few hundred cells. Based on those numbers, it may be possible with sc-eVIP to generate a first-draft functional map of more than 2 million variants in approximately 200 known cancer genes with 71 million cells.
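The scale estimate above follows from simple arithmetic. As a quick sanity check (the 35-cells-per-variant figure is an assumption chosen to fall within the stated range of 20 to a few hundred cells, not a number from the study):

```python
# Back-of-envelope check of the scaling estimate quoted in the text.
# Assumption: ~35 cells per variant, within the stated range of 20 to
# a few hundred cells needed to characterize most mutations.
variants = 2_000_000        # >2 million variants across ~200 cancer genes
cells_per_variant = 35      # hypothetical value for illustration
total_cells = variants * cells_per_variant
print(total_cells)          # 70,000,000 -- on the order of the 71 million quoted
```

At roughly 35 cells per variant, 2 million variants works out to about 70 million cells, consistent with the 71 million figure the team reports.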

“If we can map where every cancer-associated variant fits on the continuum of impact in a variety of cancers and cell types,” Boehm said, “we’ll have a much better grasp of how the interplay of variants affects cell state, which in turn affects cancer development, growth, and response. Such knowledge would represent a true advance toward cancer precision medicine.”

Support for this study came from the National Cancer Institute, the National Human Genome Research Institute, the Mark Foundation for Cancer Research, the Howard Hughes Medical Institute, the Broadnext10 and Variant to Function programs and the Klarman Cell Observatory at the Broad Institute, and other sources.

Paper(s) cited:

Ursu O, Neal JT, et al. Massively parallel phenotyping of coding variants in cancer with Perturb-seq. Nature Biotechnology. Online January 20, 2022. DOI: 10.1038/s41587-021-01160-7.

Blending machine learning and biology to predict cell fates and other changes
Greta Friar | Whitehead Institute
February 1, 2022

Imagine a ball thrown in the air: it curves up, then down, tracing an arc to a point on the ground some distance away. The path of the ball can be described with a simple mathematical equation, and if you know the equation, you can figure out where the ball is going to land. Biological systems tend to be harder to forecast, but Whitehead Institute Member Jonathan Weissman, postdoc in his lab Xiaojie Qiu, and collaborators at the University of Pittsburgh School of Medicine are working on making the path taken by cells as predictable as the arc of a ball. Rather than looking at how cells move through space, they are considering how cells change with time.

Weissman, Qiu, and collaborators Jianhua Xing, professor of computational and systems biology at the University of Pittsburgh School of Medicine, and Xing lab graduate student Yan Zhang have built a machine learning framework that can define the mathematical equations describing a cell’s trajectory from one state to another, such as its development from a stem cell into one of several different types of mature cell. The framework, called dynamo, can also be used to figure out the underlying mechanisms—the specific cocktail of gene activity—driving changes in the cell. Researchers could potentially use these insights to manipulate cells into taking one path instead of another, a common goal in biomedical research and regenerative medicine.  

The researchers describe dynamo in a paper published in the journal Cell on February 1. They explain the framework’s many analytical capabilities and use it to help understand mechanisms of human blood cell production, such as why one type of blood cell forms earlier than others.

“Our goal is to move towards a more quantitative version of single cell biology,” Qiu says. “We want to be able to map how a cell changes in relation to the interplay of regulatory genes as accurately as an astronomer can chart a planet’s movement in relation to gravity, and then we want to understand and be able to control those changes.”

How to map a cell’s future journey

Dynamo uses data from many individual cells to come up with its equations. The main information that it requires is how the expression of different genes in a cell changes from moment to moment. The researchers estimate this by looking at changes in the amount of RNA over time, because RNA is a measurable product of gene expression. In the same way that knowing the starting position and velocity of a ball is necessary to understand the arc it will follow, researchers use the starting levels of RNAs and how those RNA levels are changing to predict the path of the cell. However, calculating changes in the amount of RNA from single-cell sequencing data is challenging, because sequencing measures RNA only once. Researchers must then use clues, such as how much new RNA is being made at the time of sequencing, along with equations for RNA turnover, to estimate how RNA levels were changing. Qiu and colleagues had to improve on previous methods in several ways in order to get clean enough measurements for dynamo to work. In particular, they used a recently developed experimental method that tags new RNA to distinguish it from old RNA, and combined it with sophisticated mathematical modeling to overcome the limitations of older estimation approaches.
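The estimation logic can be sketched with the simple RNA kinetic model that velocity methods in this space commonly build on: newly made RNA u is converted into mature RNA at some rate, and mature RNA s degrades at another, so the instantaneous change in mature RNA is ds/dt = beta·u − gamma·s. The sketch below is a minimal illustration with invented rate values, not dynamo's actual implementation:

```python
import numpy as np

# Minimal RNA kinetic model: mature RNA is produced from new (labeled)
# RNA at rate beta and degraded at rate gamma. The rates here are
# illustrative values, not measured parameters.
def rna_velocity(u, s, beta=1.0, gamma=0.5):
    """ds/dt = beta * u - gamma * s: change in mature RNA per unit time."""
    return beta * u - gamma * s

# Metabolic labeling (the new-RNA tagging mentioned above) gives a
# direct handle on u, the freshly made RNA, for each gene.
u = np.array([2.0, 0.5])   # labeled (new) RNA for two hypothetical genes
s = np.array([1.0, 4.0])   # unlabeled (old) RNA
v = rna_velocity(u, s)
print(v)                   # [1.5, -1.5]: gene 1 rising, gene 2 falling
```

The sign of each velocity says whether that gene's expression is currently rising or falling, which is exactly the "starting velocity" the ball analogy requires.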

The researchers’ next challenge was to move from observing cells at discrete points in time to a continuous picture of how cells change. The difference is like switching from a map showing only landmarks to a map that shows the uninterrupted landscape, making it possible to trace the paths between landmarks. Led by Qiu and Zhang, the group used machine learning to reveal continuous functions that define these spaces. 

“There have been tremendous advances in methods for broadly profiling transcriptomes and other ‘omic’ information with single-cell resolution. The analytical tools for exploring these data, however, to date have been descriptive instead of predictive. With a continuous function, you can start to do things that weren’t possible with just accurately sampled cells at different states. For example, you can ask: if I changed one transcription factor, how is it going to change the expression of the other genes?” says Weissman, who is also a professor of biology at the Massachusetts Institute of Technology (MIT), a member of the Koch Institute for Integrative Cancer Research at MIT, and an investigator of the Howard Hughes Medical Institute.

Dynamo can visualize these functions by turning them into math-based maps. The terrain of each map is determined by factors like the relative expression of key genes. A cell’s starting place on the map is determined by its current gene expression dynamics. Once you know where the cell starts, you can trace the path from that spot to find out where the cell will end up.
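Tracing a path on such a map amounts to numerically integrating dx/dt = f(x), where f is the learned vector field mapping an expression state to its velocity. Here is a toy sketch in which a hand-written two-gene field, whose two attractors stand in for two cell fates, plays the role of the learned field; the field, rates, and starting states are invented purely for illustration:

```python
import numpy as np

# Toy two-gene field with two attractors ("fates") at (1, 0) and (0, 1).
# Dynamo learns such a field from data; this one is hand-written so the
# example is self-contained.
def f(x):
    a, b = x
    return np.array([a * (1 - a - 2 * b),   # gene A: self-limiting, repressed by B
                     b * (1 - b - 2 * a)])  # gene B: symmetric competitor

def trace(x0, dt=0.01, steps=2000):
    """Forward-Euler integration of dx/dt = f(x) from a starting state."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x = x + dt * f(x)
    return x

# A cell starting with slightly more of gene A flows to the A-high fate.
end = trace([0.6, 0.4])
print(np.round(end, 2))    # approximately [1, 0]
```

Starting instead from [0.4, 0.6] lands the cell at the opposite attractor, which is the sense in which a cell's starting place on the map determines where it will end up.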

The researchers confirmed dynamo’s cell fate predictions by testing it against cloned cells: cells that share the same genetics and ancestry. One of two nearly identical clones would be sequenced while the other went on to differentiate. Dynamo’s predictions for what would have happened to each sequenced cell matched what happened to its clone.

Moving from math to biological insight and non-trivial predictions

With a continuous function for a cell’s path over time determined, dynamo can then gain insights into the underlying biological mechanisms. Calculating derivatives of the function provides a wealth of information, for example by allowing researchers to determine the functional relationships between genes—whether and how they regulate each other. Calculating acceleration can show that a gene’s expression is growing or shrinking quickly even when its current level is low, and can be used to reveal which genes play key roles in determining a cell’s fate very early in the cell’s trajectory. The researchers tested their tools on blood cells, which have a large and branching differentiation tree. Together with blood cell expert Vijay Sankaran of Boston Children’s Hospital, the Dana-Farber Cancer Institute, Harvard Medical School, and the Broad Institute of MIT and Harvard, and Eric Lander of the Broad Institute, they found that dynamo accurately mapped blood cell differentiation and confirmed a recent finding that one type of blood cell, megakaryocytes, forms earlier than others. Dynamo also discovered the mechanism behind this early differentiation: the gene that drives megakaryocyte differentiation, FLI1, can self-activate, and because of this is present at relatively high levels early on in progenitor cells. This predisposes the progenitors to differentiate into megakaryocytes first.
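The derivative-based analyses described above can be sketched generically: the Jacobian entry J[i][j] = ∂f_i/∂x_j of the learned field indicates whether gene j activates (positive) or represses (negative) gene i, and by the chain rule the acceleration of expression is J(x)·f(x). The field below is hand-written for illustration, not dynamo's learned one, and the gene roles are invented:

```python
import numpy as np

# Illustrative two-gene field: gene A self-activates logistically and
# activates gene B, which decays linearly. Hand-written for the example.
def f(x):
    a, b = x
    return np.array([a * (1 - a),      # velocity of A
                     0.5 * a - b])     # velocity of B

def jacobian(f, x, eps=1e-6):
    """Finite-difference Jacobian J[i, j] = d f_i / d x_j."""
    x = np.asarray(x, dtype=float)
    J = np.zeros((len(x), len(x)))
    for j in range(len(x)):
        dx = np.zeros_like(x)
        dx[j] = eps
        J[:, j] = (f(x + dx) - f(x - dx)) / (2 * eps)
    return J

x = np.array([0.2, 0.1])
J = jacobian(f, x)      # J[1, 0] > 0: A activates B; J[1, 1] < 0: B decays
accel = J @ f(x)        # chain rule: acceleration = J(x) @ f(x)
# B's velocity is zero at this state (0.5 * 0.2 - 0.1 = 0), yet its
# acceleration is positive -- the kind of early signal the text describes,
# where a gene's expression is about to grow even though it is low now.
```

The off-diagonal signs of J encode the regulatory relationships (a self-activating gene like FLI1 would show a positive diagonal entry), while the acceleration flags genes poised to change before their levels or velocities make it obvious.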

The researchers hope that dynamo could not only help them understand how cells transition from one state to another, but also guide researchers in controlling this. To this end, dynamo includes tools to simulate how cells will change based on different manipulations, and a method to find the most efficient path from one cell state to another. These tools provide a powerful framework for researchers to predict how to optimally reprogram any cell type to another, a fundamental challenge in stem cell biology and regenerative medicine, as well as to generate hypotheses of how other genetic changes will alter cells’ fate. There are a variety of possible applications.

“If we devise a set of equations that can describe how genes within a cell regulate each other, we can computationally describe how to transform terminally differentiated cells into stem cells, or predict how a cancer cell may respond to various combinations of drugs that would be impractical to test experimentally,” Xing says.

Dynamo’s computational modeling can be used to predict the most likely path that a cell will follow when reprogramming one cell type to another, as well as the path that a cell will take after specific genetic manipulations. 

Dynamo moves beyond merely descriptive and statistical analyses of single cell sequencing data to derive a predictive theory of cell fate transitions. The dynamo toolset can provide deep insights into how cells change over time, hopefully making cells’ trajectories as predictable for researchers as the arc of a ball, and therefore also as easy to change as switching up a pitch.