From Poop to Protection: Satellite Discoveries Help Save Antarctic Penguins and Advance Wildlife Monitoring

AWARDEES: Christian Che-Castaldo, Heather Joan Lynch, Mathew Schwaller

FEDERAL FUNDING AGENCIES: National Aeronautics and Space Administration, National Science Foundation

Where there’s poop, there are penguins — that was the logic of a scientist who noticed that bright pink penguin poop appeared on satellite images. The idea set in motion a 40-year mission to track penguin populations via satellite imagery, leading to the discovery of 1.5 million previously undocumented Adélie penguins and a whole new way to track wildlife. Monitoring the rapidly warming Southern Ocean ecosystem gives us a glimpse into future climate change impacts, and satellite monitoring has allowed researchers to track animal populations in areas that are too difficult or dangerous to reach in person, saving federal dollars by making field research expeditions more targeted and efficient.

Mathew Schwaller

How to Pick up Poop

Mathew Schwaller always knew he wanted to be a scientist. A graduate student at the University of Michigan in the 1980s, Schwaller was studying remote sensing, which uses satellites to scan the earth’s landscape. He thought it was a wonderful way to look at the world, expanding the understanding of the earth’s surface all the way from space.

Schwaller’s dissertation advisor, Bill Benninghoff, was an Antarctic buff who kept pebbles from a penguin colony and a stuffed Adélie penguin in his office. It was Benninghoff who approached Schwaller about trying to study Adélie penguins from satellite images — something that no one had attempted before. Schwaller decided it was worth a shot.

Adélie penguins (Heather Lynch)

Standing less than two feet tall, Adélies sport the classic penguin tuxedo: black body and head with a white belly. Adélies form densely packed colonies to breed and raise their chicks, leaving behind bright pinkish-red poop that stands out on the Antarctic landscape, which is largely rocks, ice, and liquid water. The characteristic guano color comes from the birds’ diet of krill, tiny reddish aquatic crustaceans that look like shrimp. Since individual penguins were too small for satellites to capture, Schwaller decided to use the bright guano stains as a proxy for penguin colonies.

In 1983, Schwaller started collecting the measurements of Antarctic materials like penguins, rocks, and ice to simulate what the satellite could “see.” He developed a basic algorithm that distinguished the guano from its surroundings. This type of algorithm drew a circle around the penguin colonies — it “looked” at a pixel and asked, “Does this fit within the circle or not? Is it pink poop or not?”
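That pixel-by-pixel logic can be sketched in a few lines. The reflectance values, band choices, and threshold below are invented for illustration; Schwaller's actual algorithm worked from calibrated Landsat radiometry and field measurements.

```python
import numpy as np

# Hypothetical reflectance spectra in three bands (red, green, near-infrared).
# The reference signature and radius are illustrative stand-ins for the
# field-measured values Schwaller collected.
guano_reference = np.array([0.32, 0.18, 0.25])  # "pink poop" signature
radius = 0.05  # the spectral "circle": max distance still counted as guano

def is_guano(pixel, reference=guano_reference, r=radius):
    """Does this pixel fall inside the spectral circle drawn around guano?"""
    return np.linalg.norm(np.asarray(pixel, dtype=float) - reference) <= r

print(is_guano([0.31, 0.19, 0.26]))  # near the guano signature -> True
print(is_guano([0.80, 0.82, 0.85]))  # bright like snow and ice -> False
```

Classifying each pixel against a fixed spectral signature like this is what lets colony outlines emerge from raw imagery without any human looking at the picture.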

An Eye in the Sky

The satellite series known as Landsat launched in 1972 as a collaboration between NASA and the U.S. Geological Survey to track how the earth’s landscape changed over time. On the first satellite launch (Landsat 1), scientists used a radiometer, an instrument similar to a modern cell phone camera, to process frequencies of light coming through the lens and interpret those values into color images. That radiometer only had a resolution of 90 meters (about the size of a football field), but by Landsat 4, an updated instrument called the Thematic Mapper scanned more components of the spectrum of light with a resolution of 30 meters, three times the initial resolution.

Schwaller collects data (Mathew Schwaller)

In 1986, funded by NASA and the National Science Foundation, Schwaller took in-person radiometric measurements in Antarctica near the McMurdo Antarctic Research Station and laid out a plan for a continent-wide Adélie penguin survey. Unfortunately, he hit a roadblock. At that time, Landsat data was recorded onto physical data storage tapes. To cover the entire Antarctic continent would require a stack of tapes over 100 feet tall, totaling $600,000 in tape costs alone. Plus, researchers would have to load the tapes onto a computer to read them individually. “I mean, how did we ever get anything done?” asks Schwaller today.

So Schwaller put his idea on ice while working a NASA job with other research priorities. But his interest in penguins didn’t fade. In 2008, USGS committed to free and open access to Landsat data, and Schwaller saw his chance to do the type of research he had been envisioning for decades. Accessing the data was game-changing, says Schwaller: “If I had a soapbox to stand on, I would say, make that data free and release it to the people.”

Additionally, by 2008 computer processing power had increased dramatically — data that required use of a computer the size of a room now just required a laptop. The data from a stack of tapes now fit easily on a thumb drive.

Schwaller re-started his research on nights and weekends, and by 2010, he had a working algorithm to identify penguin colonies. “I was basically an amateur when it came to the penguins,” says Schwaller, “but the root of amateurism was that I really loved it.”

Heather Lynch

Assembling the Team

While Schwaller was working on his algorithm, a researcher named Heather Lynch was studying physics. But about three years into a Ph.D. in experimental physics at Harvard University, she started to hear about climate change. She attended one of Al Gore’s “Inconvenient Truth” PowerPoint presentations — long before it became the award-winning documentary film most people are familiar with — and though Lynch loved her physics research, she felt called to study environmental science.

Lynch transferred to the ecology and evolution research track and decided to use satellite remote sensing, drawing on the same computational and mathematical methods she already knew. After graduating, Lynch was introduced to the study of penguins in her postdoc lab.

Once Lynch started her research lab at Stony Brook University, she decided to combine her previous research areas and use satellite imagery to study penguin populations. One name that kept popping up in the scientific journal articles she read was Mathew Schwaller, who had laid the groundwork for satellite surveys of penguins. The papers were out there, but Schwaller had vanished from the penguin mapping scene. Lynch heard through the grapevine that he was still interested in penguin research, and the pair connected in 2014.

Christian Che-Castaldo

“It would be like meeting Shakespeare, or somebody that you've heard about that was just a name on paper,” says Lynch. “It was very exciting for me.”

Lynch also reached out to a colleague she’d met during her postdoc, Christian Che-Castaldo. He was an economist-turned-ecologist who had a penchant for both statistics and nature. Says Lynch, “Anybody who knows Chris would say that he is the most detail-oriented person on the face of the planet.” That was exactly the type of researcher Lynch was looking for.

The three scientists had complementary specialties — Schwaller and Che-Castaldo focused on algorithms and modeling the Landsat imagery, while Lynch had experience in the biology of penguins and the environment, as well as interpreting high-resolution satellite imagery.

The Known Knowns, Known Unknowns, and the Unknown Unknowns

Monitoring animal populations is hard, especially in a remote, harsh environment like Antarctica. Every year, Adélie penguins show up to their breeding site, where they hatch their chicks. Historically, that is when researchers could travel down to Antarctica in person and count the penguins. Given that there are hundreds of colony sites in Antarctica, and traveling to the Antarctic is expensive and sometimes dangerous, historical data on penguin populations is quite sparse. Lynch recalls a book in her office that contained all the data previously collected on penguin populations. It was a bit of a mess — there wasn’t even a standardized naming or location system.

Lynch envisioned wrenching all this penguin data into the 21st century via an online database. She wanted to document every Adélie penguin colony in Antarctica and its population over time. Supported by an NSF career award and a NASA grant, Lynch and her collaborators jumped in, and quickly their ambitions grew from documenting just the Adélie penguins to all the penguin species living in Antarctica.

A rather straightforward task quickly ballooned. The data took years to assemble because of how unstructured and disorganized the system for tracking penguins had been. Lynch compares it to an archaeological dig, cleaning dirt from bones with a toothbrush. But what resulted was the Mapping Application for Penguin Populations and Projected Dynamics, or MAPPPD, an invaluable resource for Antarctic researchers that is being continuously updated with new population counts.

MAPPPD website

Still, Lynch was manually processing high-resolution commercial satellite data, scanning for penguin colonies one image at a time. Because the interpretation was so challenging — Lynch describes it like trying to make sense of an ultrasound — it wasn’t easy to delegate among a larger number of analysts. This method worked, but it wasn’t efficient. If the researchers wanted to survey the entire Antarctic continent year after year, they needed to consider other options.  

Discovery in the Danger Islands

In the meantime, Schwaller and Lynch continued to lean into the Landsat satellite imagery that had started the whole effort. In 2014, Schwaller identified what appeared to be several huge colonies of Adélie penguins nesting on the Danger Islands in Antarctica. Initially fearing the algorithm had made a mistake, Lynch re-examined these same islands with higher-resolution imagery and realized that these really were penguin colonies, possibly among the largest known to exist. The islands, which sit in a cluster east of the northern tip of Antarctica and stretch across only about 9 miles of ocean, are aptly named. Field research on the islands is difficult, and the last in-person ground count of Adélies was in 1996. “It’s thrilling when you see something like that pop up,” Schwaller remembers.

Lynch counts Gentoo penguins (Casey Youngflesh)

With proof of concept in hand, the researchers launched an expedition to the Danger Islands region and beat the odds; they managed to get in on a small boat to survey each of the islands up close. By combining the satellite imagery with ground and drone-based surveys, the team added over 1.5 million Adélie penguins to the global census, more Adélie penguins than the rest of the Antarctic Peninsula region combined. The Danger Islands were not considered biologically important, but due to the discovery of the colonies and their ecological value, they are now protected as an Antarctic Specially Protected Area and have been incorporated into a proposed Marine Protected Area for the western Antarctic Peninsula.

More discoveries have followed in the years since. Researchers found Gentoo penguins moving into areas newly uncovered by melting glaciers, and satellites were used to complete a global Chinstrap penguin population assessment. Before satellites, “Entire populations of those birds could disappear, and you wouldn’t even know it,” worries Schwaller. But it’s one thing to know that monitoring is technically possible and an entirely different matter to build a system that can do it autonomously, at scale, and use the data in a way that is meaningful for conservation and policy. That’s where data science comes in.

Data Science to the Rescue

It was a radical idea, Che-Castaldo reflects, to try to monitor an animal’s population continuously on a global scale. But that was the goal: Build models that could depict fluctuations in penguin populations across Antarctica.

The piecemeal data was a hurdle once again. One site might have had two counts over the past 30 years, whereas another site had 25 counts over the same period. This made it impossible to reconstruct abundance across a larger region, because researchers never had counts from every site in the same year to sum into a regional total. Che-Castaldo and Lynch spent years trying to build a model that could uncover the relationship between environmental factors and the growth rate of penguin populations. They used a modeling approach called Bayesian hierarchical modeling, which allowed them to borrow information from data-rich sites to sharpen the estimates for data-poor ones. This approach provided an estimate of every site’s population in every year, allowing regional population estimates that were more useful for policymakers. The team built increasingly sophisticated statistical models, ultimately linking about 270 Adélie penguin colony sites. This was a big undertaking, and, explains Che-Castaldo, only in the last 20 to 25 years have computers become fast enough to create a model like this.
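The borrowing-of-strength idea behind hierarchical modeling can be illustrated with a toy partial-pooling estimate. The sites, counts, and prior weight below are invented for illustration; the actual model was a full Bayesian hierarchical model fit across roughly 270 colony sites.

```python
# Toy partial pooling: sparsely surveyed sites are shrunk toward the
# regional mean, while well-surveyed sites mostly keep their own average.
site_counts = {
    "site_A": [5200, 5100, 5350, 5000, 5250],  # surveyed five times
    "site_B": [800],                           # visited once in 30 years
}

# Regional mean over all counts from all sites.
regional_mean = sum(c for counts in site_counts.values() for c in counts) / \
                sum(len(counts) for counts in site_counts.values())

def pooled_estimate(counts, prior_mean, prior_weight=2.0):
    """Weighted average of a site's own data and the regional mean.

    prior_weight acts like a number of "pseudo-counts" borrowed from the
    region, so sites with little data lean more on the regional mean.
    """
    n = len(counts)
    site_mean = sum(counts) / n
    return (n * site_mean + prior_weight * prior_mean) / (n + prior_weight)

for site, counts in site_counts.items():
    print(site, round(pooled_estimate(counts, regional_mean)))
```

The well-surveyed site's estimate stays close to its own mean, while the once-visited site is pulled substantially toward the regional value, which is exactly the behavior that made sparse sites usable in regional totals.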

Building uncertainty into the model was another issue Che-Castaldo ran into. Uncertainty gives scientists a more robust picture of data quality. For example, if several researchers independently count the eggs at a penguin colony in good weather and all arrive at the same number, they can be very certain about the count. But if heavy snow limits visibility and buries the nests, the researchers are more likely to miscount, so they’re less certain the number is accurate.

The result was an enormously detailed picture of how Adélie penguin populations had changed over the last 40 years. When Che-Castaldo and the team published a scientific article on his model, Lynch tried to print out the supplementary materials for proofreading. “My computer said, ‘printing page 1 of 620,’” she recalls. “It was the Cadillac of models.”

Ultimately, the team knew the model would never be perfect because they could never fill in all the Swiss cheese-like holes of the data with the kind of accuracy they wanted. But the model was useful — if they had population counts from the years 2010 and 2013, the model could make a good guess at what happened in 2011 and 2012.

The team hasn’t stopped perfecting their work. In 2017, they ran a competition with a data science company to see who could build the best predictive model of penguin population dynamics. It was an eye-opening experience, partly because some of the best models were actually quite simple.

The team, led by Che-Castaldo and Schwaller, is also working on layering the entire 50-year history of Landsat images of the Antarctic, one on top of the other, with the goal of following population trends at each colony over time. And Che-Castaldo is working on new ways to build the uncertainty in observing penguins from space into the statistics of the population model itself, much like he did for the uncertainty in counting penguins from the ground.

Sky-High Science, Grounded Impacts

The Antarctic is a worthwhile ecosystem to keep an eye on — it’s a canary in a coal mine, of sorts. The climate is warming more rapidly at the poles, so monitoring its changes provides a glimpse into what could happen in other regions of the globe. Researchers have also detected bird flu on the outer islands of Antarctica, an important bellwether for public health researchers working to track potential zoonotic disease outbreaks.

The satellite population monitoring techniques established by Schwaller, Lynch, and Che-Castaldo have spread. High-resolution satellite imagery has developed to the point where one pixel is about the size of a piece of printer paper, so sometimes a proxy (like a guano stain) isn’t even necessary — the body of an animal is enough. Scientists are now tracking walruses, seals, African elephants, and even cows in pastures using this technology. Lynch was recently approached about collaborating on a research project tracking mammals in Kazakhstan near Chernobyl, where scientists cannot physically visit due to radiation levels.

Despite all the progress in satellites, computing, and computer vision, monitoring populations via satellite imagery isn’t going to replace field work for researchers anytime soon. Lynch views it as an opportunity to enhance research capacity, plus increase safety and efficiency. She estimates that her trip to survey Elephant Island in Antarctica, where there were about two dozen colonies, cost around $600,000. It required a dedicated research vessel for about 20 days, which runs $10,000 to $15,000 per day, plus personnel, equipment, and a litany of other costs. And that’s just one expedition. Lynch said in a previous interview:

“We’re far from a point where satellites are going to make field work irrelevant. Instead, it has made fieldwork more efficient. We can plan expeditions to target colonies of high interest, and satellites have made expeditions much safer because we know so much more about what to expect. There is a nice synergy between satellite-based surveys and field surveys that I expect will be the status quo for a long time.” (NASA)

The number of earth-observing satellites in operation will likely only grow in the future. The entire remote sensing field is expanding rapidly, transforming the way ecologists study the planet and all that lives on it. The idea hatched by Schwaller 40 years ago was ahead of its time, but its time has now come. And in the meantime, Lynch, Schwaller, and Che-Castaldo are keeping their eyes on the Antarctic for more discoveries.

Schwaller at a penguin colony (Mathew Schwaller)

 By Gwendolyn Bogard

How We Think: Brain-Inspired Models of Human Cognition Contribute to the Foundations of Today’s Artificial Intelligence

AWARDEES: Geoffrey Hinton, James L. McClelland, David E. Rumelhart

FEDERAL FUNDING AGENCIES: Department of Defense, National Institutes of Health, National Science Foundation

Decades before artificial intelligence emerged as the platform for innovation that it is today, David Rumelhart, James McClelland, and Geoffrey Hinton were exploring a new model to explain human cognition. Dissatisfied with the prevailing symbolic theory of cognition, David Rumelhart began to articulate the need for a new approach to modeling cognition in the mid-1970s, teaming up with McClelland with support from the National Science Foundation to create a model of human perception that employed a new set of foundational ideas. At around the same time, Don Norman, an early leader in the field of cognitive science, obtained funding from the Sloan Foundation to bring together an interdisciplinary group of junior scientists, including Hinton, with backgrounds in computer science, physics, and neuroscience. Rumelhart, McClelland, and Hinton led the development of the parallel distributed processing framework, also known as PDP, in the early 1980s, focusing on how networks of simple processing units, inspired by the properties of neurons in the brain, could give rise to human cognitive abilities. While many had dismissed the use of neural networks as a basis for building models of cognition in the 1960s and 1970s, the PDP group revived interest in the approach. Skeptics critiqued the new models too, and the approach had only limited success in enabling effective artificially intelligent systems until the 2010s, when massive increases in the amount of available data and computer power enabled Hinton and others to achieve breakthroughs leading to an explosion of new technological advancements and applications.

Understanding Neural Networks

Long before the parallel distributed processing framework could be realized, scientists were already exploring the biological structure of neurons in the brain and how interconnected networks of neurons might underlie human cognition and perception. Human brain cells, called neurons, form a complex, highly interconnected network of units that send electrical signals to each other, communicating via specialized connections called synapses. Researchers built upon these foundational observations to develop theories about how the strengths of connections in the brain could be adapted to create layered networks that could perform complicated tasks like recognizing objects. Modeling was also a critical component of demonstrating how a biological neural network could work, and researchers used computer simulations of adaptive neural networks, inspired by the neural circuitry of the brain, to explore these ideas.

In 1958, Frank Rosenblatt developed the Perceptron learning procedure, which he implemented on a 5-ton computer the size of a room. The Perceptron could be fed a series of cards with markings on the left or right. After 50 trials, the computer taught itself to distinguish cards marked on the left from cards marked on the right. Rosenblatt called the Perceptron “the first machine capable of having an original idea.” While Rosenblatt’s vision was prophetic, his model was limited. Although it relied upon several layers of neuron-like units, it had only one layer of connections that could “learn.” In 1969, Marvin Minsky and Seymour Papert published a book, Perceptrons, arguing that there were fundamental limitations to what tasks could be learned using a single layer of adaptive connections, and questioning whether neural networks could ever prove useful in carrying out truly intelligent computations.
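The single adjustable layer at the heart of the Perceptron can be sketched with a toy stand-in for Rosenblatt's card task. The input encoding, learning rate, and data below are invented for illustration; only the learning rule itself follows his procedure.

```python
# Each card is encoded as (left_mark, right_mark) with a target class.
cards = [((1, 0), -1),   # card marked on the left  -> class -1
         ((0, 1), +1)]   # card marked on the right -> class +1

w, b, lr = [0.0, 0.0], 0.0, 0.1  # one layer of adjustable weights

def predict(x):
    """Threshold unit: fire +1 if the weighted sum clears zero."""
    return 1 if w[0] * x[0] + w[1] * x[1] + b >= 0 else -1

for _ in range(50):  # echoing the ~50 trials Rosenblatt's machine needed
    for x, target in cards:
        if predict(x) != target:  # Perceptron rule: update only on mistakes
            w = [wi + lr * target * xi for wi, xi in zip(w, x)]
            b += lr * target

preds = [predict(x) for x, _ in cards]
print(preds)  # both cards correctly classified: [-1, 1]
```

Because only this one layer of connections adapts, the rule can separate classes that are linearly separable, which is exactly the limitation Minsky and Papert seized on.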

In part because of Minsky and Papert’s influence, the dominant approach in artificial intelligence and cognitive psychology from the 1950s through the 1970s focused on symbolic processing systems. In the symbolic approach, processes were often thought to be modular and compared to computer programs – sequential ordered lists of rules that, when applied to some symbolic input (for example, the present tense of a verb), would produce a desired output (for example, the verb’s past tense, via the rules: ‘if it ends in e, add d; otherwise, add ed’). The structure of the neural networks in the brain, on which everyone agreed the mind was implemented, was thought to be essentially irrelevant to understanding cognitive function. But by the late 1970s, it was becoming apparent that models built on these assumptions were failing to capture many basic and fundamental aspects of human cognition, spurring the search for new theoretical alternatives.
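The past-tense rule quoted above is easy to state as a tiny program, which also shows why such rules proved brittle:

```python
def past_tense(verb):
    """The symbolic rule: 'if it ends in e, add d; otherwise, add ed'."""
    return verb + "d" if verb.endswith("e") else verb + "ed"

print(past_tense("bake"))  # baked
print(past_tense("walk"))  # walked
print(past_tense("go"))    # "goed" -- the rule stumbles on irregular verbs
```

Irregular forms like "went" sit outside any such rule list, and handling them gracefully was one of the phenomena the later neural network models of language learning were built to capture.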

David E. Rumelhart

The PDP Research Group

In the late 1960s, after earning his Ph.D. at Stanford University, David Rumelhart, a mathematical psychologist by training, joined the psychology department at the University of California, San Diego. Rumelhart was interested in building explicit computational models of human cognition and explored models within the symbolic paradigm in the early 1970s. Soon, however, he found the classic symbolic framework for understanding human thought processes to be unsatisfactory. Starting in the mid-1970s, he wrote several papers aimed at addressing the shortcomings of the symbolic approach, beginning to explore alternatives that might overcome their limitations. Don Norman, the senior scientist in the lab at the time, recognized Rumelhart’s potential and helped support his efforts to pursue alternative approaches.

In 1974, James McClelland joined UCSD, and it was there that McClelland and Rumelhart discovered their mutual interest in going beyond the symbols and rules of the prevailing paradigm. Like Rumelhart, McClelland felt that the symbolic theory fell short of capturing many aspects of cognition, particularly the role of context in perception and language understanding. For example, context lets us see the very same set of three line segments as two different letters when those segments occur in different contexts, as shown in the image below. The symbolic models of the early days of cognitive psychology could not explain such findings, since each letter had to be identified individually before information about possible words could be accessed.

In the late 1970s, Rumelhart and McClelland each received grants from the National Science Foundation, allowing them to focus on capturing the influence of word context on the perception of letters. McClelland recalls, “[Dave and I] had coffee together and he asked me what I was working on. I told him I was trying to build a model inspired by his paper on context effects to capture findings I had explored as a graduate student. And he said, ‘That's funny, I'm trying to build a model of some of my own related findings!’ I think we were both excited to join forces, combining his experience as a modeler with my background as an experimental scientist. I was particularly impressed by Dave’s ability to take vague ideas and turn them into models that made them explicit, so that their implications could be fully explored.” Their joint work was an early instance of a neural network model, leading them both in many new directions.

James McClelland

Rumelhart, McClelland, and others began experimenting and publishing ideas to help strengthen the case for a new framework. McClelland’s early research looked at how people recognized words and represented categories, and how these processes might be modeled using neural networks. One of the early models Rumelhart developed was focused on modeling how people typed on a keyboard (they often prepare to type many successive letters simultaneously). “Ironically, he was a terrible typist,” recalls his son Peter, who often found himself to be an early “test subject” for Rumelhart’s research ideas.

Rumelhart and McClelland’s early modeling work led them to consider how learning occurs in a neural network. Rumelhart’s son Karl recalls, “He was curious to learn how people learn” and that was a driving force behind his research. “In a neural network, it is the connections between the neuron-like processing units that determine the computations that the system performs,” explains McClelland.  “We both became fascinated with understanding more about how the right connections become established – or in other words, how the brain learns.” Together Rumelhart and McClelland developed models showing how a neural network could begin to explain a child’s ability to learn the rules of language or to form representations capturing people’s memory for the members of a category.

Geoffrey Hinton

Meanwhile, in 1978, Rumelhart and McClelland were joined at UCSD by Geoffrey Hinton, a visiting scholar who brought new perspectives to the group after completing his Ph.D. in artificial intelligence. Hinton recalls, “After I wrote my thesis, I had dropped out of academia.” He explored other career pathways for a time but then an advertisement for a program at UCSD caught his eye. “I applied and they rejected me!” A while later, Hinton had accepted a different postdoc position, but within two hours of accepting that job, he got a call with an offer for the UCSD position. He quickly withdrew his acceptance of the other position in favor of the UCSD opportunity. “That was one of the best decisions I ever made,” he says.

Hinton’s visiting scholar position ended in 1980, but he returned for 6 months in 1982, which ended up being an intensive period of mapping out the plan for a book that would capture the key ideas for a new framework. “When Geoff came back, we decided on the first three chapters, and each of us wrote one of them,” McClelland says. They named this new approach the “parallel distributed processing” framework and started the PDP Research Group. Several others, including physicists, neurobiologists, and mathematicians, also joined the group. Francis Crick, the Nobel-Prize-winning co-discoverer of the structure of DNA, had become interested in how the brain supports visual perception, and he also participated in the group’s meetings.

The Parallel Distributed Processing Framework

The parallel distributed processing framework, or PDP for short, describes how cognitive processes can occur within a network of simple, interconnected, neuron-like processing units. The framework has three basic principles: (1) processing is distributed across the units within the network; (2) learning occurs through changes in the strengths of the connections between the units, and these changes depend on the propagation of signals between  the units; and (3) memories are not stored explicitly like files in a computer but are reconstructions of past experiences, using the learned connection strengths between the neuron-like units. “One way to think of a neural network,” suggests McClelland, “is to think of it as a kind of community, where collective outcomes depend on everyone working together in communication with each other. One key breakthrough was to make the units and the signals they can send to each other as simple as possible. The intelligence cannot be found in any of the parts – it emerges from the interactions of the components with each other.”

Parallel Distributed Processing Books, Volumes 1 & 2

The mid to late 1980s was a pivotal time for development of the PDP framework. The team of researchers received funding support from the Department of Defense Office of Naval Research and from a private foundation to continue this work. In 1986, McClelland and Rumelhart published the two-volume “Parallel Distributed Processing” book, which would become a central text in the field describing a mathematically explicit, algorithmic formalization of the PDP framework of cognition. These two volumes spurred much of the cognitive science community to develop, explore, and test new computational models of phenomena in learning, memory, language, and cognitive development. Introductory chapters laid out the broad reasons for interest in these kinds of models as ways of capturing human cognitive abilities. Rumelhart laid out the framework and described what a model neuron looks like, the mathematical equations for how inputs produce activity in these neurons, and basic ideas about connections. His sons, Karl and Peter, recall that their father rarely would elect to put his name first on a publication — "it was a testament to the team culture he believed in cultivating, and which flourished in the group.” But Rumelhart did place his name first on several key chapters and assumed the role of first author overall, a sign of the importance he gave to this work. “We all agreed that this was fully justified,” says McClelland, “given the depth of Dave’s insights and the seminal thinking he brought to the effort to understand both information processing and learning in neural networks.”

The rest of the book helped frame future directions or elaborations of these foundational ideas and the applications of them. Hinton, who was originally going to be the third editor of the book, had pivoted in 1985 to pursuing another model he believed was a better theory for how the brain worked. “I said the future is actually in Boltzmann machines so I'm dropping out of editing these books. That was a big mistake,” Hinton says. While not an editor of the book, Hinton remained a significant influence on the book’s overall development and contributed to several conceptual ideas covered in it.

Origins of the Learning Algorithm That Powers Today’s AI Systems

As the PDP framework was being developed, Rumelhart and Hinton were both interested in addressing the limitations of the existing models for learning, or adjusting connections, in neural networks. One of the ideas they explored was backpropagation, a learning algorithm that makes it possible to train connections across the layers of a deep (i.e., multi-layer) neural network, overcoming the one-layer limitation of Rosenblatt’s Perceptron. Ideas related to backpropagation were explored by several researchers, but it was Rumelhart, together with Hinton and mathematician Ron Williams, who systematically developed the algorithm and applied it to address many of the challenges Minsky and Papert had posed for neural networks in Perceptrons. The term was introduced to neural network research in a landmark 1986 Nature paper, “Learning Representations by Back-propagating Errors,” co-authored by Rumelhart, Hinton, and Williams. The paper detailed how backpropagation worked and how it could be used to efficiently train multi-layer neural networks by adjusting the connections between neurons to minimize errors.
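A minimal sketch of backpropagation can be shown on XOR, a task Minsky and Papert demonstrated a single layer of adaptive connections cannot learn. The network size, learning rate, and training loop below are illustrative choices, not the 1986 paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

W1, b1 = rng.normal(0, 1, (2, 4)), np.zeros(4)   # input -> hidden connections
W2, b2 = rng.normal(0, 1, (4, 1)), np.zeros(1)   # hidden -> output connections
sig = lambda z: 1.0 / (1.0 + np.exp(-z))

def forward():
    h = sig(X @ W1 + b1)
    return h, sig(h @ W2 + b2)

mse_before = float(((forward()[1] - y) ** 2).mean())

for _ in range(5000):
    h, out = forward()
    err = out - y                        # error at the output layer...
    d_out = err * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)   # ...propagated back through layers
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(0)

mse_after = float(((forward()[1] - y) ** 2).mean())
print(f"mean squared error: {mse_before:.3f} -> {mse_after:.3f}")
```

The gradient of the error is pushed backward through both layers of connections, so even the hidden layer that Rosenblatt's procedure could not train gets adjusted toward a solution.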

The co-authors thought backpropagation would emerge as an effective application of PDP and neural networks. “We thought it would solve everything. And we were a bit puzzled about why it didn’t solve everything,” Hinton recalls. Rumelhart had even been advised to patent the algorithm to keep its use proprietary. While McClelland and his collaborators relied on backpropagation to model many acquired human abilities, such as reading, and the gradual emergence of children’s understanding of the physical and natural world during the first decade of life, backpropagation failed to take off as an effective model for artificial intelligence. The computers of the 1980s lacked the computational power needed to handle the extensive calculations required by large networks, and early, small networks could only be trained on small datasets, which turned out to be insufficient for realistic applications. However, as successive generations of computers became faster and more powerful, backpropagation would have an opportunity to re-emerge thanks to the groundwork laid by the Nature paper.

Decades Later…A Breakthrough

By the late 1980s, the initial PDP research group members had physically dispersed – Rumelhart joined the faculty at Stanford University, McClelland at Carnegie Mellon University, and Hinton at the University of Toronto – but they each maintained some level of collaboration in the field while also exploring their own research interests.

Given backpropagation’s early lack of success in building artificial neural networks, Hinton and others moved on to exploring alternative algorithms that might overcome some of its apparent limitations. But as computational power increased and larger data sets became available early in the new millennium, Hinton revisited backpropagation. In 2012, he and two of his students, using backpropagation on Nvidia’s highly parallelized processors called graphics processing units, trained a large neural network on a large amount of data and achieved a big jump in accuracy on the computer vision problem of image classification. “[Other] AI systems got 25 percent errors, and we got 15 percent errors. It was a huge improvement,” Hinton recalls.

By the mid-2010s, AI research was surging, driven by the success of ever-larger scale AI systems relying on backpropagation in artificial neural networks and trained on larger data sets. The essential idea underlying backpropagation lies at the heart of today’s AI systems, including systems that can recognize and synthesize images, understand and produce speech, and are beginning to capture some aspects of our most cherished human understanding and reasoning abilities.

Long Lasting Impacts

The work done by Rumelhart, McClelland, and Hinton began with a simple curiosity: to find an alternative framework that could more completely explain human cognitive functions in the brain. Yet this basic research laid the groundwork for a revolution in machine learning and artificial intelligence. While modeling and tinkering with applications of the framework was always part of the work, they couldn’t have foreseen their research underpinning the technologies being developed by trillion-dollar companies today. The PDP framework has profoundly influenced artificial intelligence by demonstrating how neural networks can learn complex patterns and representations through distributed processing and error correction, paving the way for modern deep learning techniques and improving our understanding of how learning and cognition can be modeled computationally. The PDP book published in 1986 has been cited over 30,000 times and is often regarded as “the bible” of neural network-based computational modeling. The book’s core proposals are now standard starting assumptions in many domains of research.

In 1998, David Rumelhart retired from Stanford when the symptoms of Pick’s disease, an Alzheimer’s-like disorder, became disabling. He passed away from complications of the disease in 2011, but the impact of his work – from the people he collaborated with and trained to the early technological breakthroughs in artificial intelligence – lives on today. In 2001, the David E. Rumelhart Prize was conceived by Robert J. Glushko, a former student of Rumelhart’s, to honor outstanding contributions to the theoretical foundations of human cognition. The first recipient was Geoffrey Hinton in 2001 and, after chairing the selection committee for several years, James McClelland received the prize in 2010. The Rumelhart Prize honors Rumelhart, the prize recipients, and the broader community of cognitive scientists striving to develop a scientific understanding of minds and mental abilities, drawing insights from a wide range of academic disciplines.

The PDP framework continues to provide a foundation for models of human cognitive abilities and of the effects of brain disorders such as Pick’s disease on these abilities. The framework helped form the basis for the modern computational approaches that underpin technologies such as ChatGPT and Bing and that exceed human abilities in cognitively demanding games like chess and Go. In space, NASA has used artificial neural networks to program the Mars rover so it can learn and adapt to unknown terrain on its own. The framework is beginning to be used to develop systems that can help humans learn and may even help delay the progression of cognitive decline in neural disorders like dementia.

While the impacts of this research have been profound, we shouldn’t think that the resulting technologies fully capture all aspects of human intelligence or solve all of society’s problems. The machines we have today don’t replicate the full range of human cognitive abilities, and one concern is that backpropagation may not fully capture the actual learning algorithms used by the brain. “One key limitation,” says McClelland, “is that people get by with far less training data than AI systems trained with backpropagation.”

While it is not perfect, backpropagation has helped us understand a great deal about human behavior and can continue to advance our ability to build machines with truly human-like intelligence. Indeed, backpropagation may have the potential to allow large-scale artificial systems to learn more than a human ever could, eventually outsmarting humans with potentially profound or even catastrophic consequences. Hinton and McClelland both agree that as a society we should oversee AI technologies to limit potential negative outcomes. At the same time, the exploration of new ideas for capturing intelligence should continue to receive support from governmental and non-profit organizations. Our understanding of intelligence remains incomplete, and research will continue to unlock new possibilities and new forms of understanding for the next breakthrough.

By Meredith Asbury

It’s a Family Affair: The Resurgence of the Red-Cockaded Woodpecker

AWARDEE: Jeff Walters

FEDERAL FUNDING AGENCIES: Department of Defense, National Science Foundation, USDA Forest Service, U.S. Fish and Wildlife Service

Baby birds are known for leaving the nest – just not the ones in Jeff Walters’ world.

That’s because Walters is an expert in the red-cockaded woodpecker, an endangered species in which the babies often stick close to home after they’re grown. Walters’ research on the birds backed up his hypothesis that the homebody birds had an evolutionary advantage. What he didn’t expect at the start was that his work would lead to a whole new way to protect the birds and help grow their numbers.

Red-cockaded woodpecker (522 Productions)

Quality Through Quantity

Though Walters is now known for conservation work, it wasn’t always in his plans. He grew up as a nature-loving kid in Ohio, Illinois, and West Virginia, and earned his Ph.D. in biology at the University of Chicago in 1980. “My passion was to watch animals and to quantify things,” he says. Walters recalls this was around the time the field of behavioral ecology began to shift from largely qualitative to more quantitative research, which suited his interests perfectly. Bird behavior, in particular, fascinated him, and after a dashed attempt at studying sanderlings, he developed an interest in red-cockaded woodpeckers because of the unusual way they cared for their offspring: a practice called “cooperative breeding.”

Cooperative breeding is a rare practice in which animals breed in family groups – meaning that, in addition to the male and female parents, some of the young stay at home with the family and help raise their younger siblings, as opposed to leaving the nest and starting their own families, as most birds do.

“There were a lot of questions about altruism in animals,” Walters says about his early years in the field. “Are animals sacrificing their own reproduction to help other individuals? It was about trying to explain something that didn’t seem to follow natural selection. Why are they staying at home, and why are they doing the helping?”

No Place Like Home

Walters took a position at North Carolina State University, where he developed an interest in the red-cockaded woodpeckers, which are primarily found in the southeastern United States. Their natural habitat spans millions of acres of public and private land, but at that time, many biologists who followed the species were worried about its survival. The red-cockaded woodpecker was among the first birds protected under the Endangered Species Act, which became law in 1973.

Walters’ first National Science Foundation grant involved exploring the red-cockaded woodpeckers’ social system. Specifically, he was interested in measuring how many birds left their families versus how many stayed, and what happened to them later in their lives. It was through that first grant that he observed how long it takes the woodpeckers to create a tree cavity to form their nests. He also learned that the birds who stayed home actually had better outcomes than those who left. The question was why?

Jeff Walters

One thing that struck Walters during those initial studies was that the birds did better when they took the time to find a good breeding opportunity by waiting at home. What made some of the best homes for new offspring were actually old cavities created by previous red-cockaded woodpeckers. Creating a new cavity from scratch could take the birds several years, so finding existing ones meant the potential for many more offspring over their life spans. This challenge was among the many factors that likely contributed to the red-cockaded woodpeckers’ decline as a species. Cavities could only be made in old trees with lots of heartwood in the middle. These same old trees that the birds favored, including the longleaf pine, were being cut down, and whole territories were being lost, Walters recalls. He began to think of the population as being defined by family groups, rather than individual birds. And he came up with an idea.

From the Forest, a New Plan

Walters thought that if land managers created cavities for the birds, it would help the woodpeckers immensely by making it quicker and more efficient for them to establish new territories and family groups. (This approach, for the record, does not cause serious damage to the trees.) He hoped to test this theory working directly with forest managers, but he was new to the field, and the feedback he received was far from encouraging. They told him it was the “dumbest idea they had ever heard,” Walters recollects.

Luckily, his basic research grants were more successful. Walters was able to apply to the NSF to further investigate the woodpeckers’ cooperative breeding patterns, specifically whether it was the presence of cavities that helped make the territories high quality. “When they were waiting at home in a familiar territory protected by their family, we knew that they had a better rate of survival. The idea was to make new cavities, and if we were right, we would get new family groups. We tested 20 new sites with artificial cavities and 20 control sites, and we got new groups in 19 out of 20. At the time, it was the largest growth of a red-cockaded woodpecker population ever seen.”

Walters published his theory in a 1991 paper proposing a new management paradigm for red-cockaded woodpeckers. It involved “cavity management”: making cavities and replacing lost ones in existing woodpecker territories. (There are other elements to protecting the birds as well, particularly controlled burns conducted by forest managers.) “People were killing snakes and other predators, focusing on the enemies of the species, and we felt they didn’t need to do that. The problem was all about the cavities,” he says. It was the NSF study and the subsequent paper that got Walters involved in conservation. He began advising federal forestry officials in the southern region on the relevant science. The correlation between woodpecker population increases and the creation and protection of cavities was “very clear,” Walters says. “There aren’t many stories like this in behavioral ecology. There’s a direct path from basic research findings to recovery.”

Still, a lot had to happen to translate Walters’ research into practice. In the 1990s, there was so much conflict about how to manage the red-cockaded woodpeckers that stakeholders started calling it the “woodpecker wars.” It took more than a decade for Walters’ plan to become a widespread practice, but in 2003, the U.S. Fish & Wildlife Service finalized a recovery plan that incorporated cavity management. USFWS has also orchestrated “safe harbor” agreements with private landowners to incentivize species protection.

Growing Data Sets and Building Partnerships

One of Walters’ former students, Caren Cooper, remembers Walters as an inspiring mentor with a gift for keeping track of things. “He has decades of data sets on the birds that are pristine,” she says. Cooper met Walters at a science talk when she was a high school student and ended up becoming his Ph.D. student; she is now a forestry and environmental resources professor at NC State. “What I find interesting about Jeff’s story is the number of different agencies and stakeholders involved, which Jeff has navigated very well,” she adds. “He’s not an extrovert, yet it requires a lot of people skills to work with the USFWS, all the different state agencies, all the different military branches and agencies, and also the neighbors, which could be individuals with large holdings or timber companies or nature conservancies.”

Indeed, perhaps unexpectedly, much of Walters’ funding following the NSF grants for basic research into the woodpeckers’ breeding habits has come from the Department of Defense. Walters has worked with military bases around the southeastern United States on their woodpecker management efforts, including the Camp Lejeune Marine Base in North Carolina, the Fort Liberty Army Base in North Carolina, and the Eglin Air Force Base in Florida. He has also received support from the USDA Forest Service and USFWS. Beyond the training exercises that are a hallmark of military bases, the bases sometimes work with USFWS to help protect the surrounding lands where those exercises take place.

Jeff Walters

“Something That has Real Implications”

A recent analysis that Walters co-developed examined the species over a roughly 20-year period and found widespread increases in red-cockaded woodpecker populations. The new practices have been so striking that the federal government has proposed downlisting the woodpeckers from “endangered” to “threatened.” According to The Nature Conservancy, the birds “numbered less than 10,000 individuals in the mid-1990s, and today, there are between 18,000 and 19,000 individuals.”

“This is an example where basic research, without any applied goal, can turn out to produce a paradigm shift on the applied side,” says Walters. “The paradigm shifts that come out of basic research: you can’t predict them. You never know what you’re going to get, but you could learn something that has real applications.”

Walters, now at Virginia Tech, has published more than 150 peer-reviewed papers and book chapters related to conservation, behavioral ecology, and population biology. He has traveled the world studying various species of birds, but the red-cockaded woodpecker remains close to his heart. In recognition of his work, he has won several awards, including the American Ornithologists’ Union’s highest research honor, the Elliott Coues Award, in 2002, for his work with the woodpeckers. The award citation stated, “To no small extent, whatever success is achieved in the conservation of this remarkable species will be due to Walters’ insightful and wide-ranging work.”

By Erin Heath

Sketch to Concept: Unraveling the Invention of Nanopore Sequencing

AWARDEES: David Deamer, Mark Akeson, and Daniel Branton

FEDERAL FUNDING AGENCIES: National Institutes of Health, National Science Foundation, Defense Advanced Research Projects Agency, National Aeronautics and Space Administration

David Deamer was driving along a forested road in Oregon in 1989, thinking about his research at the University of California, Davis. Deamer was investigating how to improve the understanding of DNA, the key chemical in the genes of all living organisms. DNA is a long, flexible string-like material formed by linking together four smaller chemicals, called bases, that are abbreviated as A, C, G, and T. These bases are chemically attached to each other in many different sequences such as … CGATTCACCCATATG... Determining the order, or sequence, of bases in DNA is enormously important: just as the story in a novel is carried by the sequence in which the 26 letters of the English alphabet appear on a page, the sequence in which the A, C, G, and T bases are linked to each other carries the blueprint, or genes, of all living organisms.

Deamer recalled that the four bases of DNA are all slightly different in size, and that realization unlocked a novel idea about how he could determine the sequence of bases in DNA. He immediately pulled to the side of the road and scribbled down the original concept that would eventually become nanopore sequencing, a method that has enabled some of the most significant advances in the field of genomics. But the road to success took years of research, persistence, and a bit of serendipitous fortune to prove many skeptics wrong.

David Deamer’s original notebook sketch, 1989 (David Deamer)

Deamer wrote in his notebook that day, “DNA will be driven through a small channel…the channel will be carrying a current. As each base passes through, a change in the current will occur. Because the bases are of different sizes, the current change will be proportional, thereby providing an indication of which base it is.” He then continued his drive. For two years, the sketch was just an idea scribbled in his notebook.
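
The logic of that notebook entry can be illustrated with a toy decoder. In the Python sketch below, the current levels are invented for illustration (real nanopore signals are noisier and depend on several bases at once, not single-base sizes): each base blocks the pore's ionic current by a different amount, so a trace of current readings can be matched back to a base sequence.

```python
# Hypothetical mean current (in picoamps) while each base occupies the pore.
# These numbers are illustrative only, not real measurements.
BLOCKADE_LEVELS = {"A": 52.0, "C": 48.0, "G": 55.0, "T": 45.0}

def simulate_trace(sequence):
    """Current readings produced as a DNA strand passes through the pore."""
    return [BLOCKADE_LEVELS[base] for base in sequence]

def decode_trace(trace):
    """Recover the sequence by matching each reading to the nearest base level."""
    return "".join(
        min(BLOCKADE_LEVELS, key=lambda b: abs(BLOCKADE_LEVELS[b] - reading))
        for reading in trace
    )

trace = simulate_trace("CGATT")
print(decode_trace(trace))  # -> CGATT
```

This is exactly the proportionality Deamer sketched: because each base produces a characteristically different current change, the current trace alone indicates which base is in the channel.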

Federal Investment in DNA Sequencing Technology

In the mid-to-late 1980s, discussions were just starting to ramp up about the feasibility of sequencing human DNA. Prior to this time period, DNA sequencing was quite limited and depended on slow, tedious methodologies. By 1990, the Human Genome Project, a large international collaborative project to generate the sequence of all 3.2 billion letters in the human genome, was launched.

Early on, the National Center for Human Genome Research (NCHGR) at the National Institutes of Health (NIH), which in 1997 became the National Human Genome Research Institute, was focused on both sequencing the human genome and advancing the technology involved with sequencing. The latter goal meant expanding granting practices to identify and fund high-risk but high-payoff proposals. Nanopore sequencing was one of these high-risk but high-payoff proposals. Before NCHGR was created, this type of granting practice was entirely outside the norm at NIH.

By 2001, two years before the Human Genome Project reached completion, the federal government had invested $500 million in sequencing the first human genome, and there was great interest in reducing that cost. The Advanced Sequencing Technology Program (ASTP) at NCHGR focused on funding proposals to improve sequencing technology, work that would reduce sequencing costs to $1,000 per human genome. Providing funding to develop a dramatic improvement in technology was inherently risky but ultimately achieved a very high payoff. The team in this story received early funding from the National Science Foundation and DARPA as well as continued funding from NASA and NIH’s sequencing programs.

Daniel Branton

How Two Team Members Met

In the 1960s, Daniel Branton was an assistant professor in the Botany department at UC Berkeley. Deamer was a UC Berkeley postdoc working one floor above Branton’s lab. At the time, Branton was exploring a new technique called freeze-fracturing to understand how proteins interact with cell membranes. Most cell biologists did not accept Branton’s interpretation of the microscope images, which he believed showed that the freeze-fracturing process splits biological membranes into two sheets. Deamer was intrigued by Branton’s interpretation and joined him in the lab. The results of this work were published as a cover article in Science and significantly advanced our understanding of the structures of cell membranes. It was also the beginning of a lasting friendship that later initiated their research on nanopore sequencing.

David Deamer

Fast forward to 1991. Branton, now a member of the biology faculty at Harvard University, was invited by Deamer to visit UC Davis and give a series of lectures. At the time of his visit, Branton had been contemplating the current methods for DNA sequencing. He thought to himself, “There must be a better way of doing this.” Could the individual bases be identified by measuring the force required to pull each base in a single strand of DNA through the interface between water and air? During their discussions, Deamer shared his idea, scribbled in a notebook from a couple of years earlier, which outlined using an electrical voltage to pull a strand of DNA through a small pore. Branton thought Deamer’s idea was much better than his own and they began collaborating on the nanopore sequencing concept.

Illustration of a nanopore

Serendipity and Skepticism

After they decided to explore what Deamer had fortuitously sketched, they moved forward to identify the best channel, or “pore,” through which to move the DNA. They identified a protein called haemolysin as possibly having a pore wide enough to accommodate a single strand of DNA. In 1993, Deamer met with John Kasianowicz at the National Institute of Standards and Technology (NIST) to test a haemolysin nanopore. “Nanopore” refers to the size of the pore, which is approximately 2 nanometers in diameter (for comparison, a sheet of paper is about 100,000 nanometers thick!). In Deamer’s notebook, he had written, “the thickness of the membrane must be very thin…the channel must be of the dimensions of DNA in cross-section, approx. 1-2 nanometers.” The early experiments Deamer and Kasianowicz performed confirmed Deamer’s sketched idea — that a voltage applied across the pore could draw a single strand of DNA through the channel. This finding was a pivotal moment in their work.

The National Science Foundation (NSF) awarded the team $50,000 in 1994 to help sustain the team’s experiments for a year, and then it was time to publish what they had learned. But both Nature and Science rejected their submission. “Nobody could believe it,” Deamer recalls. The scientific community was still skeptical of their findings. Thanks in part to Branton being a member of the National Academy of Sciences, their paper was accepted and published in the Proceedings of the National Academy of Sciences (PNAS) in 1996. The paper would turn out to be a seminal, decisive contribution cited thousands of times.

Concurrently in 1994, Deamer had moved from UC Davis to UC Santa Cruz, and around the same time, he and Branton decided to pursue a patent for their idea. Having a patent would be important if their nanopore idea was going to be transformed into a commercial product. Translating a research idea into something usable for scientists, clinics, or hospitals is a huge undertaking, requiring a large and risky financial commitment from outside investors. Their work was not ready for that next step quite yet, but having the patent would prepare them for what they hoped would be future opportunities to catalyze their work.

Deamer and Branton submitted their idea to Harvard’s patent office and quickly learned another Harvard faculty member, George Church, a well-known geneticist, had a similar idea that involved driving double-stranded DNA through a nanoscopic channel. The three professors – Branton, Church, and Deamer – decided they would all be listed as inventors. This ended up being very fortuitous because Harvard’s patent office expressed skepticism about pursuing a patent for such a risky idea as nanopore sequencing. Church gave their idea more credibility by arguing that if three professors at different institutions had a similar idea, the idea must surely have some merit. Church’s argument helped convince the patent office to proceed with an application.

Mark Akeson

Nanopore sequencing continued gaining momentum: the team received NIH and DARPA grants in 1997 and was awarded patent rights in 1998. And in 1996, the PNAS paper convinced a junior colleague, Mark Akeson, to join the collaboration. Akeson, who had previously trained with Deamer at UC Davis, was a post-doctoral fellow at NIH in Bethesda, Maryland. When Deamer initiated one of his early experiments with Kasianowicz, he visited Akeson, who recalls driving Deamer to NIST thinking (along with many others at the time), “This idea is nuts!” But once Akeson read the pivotal PNAS paper, he decided to leave his stable position at NIH and move across the country to rejoin Deamer. It was a risky move, but for Akeson the chance to work on a revolutionary sequencing concept won the day.

By 1997, Akeson, Branton, and Deamer (along with collaborators Kasianowicz and staff scientist Eric Brandin at Harvard University) took their work to the next level. Now that they had a working pore, they needed to test if it could distinguish between the A, G, T, and C bases. If it couldn’t make any distinction whatsoever, the nanopore sequencing concept might never work. In 1999, research led by Akeson demonstrated that the nanopore could distinguish between stretches of C versus A bases within individual RNA molecules. Akeson and Deamer, along with graduate students Wenonah Vercoutere and Stephen Winters-Hilt, subsequently published a paper in 2001 that showed individual A-T or G-C base pairs could be distinguished at the end of single DNA molecules suspended in a nanopore. This level of nanoscopic precision “finally convinced a bunch of skeptical people that we knew what we were doing,” Deamer recalls.

Controlling the DNA in the Nanopore Sensor

The collaborators had shown that long segments of two different bases could be distinguished as they passed through a protein nanopore, but sequencing would require distinguishing between individual bases, not segments, within each DNA strand. This was not possible in 1999 because strands of DNA and RNA (driven by a strong electric field) rocketed through the nanopore at roughly one microsecond per base (one-millionth of a second) – too fast to read single bases. One possible solution was to use a processive enzyme (a ‘molecular motor’) to reduce the translocation speed by approximately a thousand-fold. In the mid-2000s, Akeson’s group at UC Santa Cruz and a group led by Hagan Bayley in Oxford, UK, found that several DNA ‘motors’ could bind and slow DNA movement through nanopores, but only for one or two bases before slipping away. According to Akeson, “This result was frustrating and concerning, because the strength of the electric force acting on DNA in the nanopore was unknown, so it was a real possibility that no molecular motor could bind DNA strongly enough to allow long strings of bases to be sequenced.”

Fortunately, their perseverance paid off. In 2010, the UC Santa Cruz group was able to secure a molecular motor that translocated DNA strands back and forth through the alpha-haemolysin nanopore one base at a time. More than 500 DNA molecules in series were moved one-by-one in precise order through the nanopore under robotic control.

Subsequent Work and Continuing Impact

In 2007, Deamer, Akeson, Branton, and Branton’s Harvard collaborator, physicist Jene Golovchenko, were approached by Oxford Nanopore Technologies (ONT) to license patents related to nanopore sequencing with the hope of commercializing the technology. All four gave permission, and within seven years, ONT developed and brought to market the MinION, a $1,000 pocket-sized sequencing device. Unlike traditional sequencing approaches, which need time-consuming computer analysis after sequencing is completed, nanopore sequencing produces direct, real-time sequence reads as soon as the sequencing process begins. Furthermore, nanopore sequencing can read longer strands of DNA than most other devices and has been used virtually anywhere, including the top of an Arctic iceberg, a remote settlement in Africa where electrical power is unreliable, and on board the International Space Station.

Euan Ashley and John Gorzynski performing nanopore sequencing at Stanford University Hospital, 2022

Astronaut Kate Rubins sequencing DNA with a MinION on the International Space Station (NASA)

In 2022, this combination of speed and read length helped deliver human genomes for intensive care patients at Stanford University Hospital. One of the sequencing tasks set a speed record of five hours for a human genome, and another was credited for saving a young boy’s life by identifying a genetic mutation causing heart disease.

The work to develop nanopore sequencing into what it is today took over 30 years of commitment to basic scientific understanding, which eventually led to a commercially viable product. Nanopore sequencing has been widely used for pathogen analysis in outbreak surveillance of infectious diseases such as tuberculosis, Ebola, Zika, and dengue fever, among others. Many of the COVID-19 genomes sequenced globally used nanopore sequencing, making it crucial in the fight to end the pandemic. Additional applications of the technology continue to be unearthed, and it was all possible thanks to an idea scribbled in a notebook.

By Meredith Asbury

Raising Chickens, Elevating Scientists

AWARDEE: Paul Siegel

FEDERAL FUNDING AGENCIES: Department of Agriculture, National Science Foundation

Imagine a single science experiment – run by a single scientist – for 65 years and counting. Paul Siegel doesn’t have to imagine it. He lives it. Siegel, 90, still makes nearly daily visits to the Virginia Tech lab where he began his seminal work with chickens in 1957. That’s when Siegel began breeding two lines of chickens, one high-weight and one low-weight; those lines continue today, along with another longtime set of lines related to immunity. This work is well-known to poultry scientists throughout the world and serves as a foundation for modern methods of raising and breeding chickens, a major global food source. The impact of Siegel’s work on humans, rather than chickens, is perhaps his most profound contribution: he has trained and mentored hundreds of students throughout his distinguished career.

Paul Siegel with his high- and low-weight chickens (John McCormick, Virginia Tech)

The Chicken Guy

Siegel, a self-proclaimed “chicken guy,” grew up on a 32-acre farm in Connecticut, where his family grew tobacco and raised chickens. He was fascinated by the chickens and began observing their breeding – seeing, for example, what feather colors emerged from the offspring of different pairings. In 1949, he was recognized in a U.S. Department of Agriculture (USDA) “Chicken of Tomorrow” contest for “outstanding achievement in breeding and development of superior meat-type chickens.”

When he was 10 years old, he met a poultry geneticist at the University of Connecticut, Dr. Walter Collins, whose job sounded a lot like what Siegel was doing for fun on the farm. Siegel was hooked. “I asked him how to do it as a career, and he said you need a Ph.D. I didn’t even know what a Ph.D. was,” he recalls.

That changed after he studied first at Collins’ university and later at Kansas State University, where he earned his doctorate before going on the job market. His first – and last – job interview was at Virginia Tech. He joined the faculty in 1957 and has been coming to campus ever since.

Siegel as a young researcher

Diverging Lines

Shortly after beginning his role at Virginia Tech, Siegel initiated a pedigreed growth selection study, selecting birds for either high or low body weight when the birds were eight weeks of age. (Meat-type chickens are marketed at a fixed weight, Siegel says. At the time, they were marketed at twelve weeks of age, and eight weeks seemed absurd. Today, it is six weeks or less.) With some restrictions, birds with the highest weights were used as parents of the next generation for the high-weight selected line, whereas those with the lowest weights were used for the low-weight selected line. The biology of the lines is remarkable – today the average weight of the birds in the high-weight selected line is nearly twelve times the average weight of those in the low-weight selected line. Even more remarkable is that Siegel has selectively bred and maintained the lines continuously for more than 65 generations (years).

Several years later, he initiated another long-running experiment, this time involving a chicken’s immune response, where he separated chickens into high- and low-immune function lines.

Early support for the lines came from Hatch Act funds, a federal agricultural funding program that also involves state matching funds, and USDA regional research funds, now called USDA Multistate Funds, which have offered critical long-running support. Siegel has also received grants from the USDA’s National Institute of Food and Agriculture and the National Science Foundation. In addition, he has worked with the U.S.-Israel Binational Agricultural Research and Development Fund.

“He never gave up on those birds,” says Mary Delany, Professor Emerita of Developmental Genetics at the University of California, Davis. “He can fight the fight. He had to continue to convince the university of their value.” Virginia Tech even named its poultry research center after Siegel in 2010, and Siegel continues to serve as University Distinguished Professor Emeritus there.

A Global Food Source

In the mid-1950s, when Siegel began his work, chicken was not the food staple it is today. Throughout much of American history, chicken was considered a meal fit only for Sunday dinners or special occasions. Due in part to changes in the way farmers raised their flocks, consumers’ food storage practices, and cultural norms, chickens are now a global food powerhouse. From the commercial companies that cultivate broilers (chickens raised for their meat) and layers (chickens raised for their eggs) to the resurgence of backyard breeding, and to native and indigenous stocks around the world, chickens are a cost-effective source of protein.

Back then, a chicken raised on a farm would require more than three pounds of feed for every pound of its own weight, according to Bob Taylor, Professor of Animal and Nutritional Sciences at West Virginia University. Today, though a chicken can grow to twice the size, it eats less than two pounds of feed per pound of its weight. Siegel’s work was integral to this change.
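Taylor’s feed-conversion figures can be made concrete with some quick arithmetic. The sketch below uses the article’s rough ratios; the market weights are hypothetical round numbers chosen only to illustrate the point, not data from Siegel’s lines:

```python
# Feed conversion ratio (FCR): pounds of feed per pound of body weight.
# Ratios follow the article ("more than three" then, "less than two" now);
# the bird weights are illustrative, hypothetical values.

then_fcr, now_fcr = 3.0, 2.0        # lb of feed per lb of bird, then vs. now
then_weight = 4.0                   # hypothetical market weight then, lb
now_weight = then_weight * 2        # "a chicken can grow to twice the size"

then_feed = then_fcr * then_weight  # total feed to raise one bird then
now_feed = now_fcr * now_weight     # total feed to raise one bird now

print(then_feed, now_feed)          # 12.0 16.0
print(1 - now_fcr / then_fcr)       # one-third less feed per pound of bird
```

Under these assumptions, a bird twice as heavy requires only a third more total feed, which is the per-pound efficiency gain Taylor describes.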

It comes down to something called the resource allocation model, Taylor explains. “The best way I can describe it is if you have a fully electric car: the electricity and battery go to run the motor or motors that drive the wheels, but there are other things in that car that draw on that electricity and take away from the ability to run the motor, like the heater, A/C, radio, phone charger. All of those things draw away from the ability of the battery to run the motor. Not that they all can’t occur at the same time, but rather, the way they are used determines how long a motor will run to drive the car.”

Similarly, Siegel and his long-running experiments have shown how different factors in a chicken – such as changes in body weight or immunity – can affect other parts of a chicken’s life, such as how much feed it eats, how much physical strength it has, how robust its immune function is, or how well it breeds with other chickens. Understanding this balance has helped chicken producers in a growing industry.

Siegel as a child on his family’s farm

Siegel’s curiosity about resource allocation stemmed from his early years. “There are finite resources, and I knew from growing up on a farm that there are only so many chickens and so many eggs. Which way is it going to go? What are the tradeoffs? You have to be able to understand the whole interplay.” He also credits his upbringing with honing his abilities to take care of his chickens and maintain the lines smoothly over so many years. “It’s mostly instinct,” he points out. “You can be a natural athlete, but you still need a sense of what to do. Luck favors the prepared.”

Layers of Learning

As Siegel’s experiment continued unabated over the decades, it contributed to new scientific breakthroughs as scientific understanding advanced. His famous lines have been featured in publications such as National Geographic and Nature; in the latter, he co-authored a paper identifying key genetic markers in chickens. “His early work was understanding growth and immune response,” says Delany. “Then it evolved to the metabolism. Science kept changing, and soon you could get to the cellular and molecular level. Then there’s a genetic layer and an environmental layer. You could study these lines for a more holistic approach to health in chickens, on obesity, eating behavior and disorders, bone health, and the humane treatment of animals. More recently, the studies have turned to inputs related to climate changes and how animals respond to changes in those inputs.”

According to the Food and Agriculture Organization (FAO) of the United Nations, “world poultry meat production soared from 9 to 133 million tonnes (metric tons) between 1961 and 2020,” representing a 4.67 percent annual increase over 60 years. “Certainly, some of that change can be attributed to scale-up across the poultry industry,” Taylor says. “I think that Dr. Siegel’s genetic selection studies helped make this change achievable. Food production efficiency is already important, and it will become more so as the world experiences an increasing need for food production to meet the demand created by population growth.”
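The FAO growth figure can be sanity-checked as a compound annual growth rate over the 59 annual intervals between 1961 and 2020 (the quote rounds this span to 60 years):

```python
# Compound annual growth rate (CAGR) of world poultry meat production,
# from the FAO figures quoted above: 9 -> 133 million tonnes, 1961-2020.
start, end = 9.0, 133.0
years = 2020 - 1961                      # 59 compounding intervals
cagr = (end / start) ** (1 / years) - 1  # (end/start)^(1/years) - 1
print(f"{cagr:.2%}")                     # 4.67%
```

The result matches the quoted 4.67 percent annual increase to two decimal places.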

Taylor estimates that as of 2021, the wholesale value of poultry products worldwide reached around 90 billion dollars. Though it’s hard to directly quantify Siegel’s contributions, Taylor says, “Even if his impact was only 1 percent, a conservative estimate, that’s 900 million dollars. I would call that pretty significant.”

What stands out to Delany in addition to Siegel’s contributions to the field is his love for his research subjects. “There’s an animal at the end of the DNA strand,” she says. “Paul understood them on a cellular level, their metabolism, their DNA – but he never forgot about the animal.”

Siegel and Lab Specialist Christa Honaker (Jane W. Graham)

The Individual is the Impact

Siegel is most proud of the impact he’s had on his students and trainees. He estimates he’s served as major advisor for about 50 graduate students and has taught and mentored hundreds more undergraduate and graduate students. “I was so lucky to work with smart people,” he says.

Nick Anthony studied under Siegel – an experience that inspired him to start his own lines at the University of Arkansas, where he spent many years before going into the poultry industry.

It was exciting – and exacting – work. A student of Siegel’s had to be constantly on alert, Anthony recalls. “We had to simultaneously watch the pens and count the number of times the male mounted the female and how many times successful mating took place. It taught the students to be observant. You introduce a lot of chaos when you introduce a high-mating bird. It taught me to be a better bird biologist and bird behaviorist, and to focus on what that bird is telling you.”

Now at Cobb-Vantress, a global poultry company, Anthony has conversations informed by Siegel’s work. For example, the poultry industry used to routinely administer antibiotics to chickens to enhance growth and protect them from disease. Now, with modern knowledge of how this practice can contribute to antibiotic resistance, chicken producers and purchasers are increasingly moving away from it. “If you can select for disease resistance, you can reduce the number of antibiotics in the program,” he says. “It just shows how relevant he was, how ahead of the game.”

Notably, Siegel was “an early leader in what we might call the diversity movement,” says Taylor. “He had grad students of various backgrounds – ethnic, religious, gender, the whole thing. You name a category, and they were likely represented among his students.” This sentiment is echoed by several of Siegel’s other colleagues and collaborators.

Siegel has also made it a practice to send eggs to scientists who have wanted to work with them in their own experiments. “If they wanted to study the birds, they could have the birds,” says Delany. “He is very generous in spirit.”

One of those scientists is Leif Andersson at Sweden’s Uppsala University. Working together and with a team of collaborators, Andersson and Siegel conducted a buzzworthy study indicating that evolution can happen 15 times faster than previously believed. Siegel’s lines played a pivotal role.

Indeed, the impact of the lines has been felt worldwide, says Anthony. “Hundreds of researchers have benefited from his program,” he says. Siegel can name several countries where he has collaborated on cooperative research involving direct use or knowledge gained from his lines, including Sweden, Israel, China, Malaysia, Peru, Brazil, France, Italy, England, Turkey, Ghana, Hungary, Japan, Germany, and the Netherlands.

It was also common for grad students from abroad to come to Virginia for brief periods to work with the lines. Often, they would stay at Siegel’s house, which became affectionately known as the “Siegel Hilton.” Recalls Siegel, “It was a win-win, because our children became exposed to other cultures and foods.”

Siegel (522 Productions)

A Basket of Golden Eggs

Siegel’s work has been recognized by multiple agricultural organizations, including induction into the International Poultry Hall of Fame. To date, he has over 600 scientific journal publications on topics as wide-ranging as immune function, reproductive biology, feed efficiency, single genes and genome evolution, neurobiology, muscle and adipose biology, nutrition and metabolism, stress biology, and bone health. His long-term selected lines have advanced scientific understanding of genetics and inheritance mechanisms at the molecular, cellular, tissue, and organismal levels across numerous fields – a basket of golden eggs.

Beyond his peers in agricultural science and genomics, Siegel has received accolades in the broader scientific community, including election as an honorary fellow of AAAS (which houses the Golden Goose Award). He has served as president of the Poultry Science Association, Animal Behavior Society, Virginia Academy of Science, and the American Institute of Biological Sciences.

A bobblehead of Siegel and his chickens, which sits on Siegel’s desk

As to what motivated him to carry out a 65-year experiment – and counting – it was, in large part, the excitement of the unknown. “I never knew what the next generation [of chickens] was going to give me,” Siegel says. “If I knew what I was doing, it wouldn’t be science. That’s the joy of science.”

However, he says, “I see individuals as my impact” – meaning his work as a teacher, mentor, colleague and collaborator. Siegel is now retired, but for the past 20 years, he has continued to visit his lab nearly every day. He wouldn’t have it any other way: “I’m very, very fortunate.”

By Erin Heath

Agrobacterium: Nature’s Genetic Engineer, Hidden Within Plant Tumors

AWARDEE: Mary-Dell Chilton

FEDERAL FUNDING AGENCIES: Department of Agriculture, Department of Energy, National Institutes of Health, National Science Foundation

The soil beneath our feet teems with life. Beetles scuttle along the surface and burrow down to lay their eggs next to earthworm tunnels, within intricate fungal networks. But at the microscopic level, even more biodiversity emerges — just one teaspoonful of topsoil may contain on the order of a billion bacteria. At any given time, these bacteria are working to decompose organic matter, return nitrogen to the soil, and produce substances that bind soil particles together, improving soil structure. Over millions of years, bacteria have also evolved savvy survival strategies, some of which involve hijacking plants’ systems for their own benefit.

Agrobacterium tumefaciens is one of these bacteria. When it encounters a plant wound — caused perhaps by human pruning, grafting, frost injury, or insect feeding — Agrobacterium can slip into the wound site and cause plant cells to divide uncontrollably, creating a tumorous growth that Agrobacterium can use as a nutrient source while most other bacteria cannot.

Agrobacterium tumefaciens cells (Jing Xu, Indiana University)

Crown galls on a grapevine (Thomas Todaro, Michigan State University)

Agrobacterium isn’t inherently good or bad — it’s just trying to survive — but humans have historically been less than pleased with its effects because the tumors (also known as crown galls) disfigure the plant and interfere with crop growth. The galls can intercept nutrients and water, weakening plants and making them more susceptible to harsh weather conditions and diseases, even killing young plants. In the early 1900s, researchers discovered that Agrobacterium was the culprit behind crown galls on grapevines, which had negative effects on the wine industry. This prompted a search for the mechanism of crown gall formation in the hope of thwarting what humans considered a threat to crops.

Mary-Dell Chilton (Syngenta)

The Seed of an Idea

In the late 1960s, researcher Mary-Dell Chilton was completing a postdoctoral fellowship at the University of Washington (UW) in Seattle. She was intrigued by bacterial transformation, the process by which certain bacteria take up DNA from their environment and incorporate it into their own.

She learned a variety of techniques to study bacterial DNA; for example, if she mixed error-free bacterial DNA with bacteria containing mutated DNA and spread the mixture on a surface where the mutant could not grow, a few of the formerly mutant bacteria would grow. This meant the mutant bacteria had somehow been transformed into an error-free version. The original error-free DNA molecules could seemingly enter the mutant bacteria, find the right place to go, and repair the mutant DNA. Importantly, this process of correction (known as “transformation”) would only happen if there was a very good match, or “homology,” between the donor DNA and the DNA of the recipient bacteria. Chilton went beyond this technique and even designed an experiment to measure the percent of mismatch between mutant DNA and error-free DNA for several genes of interest.

Chilton photographing wildflowers with her son Andrew Chilton and late husband Mark Chilton (Syngenta)

After completing the postdoctoral fellowship and caring for her newborn second son for three months, Chilton got a job at UW. She became keenly interested in current publications about Agrobacterium transformation of tobacco plants; some researchers claimed they’d found evidence that Agrobacterium DNA was transferred into plant cells in tobacco crown galls. However, while reviewing these studies, Chilton and her students noticed that important controls had not been performed for the DNA hybridization experiments — the evidence did not support the hypothesis that Agrobacterium was transferring DNA into plant cells.

A Serendipitous Subversion

Chilton, leading a team of researchers funded by the National Institutes of Health’s National Cancer Institute, set out to properly test the claim that Agrobacterium was transferring part of its DNA into plant DNA, using DNA “hybridization” to detect any transferred sequences. Since she’d seen no evidence that this sort of genetic exchange could happen without homology, she anticipated a negative outcome. At first, when Chilton’s team ran the control experiments missing from previous studies, they found no evidence of Agrobacterium DNA transferring to plant DNA.

The key phrase there is that they found no evidence—the methods they were using simply weren’t powerful enough. Indeed, with 20/20 hindsight, there were technical problems with all the experiments looking for Agrobacterium-plant hybridization, but the most fundamental issue was that everyone, Chilton included, was looking for the wrong kind of DNA.

Bacterial cell containing DNA in chromosomal and plasmid form

Quietly, in Jeff Schell and Marc van Montagu’s lab at the University of Ghent in Belgium, an unsung hero of this story made a breakthrough. While investigating Agrobacterium replication, postdoc Ivo Zaenen accidentally stumbled upon giant circular DNA molecules within Agrobacterium. Bacteria often store DNA in tiny circular molecules called plasmids, but these were much larger than expected. What’s more, they were present in virulent Agrobacterium (strains of Agrobacterium that caused crown gall growth) but not avirulent (strains that did not cause crown gall growth). There was soon enough evidence to recognize these large circular molecules were indeed plasmids, but BIG ones, mega plasmids — aptly named tumor-inducing (Ti) plasmids.

Prior studies hadn’t found detectable Agrobacterium DNA in plants because they needed this last clue: Agrobacterium keeps some of its DNA in these circular molecules. When Chilton’s team looked for specific pieces of the Ti plasmid in the DNA of transformed plant cells, rather than DNA from the entire bacterium or the entire plasmid, they found conclusively that part of the plasmid was indeed in the gall cells. That was the component of the Agrobacterium DNA causing the tumorous crown galls to grow on plants. More and more researchers confirmed the finding using various methods, shifting the entire research community’s conception of how DNA transfer can occur.

Mary-Dell Chilton (second from top right) with fellow researchers Don Merlo, Martin Drummond, Eugene Nester, Daniela Sciaky, Alice Montoya, and Milton Gordon (Esther M. Zimmer Lederberg Memorial Website)

Chilton recalls with delight proving herself wrong:

“I had a tape with the radioactivity measurements from our experiment and was performing the calculations at my kitchen table after the kids had gone to bed. I said, ‘My God, the DNA is there!’ Before that experiment, I was sure that you could not get bacterial genes to recombine with plant genes—there is no homology between the two. I just absolutely did not believe it. It went against everything I had ever learned. But in the process of trying to prove the idea wrong, I proved it was indeed right. It is important to remember that even if the evidence is incorrect, that does not make the idea wrong!”

Chilton examining tobacco plants at Washington University in St. Louis (Syngenta)

Tiny Bacterium, Huge Breakthrough

Chilton didn’t stop there. After Washington University in St. Louis hired her as an associate professor of biology, she started exploring ways to take advantage of Agrobacterium’s natural ability to add DNA to plant cells. With grants from the Department of Energy and National Science Foundation, along with private funding, Chilton’s team built on the techniques she’d used before to create the world’s first transgenic tobacco plant. She recalls the irony of giving cancer to tobacco plants, but that wasn’t the ultimate goal; she later figured out a way to disarm the Agrobacterium Ti plasmid so that it no longer caused tumorous galls to grow but retained its ability to transfer genes to plants. The National Institute of Food and Agriculture subsequently funded research that, building on her team’s findings, improved transformation efficiency and extended the approach to other field crops.

Patent for the regeneration of plants containing genetically-engineered T-DNA (Syngenta)

Chilton speaks often about the many researchers with whom she worked directly and indirectly, as well as her international collaborators, all of whom were essential on the journey toward these breakthroughs. One such researcher, Professor Andrew Binns of the University of Pennsylvania, was the member of Chilton’s team who coaxed transformed tobacco cells to grow, produce shoots and roots, and finally set fertile seeds. Participants in this collaborative effort forged a bond of trust and fellowship that has strengthened over the decades.

The company CIBA-Geigy (now Syngenta) recruited Chilton in 1983 to launch its Agricultural Biotechnology Research Unit and conduct a research program that would produce genetically modified seed. By the mid-1990s, the first transgenic crops were cultivated and made commercially available.

Reaping the Rewards of Research

Chilton’s outside-the-box thinking and willingness to follow the evidence — even when she was proving her own beliefs wrong — didn’t just reveal a bacterial gene transfer mechanism that had existed for millions of years; she repurposed the bacterium’s natural gene transfer ability into a technique now ubiquitous across biotechnology, known as Agrobacterium-mediated transformation (AMT).

Chilton receiving the USDA Hall of Heroes Plaque in 2015 (Bob Nichols, USDA)

AMT’s environmental and economic impacts on agriculture have been massive. The incredibly powerful technique can introduce a DNA sequence that causes a plant to produce a protein that kills one type of pest when the pest tries to eat the plant, but does not harm any other non-targeted insects and animals. Genetically modified cotton with pest-resistant traits, called Bt cotton, has contributed to a significant decrease in insecticides applied (66% between 1994 and 2019), which has in turn decreased costs for farmers and lessened environmental impacts, including bioaccumulation, water contamination, and deaths of non-pest insects. Bt cotton and corn have also increased yields (by mitigating losses to pests) and profit margins for U.S. farmers compared to conventional seeds.

The AMT technique is so useful, in fact, that it is still used to deliver components of CRISPR/Cas9 — the powerful gene-editing tool — into plants, and researchers are now using Agrobacterium T-DNA sequences to study epigenetics, the way environmental factors influence gene expression.

The magnitude of development in this field sparked by Chilton’s work is remarkable. By following her fascination with science, Chilton took the seed of a federally funded experiment on a soil bacterium and cultivated an entirely new field of biotechnology research, which continues to grow, with the promise of more economic benefits and scientific advances to come.

By Gwendolyn Bogard

How a Lab Incident Led to Better Eye Surgery for Millions of People

AWARDEES: Tibor Juhasz, Ron Kurtz, Detao Du, Gérard Mourou, and Donna Strickland

FEDERAL FUNDING AGENCIES: Department of Energy, National Science Foundation

Nearly 30 years ago, a graduate student at the University of Michigan's Center for Ultrafast Optical Science (CUOS) suffered an accidental laser injury to his eye. Fortunately, his vision was not severely affected; however, the strikingly precise, perfectly circular damage produced by the laser led to an exciting collaboration. Eight years later, that collaboration yielded a bladeless approach to corrective eye surgery. The new procedure, also known as bladeless LASIK, uses a femtosecond laser rather than a precision scalpel to cut into the human cornea before it is reshaped to improve the patient’s vision.

Since 2002, 24 million people have benefited from the bladeless LASIK approach, which limits complications and broadens the pool of eligible patients. It is now considered the standard in the field.

It Only Takes a “Pulse”

In 1953, Charles H. Townes, a 2012 Golden Goose Awardee, built the first device for microwave amplification by stimulated emission of radiation, maser for short, and thereby introduced the world to the principle behind laser technology. While he encountered many doubters who saw little value in the technology, the scientific community began to explore and advance ways to create more intense pulses.

Gérard Mourou in the LLE lab at the University of Rochester

The U.S. Department of Energy defined the field of ultrafast science as “the study of processes in atoms, molecules, or materials that occur in millionths of a billionth of a second or faster.” By the 1980s, the field had reached what seemed to be the upper limits of intensity. However, in 1985, Gérard Mourou, a French physicist and professor at the University of Rochester, was experimenting with even higher intensity laser pulses, looking for ways to increase intensity without rebuilding the laser. Mourou’s team was part of Rochester’s Laboratory for Laser Energetics (LLE), which was funded by the U.S. Department of Energy.

The eureka moment came, Mourou said, when he was riding a ski lift. As he watched the chairs on the lift spread and bunch, he realized he could do the same with laser pulses. He hypothesized that spacing out and boosting the pulses before bringing them back together would result in higher-intensity laser pulses. Donna Strickland, one of only a handful of women pursuing a Ph.D. in physics at Rochester at the time, joined the LLE and was assigned to Mourou’s ultrafast science group to help test his theory.

Donna Strickland (University of Waterloo)

Together, they developed chirped pulse amplification, or CPA, an optical technique that produces short, intense laser pulses that can vaporize precise points without causing collateral damage to the surrounding material. The technique led to a dramatic escalation of peak intensities in laser technology, launching several new areas of research and practical applications, and earned Mourou and Strickland half of the 2018 Nobel Prize in Physics.
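A back-of-envelope calculation shows why stretching the pulses matters. The numbers below are illustrative, not from Mourou and Strickland's experiments, and peak power is approximated simply as pulse energy divided by pulse duration:

```python
# Why CPA works, in rough numbers: a stretched pulse carries the same
# energy at far lower peak power, so it can be amplified without
# destroying the amplifier's own optics, then recompressed afterwards.
energy_j = 1.0            # pulse energy after amplification (illustrative)
compressed_s = 100e-15    # compressed duration: 100 femtoseconds
stretched_s = 100e-12     # stretched duration during amplification: 100 ps

peak_compressed = energy_j / compressed_s   # ~1e13 W (10 terawatts)
peak_stretched = energy_j / stretched_s     # ~1e10 W (10 gigawatts)

# Stretching by a factor of 1000 lowers peak power by the same factor;
# recompression restores the short duration and the full peak power.
print(peak_stretched, peak_compressed)
```

Under these assumed figures, the amplifier only ever sees gigawatt-scale peak power, while the recompressed output reaches the terawatt scale.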

In the Blink of an Eye

Mourou brought the CPA technique to the University of Michigan when he founded the Center for Ultrafast Optical Science (CUOS), a National Science Foundation Science and Technology Center from 1991 to 2001. At the time, researchers there were using the technology to understand chemical reactions by freezing them in time – medical applications were far down the list of possibilities.

Detao Du

At CUOS, Mourou had several graduate students working in his lab, including Detao Du, who joined in 1993. Du recalls being assigned projects but also having the freedom to explore findings that caught his interest. Amid this work, there was an incident he tried not to dwell on. Du says he “must have been tired” one evening when he accidentally lifted his goggles while aligning the mirrors of a femtosecond laser, an infrared laser that emits extremely short, rapid pulses of energy – inadvertently leaving himself vulnerable to some of the most powerful controlled bursts of energy produced in a laboratory at the time. Du says he caught a stray beam (not the main laser beam) and saw a flash. Although there was no visible damage to his eye, his instincts told him to get checked by a doctor to make sure there was no underlying damage.

Ron Kurtz

Du was examined at the University of Michigan’s Kellogg Eye Center, where Ron Kurtz, a second-year medical resident, was called in to assess the injury. During the evaluation, Kurtz observed that the laser had left a series of pinpoint burns in the center of Du’s retina, remarkably precise and perfectly circular, unlike the damage from lasers in clinical use at that time. After observing the injury, Kurtz met with Mourou’s team, primarily to promote laser safety in the lab, but also to learn whether the lasers they were developing could be clinically useful. The collaboration would shed light on important observations about the damage thresholds of tissue and other materials exposed to femtosecond laser pulses.

A Serendipitous Encounter

After a little over a year of conducting experiments, Kurtz and Du presented their findings at an optics conference in 1994. There they met Tibor Juhasz, a research scientist at the University of California, Irvine, who happened to be investigating femtosecond laser applications in ophthalmology. Juhasz had also been a postdoctoral researcher in Mourou’s lab at the University of Rochester, in 1987. Soon after the conference, Juhasz received an early morning phone call from his former mentor, inviting him to the University of Michigan to help develop femtosecond lasers for use in the eye.

By that time, the University of Michigan had also come to recognize the work’s potential for both societal impact and economic value, and it committed university research funds to help expand and accelerate the work. Additional funding came from NIH and NSF through the Small Business Innovation Research (SBIR) program. Together, Kurtz, Juhasz, and the CUOS lab contributed to the development of a novel approach to corrective eye surgery that left the surrounding tissue untouched.

Tibor Juhasz

In 1997, Kurtz and Juhasz founded IntraLase, a spinoff company focused on commercializing the bladeless technique for refractive surgical procedures (also known as corrective eye surgery). Though there were several potential applications their startup could have explored, they decided to focus on the cornea, which seemed the most promising. The procedure they developed uses a femtosecond laser instead of the manual blade that had previously been used to cut into the cornea before reshaping it to improve the patient’s focus.

In 2001, after several years of successful clinical studies and eventual FDA clearance, the company launched the world’s first commercially available femtosecond laser. The startup had received initial seed funding from the Enterprise Development Fund of Ann Arbor, along with additional support from Small Business Innovation Research (SBIR) funding provided by both NIH and NSF, as well as the university and its technology transfer office. In 2004, IntraLase went public and earned $84 million before it was acquired in 2007 for $808 million by Advanced Medical Optics. IntraLase later became part of Johnson & Johnson Vision. To this day, it is considered one of the most successful startups incubated at the University of Michigan, according to Crain’s Detroit Business.

Setting the Standard and Emerging Applications

The femtosecond laser approach eventually replaced the mechanical blade and is now even catching up to the excimer laser, which has long been the standard laser for reshaping the eye. Because of the precision of the femtosecond laser’s cut, bladeless approaches typically carry fewer risks of complications than previous generations of procedures. Since its introduction, the number of procedures has grown to 2 to 3 million each year, and more than 30 million people have benefited from the technology to date. The development and widespread acceptance of the femtosecond laser in eye surgery has made laser vision correction one of the most commonly performed surgeries in the country and the world.

As a result of the lab incident and the research done in the CUOS lab, many of those researchers, including Ron Kurtz and Tibor Juhasz, have continued to explore other potential applications of the femtosecond laser, particularly for common ophthalmic conditions and diseases such as glaucoma and cataracts. In 2008, the two teamed up again to develop femtosecond laser cataract surgery; today, this technology is used in about 15 percent of cataract surgeries. Currently, ViaLase, a California startup led by Juhasz, is developing new methods to treat glaucoma more effectively using femtosecond lasers.

Thanks to their work, many of us are seeing a little bit more clearly these days.

By Meredith Asbury

Foldscopes and Frugal Science: Paper Microscopes Make Science Globally Accessible

AWARDEES: Manu Prakash and Jim Cybulski

FEDERAL FUNDING AGENCIES: National Science Foundation, National Institutes of Health

Picture a microscope. You are likely imagining a heavy metal base, a viewfinder tube to squint through, and knobs on the side to bring a tiny specimen into focus. An essential tool of science for over 400 years, the microscope has been used to identify disease-causing bacteria, reveal the building blocks of living organisms, and introduce children to the joys of science. But in certain areas of the world, barriers to transport, training, and maintenance can make even standard microscopes inaccessible.

Foldscope (Foldscope Instruments, Inc.)

Manu Prakash and Jim Cybulski’s response to the problem is the Foldscope, a paper microscope that can achieve powerful magnification and costs less than $1 in parts. It has been a little over a decade since the Foldscope’s inception, but 1.8 million have already been distributed in over 160 countries, dramatically increasing accessibility to science. Foldscopes have been used for everything from identifying agricultural pests to STEM education in refugee camps. But how was a several-thousand-dollar scientific instrument reimagined in paper form? Prakash and Cybulski found the answer through curiosity and play.

The Journey Begins

Manu Prakash grew up in the northern Indian village of Rampur. Aside from time spent on rigid, exam-focused schoolwork, where students rarely (if ever) got hands-on experience with scientific instruments, Prakash remembers spending hours upon hours tinkering in nature. “No one told me, ‘This is science,’” says Prakash. He thought constructing a “microscope” from his older brother’s glasses lens was just play. Today, Prakash still traces his predilection for curiosity-driven scientific exploration to informal learning outside of the classroom.

Manu Prakash, Stanford University (Manu Prakash)

Cut to 2011: after years of studying science and engineering, culminating in a Ph.D. in Applied Physics, Prakash was hired as a professor at Stanford University. While he waited for his physical lab to be set up, Prakash decided to get a jump on fieldwork; he traveled to Thailand and India to study infectious disease diagnostics.

The World Health Organization estimates that there were 241 million cases of malaria in 2020, far more than are ever formally diagnosed, so more testing capacity is sorely needed. Early malaria diagnosis is crucial: it allows doctors to intervene and stop disease progression, reduces deaths, and limits transmission. Increasing access to malaria diagnosis saves lives.

Prakash found one potential key to the lack of diagnostic capacity in an unexpected place: a lab in the middle of a Thai rainforest, where a $50,000 microscope sat locked away in a room. Of the available tests for malaria, microscopic examination of blood samples is considered the most reliable. So why was this essential instrument sitting unused? The problem had multiple dimensions. First, microscopes are bulky and difficult to transport to remote locations. Second, even when a microscope is available, access to training, repairs, or maintenance may be out of reach. And third, microscopes are expensive and delicate, so even trained lab technicians may feel anxious about using them in areas where budgets are tight.

Prakash began to wonder: what would a cheap, hardy microscope that could be widely distributed look like? A self-proclaimed doodler, Prakash sketched in his journal the concept of a microscope that could be printed like a newspaper.

Impact-Oriented Scientists

Jim Cybulski, Foldscope Instruments (Jim Cybulski)

Back at Stanford University, graduate student Jim Cybulski was looking for a Ph.D. project that was impactful outside the lab. He had a background in engineering but hadn’t quite found the balance between innovation and impact within the world of academia and industry. That all changed when he met with Prakash, newly returned from field work travels. Cybulski recalls feeling an immediate resonance with Prakash. They had both grown up in non-wealthy, rural households (Cybulski grew up in Northeastern Pennsylvania), and they both liked tinkering.

When Prakash told Cybulski about his vision of an inexpensive, accessible microscope, Cybulski immediately understood the potential; he saw the opportunity to create something with great utility.

How to Make a Microscope

At their most basic, microscopes have an eyepiece (the tube you look through), a lens (the magnification), a component that holds a specimen for viewing (which pans from side to side to explore different areas of the specimen), a mechanism to bring a specimen into focus (usually by moving it physically closer or farther from the lens), and a light source (to illuminate the specimen).

Cybulski explains that scientific tool developers usually choose one of two supposedly incompatible priorities: making the tool cheap and just good enough to get the job done, or making a high-performing tool regardless of cost. Cybulski and Prakash rejected that binary. They wanted their microscope to be low-cost (made for less than $1), high-performing (able to “see” a malaria parasite), and accessible (capable of large-scale worldwide distribution).

Goal 1: Make it Low Cost

Early sketch of the Foldscope (Foldscope Instruments, Inc.)

True to their doodling and tinkering roots, Prakash and Cybulski sketched designs on paper taken from the printer in the corner of the lab and fiddled with a matchbook. The matchbook had what they were looking for, a cheap but sturdy structure, but it was too small. When Cybulski held up the prototype, he poked himself in the eye with his thumbs. The next iteration was slightly wider and made of file folders they found in the lab. To keep costs low, Prakash and Cybulski decided their microscope should accommodate standard microscope slides — thin glass rectangles, 3 inches by 1 inch, upon which a specimen is placed for viewing — which scientists typically have on hand.

Foldscope layout scheme (Foldscope Instruments, Inc.)

The other big consideration was the material they should use to construct the microscope. The more Prakash and Cybulski considered it, the more it became clear — what about the material they had been using all along? Paper was cheap and could be folded and cut incredibly precisely, a fact long since established by origami, the Japanese art of folding paper.

With a design finally in sight, they decided on an appropriate name: the Foldscope.

Despite their enthusiasm, the concept was tough to convey, especially when funding was on the line. Prakash remembers stapling a Foldscope to a grant application to help explain the invention. That made their existing funding all the more vital — Prakash’s lab received support from the National Institutes of Health (NIH), and Cybulski received a Global Health Equity Scholars fellowship through the NIH Fogarty International Center. Both sources were foundational to the early research that went into the Foldscope’s creation.

Goal 2: Make it High Performing

Over time, Prakash and Cybulski graduated from X-ACTO knives to laser cutters. They wanted to make the tool suitable for scientific data collection. Unlike traditional microscopes, which often have multiple lenses and mirrors that enhance magnification, the Foldscope’s only source of magnification is a ball lens, a cheap, tiny glass sphere. Prakash and Cybulski set out to determine the theoretical limit of the resolution achievable by the Foldscope.

In a Foldscope, a piece of black plastic carrier tape holds the lens and serves as the optical aperture, the hole light shines through to illuminate and resolve an image of the specimen. Cybulski and Prakash found that the placement and size of that aperture are crucial. Since a ball lens is a sphere, light is distorted around its edges, so to avoid a fuzzy image, the best path for light is straight through the middle. The aperture needs to block all light except along that central path. On the other hand, the aperture can only be made so small before no light gets through at all.

Malaria in red blood cells, projection of Foldscope image (Foldscope Instruments, Inc.)

Cybulski and Prakash calculated the theoretically optimal aperture size and built Foldscopes using those calculations. Those Foldscopes achieved vastly improved image quality and resolution compared to other ball-lens microscopes with sub-optimal apertures. The final magnification was about 140x, enough to see a malaria parasite inside a cell.
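The trade-off behind that calculation can be sketched numerically. The following is an illustrative model only, not the authors’ actual computation: the lens diameter, refractive index, and the simple cubic scaling for spherical-aberration blur are all assumptions chosen for plausibility. The idea is that diffraction blur shrinks as the aperture widens, while aberration blur grows, so the combined blur has a minimum at some intermediate aperture.

```python
import numpy as np

# Assumed parameters for a glass ball lens (illustrative, not the Foldscope's specs)
n = 1.52              # refractive index of glass
D = 0.3e-3            # ball lens diameter: 0.3 mm
wavelength = 550e-9   # green light, mid-visible

# Focal length of a ball lens, measured from the lens center: f = nD / (4(n-1))
f = n * D / (4 * (n - 1))

# Candidate aperture diameters, 10-200 micrometers
apertures = np.linspace(10e-6, 200e-6, 500)

# Diffraction blur (Airy disk diameter) grows as the aperture shrinks
blur_diffraction = 2.44 * wavelength * f / apertures

# Spherical-aberration blur grows roughly as the cube of the aperture diameter
# (marginal rays focus short of paraxial rays); the constant sets the scale
blur_spherical = apertures**3 / (2 * f**2)

# Combine the two blur contributions and find the aperture minimizing the total
total_blur = np.sqrt(blur_diffraction**2 + blur_spherical**2)
best = apertures[np.argmin(total_blur)]

# For these assumed parameters the optimum lands in the tens of micrometers
print(f"optimal aperture ≈ {best * 1e6:.0f} µm")
```

The qualitative conclusion matches the text: an aperture that is too small starves the image of light and widens the diffraction spot, while one that is too large admits the distorted edge rays, so a single well-chosen size yields the sharpest image.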

Goal 3: Make it Globally Accessible

Today, Prakash and Cybulski describe their approach as “frugal science,” a philosophy that has grown out of a simple but formidable goal — science should be made accessible to all. Even after the Foldscope had proved its utility in the lab, Prakash and Cybulski did not consider their work complete. They needed feedback. Input from a wide variety of communities would transform the Foldscope from an instrument that worked in theory to one that people would actually use. To get Foldscopes out into the world, Prakash and Cybulski needed to scale up manufacturing.

The team decided to make 10,000 Foldscopes and distribute the instruments for free to anyone who wanted one. Cybulski says this seemed like a big goal at the time, but the demand surpassed anything they had imagined; somehow, the number grew to 50,000 Foldscopes, then 75,000. A new NSF grant supported them along the way — Prakash emphasizes the importance of funding at that stage; without it, he says, they might never have gotten the Foldscope into the world.

The researchers sent out the first batch of free Foldscopes and started the iterative process of feedback and improvements. The Foldscopes traveled across the globe to people with all kinds of backgrounds. One surprising demographic of Foldscope requesters? Grandparents who wanted the instrument for their grandkids. Prakash and Cybulski were delighted.

Faces of Foldscopes

Giant moth scales as seen through a Foldscope (Manu Prakash)

Over the next several years, thousands more Foldscopes traveled around the world. True to the instrument’s initial conception, the Foldscope has been used to diagnose malaria, but the sheer range of subsequent applications could never have been foreseen. Foldscope users have explored the microscopic diversity of the Amazon rainforest, monitored agricultural pests, identified animal pathogens, surveilled mosquito populations, and mapped plankton for fisheries management. Over 400 scientific papers have been published using data collected with Foldscopes, including the discovery of two new species.

Cybulski recalls a project in Nigeria that used Foldscopes to detect fake drugs. In some parts of sub-Saharan Africa, over half of the drugs sold in pharmacies are substandard or counterfeit. When a pill is crushed and viewed under a Foldscope, a genuine drug appears as uniform particles, a signature of the manufacturing process, while a fake looks like formless powder. Having access to a Foldscope has empowered people to check the safety of their medications.

Students look through Foldscopes at a workshop (Foldscope Instruments, Inc.)

A key goal of Foldscopes is to impart a sense of agency. Like a pencil, notes Prakash, anyone can pick it up and use it, even if they do not know how to write. The colorful designs on the Foldscope are not accidental; the instrument is designed to feel approachable, like an arts and crafts project. Only after some tinkering do users realize they have stumbled onto something powerful. That is the key to the power of a scientific tool, says Prakash. It has nothing to do with cost or complexity; it is about being accessible at the right moment. Then, someone can work on a problem where they see urgency.

In science, however, we often attribute higher value to shiny, complex instruments, and Cybulski remembers many encounters with people hesitant to interact with the Foldscope because of the perception that microscopes are expensive and delicate. He recounts meeting a doctor in India who, when handed a Foldscope, reached out with shaking hands, worried that he would break it. To combat that perception, Prakash and Cybulski have built infrastructure for training and public engagement workshops around the globe. The programs have multiplied; today, 1,200 projects are in progress in India alone. In the United States, Foldscope programs have been implemented in public libraries and in STEM education for incarcerated youth.

Prakash demonstrates the Foldscope for President Obama (Foldscope Instruments, Inc.)

The community of Foldscope users is not limited by country. On Microcosmos, an online forum set up by Prakash and Cybulski, users from around the world post images captured by hooking their Foldscopes up to a phone camera. Rafikh Shaikh, a Foldscope workshop leader and PhD student in India, says that it is clear Prakash and Cybulski thought about building community since the Foldscope’s inception. NIH staff scientist Lakshminarayan Iyer has been a part of the Microcosmos community for years. He says the greatest impact comes from connecting with others outside of his typical circles. Iyer can share his work with scientists around the world (who use free online translators to transcend language barriers) or advise a child in the U.S. trying to look at the crystals in ice cream.

Muhamed Abbas, an Iraqi Foldscope community member who leads public science engagement projects, explains, “Foldscope is not just a tool for me … and it’s beyond a community of people trying something new. I see myself as part of a global mission to make science more accessible and inspire the next generation to … maintain their curiosity to use science as a tool to find solutions for challenges our communities face all around the world.”

Eyes on the Future

Even after distributing over 1.8 million Foldscopes in 160 countries, Prakash and Cybulski still see work to be done. There are two billion children around the globe who could benefit from a Foldscope. To Prakash and Cybulski, it is a mistake to assume that cutting-edge tools can only be available to a few; rather, access is a choice we make. The key is designing with accessibility in mind from the start.

The two designers have not stopped imagining and innovating, either. As the head of what is now the company Foldscope Instruments, Inc., Cybulski is designing and manufacturing a new generation of more capable Foldscopes, as well as an app that can help Foldscope users get better images and identify specimens. Prakash’s research team at Stanford has developed even more frugal science instruments. The Paperfuge, a paper centrifuge, can spin at high speeds and separate pure plasma from a sample of blood in a minute and a half. Another tool, the Octopi, is a low-cost autonomous microscope that can identify malaria in blood samples.

Despite the immense success and wide-ranging impact of their work, Prakash and Cybulski speak candidly about how tough it was initially to find support for such a silly-sounding idea. They weren’t just thinking outside the box — they cut up the box and used it as construction material. And that is why federal funding for early-stage research is crucial, even if (and especially if) the ideas are odd-sounding. Future impact is magnified when innovative research ideas are supported early on.

Students look through Foldscopes at a workshop (Foldscope Instruments, Inc.)

By Gwendolyn Bogard

Tiny Snail, Big Impact: Cone Snail Venom Eases Pain and Injects New Energy into Neuroscience

AWARDEES: Craig T. Clark, Lourdes J. Cruz, J. Michael McIntosh, and Baldomero Marquez Olivera

FEDERAL FUNDING AGENCIES: Department of Defense, National Institutes of Health

When Baldomero “Toto” Olivera and Lourdes “Luly” Cruz started studying cone snails in the 1970s, they didn’t know that their work would change the course of their careers. They didn’t know they were embarking on a decades-long, globe-spanning partnership, or that with the help of two undergraduates—Craig Clark and Michael McIntosh—they would discover the raw material for a non-opioid pain reliever and a powerful new tool for studying the central nervous system, all hidden in the cone snail’s potent venom.

They were just trying to solve a supply chain problem.

Becoming Scientists

Lourdes Cruz (Lourdes Cruz)

Lourdes Cruz was perhaps always destined to be a scientist. The daughter of a chemist and a dentist, whose older sister was also a chemist, she attended the University of the Philippines, where she met, but never had a class with, Baldomero Olivera. Like many of her classmates, Cruz then attended graduate school in the United States before returning to the Philippines. Eventually, after a stint at a research institute focused on rice, she found her way back to the University of the Philippines, where she began working with her former classmate, Olivera. Affectionately called “Toto” by his colleagues, Olivera grew up in the Philippines and San Francisco, where his father served as a press attaché for the Philippine consulate. It was in San Francisco, in second grade, that he met the first of many influential teachers who would push him toward his eventual career.

“There was a teacher in second grade—Miss Uhler, I still remember her name—she did a scientific experiment, and after that, I was hooked,” Olivera says. “I never wanted to be anything else but a scientist.”

Baldomero “Toto” Olivera

Now in his 80s, Olivera still remembers many of his early teachers by name. In high school Olivera took up shell collecting, an easy hobby to reach for in a country of over 7,000 islands. Another of his teachers encouraged him to take a more scientific approach to his hobby. He did not need much persuading. From the very beginning, Olivera was interested in cone snails. He even wanted to study the animals for his undergraduate thesis, but his advisory committee dissuaded him. “They said you won’t be able to do anything with that,” Olivera recalls.

Fast forward about a decade to the early 1970s. Cruz has just joined the Biochemistry Department. Olivera is spending part of each year in the Philippines while also working on DNA synthesis at Kansas State University (as a matter of fact, Olivera was one of the researchers who discovered DNA ligase, a key enzyme used for DNA replication and repair). But DNA research in the Philippines was impeded by a lack of equipment and bureaucratic barriers to accessing supplies. Cruz says the supply issues were consistent enough to spawn a running joke: “‘By the time it comes, you’ve forgotten why you bought it!’ So we thought, let’s think of a project where we will have an advantage.”

Olivera remembered the colorful snail shells that had fascinated him as a child. Cone snails are abundant in the Philippines, and they could start researching the nature of their venom with limited equipment. It seemed like a worthwhile side project.

There are as many as a thousand different species of cone snails (University of Utah)

The Shell Collectors

The umbrella term “cone snail” refers to a massive variety of venomous animals—up to as many as a thousand different species. Named for their conical shells and prized by shell collectors, cone snails hunt in a variety of ways and have specialized diets. Some cone snails eat worms, some eat other mollusks, some eat fish. But how does such a slow-moving animal catch something as fast as a fish? One common strategy involves a snail burying itself in the sand and extending its proboscis, much like a fishing line. When the tip of the proboscis touches fish skin, the snail ejects a hypodermic needle-like structure from the end of the line, harpooning the fish and allowing paralytic venom to flow through the hollow proboscis. Then the snail reels in the fish and swallows it whole.

Olivera knew much of this from his shell collecting days. More importantly, he knew shell collectors, and getting to know them became a big part of the job: “During lunchtime, we would go visit the shops and talk to the shell dealers,” Cruz says. “It was fun. They would show us everything they had and connect us with suppliers.”

Although cone snails, like other mollusks with beautiful shells, are still threatened by the global shell trade, the industry is somewhat more regulated now than it was in the 1970s. Cruz recalls being amazed at how many shells were being collected at the time. It was a big business and required some tricky negotiation. For the rarest and most expensive species, the team would essentially borrow the animal for a while before returning the shell to the supplier to be sold. The lab was within walking distance of Manila Bay, and Cruz recalls walking to the beach to collect other snails for the snail-eating cones.

Some cone snails, especially some of the larger species, pose a threat to humans. One species of great importance to Olivera’s lab, Conus geographus, is also commonly known as the cigarette cone, reputedly because a person stung by it will only have time to smoke a single cigarette before they die. When asked how the team handled such dangerous creatures, Cruz suggests they are quite shy, more inclined to retreat into their shells than to sting. “People who have been stung and died are generally people who have held them for a long time,” she says, recalling an interview with a spear fisher who saw a snail while pursuing fish. Hoping to capture both, he put the snail under the elastic of his trunks and was stung when the snail emerged. He lived to tell the tale, but only after a three-week hospital stay. “We were very careful!” Cruz says.

A cone snail eats a paralyzed fish (University of Utah)

Bit by bit, the team got to know the mysterious animals. The snails are nocturnal, and the team would stay up late just watching them. “We gave them names,” Cruz recalls, “and you could see that they had different personalities.” There were certainly bumps along the way: one snail, for example, went from Gerald to Geraldine when she turned out to be female. A bigger hiccup came from putting the wrong species in a tank together, leading to lots of full mollusk-eating cone snails and lots of empty shells. “After that, we put the mollusk-eaters in separate aquaria,” Cruz says with a laugh.

One early discovery concerned the nature of the venom itself. At the time, the researchers did not even know if the toxin was a carbohydrate or a protein. Earlier work by other researchers had functionally ruled out a protein because the compound did not appear to be digested by protease, a kind of enzyme that breaks down proteins. But the Manila lab tested the venom under different conditions, including a longer incubation period, and with enough time the protease did break down the toxin. What explained the difference?

“We realized that most of the bioactive ingredients in the venom are very small proteins—peptides,” Olivera says. “In fact, they’re unusually small—that was one of the surprises.” While the bioactive component in snake venom is about 50 amino acids in length, the first components of the purified cone snail venom were only about 13 amino acids in length. Moreover, they were very tightly bound to their target, which explains why they were so difficult for the protease to digest.  

Discovering that the venom’s bioactive compounds were peptides was the first of many “ah ha!” moments in the duo’s research. Methods for isolating proteins were already well developed, and the team was able to rapidly learn a great deal about the venom’s various components. Unfortunately, progress on the project was complicated by the imposition of martial law in the Philippines. Olivera changed jobs, and he and his family settled in Utah while Cruz stayed in Manila. She recalls being in the lab and listening to the radio fuzz out on the night martial law was declared. Still, the two could collaborate at a distance, and several years later, Cruz got a position alongside Olivera at the University of Utah. Now it was her turn to go back and forth.

Although it sometimes led to logistical difficulties, the multi-national nature of the research was a huge part of its success. Thanks to their access to cone snails, the lab in the Philippines was able to provide a tremendous amount of crude venom to work with, while the Utah lab gave them access to leading-edge technology to analyze that material. “I would say, in retrospect, that’s what allowed us to move faster, given the constraints,” says Olivera.

An injection of funding from the National Institute of General Medical Sciences (NIGMS), part of the National Institutes of Health (NIH), further catalyzed the project by allowing the team to access more sophisticated equipment. “It wasn’t that easy to analyze small amounts of proteins at the time,” says Olivera. “But the fact that we finally got an NIH grant, that I would say was what really allowed us to discover things that were unexpected.”

That, plus the timely interventions of two undergraduate students.

Craig Clark (Lisa Roa)

A Breakthrough

Cruz recalls Craig Clark, an undergraduate student in biology who worked in the lab, as being particularly curious about the brain. That curiosity led him to discover the technique that changed the course of their research. At the time, the team injected different fractions—that is, different combinations of components—of the venom into the skin of lab mice. But they were not finding anything remarkable. Then Clark had a different idea.

Clark wanted to see what would happen if he injected the venom into the brain, but Olivera was not so sure. “I was skeptical,” he recalls. “I really was.” But Clark was determined, and Olivera and Cruz encouraged experimentation in the lab. After learning how to do cranial injections from colleagues at the college of medicine, Clark ran his first experiment. The results were astonishing.

Right away, the team observed a huge range of dramatically different, sometimes even opposing, behavioral effects in the mice. Once injected, some fractions of the toxin caused hyper-excitability, while others made mice lethargic. Some caused shaking, scratching, or falling over. The results were so unexpected and specific that the team knew right away that something special was happening.

“When Craig developed that assay, and we saw these mice doing all these different things, I think that’s the moment when we realized this wasn’t just a bunch of paralytic toxins,” Olivera recalls. “There’s something much more complicated and wonderful here.” The team’s working hypothesis was that each venom component was targeting highly specific areas in the brain, either activating or inhibiting the messages they carried. But understanding that mechanism was a gradual process, and one particular venom fraction held the key.

J. Michael McIntosh

Tiny Snail, Big Impact

J. Michael McIntosh joined the team at just 18 years old, about six months after the lab started using Clark’s brain injection technique. “Michael [McIntosh] is a very methodical guy,” Olivera recalls, “And he wanted to purify a very specific thing.” That thing? The “shaker” peptide, or the fraction of venom that made mice shake. Over time, working shoulder-to-shoulder with Cruz, Clark, and Olivera, McIntosh purified the peptide and identified its chemical structure. They called it the omega-conotoxin.

Over the next few decades, and in partnership with many other collaborators, the team learned more about the omega-conotoxin. One of their first and most important discoveries was that it is paralytic to fish and frogs but not to mice. To understand why that’s a big deal, it helps to know how the body carries signals to the brain.

Lourdes Cruz working in the University of Utah lab (Lourdes Cruz)

Imagine you want to move your legs. Your brain sends a signal—move my legs, please, something is trying to eat you!—along a nerve, which then transfers it to the muscle with the help of message-delivery devices called calcium channels. By working with frogs, the team and their collaborators proved that the omega-conotoxin was blocking those calcium channels, which meant the message—move your legs, please!—was not getting through. So why wasn’t the omega-conotoxin paralyzing the mice?

The answer is a trick of evolution. It turns out that we mammals have the same calcium channel that the omega-conotoxin can use to cause paralysis in frogs, but we use it differently. Whereas frogs and fish use it to control muscle movement, we use it to feel pain. Over several decades, this realization led to the development of the omega-conotoxin into a potent pain reliever, ziconotide, commercially known as Prialt. Interestingly, unlike most drugs derived from animal products, which accrue enhancements and modifications on their way to human medicine, ziconotide is a perfect synthetic copy of the omega-conotoxin, nothing added or taken away.

Patients who take Prialt are equipped with a programmable and refillable pump, about the size of a hockey puck, which is implanted into the abdominal cavity and only needs to be refilled every few months. While this necessarily limits the medication’s reach, it can serve as an alternative to many more common opioid pain relievers. This alternative has been life-changing for the patients who use it, many of whom are battling cancer, AIDS, or other chronic conditions and have been unable to manage severe pain in any other way. It is a stunning, thoroughly unexpected result from what McIntosh called “a discovery project.” “We weren’t aiming to develop a pain compound,” he says.

The Prialt pump (center and right photos) operates in a very similar way to the pump the cone snail uses to paralyze fish (left photo) (University of Utah)

But that’s not the end of the story. Because of their ability to cause such specific behavioral effects in pre-clinical models, conotoxins have helped researchers develop new ways to map the body’s nervous system. When McIntosh discovered the omega-conotoxin, calcium channels were not very well understood. As Olivera points out, the lab’s insights regarding conotoxins launched a major initiative in neuroscience. Using conotoxins as probes, some neuroscientists are mapping the brain’s calcium channels and learning ever more about what they do. Other scientists are exploring the possibility of using conotoxins to treat a surprisingly wide range of illnesses, including addiction, epilepsy, and diabetes.

McIntosh and Olivera in the office

Today, Olivera and McIntosh are still working together. After leaving the University of Utah to study medicine and psychiatry, McIntosh found himself continually drawn back to the cone snail research. Eventually, he found his way back to the lab permanently. He now works alongside Olivera in addition to serving as the medical director of Primary Care Mental Health Integration at Salt Lake City’s veterans hospital.

Cruz receiving the title of National Scientist of the Philippines from President Gloria Arroyo in 2006

Among other honors, Cruz is a recipient of the L'Oréal-UNESCO For Women in Science Awards. In 2006, she was named a National Scientist of the Philippines, the highest award accorded to Filipino scientists by the Philippine government. And though she has now retired from lab work, Cruz remains a tireless advocate for sustainability and community. She currently leads the Future Earth Philippines Program (FEPP), a scientific endeavor aimed at strengthening the country’s resilience and national sustainability as part of the global Future Earth network.

Tragically, Craig Clark passed away suddenly in 1994. But his wild idea to inject venom into the brain is still paying dividends in medicine and neuroscience, and all the members of the team consider that eureka moment as one of the most important in a project that was filled with unexpected flashes of serendipity and innovation.

By Haylie Swenson

2021: Making mRNA

AWARDEES: Dr. Katalin Karikó, Dr. Drew Weissman

FEDERAL FUNDING AGENCIES: National Institutes of Health

Katalin Karikó. Photo credit: Bela Francia

In 1997, the photocopier at the University of Pennsylvania (Penn) Department of Medicine was a hot commodity. Unlike today, when online scientific articles are available at our fingertips, back then articles were accessible only in physical scientific journals. If a researcher wanted to read a study, they had to make the trek to the department office to photocopy it. At Penn, new faculty member Drew Weissman found himself jockeying for space with one researcher in particular, a scientist named Katalin Karikó.

Drew Weissman. Photo by family friend

Today, over two decades and many research hours later, Weissman likens his research partnership with Karikó to an old Reese’s peanut butter cup commercial — “You got chocolate in my peanut butter!” The two complement each other. Karikó, now a Senior Vice President at the pharmaceutical company BioNTech, is exuberant; she lights up when she describes her research. Weissman, who still leads a lab as the Roberts Family Professor of Vaccine Research at the Penn School of Medicine, presents a more serious exterior. Karikó says he isn’t into small talk, joking that he’s limited to a certain number of words per day.

Often, while waiting for the photocopier to spit out the desired pages — or fixing a dreaded paper jam — Karikó and Weissman dove into discussing research. Weissman explained that he wanted to make a vaccine for HIV. Karikó, a self-described “RNA-pusher” at the time, told him she could make any mRNA he wanted to test. Neither knew that the partnership originating from these conversations would go on to pull a research area from the edge of obscurity and transform it into a source for lifesaving vaccines and therapies.

The Molecule that Started it All

Oddly enough, RNA — the hero of this story — is a molecule that plays a crucial part in a cellular process that bears striking resemblance to photocopying. RNA, or ribonucleic acid, is a molecule closely related to DNA, the molecule that encodes all genetic instructions for living things, including recipes for proteins integral to a functioning body. However, DNA and its encoded instructions are trapped within a holding container, the nucleus, while proteins are manufactured outside the nucleus in a completely different part of the cell. That’s where messenger RNA (mRNA) comes in. mRNA is a special type of RNA that makes a copy of DNA’s protein-making instructions and delivers them to the factory of the cell, which uses the copied instructions to produce proteins.
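The information relay described above can be sketched in a few lines of Python. This is a toy illustration of the flow from DNA to mRNA to protein (real cells use enzymes and ribosomes, and the codon table here is only a tiny excerpt of the real one):

```python
# A toy model of the DNA -> mRNA -> protein relay.
# CODON_TABLE maps three-letter mRNA "words" (codons) to amino acids.
CODON_TABLE = {"AUG": "Met", "GGC": "Gly", "UUU": "Phe", "UAA": "STOP"}

def transcribe(dna_coding_strand: str) -> str:
    """mRNA carries the same message as DNA, with the letter U in place of T."""
    return dna_coding_strand.replace("T", "U")

def translate(mrna: str) -> list:
    """Read the mRNA three letters at a time, building a protein until a stop codon."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        amino_acid = CODON_TABLE[mrna[i:i + 3]]
        if amino_acid == "STOP":
            break
        protein.append(amino_acid)
    return protein

# The DNA recipe "ATGGGCTTTTAA" is copied into mRNA, then read out as a
# short chain of amino acids -- the beginning of a protein.
peptide = translate(transcribe("ATGGGCTTTTAA"))
```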

The Research Begins

Karikó began her research career studying RNA in Hungary, where she completed her PhD in Biochemistry. She studied a molecule known as “2-5A,” which jump-starts a process that interferes with viral replication. She continued her research in Hungary until her postdoctoral fellowship ran out of money, and she sought a new position in the United States.

The move was anything but easy — at the time, the Hungarian government allowed citizens to take the equivalent of just $100 out of the country, not enough to start a new life somewhere else. Karikó and her husband sold their car on the black market and sewed the proceeds into their two-year-old daughter Susan’s teddy bear, which they carried to the U.S.

Katalin Karikó in 1985

Several years later, Karikó began working at Penn as a research assistant professor with Dr. Elliot Barnathan, a cardiologist, focusing on mRNA. At the time, RNA was not an attractive research area — the molecule is notoriously tricky to work with because it quickly disintegrates in both laboratories and cells. The researchers’ grant proposals were repeatedly rejected because others didn’t see their work as worthwhile. Yet Barnathan and Karikó persisted, and eventually got their breakthrough: by delivering mRNA into cells, they could induce the cells to make a protein they did not typically manufacture. “I felt like a god,” remembers Karikó. The celebration was short-lived, however. Barnathan left Penn for a job at a biotech firm in 1997, leaving Karikó with insufficient grant support to stay at the university unless she could secure another lab to work in.

Undeterred, Karikó found places to continue her research; she notes her gratitude for her colleague David Langer, a neurosurgeon who convinced his department’s chairman to provide lab space and a salary for her. Karikó found refuge — and joy — in the lab. It would be years until she saw large returns on her work, but her next research partner would catalyze the process.

Meanwhile, in Immunology…

Drew Weissman identified his interest in basic scientific research — the early-stage research that lays a foundation for future applications — early in his academic career. By the time he received his MD-PhD in Immunology and Microbiology from Boston University, he had homed in on dendritic cells, which recognize foreign bacteria and viruses, then initiate immune reactions to fight off the invaders.

In 1990, Weissman began a fellowship at the National Institutes of Health (NIH), where he worked in Dr. Anthony Fauci’s lab. Fauci was not yet a household name but carried weight in the field as director of the National Institute of Allergy and Infectious Diseases (NIAID), where he led a lab that studied the immunology of HIV. Weissman developed his own section within Fauci’s lab, where he had free rein to pursue a wide range of research interests — anything except vaccine research, which wasn’t performed in the lab.

“Of course,” Weissman says, “That meant when I got my own lab at the University of Pennsylvania, I wanted to study vaccines.” That decision primed him for precisely the basic research that would translate to a massive impact on the COVID-19 pandemic.

Chocolate and Peanut Butter

When Karikó and Weissman teamed up, both researchers remember the work itself as what kept them collaborating. “It was stimulating,” says Karikó, and that was important because in the beginning, it was just Weissman and her in the lab. Karikó remembers reading papers at 3 a.m. and receiving an email from Drew, who was also awake. She felt solidarity with her research partner, both sleep-deprived but determined to press on.

By its nature, Karikó and Weissman’s research was cross-disciplinary. Karikó’s specialty in the molecular biology of RNA complemented Weissman’s expertise in vaccinology and immunology. Karikó produced the RNA while Weissman grew and worked with the cells. They inserted mRNA into dendritic cells and found that the dendritic cells were incredibly good at taking in mRNA, reading it, and producing protein based on the mRNA code. But when they tried to translate the technique to live mice, it didn’t work. The mice got sick; they looked lethargic and stopped eating.

But Karikó and Weissman saw progress in each experiment. “We were getting interesting results, so they just kept leading us on,” Weissman says. Though it was tough to find funding — private investors also showed a lack of interest — Drew had grants to support his lab, and they were able to scrape together funding (largely from NIH) to continue their research.

Weissman’s experience in immunology led them to an explanation for the mice’s poor reaction — a phenomenon previously only thought to be caused by DNA. Research had shown that when foreign DNA is introduced into the body, the immune system flags it as an intruder and triggers a serious inflammatory response. The first time Weissman added mRNA to dendritic cells, he saw an incredible level of inflammation, and his experiments showed that mRNA triggered the same immunogenic effect in mice, causing a dangerous inflammatory reaction.

Along with the excitement at figuring out the reason for the roadblock, Karikó recalls a sense of dread. She had dedicated a decade to developing mRNA with the hope of creating a therapy, the whole time unaware that the mRNA she produced had dangerous effects. Was it all for nothing?

Weissman and Karikó in 2015

The Ol’ Switcheroo

Karikó’s experience synthesizing mRNA enabled her to make structural changes to the molecule, so she and Weissman wondered if they could alter the mRNA just enough that the immune system wouldn’t recognize it as an intruder. They zeroed in on nucleosides, the building blocks of RNA — which, coincidentally, Karikó had worked on during her PhD research. They had also found that another form of RNA, transfer RNA (tRNA), did not trigger an immune reaction the way synthesized mRNA did. By substituting tRNA’s modified nucleosides for standard mRNA nucleosides, one at a time, they could potentially identify the source of mRNA’s dangerous immune response.

First, Karikó had to acquire the molecules, which presented some limitations. These molecules aren’t available at the local grocery store; scientists acquire materials from chemical supply companies, and Karikó found that only ten different types were available to order. She got all of them, crossed her fingers, and hoped that one would be right. Of the ten Karikó tested, only three were viable, leaving her with low odds that one of them would be the key to circumventing the mRNA immune response.

Astoundingly, one of the three did exactly that: a molecule found in tRNA called pseudouridine, a slightly modified version of the mRNA building block uridine. When Karikó tested mRNA made with pseudouridine, the immune system’s alarm bells were silent. And not only that — inserting the mRNA into cells yielded a tremendous amount of protein production. That one change opened the door to a huge number of therapeutic applications. Karikó and Weissman’s long hours and late nights had paid off.

In a reversal of his usual role as the skeptical and grounded half of the duo, Weissman was sure the discovery would be a big deal, and invitations to give scientific talks would soon pour in. But when he and Karikó published their findings in 2005, it barely registered as a blip on the scientific community’s radar. They later found out that an editor of the scientific journal had even needed to advocate for the paper to be published because other editors did not see the merit of the discovery.

Yet Karikó and Weissman knew the potential. They patented the modified nucleosides and even started their own small biotech company, RNARx. In 2012, Karikó and Weissman showed that their modified mRNA was not only safe to inject in animal models but could actually make them healthier than before. Translation to humans seemed possible, particularly as a platform for vaccines. Because it disintegrates quickly, mRNA can compel cells to produce a preview of a virus without sticking around long enough to disrupt cellular processes.

The scientific community began to recognize mRNA’s merit as research on a Zika vaccine showed promise, and later, applications of mRNA vaccines against other pathogens like the influenza virus and HIV were successful in animal models. However, funding for vaccine research is tough to generate because monetary return on investment is usually small, so it was slow going. In 2020, however, everything changed.

RNA Responders

Suddenly, funding for vaccines was abundant as the world scrambled to combat the 2020 COVID-19 pandemic. Research was fast-tracked. Weissman’s lab began working on a vaccine the day the virus’ genetic sequence was released. Evidence of mRNA-based vaccines’ effectiveness began to emerge, and though exciting, the results were not surprising to Karikó and Weissman.

The validation was a long time in coming. Each celebrated in their own way — Karikó with Goobers chocolate-covered peanuts and Weissman with takeout from an Italian restaurant. In December 2020, two mRNA vaccines — made by a Pfizer-BioNTech partnership and Moderna — were the first SARS-CoV-2 vaccines to receive emergency use authorization from the U.S. Food and Drug Administration (FDA), the fastest vaccines to ever be developed.

Karikó and Weissman are quick to emphasize that their research is built on a foundation of the work of scientists before them. mRNA was discovered in 1961, and the invention of the PCR technique in the 1980s, which generates millions of copies of DNA or RNA from a tiny sample, allowed Karikó to synthesize cleaner, better RNA a decade later.

The NIH grants that funded Karikó and Weissman’s research were also essential to the development of mRNA therapeutics. However, it is important to acknowledge the magnitude of strain that the initial lack of funding placed on the researchers. While Karikó and Weissman’s determination and clearsightedness unlocked the potential held by mRNA, their struggle to obtain funding is emblematic of countless researchers who hold the promise to make lifesaving discoveries but find themselves without the resources to do so. Robust federal funding can help deliver on that promise.

Today, Karikó and Weissman are not resting on their laurels. Weissman’s lab at Penn is already looking toward the next pandemic, investigating a pan-coronavirus vaccine that will protect against coronaviruses that appear in the future and developing a gene therapy that will allow widespread treatment for a variety of diseases. Karikó, currently overseeing mRNA work at BioNTech, also sees applications of mRNA technology in treatment for many diseases, including multiple sclerosis and sickle cell disease. Back in their conversations by the photocopier, neither could have foreseen the magnitude of payoff their work would generate. Now, in the second year of the COVID-19 pandemic, and in the midst of a surge of mRNA therapeutics research, it is clear that this technology — which has already saved millions of lives — will save many more lives in the years to come.

By Gwendolyn Bogard

2021: The Fast and the Curious

AWARDEES: Stephen Checkoway, Tadayoshi Kohno, Karl Koscher, and Stefan Savage

FEDERAL FUNDING AGENCIES: National Science Foundation

Once upon a time, on a sunny day at the University of Washington’s (UW) campus, a graduate student typed a few lines of code into a laptop. Nearby, a car started. Its doors unlocked. And just like that, with no need for a key to unlock the car or start the ignition, another graduate student was able to get in and drive away. It could have been a scene from a heist movie. But those graduate students weren’t trying to steal the car they were hacking into, and neither were their partners at the University of California, San Diego (UCSD), 1,200 miles away. They were trying to make it—and millions of cars just like it—safer.

Led by Tadayoshi “Yoshi” Kohno and Stefan Savage, along with lead PhD students Stephen Checkoway and Karl Koscher, a team of researchers from UCSD and UW proved that internet-connected vehicles could have their critical functions (including the engine, lights, and brakes) overridden by a remote attacker via a range of digital pathways. Their trailblazing work, published in a pair of landmark papers in 2010 and 2011, led to a revolution in how automakers, the federal government, and other stakeholders approached automotive security.

UCSD North and UW South

All heists need a mastermind, and this one began with two: Tadayoshi Kohno and Stefan Savage.

Tadayoshi Kohno. Image credit: Mark Stone/University of Washington

A seasoned karate practitioner and instructor who grew up in Boulder, CO, Kohno sees similarities between the martial arts discipline and computer security, both of which are about pitting two sides against each other. “For almost as long as I can remember, I’ve been interested in computer security and cryptography and the cat-and-mouse game between the adversary and the defender,” he says.

Stefan Savage. Image credit: Alex Matthews/UC San Diego Qualcomm Institute

Savage’s road to computer security took quite a few more twists and turns. Born in Paris and raised in Manhattan, Savage studied history in college before becoming involved in computer science. He credits his humanities background, particularly its emphasis on writing persuasively and defending ideas, for giving him the tools that later helped him to succeed in the sciences. He tells his students that, along with performing good research, understanding how to tell a story and frame a problem are just as key for developing careers.

Before graduating and taking a position at the University of Washington, Kohno was a PhD student at the University of California, San Diego. Savage did the opposite, attending graduate school at UW before taking a faculty position at UCSD. “UW and UCSD have this great tradition of exchanging students and faculty,” says Savage. The two computer science departments are so enmeshed that UCSD students sometimes jokingly refer to the University of Washington as “UCSD North,” while UW students refer to the school in San Diego as “UW South.” In fact, many other members of the team have various connections with both schools.  

Kohno wasn’t Savage’s student at UCSD, but they knew each other, and they would occasionally talk about computer security-related topics. One area of interest to both of them was the increasing connectivity of cars. At the time, OnStar was prioritizing direct-to-consumer marketing, and Savage and Kohno discussed how it might be fun to look into the telematics system. But they were both busy with other projects and pushed the idea into a future “someday.”

Motion’s Eleven: Assembling the Team

Karl Koscher. Image credit: Tara Brown Photography / UW Alumni Association

A few years went by. Kohno accepted a faculty position at UW and new students joined his lab, including Karl Koscher. Now a research scientist with UW’s Security and Privacy Research Lab, Koscher brought a wealth of experience with embedded systems (computers that don’t necessarily announce themselves as computers, like those in a car) to the team. According to his teammates, he also has a time-saving superpower: “When you’re looking for bugs and vulnerabilities, there are all kinds of techniques, but they take time,” says Savage. “And there is a thing that Karl has that very few people have, which is this incredible intuition about where to start hunting for bugs.”

Stephen Checkoway. Image credit: Rosen-Jones Photography

Another key member of the team at UCSD, Stephen Checkoway, now an assistant professor at Oberlin College, had just finished a grueling project with Hovav Shacham — another researcher who would join the car hacking team — investigating the vulnerability of voting machines to hacking. Checkoway was exhausted, but when Savage approached him about the project, he ultimately couldn’t say no. “It’s pretty easy to talk me into research,” he laughs.

In a coincidence guaranteed to please fans of heist movies, the ultimate team was composed of eleven individuals representing a vast range of experience. On the UCSD side, there was Stephen Checkoway, then-postdoc Damon McCoy, research staff member Brian Kantor, then-master’s student Danny Anderson, professor Hovav Shacham, and Stefan Savage. On the UW side, there was Karl Koscher, Alexei Czeskis, and Franziska Roesner, all PhD students at the time, professor Shwetak Patel, and Tadayoshi Kohno. Although the teams were working in two separate states, every member worked closely together, sharing discoveries and ideas through frequent conference calls and over the group chat.

As the project’s leads, Checkoway, Kohno, Koscher, and Savage worked together to set the direction of the research. However, all four stress that the project’s success was only possible because of the combined efforts of every member of the team. “We had a shared vision and shared belief in the potential of this project,” says Kohno, “and everyone believed in each other and knew that this was going to be a lot of work.”

Another key member of the plan, the funder, was a little more hesitant to get on board. The team applied for a National Science Foundation (NSF) Cyber-Physical Systems (CPS) grant. Back then, according to Koscher, the CPS program was focused more on the power grid and related systems than on connected devices. The reviewers didn’t have a firm grasp on what the researchers were planning to do—“we didn’t either,” jokes Savage in a nod to the project’s curiosity-driven, experimental quality—and there was an erroneous belief that the industry must be taking care of the problem. Ultimately, though, the proposal was approved under a different funding stream, NSF’s Trustworthy Computing Systems (TWC) program, with one catch: NSF wouldn’t buy the cars.

Using private funds, the researchers were able to buy two identical cars, one for each campus. Although at the time the research was published the team declined to identify the cars, it has since been revealed that they bought two 2009 Chevy Impalas, a car manufactured by the American automotive corporation General Motors (GM). All the members of the team, however, emphasize that cybersecurity and privacy for automobiles was not strictly a GM problem. In fact, the Impala was chosen in part because it was representative of many cars, made by many different manufacturers, on the market at the time. The UW team named their car Emma, while the UCSD team named theirs Vlad. Vlad the Impala.

Once all the players were assembled, the funding was in place, and cars were acquired, the team had to figure out what to do with them. And to do that, they needed to understand what a modern car is… and what it isn’t.

Members of the UW team, including Alexei Czeskis, Karl Koscher, Tadayoshi Kohno, and Franziska Roesner

Cracking the Code

When we think of a car, many of us still think of a mechanical device—a gas-powered engine, controlled by a steering wheel, gearshift, and brakes, on four wheels. And for most of their history, that’s what cars were. But no longer, according to Savage. “A car is a big distributed system that has wheels connected to it.” In fact, Savage continues, “It’s probably the most complex distributed system that you personally own.” By “complex distributed system,” Savage means a network of computers, and not just two or three. In their 2010 paper, the team quotes research that suggests that the average luxury sedan is controlled by 50-70 independent computers, called Electronic Control Units (ECUs), all of which communicate through one or two wires, an internal “party line” called the Controller Area Network (CAN) bus. The first thing the researchers had to figure out was the party line’s language.

Using a tool built by Koscher, the researchers were able to tap into the CAN bus using the car’s Onboard Diagnostics II (OBD-II) port, a standard connector used for emissions and other diagnostic tests. Kohno likens the process that came next to learning a truly alien language. “There’s lots of different ways to learn a foreign language,” he says, “but let’s say no one has ever learned this foreign language before. And you’re suddenly plopped into a foreign planet and you’re trying to figure out what people are speaking.” What would you do next? You’d want to be able to observe your new alien neighbors interacting with each other—Kohno imagines a café.  

So you sit in the café (i.e., tap into the CAN bus) and observe, Kohno says. “You see someone say something and you see they’re brought coffee and a cake. And you watch someone else, and you see they’re brought tea and some other type of pastry.” This teaches you something about the alien’s grammar. The next step is to try speaking.

Kohno continues the metaphor: “We saw someone do something and they got a coffee and a cake, so we might just repeat half of that sentence and see which we got back. Did we get the coffee or the cake? And that gives us a little more understanding of the language that’s being used within the vehicle.” As they gained knowledge of the car’s grammar, the researchers were then able to expand their vocabulary through random trial and error, a process called “fuzzing.” “You know the sentence structure begins ‘may I please have a [something],’” Kohno continues, “so we would just say ‘may I please have a—’ and then generate random syllables and observe what happens as a result.”
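The fuzzing process Kohno describes can be sketched in a few lines of Python. This is a hypothetical simulation, not the team's actual tooling: the "ECU," the message IDs, and the behaviors are all invented for illustration, standing in for a real car network you would probe through the CAN bus.

```python
import random

# A toy "ECU" standing in for the real car network. In reality, each
# message on the CAN bus has an 11-bit ID and up to 8 bytes of data;
# the IDs and effects below are invented for this sketch.
def toy_ecu(msg_id: int, data: bytes) -> str:
    if msg_id == 0x3C0:                              # hypothetical horn message
        return "horn_on"
    if msg_id == 0x120 and data[:1] == b"\x01":      # hypothetical unlock message
        return "door_unlock"
    return "no_effect"

def fuzz(trials: int, seed: int = 0) -> dict:
    """Send random CAN-style frames and record which IDs produced an
    observable effect -- the trial-and-error 'vocabulary building' step."""
    rng = random.Random(seed)
    findings = {}
    for _ in range(trials):
        msg_id = rng.randrange(0x000, 0x800)                 # random 11-bit ID
        data = bytes(rng.randrange(256) for _ in range(8))   # random 8-byte payload
        effect = toy_ecu(msg_id, data)
        if effect != "no_effect":
            findings.setdefault(effect, set()).add(msg_id)
    return findings
```

Run enough random "sentences" and the IDs that do something (honk the horn, unlock a door) surface on their own, which is why fuzzing works even when no one has ever documented the language being spoken.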

As the researchers learned more about the members of the CAN bus—the ECUs that controlled nearly everything in the car, from the door locks to the steering to the braking system—they became able to take individual components, reverse-engineer the software, and replace it with their own. Along the way, they discovered that the interconnection of all the ECUs on the CAN bus meant that if they could take over one ECU, they could functionally take over any of them.

“Once you find a flaw, it’s game over,” says Savage. “You control everything.”

A Bug in the System

Tapping into the OBD-II port allowed the team to learn what they could do once they’d taken the car over remotely (in brief: nearly everything). Hacking the car in that way, however, required direct physical access. The next step was to figure out all the other ways they could attack the car.

Using the knowledge of the car’s internal workings they had acquired through the long, slow process of learning its language, the team discovered a multitude of ways they could exploit the car without ever having to touch it. Some methods required indirect physical access—that is, someone (not necessarily the attacker) had to access the car. Standard tools used at dealerships and mechanics could be remotely compromised such that, when they were used on a car, they could then covertly take it over. The team also discovered a vulnerability in the CD player that allowed them to encode a seemingly normal CD such that it could infiltrate the car (the song they used for the test? Beethoven’s Ninth).

Other attack vectors didn’t require anyone to touch the car at all. For example, the team was able to hack the car’s telematics system. Initially intended to provide assistance in case of an emergency, a telematics system, such as OnStar, requires that a car be equipped with its own phone number. Once the team had an individual car’s number, they could call it and play a carefully coded sound that would allow them to take over key functions.

How could this happen? How could millions of cars be so vulnerable to attack? Savage explains that one of the most startling things they discovered in their research is that, because of the way supply chains are organized, no car manufacturer has the full code to its cars, because no car manufacturer makes all of its own parts. On the contrary, most of a car’s ECUs are outsourced to third-party companies, who protect each component’s code as their intellectual property. Furthermore, these vendors aren’t selling to only one company or consumer. Consequently, their products may have more functionality than any individual car manufacturer is aware of. And that’s where you tend to get vulnerabilities.

“It was at this interface between bodies of code written by different entities, where one assumed there was a more restrictive use, and the other offered more capability, where we would always gain purchase,” Savage says. “Some interface allowed you to do more than GM had any conception of.”

Adventures in Car Hacking  

Listening to the team talk, you get the sense that, after they cracked the car’s code, working on this project felt like careening from one adventure to another. There was the time when the team was conducting experiments with the car’s horn and a campus police officer came over to tell them, in so many words, to knock it off (they replaced the horn with a buzzer during experiments).

There was the time when Checkoway accidentally hacked into the UW car’s audio and caught snippets of the UW team’s private conversation. “I was trying to figure out how I could turn on the microphone and then stream the audio from the car’s cabin back to my computer,” he says. He tried it with the UCSD car, got it to work, and then decided to test it on the UW car. “I turned on the microphone, and then realized I could hear them.” Because of the covertness of the attack method, however, the UW team didn’t know that Steve was able to hear them. He quickly disconnected and told them about the accidental hack later. He recalls the UW team’s shock: “They were very disturbed, and rightly so!”

And then there was the time when the UW team requisitioned a defunct airport.

For the bulk of the project, the team worked on individual parts of the car, spread out on lab tables, or with the full cars on jacks. It was important to perform a road test to ensure that all of their hacks would work while the car was driving under normal conditions, but it wasn’t like they could just take the car on the highway. They needed to consider the safety of both their driver and other cars on the road.

Ultimately, it was a creative Program Director named Melody Kadenko who saved the day. She identified a state reciprocity provision in Washington law that allowed the University of Washington to requisition access to a decommissioned airport runway in Blaine, WA. Alexei Czeskis, a member of the team and a motorcycle rider who had both the appropriate safety gear and a certain tolerance for automotive risk, enthusiastically volunteered to be the driver.

After some wrangling with UW’s legal team and the institution of a host of backup safety measures, the test was on. Czeskis rode in the test car with a laptop hooked up through the OBD-II port, while Koscher controlled the laptop from another car driving beside him. Communicating through two-way radios, Koscher inputted the protocol that prevented the use of Czeskis’ brakes. It worked. They had real-life control.

The road test at a decommissioned airport

Progress, Not Panic: A Restrained Approach Leads to a Big Impact

The team knew they had uncovered something important, but they weren’t initially sure what to do with it. “In the beginning, we had no idea how to disclose,” says Savage. It might seem strange to think about now, but before the team disclosed their findings in 2010, none of the major car manufacturers had given much thought to the security risks of increasingly networked cars. None of the major automakers had product security groups, there were no industry standards for the cybersecurity of vehicles, and the U.S. regulator of record (the National Highway Traffic Safety Administration [NHTSA]) had no cybersecurity guidelines or evaluative capability.

The team eventually was able to connect with the right people at General Motors and the federal government. Meanwhile, they also faced another, more philosophical question: whether to “name and shame” their object of research, the Chevy Impala, or to be more muted in their approach. Ultimately, the team declined to name the cars in their research, opting instead to approach General Motors privately.

“The decision to be more subdued in our approach was because we felt that was the most responsible way to share the results with the public and the various stakeholders,” says Kohno. “Because ultimately we wanted to see an improvement in the cybersecurity of future vehicles across all manufacturers, and work by the government to secure vehicles, and we didn’t want to see panic.”

That worry about causing a panic also led the team to push their research showing that cars could be hacked remotely to the second of the two papers. “The [first] paper could have said—but did not say—‘And we can currently remotely take over three million cars that are on the road today,’ which would have been an accurate statement, but not really a scientific statement,” Savage says. “It would be correct, but it wouldn’t advance people’s understanding of the problem.”

Thanks to the tireless, meticulous work performed by Checkoway, Kohno, Koscher, and Savage, along with the UW and UCSD teams, understanding of the problem has advanced. “You can’t talk to someone inside the automotive security industry who does not know these papers intimately,” says Savage. Together, the team’s two papers unlocked (if you will) a new sense of urgency within the automotive industry, prompting manufacturers to rethink car safety concerns and to adopt a range of new security practices as standard procedures. Following the team’s disclosure of their work, GM appointed a Vice President of product security to lead a new division. Other car companies followed suit, as did the federal government. In 2012, DARPA launched a massive program, the High Assurance Cyber Military Systems (HACMS) project, with the goal of creating hacking-resistant cyber-physical systems.

Road test warriors

“This research is certainly the most impactful work that I have done,” says Checkoway in a conversation with Koscher, who immediately replies: “Pun not intended.”

From door locks and refrigerators to baby monitors and thermostats, the devices with which we surround ourselves are becoming increasingly connected. In the future, the team hopes that researchers will continue looking into other technologies that are integrated into our lives but that may not have received the same level of security analysis they gave automobiles. “We will never encounter a world, in my mind, where people stop finding vulnerabilities,” says Kohno. “It is better to be in a world where people are finding vulnerabilities in an ethical and responsible way and are fixing them.”

By Haylie Swenson

2021: The Secrets of SERMs

AWARDEE: V. Craig Jordan

FEDERAL FUNDING AGENCIES: National Institutes of Health, U.S. Department of Defense

Sometimes, it’s the small moments of serendipity that define one’s path through life. So it is with V. Craig Jordan.

Jordan is known as the “father of tamoxifen,” a groundbreaking medication for breast cancer. He pioneered the scientific principles behind a new class of drugs that has helped save or improve millions of lives. And when he looks back on his accomplishments, what strikes him are the winding paths and unexpected developments that came together to produce a scientific success story.

A Winding Path to Scientific Insights

V. Craig Jordan. Photo credit: MD Anderson Cancer Center

A Texan by birth, Jordan grew up in the United Kingdom. He admits to being a lackluster student as a teenager, but he was labeled by one teacher, in a university letter of recommendation, as a “VERY unusual young man” due to his singular scholarly passion: chemistry. By that time, Jordan was independently teaching university-level biochemistry to his classmates, and he won a place at the University of Leeds in the U.K. to pursue pharmacology. As a doctoral candidate in the early 1970s, he became intrigued by the idea of using drugs to treat cancer. At the time, for cancer doctors, chemotherapy was king. Tamoxifen, unlike chemo, was not designed to kill cancer cells, so most medical professionals then thought it was unlikely to save lives. Jordan’s idea was so unorthodox, he initially struggled to find a dissertation advisor; he found one in Arthur Walpole at the chemical company ICI (which would go on to become part of AstraZeneca).

Another obstacle at the time was sexism in research. Though breast cancer can affect anyone with breast tissue, it was largely considered one of the “women’s diseases,” Jordan recalls, which were not necessarily as fashionable in pharmacology as diseases such as coronary heart disease. Despite that, Jordan became curious about antiestrogens, substances that prevent cells from making or using estrogen, a hormone critical to female sex characteristics, menstruation, and pregnancy. In 1972, with some support from the National Institutes of Health, he began a two-year stint as a visiting scientist at the Worcester Foundation for Experimental Biology in Massachusetts to study the compound ICI 46,474, a “failed morning-after pill.” While there, he conducted animal studies on the compound, studies that helped pave the way for ICI 46,474 to receive approval as the first targeted therapy for breast cancer. It is now known – particularly to breast cancer patients and their families – as tamoxifen. The failure of tamoxifen as a contraceptive was one of what Jordan would describe as a series of disappointments that would ultimately turn into scientific insights. It was at WFEB that “everything changed,” Jordan says. “There, I was allowed to put my ideas into action.”

A 1998 Chicago Tribune profile underscores the extent to which Jordan’s basic research enabled the first major clinical trial showing that tamoxifen reduced breast cancer rates by nearly half for patients at high risk: “His team’s work on lab rats has been the basis of practically every advance related to the drug, going back to his discovery in 1973 that it could treat cancer.”

Vital U.S. Funding Support

After his stint at WFEB, Jordan returned to the University of Leeds, this time as faculty. Within a few years his work was noticed by the University of Wisconsin. According to Jordan, the U.S. government’s support of scientific research was a major draw. “I have lots of stories about how my whole life has been improbable – how none of this should have happened in the first place and there should have been no science whatsoever,” he says. “It took federal funding to make this happen.”

Jordan in the lab

At Wisconsin, he formed a “tamoxifen team” that carried out scientific experiments to further test the drug’s safety and effectiveness. A chance encounter with Dr. Mara Lieberman, who was studying estrogen response in animals, yielded insights into a potential new way for Jordan to test antiestrogens. Jordan hired Lieberman to his newly created team, and they collaborated to further define the way antiestrogens worked in living tissues.

Early grants were dedicated to studying the mechanisms of antiestrogen action and using animal models to predict drug effects. By the mid-1980s, tamoxifen was poised to become widely prescribed as a breast cancer therapy, but Jordan and others had concerns about potential side effects, such as bone loss or heart attacks. His lab initiated a series of experiments to test this idea.

Solving a 70-Year Mystery

In a 2019 interview, Jordan recalled, “As a pharmacologist I’m interested in mechanisms and how to explain the strange results we might get in the lab. One of these strange results came when I was at the University of Wisconsin (WI, USA), showing that tamoxifen would switch on and switch off sites around a woman’s body. In the experimental mouse model, it would cause the uterus to grow, which was odd, but breast cancer would not.”

This work led to a body of knowledge around a new class of drugs called selective estrogen receptor modulators, or SERMs. What was so strange about SERMs – and so unpredictable – was the fact that they affected certain tissues differently than others. In 1990, Jordan published his vision of the new group of medicines. Later, he gave a talk entitled “The World Turned Upside Down,” a nod to the unanticipated victory of U.S. revolutionaries at the Battle of Yorktown, in reference to the surprising finding that estrogen inhibited, rather than stimulated, the growth of certain breast cancer cells.

In some tissues, SERMs behaved like estrogen. But in others, they blocked its action. This was confounding. Estrogen is an important hormone, yet contradictory; as Jordan explains, it can be both helpful and harmful. It is necessary for reproduction and also helps maintain body temperature, safeguard the heart from the buildup of plaque in coronary arteries, preserve bone density, and strengthen vaginal health. However, it can also promote breast and uterine cancer.

Thus, the conventional wisdom at the time was that estrogen would prevent osteoporosis and heart disease – which led to the reasonable assumption that using an antiestrogen to treat breast cancer would promote these conditions in patients. Surprisingly, this turned out not to be true, and Jordan’s studies showed it. Though estrogen can fuel breast cancer growth, it can also kill vulnerable breast cancer cells under the right conditions. In figuring out how estrogen can both promote and prevent breast cancer, Jordan and his colleagues “solved a 70-year mystery.”

Unraveling SERMs’ Secrets

Tamoxifen is perhaps the best-known SERM; many breast cancer patients rely on the drug for its estrogen-blocking properties. As with most targeted drugs, however, the long-term problem with tamoxifen is acquired resistance. Jordan and his teams devoted themselves to unraveling the mechanism and timeframe for this to occur in patients. Among the important principles the team learned – in particular, through the experiments of Marco Gottardis, then one of Jordan’s trainees – is that while tamoxifen blocks breast cancer growth, it also stimulates endometrial cancer growth in animals. This ultimately led to a change in clinical care and enhanced patient safety. Other colleagues who performed key experiments in his labs at multiple institutions included Anna Riegel, Doug Wolf, S.Y. Jiang, Mei-Huey Jeng, Bill Catherino, Anait Levenson, Joan Lewis, Eric Ariazi, Ping Fan, Philipp Maximov, and Balkees Abderrahman.

Said Jordan in a video produced by his current institution, The University of Texas MD Anderson Cancer Center, “We needed to find out everything we could about tamoxifen.” Tamoxifen is now on the World Health Organization’s list of essential medicines. “We worked out exactly the strategy to use to target only those tumors that have estrogen receptors,” he added. “It was the first targeted therapy.”

With the critical contributions of the Jordan lab experiments, scientists conducted a large-scale study published in 1998 demonstrating that tamoxifen can prevent breast cancer in women as young as 35 who are at high risk. No drug had ever before been shown to help stave off breast tumors like this.

There were more scientific mysteries to solve. The Jordan lab was integral to the development of the SERM raloxifene to prevent and treat the bone disorder osteoporosis, which makes bones weaker and increases the chance of fractures. But when the lab tried to publish some of the experiments, osteoporosis journals initially rejected the findings, saying that since estrogen prevented bone loss, it didn’t make sense that antiestrogens would do the same thing.

Raloxifene has also proven effective in the prevention of breast cancer without raising the risk of endometrial cancer after scientists tested it in post-menopausal women who, aside from their age, had no other apparent risk factors. A large-scale clinical trial called the Study of Tamoxifen and Raloxifene (STAR) demonstrated that both compounds reduced breast cancer incidence in high-risk women, and that raloxifene produced fewer adverse effects.

A Maverick in Science

A letter from Princess Diana to Dr. Jordan. “I wish you all the strength to press ahead with your vital work.”

Jordan’s work through the years to unravel the secrets of SERMs has been funded by NIH and the Department of Defense, with additional support from philanthropic organizations. He has published hundreds of peer-reviewed papers on antiestrogens, as well as contributed to many books and conferences and served on several editorial boards. His work has resulted in multiple scientific awards. Perhaps the most exciting was an appointment by Queen Elizabeth II as a “Companion of the Most Distinguished Order of St. Michael and St. George” for services to women’s health. Jordan also befriended another royal, Princess Diana, after he was tasked with organizing a conference in Chicago in her honor. He subsequently became Northwestern University’s first endowed chair named after the princess in the wake of her tragic death.

“Sometimes you need mavericks in science that don’t follow what everyone else does,” says Gottardis, now at the Janssen Pharmaceutical Companies of Johnson & Johnson. “[Jordan] was lucky to get funded by NIH; otherwise, this wouldn’t have happened. Somebody believed in him.”

In 2014, Jordan went full circle: He returned to Texas, the state of his birth, to join MD Anderson. Though he built a decades-long career in the United States, Jordan maintained another connection with the country in which he spent his childhood, the United Kingdom. According to an MD Anderson profile, “For the past 50 years, Jordan has led what he calls a ‘double life.’ For his day job, Jordan developed breakthrough breast cancer treatments, pioneering the estrogen-blocking drug tamoxifen, which has been credited with saving the lives of millions of women worldwide. But for most of his career, Jordan also served as a reserve officer in the British Special Air Service (SAS), one of the most elite military units in the world. Founded in 1941, the SAS is the rough equivalent of the U.S. Army Green Berets or Navy SEALs — a small, secretive fraternity of Special Forces soldiers and intelligence officers.”

In 1990, when Jordan wrote his first paper on SERMs, the new class of drugs was an idea. Now five of them have been approved by the Food and Drug Administration, and millions of lives have been saved or improved – which shows that little moments of serendipity can add up to make a big difference.

Dr. Jordan receiving an award from Prince William, The Duke of Cambridge

By Erin Heath

2020: A Spike in Momentum

AWARDEES: Kizzmekia Corbett, Barney Graham, Emmie de Wit & Vincent Munster

FEDERAL FUNDING AGENCIES: National Institutes of Health

“If you’d have waited until the pandemic existed, you’d have waited too long.” This is the assessment of Steven Holland, who directs intramural (in-house) research at the National Institute of Allergy and Infectious Diseases. He should know: He oversees hundreds of researchers in multiple biomedical labs, many of whom are trying to solve pressing public health problems.

“I don’t think it is news to anyone that we pay attention to what is dramatic and dire, and we don’t pay attention to what is simply annoying or trivial. Coronaviruses are split up into seasonal viruses, like the ones that cause the common cold, and those that have taken over the global consciousness—SARS, MERS, and now SARS-CoV-2,” says Holland. Prior to the emergence of the COVID-19 pandemic, only a relatively small number of researchers were working on coronaviruses. Those researchers—including 2020 Golden Goose awardees Kizzmekia Corbett, Barney Graham, Emmie de Wit and Vincent Munster—have used their expertise in SARS, MERS, and other viruses like Ebola to pivot quickly to responding to the COVID-19 pandemic.

A Noteworthy Mentor

Barney Graham has spent the past two decades at NIAID’s Vaccine Research Center in Bethesda, MD, and is currently the center’s deputy director. A turning point for him in understanding coronaviruses came in 2013 after he and his team—which included fellow 2020 Golden Goose honoree Jason McLellan—spent three years defining the structure of a protein on the respiratory syncytial virus (RSV) that is similar to the spike protein, so named for its spiky appearance, in the coronavirus causing global havoc today. Kizzmekia Corbett joined the lab the following year and jumped right into working with coronaviruses with Graham at the tail end of the MERS outbreak. While the push to develop a universal influenza vaccine was generating significant buzz and scientific enthusiasm, Corbett was drawn to coronavirus research, a relatively “quiet” field in which she felt she would have space to grow and hone her skills.

Barney Graham and Kizzmekia Corbett

It wasn’t the first time the two had shared a lab. Corbett first met Graham as a high school intern. “He took me into his lab when I was 18 – no one would have done that,” she recalls. Since then, Graham has served as a mentor to her and others in the lab. “He’s really been one of those people who has in so many ways unselfishly given up his seat at the table [and] propelled career trajectories.”

Now they are close collaborators in the fight against COVID-19. Life has changed for Corbett, who has inspired several media profiles, though she notes “I don’t really think I’ve had time to process it.” Her work at the Vaccine Research Center is gratifying, she says, because she and her colleagues learn, problem-solve, and get better with each new virus.

Spiked Proteins

The spike protein exists on the surface of a coronavirus; it is what the virus uses to attach to and enter a cell. Originally, says Corbett, scientists “didn’t know what spike proteins looked like, because they are unstable in the lab.” Using the knowledge generated from previous work, Graham and Corbett partnered with McLellan and his team to define the structure of the SARS-CoV-2 spike protein. (For more on McLellan, read A Llama Named Winter.) None of them could have predicted just how important their ongoing basic research would become in addressing COVID-19. It gave them, in essence, a head start against the pandemic.

Thus, the NIAID team and their collaborators put their knowledge to work to develop the backbone of a COVID-19 vaccine. The vaccine candidate involves messenger RNA, or mRNA, a kind of genetic material that delivers a stabilized version of the spike protein based on the previous work with the MERS coronavirus. Explained Corbett in a NIAID video, “The messenger RNA will tell the body to present this spike protein and the body will respond by creating an immune response, and hypothetically, if all goes well, then that immune response will then be able to see a novel coronavirus before a person gets infected and prevent that infection.” An existing partnership with a pharmaceutical company, Moderna, also helped speed the process.

“It was a staggeringly short period of time” to develop a vaccine candidate, emphasizes Holland, because the work they had done on similar viruses prepared them so well for SARS-CoV-2. “It was so far out in front of anything that had been done before.” The important question though, he says, was whether the vaccine would prevent disease in humans.

A Dynamic Duo

Emmie de Wit

Another NIAID duo, this one in the NIH’s Rocky Mountain Labs in Montana, have been doing their part to help answer that question and learn more about the nature of SARS-CoV-2. For Emmie de Wit, the fascination with viral outbreaks began in the early 2000s in the Netherlands, where an emergence of bird flu occurred while she was working toward her PhD at Erasmus University Rotterdam. She and her lab mates went into action to identify the virus and work on patient samples. The work became the basis for her thesis project and the foundation for a career spent investigating outbreaks.

The PhD program was fortuitous for another reason: It is where she met Vincent Munster. The pair married in 2009.

“While Emmie did a lot of work on molecular biology in her thesis, I was trying to understand where viruses come from,” Munster recalls – in particular, how they spilled from the natural world into humans. Among his projects was a stint working in the Middle East on camels during the MERS epidemic; though camels didn’t get ill with the MERS virus, they spread it quite effectively to humans. In fact, Munster used his starter funds at NIH to acquire three camels, a purchase he recalls having to then explain to new boss Dr. Anthony Fauci, NIAID’s longtime director and now a household name as a leading federal infectious disease expert. For the record, Fauci was supportive.

Vincent Munster

Munster and de Wit have separate research labs at the Rocky Mountain Labs, where they moved in 2009. “We have different skill sets, and when you put them together, they work really well,” de Wit points out. The common thread of their work involves investigating outbreaks. They work with animals to learn about viruses and create tests to figure out whether vaccine candidates and treatments have the potential to work in humans.

With SARS-CoV-2, their skills in developing animal models for previous viruses came in handy, and like their colleagues in Bethesda, they were able to quickly get their COVID-19 response research up and running. When the pair first learned of the new virus, “we immediately dropped everything,” de Wit says. Since that moment in January, using routines devised from their earlier investigations, they have developed new ways to understand the virus and test multiple vaccine candidates and potential antiviral treatments.

Promising Results

Thanks to the work of Corbett, Graham, de Wit and Munster, as well as McLellan and many other researchers who have dedicated their careers to the study of infectious diseases, several vaccine candidates have already progressed to Phase 3 clinical trials, generally the final and most rigorous phase of testing, to see if they will work to prevent the novel coronavirus from taking hold in humans. On November 16, Moderna, the company partnering with the NIAID team, announced its initial findings that its vaccine was more than 94 percent effective at preventing COVID-19. Moderna was the second company, after Pfizer, to report such a hopeful finding.

Concludes Holland about his NIAID colleagues: “They were working on what we needed all along.”

By Erin Heath

2020: The Human Immunome: Small Moves Become a Movement

AWARDEE: James E. Crowe, Jr.

FEDERAL FUNDING AGENCIES: Department of Defense, National Institutes of Health

James E. Crowe, Jr.

“We’re basic scientists. We did not set out initially just to make drugs. That’s not what it was about when we started. I’ve just always been interested in how the human immune system even recognizes a virus,” says James E. Crowe, Jr., a well-known infectious disease expert who directs the Vanderbilt Vaccine Center in Nashville, Tennessee. “Now we can study how the body works and have curiosity about that, but also have drugs in the pipeline. It’s the unexpected benefit we get by delving deeply into the human immune system.” Crowe is one of the recipients of the 2020 Golden Goose COVID-19 Recognition for his decades-long pursuit of a better understanding of the human immune system, which is now paying dividends in the fight against COVID-19.

Investigating the Immune System

A pediatrician by training, Crowe spent part of his time in medical school caring for children in developing countries, where he saw the difference that access to health care could make. Vaccines held a particular interest. He was inspired by the advent of the life-changing polio vaccine, so much so that he decided to join the lab of the late Robert Chanock, who led vaccine research at the NIH campus in Bethesda, Maryland. “Because of this deep immersion,” he says, “I became interested in research and its potential to save the world from the consequences of many diseases.” In 1995, Crowe joined the Vanderbilt faculty, where he soon won an award for young investigators from the American Society for Microbiology (ASM). At the same ASM meeting was innovator Craig Venter, famous for his role in the Human Genome Project. Sitting in the audience, listening to Venter’s speech, Crowe began thinking about small moves in science that can lead to a movement. If Venter could do it with the genome, why couldn’t Crowe do it with the immune system?

Well, for one thing: The genes that encode antibodies, proteins that function in the body as soldiers against disease, are more complicated. “Antibodies are formed by a combination of genes. You can make over ten thousand combinations, almost to the scale of the genome, but the three genes used are not stitched end to end,” Crowe explains. The spaces between them, where the antibody genes are joined, are filled with random DNA sequences that lead to many times more combinations. Even more challenging? The antibody genes can mutate. Factoring in those mutations, the scale of trying to characterize the human immunome, the “parts list” of the human immune system, is immense—billions of times larger than the human genome.

The focus on antibodies appealed to Crowe’s sense of wonder at the complex forms and patterns to be found in a biodiverse world. “My fascination with the pattern in nature touches back to my childhood,” Crowe says. “I was a collector of rocks and seashells and stamps, and the idea was to get one of everything. The idea of sequencing every possible antibody gene of the billions possible spoke to me.”

At the time Crowe was beginning to launch the idea, another epidemic was tearing through the globe: AIDS. HIV, the virus that causes AIDS, was garnering significant scientific interest. Crowe received a grant from the National Institutes of Health (NIH) to study B cells, the type of immune cell that makes antibodies, and he began to sequence antibodies in patients who tested positive or negative for HIV. Since then, the field has advanced in areas such as genome sequencing, synthetic biology, and data analysis, and this advancement has enabled the groundbreaking work that Crowe and his colleagues are now doing to understand the human immune system. Despite the potential, “sometimes big science can get criticized for scope relative to return on investment,” says Crowe’s colleague Jennifer Pietenpol, who oversees research at the Vanderbilt University Medical Center. It was a perception that Crowe and his team had to consider as they ramped up their antibody work over the years. However, as Pietenpol shares, “the techniques developed and learnings from big projects typically benefit the entire discovery continuum.”

The Human Genome Project, for example, took 13 years and cost around $3 billion, and it raised similar questions. Today, next-generation DNA sequencers can do the same job in a day for less than $1,000. Motivated by such advances, in 2016 a public-private partnership called The Human Vaccines Project launched the Human Immunome Program, with Crowe as its scientific director. The decade-long, multi-million-dollar effort brings together the expertise of many different disciplines and partners—a clear example of “team science.” A stronger understanding of the immunome could offer the potential to tailor human immune response and better ward off illness.

Advances in immunology have also led to a focus on monoclonal antibodies, lab-produced proteins that can bind to substances in the body that cause disease. Crowe’s lab has been able to generate such antibodies to target viruses that include dengue, Ebola, HIV, influenza, norovirus, respiratory syncytial virus (RSV), rotavirus, Zika virus and others. Antibodies the lab made for Marburg virus and chikungunya virus are now being explored in clinical trials. As with genome sequencing, the isolation and production of such antibodies is becoming easier and cheaper for scientists.

A Serendipitous Delivery

Crowe knew his lab’s approach to antibodies could help when the first U.S. cases of COVID-19 emerged in Seattle—but before his lab could engage, he needed to get blood samples from the patients to his teammates. “We had to get a blood sample from the first U.S. case from Seattle to Nashville on a Saturday night,” Crowe recalls. What followed was a series of calls—and, like many Golden Goose stories, a little serendipity.

“I called my CEO and asked him to call the CEO of FedEx, because I figured they knew each other,” he says, noting that FedEx is also based in Tennessee. “They got it done! Someone in a Lincoln town car drove the sample to my home on Sunday morning, I walked it into work, and we were up and running.”

Crowe and his wife in Italy

Problem solving is a necessity, and not only in collecting viable samples. In Crowe’s case, this work often means devising new strategies on short notice. Case in point: He and his wife departed for a long-planned three-month sabbatical to Italy in February – just before the novel coronavirus swept that country. After 25 fast-paced years at Vanderbilt, Crowe had been excited to have time and space to do some strategic thinking about the high-impact work he wanted to do over the next decade. So he made preparations to manage his lab from afar. These plans quickly changed as COVID-19 spread and Italy became the next major hot spot; Crowe and his wife made it onto one of the last available flights home.

By then, he and his team had worked out the techniques they wanted to use to do the coronavirus antibody work. Fortuitously, the best blood samples arrived the day he got back to the lab. “You have to let the immune response mature over time to get the best antibodies,” he says. “We tracked down some people who had been infected in Wuhan in December and got their samples in March.”

Just how does a scientist track down blood samples in situations like this? One has to be part news junkie, part detective. Crowe and his colleagues would scan the headlines for cases, distribute flyers, and get in touch with research and public health contacts in other cities who could share publicly available information and spread the news of the lab’s efforts. Often, the patients who learned of the project were eager to help. One patient even came twice when called for a blood draw—the second time on the way to the airport at the crack of dawn.

Accelerated Work

As the team collected samples, they were also conducting the science at a vastly accelerated pace. Though Crowe’s work on the Human Immunome Program was the basis for the work his lab is doing to help solve COVID-19, it wasn’t a straight line from one to the other. Crowe also credits work funded by the Defense Advanced Research Projects Agency (DARPA) at the Department of Defense for enabling the lab to respond so quickly and efficiently to the pandemic.

Running in the Alps

In late 2017, the team received a grant from DARPA to take on a challenge: find a way to produce antibodies for any virus within 60 days of collecting a blood sample from a survivor. DARPA focuses on cutting-edge defense-related research and is often known for outside-the-box thinking; in this case, the aim was to protect American troops if they were to confront a deadly virus in the field. “We thought it sounded crazy—this type of work usually takes years—but that’s what DARPA does. It funds the crazy,” Crowe says.

The team decided to attempt the rapid production of Zika virus antibodies after a flare-up of the mosquito-borne disease that caused birth defects in several countries. “We took a sample, started the clock, and made Zika antibodies and fully tested them in 78 days. We had a lot of semi-disastrous stuff happen like instruments breaking, so we had already learned how to make mistakes, face unexpected accidents, and move on.” The team had begun planning a second simulation of an avian influenza outbreak for 2020 but pivoted in January to handling COVID-19 as it became clear how serious the new virus had become. “With COVID-19, we jumped right in. We worked at a scale that was unprecedented because we knew how to do this already. … Staying stuck or quitting is not acceptable. You have to move forward—like in improv comedy. It was bootstrapping with very few assets to start with, and this type of work is very exciting if the team works together well. Each person has to enjoy the freedom of curiosity and extemporaneous ideas, while working up to 20 hours a day with urgent goals and timelines.”

With the blood samples they received from the Wuhan patients, Crowe and his team made thousands of monoclonal antibodies. After selecting the most promising ones and rapidly testing them against the virus in animal models, they sent the leading candidates for antibody tests and treatments to pharmaceutical companies. The first antibody sequences, which went to AstraZeneca, have now led to potential treatments being tested in five different Phase 3 clinical trials. Some of the antibodies also are being readied for clinical trials by the U.S. government’s Joint Program Executive Office for Chemical, Biological, Radiological and Nuclear Defense (JPEO-CBRND). “In the context of this pandemic,” says Pietenpol, “the Crowe team’s pursuit of large-scale antibody science is returning milestone advances for the prevention and treatment of COVID-19.”

Crowe points out the importance of having multiple federal agencies with different missions and approaches, like NIH, DARPA, and the Biomedical Advanced Research and Development Authority (BARDA) supporting scientific research. “[COVID-19] is the best case I’ve ever seen where the agencies were all working together” toward the same goal, he says. He sees it as a testament to what federal R&D can accomplish to help solve pressing public health problems—now and in the future.

By Meredith Asbury

2020: A Llama Named Winter

AWARDEES: Jason McLellan and Daniel Wrapp

FEDERAL FUNDING AGENCIES: National Institutes of Health, Department of Energy

A Llama Named Winter: An Unlikely Partner in the Fight Against COVID-19

Winter the llama

Winter the llama currently spends her days grazing in peaceful retirement on a research farm in Ghent, Belgium, with about 130 of her llama and alpaca friends. But although you wouldn’t know it to look at her, this cocoa-colored, long-legged camelid is already playing a pivotal role in the hunt for effective treatments for COVID-19.

In partnership with researchers at Ghent University, Dr. Jason McLellan, a structural virologist at the University of Texas at Austin, and Daniel Wrapp, a doctoral student in McLellan’s lab, linked a special antibody produced by Winter to a human antibody to create a new antibody that binds to a protein on the coronavirus that causes COVID-19, thus inhibiting the virus from infecting human cells.

The Spike of the Virus

To fully grasp Winter’s unlikely contribution to the battle against COVID-19, it helps to understand the way coronaviruses spread throughout the body.

Scientists have identified hundreds of unique coronaviruses, although most of them circulate in non-human mammals and birds. Seven coronaviruses are known to cause sickness in humans; of these, four cause only mild-to-moderate illness, including some variations of the common cold. The other three have been the cause of more serious diseases: SARS, MERS, and COVID-19.

Coronaviruses get their name from a group of specialized proteins, called spike proteins, which dot the surface of each viral envelope and give the virus the appearance of a crown-like ring, similar to a solar corona. These proteins are what allow coronaviruses to break into human cells, after which they begin to use the machinery of those cells to start replicating. If this break-in is interrupted, the virus can’t infect the host.

Here’s Where the Llama Comes In

Scientists in the 1990s discovered something special about the antibodies produced by camelids, the family of animals that includes llamas, alpacas, and camels, among others. While humans only make one kind of antibody, made up of heavy and light protein chains arranged in a Y shape, camelids produce two. One of these is very similar to a human antibody. The other, called a single-domain antibody, VHH, or nanobody, does not have any light-chain proteins. This makes it much smaller, which is a boon to researchers.

A rendering of a camelid antibody

One of the ways an antibody can disrupt a coronavirus is by binding to key areas on the spike protein. Because they are smaller, the nanobodies produced by camelids can sneak into nooks and crannies on the spike protein while larger antibodies are blocked. These nanobodies also have the advantage of being stable and easy to manipulate. They can be linked with other antibodies, including human antibodies, to increase their effectiveness. Furthermore, nanobodies can be nebulized and used in an inhaler, which is good news for a respiratory illness such as COVID-19.  

Camelids are not the only animals that produce nanobodies: sharks produce them, as well, although biologists suspect that those nanobodies evolved through different processes.

Since the discovery of nanobodies, scientists have worked with camelids (and even some sharks) to develop promising therapies to treat various diseases. McLellan and Wrapp are among those scientists.

Building Relationships

A first-generation college student from a suburb outside of Detroit, McLellan fell in love with structural biology in college before training in the field in graduate school. “In structural biology, we’re trying to determine the first structures of proteins and molecules,” he says. “We’re answering basic science questions about molecules, their structure, their function, but then also trying to take some shots on goal and actually generate some products—some vaccines, some antibodies—that could have an impact on human health.”

As a post-doc, he joined a lab run by Dr. Peter Kwong, who was working on the possibility of a structure-based vaccine for HIV. A quirk of fate led to another fruitful partnership. According to McLellan, there was no room for him in Dr. Kwong’s lab on the fourth floor, so he moved to the second floor. It was there that he met Dr. Barney Graham—another 2020 Golden Goose honoree—who would become a close friend and collaborator.

“He’s a really generous, warm person,” McLellan says about Graham. “He’s just one of the really good people in science.”

McLellan recalls describing his frustrations with structure-based vaccine design for HIV to Graham, who suggested that he try out his ideas on respiratory syncytial virus (RSV), a respiratory illness that can be serious in infants and older adults. McLellan’s work on the structure of the RSV F protein (a protein very similar to the coronavirus spike) caught the attention of a group of researchers in Ghent led by Dr. Xavier Saelens. They reached out about a potential collaboration. Their goal? Identifying and isolating camelid nanobodies that could neutralize RSV.

Along with Graham, McLellan and Saelens signed an agreement to work together on RSV in July 2013. Over the next couple of years, the labs traded material. McLellan’s lab sent over stabilized RSV F proteins, which Saelens’ group then used to immunize a llama from their herd (Winter wasn’t born yet). When that llama produced antibodies, Saelens’ lab sent those back to McLellan’s group, where a first-year graduate student named Daniel Wrapp helped to map their structures.

“That was actually the first crystal structure I ever solved,” says Wrapp, “that first little camelid nanobody that neutralized RSV.”

Prior to this first big solve, back when he was still a senior in college, Wrapp started reading the work on RSV that McLellan was publishing. “This is the coolest thing in the world,” he recalls thinking. “He’s found a way to look at these proteins on an atomic level. He’s figured out how to manipulate them, and make them function—or not function—exactly the way that he wants, and he’s figured out how to manipulate that and leverage it to elicit a particular immune response that will be effective at neutralizing a virus. And I thought that that was incredible.” Wrapp applied to Dartmouth, where McLellan was at the time, specifically because he wanted to join his lab.

It was a charmed partnership from the start. During Wrapp’s first year of graduate school, the lab received a National Institutes of Health R01 grant to look at the structure and function of coronaviruses. He remembers the date they received the good news with absolute clarity: it was his birthday.

Though the NIH funding was critical, the team’s investigation of protein structures, including those related to SARS-CoV-2, also received a critical assist from the Department of Energy’s Argonne National Laboratory. “For structural determination, we hit the protein crystals with x-rays,” McLellan explains. Argonne allocated time to the team to use its synchrotron facilities for this purpose.

McLellan and Wrapp in lab

Laying the Groundwork

After the success of their work on RSV, McLellan and Graham decided to apply what they had learned about structure-based vaccine design to coronaviruses. It was a natural leap: the RSV F protein McLellan and his team had mapped out belongs to the same family of proteins as the coronavirus spike. And again, they reached out to Saelens’ lab—and herd—in Ghent.

Winter was just nine months old when she was chosen at random to participate in the coronavirus study. This was in 2016. Just as they had with another llama and RSV, scientists at Winter’s facility in Ghent injected her with stabilized spike proteins from SARS-CoV-1 and MERS-CoV. Inspired by work in the influenza field, their hope was to isolate a single antibody that could neutralize all coronaviruses.

“We wanted to get the one antibody or nanobody to rule them all,” McLellan says. That didn’t work. But the team was able to isolate some potent MERS-specific and SARS-specific nanobodies. They were writing up their findings when everything started to change in January 2020.

The Call

McLellan was on a snowboarding trip with his family in Park City, UT, when he got the call that changed everything. It was Dr. Graham, calling to tell him that he had been in contact with the CDC, and that it looked like the new pathogen that was making headlines was a coronavirus. Graham wanted to know if he was ready to rush to make a vaccine together. “I said, ‘sure, let’s do it. This is what we’ve been preparing for,’” McLellan says. “I immediately messaged Daniel, told him, ‘Be ready. We’re going all-in as soon as we get the sequence.’”

Jason McLellan

Because of their past work understanding and manipulating the spike proteins of SARS-CoV and MERS-CoV, McLellan’s team, which included colleague Nianshuang Wang as well as Wrapp, was able to rapidly map the structure of the SARS-CoV-2 spike protein and develop a stabilized version of it that could be used as a COVID-19 vaccine antigen. Antigens, which are molecules or molecular structures that trigger a hoped-for immune response, are crucial elements of vaccine development. Shortly after that, the team filed a joint patent application on their stabilized spike protein along with fellow 2020 Golden Goose Award winners Graham and Kizzmekia Corbett. (For more on Graham and Corbett, read A Spike in Momentum.) Genetic information from this stabilized protein has been incorporated into the vaccines currently under development at Moderna, Pfizer/BioNTech, Novavax, and Johnson & Johnson.

At the same time, Wrapp and McLellan still had Winter’s MERS- and SARS-reactive nanobodies and decided to put together an experiment to see if either of them bound to SARS-CoV-2. They discovered that one of them, the SARS-CoV-1 nanobody, did. They then linked this to a fragment from a human antibody to produce an antibody that tightly binds to a key area of the SARS-CoV-2 spike protein, effectively blocking it from infecting human cells.

Daniel Wrapp

Wrapp recalls that the experiment “was kind of almost an afterthought, which sounds ridiculous now, because it ended up being so fruitful.” The hope is that Winter’s nanobodies might be used to develop prophylactic treatments. Unlike a vaccine, which must be administered before a person is infected to provide protection, antibody therapies can be used to treat someone who is already sick. The team published their findings earlier this year, and a company has formed with the hopes of shepherding the research through the testing process to a viable product. In the meantime, Wrapp plans to finish his Ph.D. Winter will continue to enjoy her retirement.

Reflecting on the discovery, McLellan stressed the importance of funding basic research. When they first immunized Winter, he pointed out, SARS was no longer circulating in the human population, and MERS was only popping up sporadically. Their funding “wasn’t in response to a pandemic: it was just good science on an important class of pathogens, and I think it’s really paid off.”

By Haylie Swenson