3 Futuristic Biotech Programs the U.S. Government Is Funding Right Now

Last month, at a conference celebrating DARPA, the research arm of the Defense Department, FBI Special Agent Edward You declared, "The 21st century will be the revolution of the life sciences."
Indeed, four years ago, the agency dedicated a new office solely to advancing biotechnology. Its primary goals are to combat bioterrorism, protect U.S. forces, and promote warfighter readiness. But its research could also carry over to improve health care for the general public.
With an annual budget of about $3 billion, DARPA's employees oversee about 250 research and development programs, working with contractors from corporations, universities, and government labs to bring new technologies to life.
Check out these three current programs:
1) IMPLANTABLE SENSORS TO MEASURE OXYGEN, LACTATE, AND GLUCOSE LEVELS IN REAL TIME
Biomedical engineer Kevin Zhao has sensors in his arm and his chest that monitor his oxygen levels in those tissues in real time. With funding from DARPA for the program "In Vivo Nanoplatforms," he developed soft, flexible hydrogels that are injected just beneath the skin to perform the monitoring and that sync to a smartphone app to give the user immediate health insights.
A first-in-man trial for the glucose sensor is now underway in Europe for monitoring diabetics, according to Zhao. Volunteers eat sugary food to spike their glucose levels and prompt the monitor to register the changes.
"If this pans out, with approval from FDA, then consumers could get the sensors implanted in their core to measure their levels of glucose, oxygen, and lactate," Zhao said.
Lactate especially interests DARPA because it acts as a first-responder molecule at the onset of trauma, sepsis, and potentially infection.
"The sensor could potentially detect rise of these [body chemistry numbers] and alert the user to prevent onset of dangerous illness."
2) NEAR INSTANTANEOUS VACCINE PROTECTION DURING A PANDEMIC
Traditional vaccines can take months or years to develop, then weeks to become effective once administered. But when an unknown virus emerges, there's no time to waste.
This program, called P3, envisions a much more ambitious approach to stop a pandemic in its tracks.
"We want to confer near instantaneous protection by doing it a different way – enlist the body as a bioreactor to produce therapeutics," said Col. Matthew Hepburn, the program manager.
So how would it work?
If you have antibodies against a certain infection, you'll be protected against it. The idea is to discover the genetic code for the antibody to a specific pathogen, manufacture those pieces of DNA or RNA, and then inject the code into a person's arm so that the muscle cells begin producing the required antibodies.
"The amazing thing is that it actually works, at least in animal models," said Hepburn. "The mouse muscles made enough protective antibodies so that the mice were protected."
The next step is to test the approach in humans, which the program will do over the next two years.
But the hard part is actually not discovering the genetic code for highly potent antibodies, according to Hepburn. In fact, researchers have already been able to do so in as little as two to four weeks.
"The hard part is once I have an antibody, a large pharma company will say in 2 years, I can make 100-200 doses. Give us 4 years to get to 20,000 doses. That's not good enough," Hepburn said.
To fight a pandemic, we will need 20,000 doses of a vaccine in 60 days.
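The size of that gap is easy to put in numbers. A rough, back-of-the-envelope comparison using only the figures quoted above (and taking the optimistic end of Hepburn's estimate):

```python
# Back-of-the-envelope dose-rate comparison using the figures quoted above.
pharma_doses, pharma_days = 200, 2 * 365  # "100-200 doses" in 2 years (upper bound)
target_doses, target_days = 20_000, 60    # DARPA's pandemic-response goal

pharma_rate = pharma_doses / pharma_days  # ~0.27 doses per day
target_rate = target_doses / target_days  # ~333 doses per day

print(f"Conventional pipeline: {pharma_rate:.2f} doses/day")
print(f"P3 target: {target_rate:.0f} doses/day")
print(f"Required speed-up: ~{target_rate / pharma_rate:,.0f}x")  # ~1,217x
```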
"We have to fundamentally change the idea that it takes a billion dollars and ten years to make a drug," he concluded. "We're going to do something radically different."
3) RAPID DIAGNOSING OF PATHOGEN EXPOSURE THROUGH EPIGENETICS
Imagine that you come down with a mysterious illness. It could be caused by a virus, bacteria, or in the most extreme catastrophe, a biological agent from a weapon of mass destruction.
What if a portable device existed that could identify, within 30 minutes, which pathogen you have been exposed to, and when? It would be pretty remarkable for soldiers in the field, but also for civilians seeking medical treatment.
This is the lofty ambition of a DARPA program called Epigenetic Characterization and Observation, or ECHO.
Its success depends on a biological phenomenon known as the epigenome. While your DNA is relatively immutable, your environment can modify how your DNA is expressed, leaving marks of exposure that register within seconds to minutes; these marks can persist for decades. It's thanks to the epigenome that identical twins – who share identical DNA – can differ in health, temperament, and appearance.
These three mice are genetically identical. Epigenetic differences, however, result in vastly different observed characteristics.
Reading your epigenetic marks could theoretically reveal a time-stamped history of your body's environmental exposures.
Researchers in the ECHO program plan to create a database of signatures for exposure events, so that their envisioned device will be able to quickly scan someone's epigenome and refer to the database to sort out a diagnosis.
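In software terms, the envisioned lookup might resemble nearest-neighbor matching against that signature database. The sketch below is purely illustrative: it assumes, hypothetically, that an exposure signature can be summarized as a short numeric vector of epigenetic marks, such as methylation levels at selected genomic sites. ECHO's actual assays and matching methods are not public at this level of detail.

```python
# Hypothetical sketch of signature matching; not ECHO's actual method.
import numpy as np

# Invented reference database: one signature vector per exposure event.
SIGNATURES = {
    "influenza (viral)":   np.array([0.82, 0.10, 0.55, 0.31]),
    "anthrax (bacterial)": np.array([0.15, 0.77, 0.40, 0.90]),
    "no known exposure":   np.array([0.20, 0.22, 0.18, 0.25]),
}

def diagnose(sample: np.ndarray) -> str:
    """Return the reference exposure whose signature is nearest (Euclidean)."""
    return min(SIGNATURES, key=lambda name: np.linalg.norm(SIGNATURES[name] - sample))

swab = np.array([0.79, 0.14, 0.60, 0.28])  # invented reading from a swab
print(diagnose(swab))  # -> influenza (viral)
```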
"One difficult part is to put a timestamp on this result, in addition to the sign of which exposure it was -- to tell us when this exposure happened," says Thomas Thomou, a contract scientist who is providing technical assistance to the ECHO program manager.
Other questions that remain up in the air for now: Do all humans have the same epigenetic response to the same exposure events? Is it possible to distinguish viral from bacterial exposures? Do dose and duration of exposure affect the signature of epigenome modification?
The program will kick off in January 2019 and is planned to last four years, provided certain development milestones are reached along the way. The desired prototype would be a simple device that any untrained person could operate by taking a swab or a fingerprick.
"In an outbreak," says Dr. Thomou, "it will help everyone on the ground immediately to have a rapidly deployable machine that will give you very quick answers to issues that could have far-reaching ramifications for public health safety."
Kira Peikoff was the editor-in-chief of Leaps.org from 2017 to 2021. As a journalist, her work has appeared in The New York Times, Newsweek, Nautilus, Popular Mechanics, The New York Academy of Sciences, and other outlets. She is also the author of four suspense novels that explore controversial issues arising from scientific innovation: Living Proof, No Time to Die, Die Again Tomorrow, and Mother Knows Best. Peikoff holds a B.A. in Journalism from New York University and an M.S. in Bioethics from Columbia University. She lives in New Jersey with her husband and two young sons. Follow her on Twitter @KiraPeikoff.
Podcast: The Friday Five weekly roundup in health research
Researchers are making progress on a vaccine for Lyme disease, studying sex differences in cancer, exploring how leisure activities can reduce dementia risk, and more in this week's Friday Five
The Friday Five covers five stories in health research that you may have missed this week. There are plenty of controversies and troubling ethical issues in science – and we get into many of them in our online magazine – but this news roundup focuses on scientific creativity and progress to give you a therapeutic dose of inspiration headed into the weekend.
Covered in this week's Friday Five:
- Sex differences in cancer
- Promising research on a vaccine for Lyme disease
- Using a super material for brain-like devices
- Measuring your immunity to Covid
- Reducing risk of dementia with leisure activities
Matt Fuchs is the editor-in-chief of Leaps.org. He is also a contributing reporter to the Washington Post and has written for the New York Times, Time Magazine, WIRED and the Washington Post Magazine, among other outlets. Follow him on Twitter @fuchswriter.
Giving robots self-awareness as they move through space - and maybe even providing them with gene-like methods for storing rules of behavior - could be important steps toward creating more intelligent machines.
One day in the recent past, scientists at Columbia University's Creative Machines Lab set up a robotic arm inside a circle of five streaming video cameras and let the robot watch itself move, turn and twist. For about three hours the robot did exactly that, looking at itself this way and that, like a toddler exploring itself in a room full of mirrors. By the time the robot stopped, its internal neural network had finished learning the relationship between the robot's motor actions and the volume it occupied in its environment. In other words, the robot had built a spatial self-awareness, just as humans do. "We trained its deep neural network to understand how it moved in space," says Boyuan Chen, one of the scientists who worked on it.
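As a rough illustration of what such a self-model can look like in code, here is a minimal PyTorch sketch of a network that learns to predict whether a given 3D point is occupied by the robot's body at a given motor state. The architecture, sizes, and random stand-in training data are all assumptions for illustration; this is not the Columbia lab's actual code.

```python
# Illustrative sketch of a learned spatial self-model; all sizes invented.
import torch
import torch.nn as nn

class SelfModel(nn.Module):
    """Predicts whether a 3D query point lies inside the robot's body,
    given the robot's current joint angles."""
    def __init__(self, n_joints: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_joints + 3, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, 1),  # logit: occupied vs. free space
        )

    def forward(self, joints: torch.Tensor, point: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([joints, point], dim=-1))

model = SelfModel()
loss_fn = nn.BCEWithLogitsLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# In the real experiment, training pairs would come from the cameras:
# joint angles matched with 3D points labeled occupied (1) or free (0).
# Random stand-ins keep this sketch self-contained.
joints = torch.rand(256, 4)          # batch of motor states
points = torch.rand(256, 3) * 2 - 1  # query points in the workspace
labels = torch.randint(0, 2, (256, 1)).float()

opt.zero_grad()
loss = loss_fn(model(joints, points), labels)
loss.backward()
opt.step()
```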
For decades robots have been doing helpful tasks that are too hard, too dangerous, or physically impossible for humans to carry out themselves. Robots are ultimately superior to humans in complex calculations, following rules to a tee and repeating the same steps perfectly. But even the biggest successes for human-robot collaborations—those in manufacturing and automotive industries—still require separating the two for safety reasons. Hardwired for a limited set of tasks, industrial robots don't have the intelligence to know where their robo-parts are in space, how fast they’re moving and when they can endanger a human.
Over the past decade or so, humans have begun to expect more from robots. Engineers have been building smarter versions that can avoid obstacles, follow voice commands, respond to human speech and make simple decisions. Some of them proved invaluable in many natural and man-made disasters like earthquakes, forest fires, nuclear accidents and chemical spills. These disaster recovery robots helped clean up dangerous chemicals, looked for survivors in crumbled buildings, and ventured into radioactive areas to assess damage.
Now roboticists are going a step further, training their creations to do even better: understand their own image in space and interact with humans like humans do. Today, there are already robot-teachers like KeeKo, robot-pets like Moffin, robot-babysitters like iPal, and robotic companions for the elderly like Pepper.
But even these reasonably intelligent creations still have huge limitations, some scientists think. “There are niche applications for the current generations of robots,” says professor Anthony Zador at Cold Spring Harbor Laboratory—but they are not “generalists” who can do varied tasks all on their own, as they mostly lack the abilities to improvise, make decisions based on a multitude of facts or emotions, and adjust to rapidly changing circumstances. “We don’t have general purpose robots that can interact with the world. We’re ages away from that.”
Robotic spatial self-awareness – the achievement by the team at Columbia – is an important step toward creating more intelligent machines. Hod Lipson, the professor of mechanical engineering who runs the Columbia lab, says that future robots will need this ability to assist humans better. A robot that knows how it looks and where its parts are in space needs less human oversight. It can also detect and compensate for damage, keep up with its own wear and tear, and realize when something is wrong with it or its parts. "We want our robots to learn and continue to grow their minds and bodies on their own," Chen says. That's what Zador wants too, and on a much grander level. "I want a robot who can drive my car, take my dog for a walk and have a conversation with me."
Columbia scientists have trained a robot to become aware of its own "body," so it can map the right path to touch a ball without running into an obstacle, in this case a square.
Jane Nisselson and Yinuo Qin/ Columbia Engineering
Today's technological advances are making some of these leaps of progress possible. One of them is so-called deep learning, a method that trains artificial intelligence systems to learn and use information much the way humans do. A machine learning method based on neural network architectures with multiple layers of processing units, deep learning has been used to successfully teach machines to recognize images, understand speech and even write text.
One of these machine-learning language whizzes is Google's BERT, which can finish sentences. Another, called GPT-3 and designed by the San Francisco-based company OpenAI, can write little stories. Yet both still make funny mistakes in their linguistic exercises that even a child wouldn't. According to a paper published by Stanford's Center for Research on Foundation Models, BERT seems not to understand the word "not." When asked to fill in the word after "A robin is a __," it correctly answers "bird." But insert the word "not" into that sentence ("A robin is not a __") and BERT still completes it the same way. Similarly, in one of its stories, GPT-3 wrote that if you mix a spoonful of grape juice into your cranberry juice and drink the concoction, you die. It seems that robots, and artificial intelligence systems in general, are still missing some rudimentary facts of life that humans and animals grasp naturally and effortlessly.
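The robin probe is easy to reproduce with the open-source Hugging Face transformers library. A minimal sketch, assuming a stock bert-base-uncased checkpoint behaves like the model the Stanford authors tested:

```python
# Reproducing the "A robin is (not) a ___" probe with a stock BERT model.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

for sentence in ["A robin is a [MASK].", "A robin is not a [MASK]."]:
    top = fill(sentence)[0]  # highest-scoring completion
    print(f"{sentence} -> {top['token_str']} (p={top['score']:.2f})")
# BERT tends to answer "bird" both times, ignoring the "not".
```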
It's not exactly the robots' fault. Compared to humans, and to all the other organisms that have been around for thousands or millions of years, robots are very new. They are missing out on eons of evolutionary data-building. Animals and humans are born with the ability to do certain things because those abilities are pre-wired in them. Flies know how to fly, fish know how to swim, cats know how to meow, and babies know how to cry. Yet flies don't really learn to fly, fish don't learn to swim, cats don't learn to meow, and babies don't learn to cry; they are born able to execute such behaviors because they're preprogrammed to do so. All of that happens thanks to the millions of years of evolution wired into their respective genomes, which give rise to the brain's neural networks responsible for these behaviors. Robots are the newbies, missing out on that trove of information, Zador argues.
A neuroscience professor who studies how brain circuitry generates various behaviors, Zador takes a different approach to developing the robotic mind. Until their creators figure out a way to imbue robots with that evolutionary information, he believes, they will remain quite limited in their abilities: each model will only be able to do the things it was programmed to do, and it will never go above and beyond its original code. So Zador argues that we have to start giving robots a genome.
How does one do that? Zador has an idea. We can’t really equip machines with real biological nucleotide-based genes, but we can mimic the neuronal blueprint those genes create. Genomes lay out rules for brain development. Specifically, the genome encodes blueprints for wiring up our nervous system—the details of which neurons are connected, the strength of those connections and other specs that will later hold the information learned throughout life. “Our genomes serve as blueprints for building our nervous system and these blueprints give rise to a human brain, which contains about 100 billion neurons,” Zador says.
If you think about what a genome is, he explains, it is essentially a very compact and compressed form of information storage. Conceptually, genomes are similar to CliffsNotes and other study guides. When students read these short summaries, they know what happened in a book without actually reading that book. And that's how we should be designing the next generation of robots if we ever want them to act like humans, Zador says. "We should give them a set of behavioral CliffsNotes, which they can then unwrap into brain-like structures." Robots that have such brain-like structures will acquire a set of basic rules to generate basic behaviors and use them to learn more complex ones.
Currently Zador is in the process of developing algorithms that function like simple rules that generate such behaviors. “My algorithms would write these CliffsNotes, outlining how to solve a particular problem,” he explains. “And then, the neural networks will use these CliffsNotes to figure out which ones are useful and use them in their behaviors.” That’s how all living beings operate. They use the pre-programmed info from their genetics to adapt to their changing environments and learn what’s necessary to survive and thrive in these settings.
For example, a robot’s neural network could draw from CliffsNotes with “genetic” instructions for how to be aware of its own body or learn to adjust its movements. And other, different sets of CliffsNotes may imbue it with the basics of physical safety or the fundamentals of speech.
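A toy example can make that unwrapping step concrete. In the sketch below, a compact and entirely invented "genome" of wiring rules is decoded into a much larger set of concrete connection weights, mimicking the compression Zador describes; this is not his actual algorithm, just an illustration of the idea.

```python
# Toy "genome" decoding: a few wiring rules expand into a full network.
import numpy as np

rng = np.random.default_rng(0)

# Invented compact genome: layer sizes plus one wiring rule per connection,
# instead of storing thousands of individual weights.
GENOME = {
    "layers": [16, 64, 64, 8],
    "rules": [
        {"pattern": "dense",  "scale": 0.5},            # connect everything, weakly
        {"pattern": "sparse", "scale": 1.0, "p": 0.1},  # few strong links
        {"pattern": "dense",  "scale": 0.5},
    ],
}

def decode(genome: dict) -> list[np.ndarray]:
    """Unwrap the genome into concrete weight matrices (the 'brain')."""
    sizes = genome["layers"]
    weights = []
    for (n_in, n_out), rule in zip(zip(sizes, sizes[1:]), genome["rules"]):
        w = rng.normal(0.0, rule["scale"], size=(n_in, n_out))
        if rule["pattern"] == "sparse":
            w *= rng.random((n_in, n_out)) < rule["p"]  # prune most links
        weights.append(w)
    return weights

brain = decode(GENOME)
print(f"{len(GENOME['rules'])} rules decoded into "
      f"{sum(w.size for w in brain)} connection weights")
# A handful of rules expands into thousands of connections; learning would
# then fine-tune them, as Zador's CliffsNotes analogy suggests.
```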
At the moment, Zador is working on algorithms that try to mimic neuronal blueprints for very simple organisms, such as the roundworm C. elegans, which has only 302 neurons and about 7,000 synapses, compared with the billions of neurons in our own brains. That's how evolution worked, too, expanding brains from simple creatures to more complex ones and on to Homo sapiens. But if it took millions of years to arrive at modern humans, how long would it take scientists to forge a robot with human intelligence? That's a billion-dollar question. Yet Zador is optimistic. "My hypothesis is that if you can build simple organisms that can interact with the world, then the higher level functions will not be nearly as challenging as they currently are."