Ten thousand years ago, the average human lived no more than 30 years. Despite the glory of Ancient Greece and the Roman Empire, most of their inhabitants didn’t live past 35. Between the 1500s and 1800, life expectancy (at least in Europe) fluctuated between 30 and 40 years.
Public health advancements such as control of infectious diseases, better diets, and improved sanitation, along with broader social improvements, have allowed human lifespans to double since 1800. Although life expectancy still varies widely from country to country according to socioeconomic conditions, the global average has soared to 73.2 years.
But this may turn out to be on the low side if epigenetic rejuvenation fulfills its great promise: to reverse aging, perhaps even completely. Epigenetic rejuvenation, or partial reprogramming, refers to a set of experimental therapies that seek to manipulate epigenetics, the chemical changes that influence how our genes are expressed, using the Yamanaka factors. These Yamanaka factors are a group of proteins that can convert any cell of the body into pluripotent stem cells, cells that can develop into any cell type, such as those of the brain or skin. At least in theory, it could be a recipe for self-renewal.
“Partial reprogramming tries to knock a few years off of people’s biological age, while preserving their original cell identity and function,” says Yuri Deigin, cofounder and director of YouthBio Therapeutics, a longevity startup utilizing partial reprogramming to develop gene therapies aimed at the renewal of epigenetic profiles. YouthBio plans to experiment with injecting these gene therapies into target organs. Once the cargo is delivered, a specific small molecule will trigger gene expression and rejuvenate those organs.
“Our ultimate mission is to find the minimal number of tissues we would need to target to achieve significant systemic rejuvenation,” Deigin says. Initially, YouthBio will apply these therapies to treat age-related conditions. Down the road, though, their goal is for everyone to get younger. “We want to use them for prophylaxis, which is rejuvenation that would lower disease risk,” Deigin says.
Epigenetics has swept the realm of biology off its feet over the last decade. We now know that we can switch genes on and off by tweaking the chemical status quo of the DNA’s local environment. “Epigenetics is a fascinating and important phenomenon in biology,” says Henry Greely, a bioethicist at Stanford Law School. Greely is quick to stress that this kind of modulation (turning individual genes on and off, not altering the DNA itself) happens all the time. “When you eat and your blood sugar goes up, the gene in the beta cells of your pancreas that makes insulin is turned on or up. Almost all medications are going to have effects on epigenetics, but so will things like exercise, food, and sunshine.”
Can intentional control over epigenetic mechanisms lead to novel and useful therapies? “It is a very plausible scenario,” Greely says, though a great deal of basic research into epigenetics is required before it becomes a well-trodden way to stay healthy or treat disease. Whether these therapies could cause older cells to become younger in ways that have observable effects is “far from clear,” he says. “Historically, betting on someone’s new ‘fountain of youth’ has been a losing strategy.”
In 2003 researchers finished sequencing the roughly 3 billion letters of DNA that make up the human genome. The achievement was hailed as a vast step forward in our understanding of how genetics contribute to diseases like cancer or to developmental disorders. But for Josephine Johnston, director of research and research scholar at the Hastings Center, the results have not lived up to the initial hype. “Other than some quite effective tests to diagnose certain genetic conditions, there isn't a radical intervention that reverses things yet,” Johnston says. For her, this is a testament to the complexity of biology, or at least to our tendency to keep underestimating it. And when it comes to epigenetics specifically, Johnston believes there are some hard questions we need to answer before we can safely administer such therapies to the population.
“You'd need to do longitudinal studies. You can't do a study and look at someone and say they’re safe only six months later,” Johnston says. Short studies can’t reveal long-term side effects. And how will companies position their therapies on the market? Are we talking about interventions that target health problems, or life enhancements? “If you describe something as a medical intervention, it is more likely to be socially acceptable, to attract funding from governments and ensure medical insurance, and to become a legitimate part of medicine,” she says.
Johnston’s greatest concerns are philosophical and ethical in nature. If we’re able to use epigenetic reprogramming to double the human lifespan, how much of the planet’s resources will we take up during this long journey? She believes we have a moral obligation to make room for future generations. “We should also be honest about who's actually going to afford such interventions; they would be extraordinarily expensive and only available to certain people, and those are the people who would get to live longer, healthier lives, and the rest of us wouldn't.”
That said, Johnston agrees there is a place for epigenetic reprogramming. It could help people with diseases that are caused by epigenetic problems such as Fragile X syndrome, Prader-Willi syndrome and various cancers.
Zinaida Good, a postdoctoral fellow at Stanford Cancer Institute, says these problems are still far in the future. Any change will be incremental. “Thinking realistically, there’s not going to be a very large increase in lifespan anytime soon,” she says. “I would not expect something completely drastic to be invented in the next 5 to 10 years.”
Good won’t get any such treatment for herself until it’s shown to be effective and safe. Nature has programmed our bodies to resist hacking, she says, in ways that could undermine any initial benefits to longevity. A preprint that has not yet been peer-reviewed reports that cellular reprogramming may lead to premature death due to liver and intestinal problems, and animal studies suggest that the Yamanaka factors have the potential to cause cancer.
“Side effects are an open research question that all partial reprogramming companies and labs are trying to address,” says Deigin. The road to de-differentiation, the process by which cells return to an earlier state, is not paved with roses; de-differentiate too much and you may cause pathology and even death. Deigin is exploring other, less risky approaches. “One way is to look for novel factors tailored toward rejuvenation rather than de-differentiation.” Unlike Yamanaka factors, such novel factors would never involve taking a given cell to a state in which it could turn cancerous, according to Deigin.
One approach that could lower the risk of cancer is introducing mRNA molecules, the templates cells use to make proteins, by using a pulse of electricity to open the cell membrane rather than a virus. There is also chemical-based reprogramming, in which small-molecule chemicals convert regular cells into pluripotent cells, though this approach has so far been shown to work only in mice.
“The search for novel factors tailored toward rejuvenation without de-differentiation is an ongoing research and development effort by several longevity companies, including ours,” says Deigin.
He isn't disclosing the details of his own company’s approach to lowering the risk, but he’s hopeful that something will eventually work in humans. Yet another challenge is that, partly because of these uncertainties, the FDA has yet to approve a single longevity therapy. But with the longevity market projected to soar to $600 billion by 2025, Deigin says naysayers are clinging irrationally to the status quo. “Thankfully, scientific progress is moved forward by those who bet on something while disregarding the skeptics, who, in the end, are usually proven wrong.”
In October 2021, a woman from Gujarat, India, stunned the world when it was revealed she had had her first child through in vitro fertilization (IVF) at age 70. Two years earlier, a compatriot of hers had given birth to twins at 73, also with the help of IVF. The oldest known mother to conceive naturally lived in the UK; in 1997, Dawn Brooke conceived a son at age 59.
These women may seem extreme outliers, almost freaks of nature; in the US, for example, the average age of first-time mothers is 26. A few decades from now, though, the sight of 70-year-old first-time mothers may not even raise eyebrows, say futurists.
“We could absolutely have more 70-year-old mothers because we are learning how to regulate the aging process better,” says Andrew Hessel, a microbiologist and geneticist who cowrote "The Genesis Machine," a book about “rewriting life in the age of synthetic biology,” with Amy Webb, the futurist who recently wondered why 70-year-old women shouldn’t give birth.
Technically, we're already doing this, says Hessel, pointing to a technique known as in vitro gametogenesis (IVG), which turns adult cells into sperm or egg cells. “You can think of it as the upgrade to IVF,” Hessel says. These vanguard stem cell technologies can take even skin cells and turn them into induced pluripotent stem cells (iPSCs), master cells capable of maturing into any human cell, be it kidney, liver, brain, or gamete cells (eggs and sperm), says Greely, who specializes in the ethical, legal, and social issues of the biosciences.
In 2016, Greely wrote "The End of Sex," a book in which he described the science of making gametes out of iPSCs in detail. Greely says science will indeed enable 70-year-old new mothers to mingle with mothers several decades younger at kindergartens in the not-so-distant future. And it won’t be that big of a deal.
“An awful lot of children all around the world have been raised by grandmothers for millennia. To have 70-year-olds and 30-year-olds mingling in maternal roles is not new,” he says. That said, he doubts that many women will want to have a baby in the eighth decade of their life, even if science allows it. “Having a baby and raising a child is hard work. Even if 1% of all mothers are over 65, they aren’t going to change the world,” Greely says. Mothers over 70 will be a minor blip, statistically speaking, he predicts. But one thing is certain: the technology is here.
And more technologies for the same purpose could be on the way. In March 2021, researchers from Monash University in Melbourne, Australia, published research in Nature describing how they reprogrammed skin cells into a three-dimensional cellular structure that was morphologically and molecularly similar to a human embryo, dubbed the iBlastoid. In compliance with Australian law and international guidelines based on the “primitive streak rule,” which bars growing embryos in research beyond 14 days, the Monash scientists stopped growing their iBlastoids in vitro on day 11.
“The research was both cutting-edge and controversial, because it essentially created a new human life, not for the purpose of a patient who's wanting to conceive, but for basic research,” says Lindsay Wu, a senior lecturer in the School of Medical Sciences at the University of New South Wales (UNSW) in Kensington, Australia. If you really want to make sure what you are creating is an embryo, you need to let it develop into a viable baby. “This is the real proof in the pudding,” says Wu, who runs UNSW’s Laboratory for Ageing Research. Then you get to a stage where you decide, for ethical purposes, that you have to abort it. Are we “fiddling here a bit too much?” he asks. Wu believes there are other approaches to tackling the decline of fertility with age that are less morally troubling.
He is actually working on them. Why is it that women in their mid- to late thirties, who are at peak physical health in almost every other regard, have trouble conceiving? Wu and his team posed that question in a research paper published in 2020 in Cell Reports. The simple answer is the egg cell. An average girl at puberty has between 300,000 and 400,000 eggs, while at around age 37, the same woman has only 25,000 eggs left. Things only go downhill from there. So, what torments the egg cells?
The UNSW team found that the levels of key molecules called NAD+ precursors, which are essential to the metabolism and genome stability of egg cells, decline with age. The team proceeded to add these vitamin-like substances back into the drinking water of reproductively aged, infertile lab mice, which then had babies.
“It's an important proof of concept,” says Wu. He is investigating how safe it is to replicate the experiment with humans in two ongoing studies. The ultimate goal is to restore the quality of egg cells that are left in patients in their late 30s and early- to mid-40s, says Wu. He sees the goal of getting pregnant for this age group as less ethically troubling, compared to 70-year-olds.
But what is ethical, anyway? “It is a tricky word,” says Hessel. He differentiates between ethics, which represent a personal position and may thus be more transient, and morality, the longer-lasting principles embraced across a society, such as “Thou shalt not kill.” Unprecedented advances often provoke fear and antagonism until time passes and they become ordinary. When IVF pioneer Landrum Shettles tried to perform IVF in 1973, the chairman of Columbia’s College of Physicians and Surgeons halted the procedure at the last moment. Today almost every country in the world has IVF clinics, and the global IVF services market is clearly a growth industry.
Besides, you don’t have a baby at 70 by accident: you really want it, Greely and Hessel agree. And by that age, mothers may be wiser and more financially secure, Hessel says (though he is quick to add that even the pregnancy of his own wife, who had her child at 40, was a high-risk one).
As a research question, figuring out whether older mothers are better than younger ones, or vice versa, entails too many confounding variables, says Greely. And why should we focus on who’s the better mother anyway? “We've had 70-year-old and 80-year-old fathers forever; why should people have that much trouble getting used to mothers doing the same?” Greely wonders. For some women, having a child at an older age would be comforting; maybe that’s what matters.
And the technology to enable older women to have children is already here or coming very soon. That, perhaps, matters even more. Researchers have already created mice–and their offspring–entirely from scratch in the lab. “Doing this to produce human eggs is similar," says Hessel. "It is harder to collect tissues, and the inducing cocktails are different, but steady advances are being made." He predicts that the demand for fertility treatments will keep financing research and development in the area. He says that big leaps will be made if ethical concerns don’t block them: it is not far-fetched to believe that the first baby produced from lab-grown eggs will be born within the next decade.
In a 2020 op-ed in Stat, Greely argued that we’ve already overcome the technical barriers to human cloning, but no one’s really talking about it. Likewise, scientists are working to enable 70-year-old women to have babies, says Hessel, but most commentators are keeping quiet about it. At least so far.
When she woke up after a procedure that involved drilling small holes in her skull, a woman suffering from chronic depression reported feeling “euphoric.” The holes were made to fit the wires connecting her brain to a matchbox-sized electrical implant, which would deliver up to 300 short bursts of electricity per day to specific parts of her brain.
Over a year later, Sarah, 36, says the brain implant has turned her life around. A sense of alertness and energy has replaced suicidal thoughts and feelings of despair, which had persisted despite antidepressants and electroconvulsive therapy. Sarah is the first person to have received a brain implant to treat depression, a breakthrough reported in an experimental study published recently in Nature Medicine.
“What we did was use deep-brain stimulation (DBS), a technique used in the treatment of epilepsy,” says Andrew Krystal, professor of psychiatry at the University of California, San Francisco (UCSF), and one of the study’s researchers. DBS typically involves implanting electrodes into specific areas of the brain to reduce seizures that cannot be controlled with medication. Instead of choosing and stimulating a single brain site, though, the UCSF team took a different approach.
They first used 10 electrodes to map Sarah’s brain activity, a phase that lasted 10 days, during which they identified a neural biomarker, a specific pattern of brain activity that indicated the onset of depression symptoms (in Sarah, this was detected in her amygdala, an almond-like structure near the base of the brain). But they also saw that delivering a tiny burst of electricity to her ventral striatum, an area near the center of the brain, above and behind the ears, dramatically improved those symptoms. What they then had to do was outfit Sarah’s brain with a DBS device programmed to send small waves of electricity to the ventral striatum only when it detected that pattern.
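For readers who think in code, the closed-loop logic can be sketched roughly as follows. This is a hypothetical illustration, not the UCSF device's actual algorithm: the detector, threshold, and signal format here are invented for the sake of the sketch.

```python
# Hypothetical sketch of closed-loop deep-brain stimulation logic.
# The real device uses a patient-specific neural biomarker; here the
# "detector" is a made-up threshold on a summary of recent activity.

def biomarker_detected(amygdala_signal, threshold=0.8):
    """Pretend detector: flags symptom onset when the average of
    recent amygdala activity samples crosses a threshold."""
    score = sum(amygdala_signal) / len(amygdala_signal)
    return score > threshold

def closed_loop_step(amygdala_signal):
    """One monitoring cycle: stimulate the ventral striatum only
    when the depression biomarker is present; otherwise stay silent."""
    if biomarker_detected(amygdala_signal):
        return "stimulate_ventral_striatum"  # brief, tiny burst
    return "no_stimulation"
```

The point of the design, as Krystal describes it, is in the conditional: stimulation fires only when the biomarker appears, rather than on a fixed schedule.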
“It was a personalized treatment not only in where to stimulate, but when to stimulate,” Krystal says. Sarah’s depression translated into low energy, loss of pleasure and interest in life, and feelings of sluggishness. Those symptoms went away when the scientists stimulated her ventral capsule area. But when the same area was stimulated while Sarah’s symptoms “were not there,” she felt more energetic at first, and the sudden rush of energy soon gave way to feelings of overstimulation and anxiety. “This is a very tangible illustration of why it's best to stimulate only when you need it,” says Krystal.
We have a tendency to lump depression symptoms together, but in reality they are quite diverse; some people feel sad and lethargic, others stay up all night; some overeat, others don’t eat at all. “This happens because people have different underlying dysfunctions in different parts of their brain. Our approach is targeting the specific brain circuit that modulates different kinds of symptoms. Simply, where we stimulate depends on the specific set of problems a person has,” Krystal says. Such tailor-made brain stimulation for patients with long-term, drug-resistant depression, which would be easy to use at home, could be transformative, the UCSF researcher concludes.
In the U.S., 12.7 percent of the population is on antidepressants. Almost exactly the same share of Australians, 12.5 percent, take similar drugs every day. With 13 percent of its population on antidepressants, Iceland is the world’s biggest antidepressant consumer. And far from the Nordic countries, Portugal is the world’s third-largest market for such medication.
By 2020, nearly 15.5 million people had been taking antidepressants for more than five years. Between 40 and 60 percent of them saw improvements. “For those people, it was absolutely what they needed, whether that was increased serotonin, or increased norepinephrine or increased dopamine,” says Frank Anderson, a psychiatrist who has been prescribing antidepressants in his private practice “for a long time” and the author of Transcending Trauma, a book about resolving complex and dissociative trauma.
Yet the UCSF study brings to the mental health field a specificity it has long lacked. “A lot of the traditional medications only really work on six neurotransmitters, when there are over 100 neurotransmitters in the brain,” Anderson says. Drugs change the chemistry of a single system in the brain, but brain stimulation essentially changes the very architecture of the brain, says James Giordano, professor of neurology and biochemistry at Georgetown University Medical Center in Washington and a neuroethicist. It is a far more elegant approach to treating brain disorders, with the potential to prove a lifesaver for the 40 to 50 percent of patients who see no benefits at all from antidepressants, Giordano says. It is neurofeedback on steroids, adds Anderson. But it comes with certain risks.
Even if the device generating the brain stimulation sits outside the skull and could easily be used at home, the whole process still involves neurosurgery. While the sophistication and precision of brain surgery have improved significantly in recent years, says Giordano, it always carries risks, such as an allergic reaction to anesthesia, bleeding in the brain, infection at the wound site, blood clots, even coma. Non-invasive brain stimulation (NIBS), a technology currently being developed by the Defense Advanced Research Projects Agency (DARPA), could potentially sidestep this. Patients could wear a cap, helmet, or visor that transmits electrical signals between the brain and a computer system, a brain-computer interface that would not require surgery.
“This could counter the implantation of hardware into the brain and body, around which there is also a lot of public hesitance,” says Giordano, who is working on such techniques at DARPA.
Embedding a chip in your head is one of the starkest examples of biohacking, an umbrella term for practices aimed at hacking one’s body and brain to enhance performance, a kind of do-it-yourself citizen biology. It is also a term charged enough to set off a public backlash. Large segments of the population will simply refuse to allow that level of invasiveness in their heads, says Laura Cabrera, an associate professor of neuroethics at the Center for Neural Engineering, Department of Engineering Science and Mechanics at Penn State University. Cabrera urges caution when it comes to DBS’s potential.
“We've been using it for Parkinson's for over two decades, hoping that now that they get DBS, patients will get off medications. But people have continued taking their drugs, even increasing them,” she says. What the UCSF team found is a proof of concept that DBS worked in one depressed person, but there’s a long way to go before we can confidently say the finding generalizes to a large group of patients. Besides, as a society, we are not there yet, says Cabrera. “Most people, at least in my research, say they don't want to have things in their brain,” she says. But what could really go wrong if we biohacked our own brains anyway?
In 2014, a man who had received a deep brain implant for a movement disorder developed a strong affection for Johnny Cash’s music, although he had never been a particular country music fan before. Many protested that the chip had tampered with his personality. Could sparking the brain with electricity generated by a chip put an end to our individuality, messing with our musical preferences, our unique quirks, our deeper sense of ego?
“What we found is that when you stimulate a region, you affect people’s moods, their energies,” says Krystal. You are neither changing their personality nor creating creatures of eternal happiness, he says. Krystal recalls his patient telling him: “Being on a phone call would generally be a setting that would normally trigger symptoms of depression in me. I now know bad things happen, but I am not affected by them in the same way. They don’t trigger the depression.” Of the research, Krystal continues: “We are not trying to take away normal responses to the world. We are just trying to eliminate this one thing, which is depression, which impedes patients’ ability to function and deal with normal stuff.”
Yet even change itself shouldn’t be seen as threatening, especially if the patient desired it in the first place. “The intent of therapy in psychiatric disorders is to change the personality, because a psychiatric disorder by definition is a disorder of personality,” says Cabrera. A person in therapy wants to restore a lost sense of “normal self.” And as for such a restoration altering your original taste in music, Cabrera says these are rarities, extremely scarce phenomena that can occur with medication as well.
Maybe it is the allure of dystopian sci-fi films: people tend to worry about dark forces spreading malice across the world once the line between human and machine has blurred. Such mind control through DBS would require a decent leap beyond the tools science has, at least to this day. “This would require an understanding of the parameters of brain stimulation we still don't have,” says Cabrera. Still, brain implants are not fully tamper-proof.
“Hackers could shut off the device or change the parameters of the patient's neurological function, enhancing symptoms or creating harmful side effects,” says Giordano.
There are risks, but also fail-safes to tackle them, adds Anderson. “Just like medications are not permanent, we could ensure the implants are used for a specific period of time,” he says. And just as people go in for checkups while on medication, they could periodically have their brain implants checked to see whether they have been altered, he continues. “It is what my research group refers to as biosecurity by design,” says Giordano. “It is important that we proactively design systems that cannot be corrupted.”
Two weeks after receiving the implant, Sarah scored 14 out of 60 on the Montgomery-Åsberg Depression Rating Scale, a ten-item questionnaire psychiatrists use to measure the severity of depressive episodes. She had initially scored 36. Today she scores under 10. Had she taken the antidepressant route, she would have had to wait between four and eight weeks to see positive results, says Krystal.
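For context on what those numbers mean, the MADRS total is simply the sum of ten symptom items, each rated 0 (absent) to 6 (severe). A minimal sketch of the arithmetic, with a hypothetical set of item ratings; the clinical wording of the items is omitted:

```python
# Simplified MADRS scoring: ten items, each rated 0 (no symptom)
# to 6 (severe), summed into a total severity score (0-60).

def madrs_total(item_scores):
    assert len(item_scores) == 10, "MADRS has exactly ten items"
    assert all(0 <= s <= 6 for s in item_scores), "items are rated 0-6"
    return sum(item_scores)

# A hypothetical set of ratings that sums to 14, the score Sarah
# reached two weeks after the implant was switched on:
print(madrs_total([2, 2, 1, 1, 2, 1, 2, 1, 1, 1]))  # -> 14
```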
He and his team have enrolled two more patients in the trial and hope to add nine more. They already have preliminary evidence that a different brain site works better in another patient, who had been experiencing more anxiety than despondency. Almost certainly, we will have different biomarkers for different people, and brain stimulation will be tailored to each person’s unique situation, says Krystal. “Each brain is different, just like each face is different.”