Should We Use Technologies to Enhance Morality?
Our moral ‘hardware’ evolved over 100,000 years ago, when humans were still scratching out a living on the savannah. The perils we faced back then were radically different from those that confront us now. To survive and flourish in the face of complex future challenges, our archaic operating systems might need an upgrade – in non-traditional ways.
Morality refers to standards of right and wrong when it comes to our beliefs, behaviors, and intentions. Broadly, moral enhancement is the use of biomedical technology to improve moral functioning. This could include augmenting empathy, altruism, or moral reasoning, or curbing antisocial traits like outgroup bias and aggression.
The claims related to moral enhancement are grand and polarizing: it’s been both tendered as a solution to humanity’s existential crises and bluntly dismissed as an armchair hypothesis. So, does the concept have any purchase? The answer leans heavily on our definition and expectations.
One issue is that the debate is often carved up in dichotomies – is moral enhancement feasible or unfeasible? Permissible or impermissible? Fact or fiction? On it goes. While these gesture at imperatives, trading in absolutes blurs the realities at hand. A sensible approach must resist extremes and recognize that moral disrupters are already here.
We know that existing interventions, whether they occur unknowingly or on purpose, have the power to modify moral dispositions in ways both good and bad. For instance, neurotoxins can promote antisocial behavior. The ‘lead-crime hypothesis’ links childhood lead exposure to impulsivity, antisocial aggression, and various other problems. Mercury has been associated with cognitive deficits, which might impair moral reasoning and judgement. It’s well documented that alcohol makes people more prone to violence.
So, what about positive drivers? Here’s where it gets more tangled.
Medicine has long treated psychiatric disorders with drugs like sedatives and antipsychotics. However, there’s scant mention of morality in the Diagnostic and Statistical Manual of Mental Disorders (DSM) despite the moral merits of pharmacotherapy – these effects are implicit and indirect. Such cases are regarded as treatments rather than enhancements.
Conventionally, an enhancement must go beyond what is ‘normal,’ species-typical, or medically necessary – this is known as the ‘treatment-enhancement distinction.’ But boundaries of health and disease are fluid, so whether we call a procedure ‘moral enhancement’ or ‘medical treatment’ is liable to change with shifts in social values, expert opinions, and clinical practices.
Human enhancements are already used for a range of purported benefits: caffeine, smart drugs, and other supplements to boost cognitive performance; cosmetic procedures for aesthetic reasons; and steroids and stimulants for physical advantage. More boldly, cyborgs like Moon Ribas and Neil Harbisson are pushing transpecies boundaries with new kinds of sensory perception. It would be dangerously myopic to assume that moral augmentation is somehow beyond reach.
How might it work?
One possibility for shaping moral temperaments is with neurostimulation devices. These use electrodes to deliver a low-intensity current that alters the electromagnetic activity of specific neural regions. For instance, transcranial Direct Current Stimulation (tDCS) can target parts of the brain involved in self-awareness, moral judgement, and emotional decision-making. It’s been shown to increase empathy and value-based learning, and decrease aggression and risk-taking behavior. Many countries already use tDCS to treat pain and depression, but evidence for enhancement effects on healthy subjects is mixed.
Another suggestion is targeting neuromodulators like serotonin and dopamine. Serotonin is linked to prosocial attributes like trust, fairness, and cooperation, but low activity is thought to motivate desires for revenge and harming others. It’s not as simple as indiscriminately boosting brain chemicals though. While serotonin is amenable to SSRIs, precise levels are difficult to measure and track, and there’s no scientific consensus on the “optimum” amount or on whether such a value even exists. Fluctuations due to lifestyle factors such as diet, stress, and exercise add further complexity. Currently, more research is needed on the significance of neuromodulators and their network dynamics across the moral landscape.
There are a range of other prospects. The ‘love drugs’ oxytocin and MDMA mediate pair bonding, cooperation, and social attachment, although some studies suggest that people with high levels of oxytocin are more aggressive toward outsiders. Lithium is a mood stabilizer that has been shown to reduce aggression in prison populations; beta-blockers like propranolol and the supplement omega-3 have similar effects. Increasingly, brain-computer interfaces augur a world of brave possibilities. Such appeals are not without limitations, but they indicate some ways that external tools can positively nudge our moral sentiments.
Who needs morally enhancing?
A common worry is that enhancement technologies could be weaponized for social control by authoritarian regimes, or used like the oppressive eugenics of the early 20th century. Fortunately, the realities are far more mundane and such dystopian visions are fantastical. So, what are some actual possibilities?
Some researchers suggest that neurotechnologies could help to reactivate brain regions of those suffering from moral pathologies, including healthy people with psychopathic traits (like a lack of empathy). Another proposal is using such technology on young people with conduct problems to prevent serious disorders in adulthood.
One question is whether these kinds of interventions should be compulsory for dangerous criminals. A voluntary treatment for inmates, on the other hand, wouldn’t be so different from existing incentive schemes. For instance, some U.S. jurisdictions already offer drug treatment programs in exchange for early release or instead of prison time. Then there’s the difficult question of how we should treat non-criminal but potentially harmful ‘successful’ psychopaths.
Others argue that if virtues have a genetic component, there is no technological reason why present practices of embryo screening for genetic diseases couldn’t also be used for selecting socially beneficial traits.
Perhaps the most immediate scenario is a kind of voluntary moral therapy, which would use biomedicine to facilitate ideal brain-states to augment traditional psychotherapy. Most of us aren’t always as ethical as we would like – given the option of ‘priming’ yourself to act in consistent accord with your higher values, would you take it? Approaches like neurofeedback and psychedelic-assisted therapy could prove helpful.
What are the challenges?
A general challenge is that of setting. Morality is context dependent; what’s good in one environment may be bad in another and vice versa, so any intervention must be tuned to its circumstances rather than applied as a universal fix. Of course, common sense tells us that some tendencies are more socially desirable than others: fairness, altruism, and openness are clearly preferred over aggression, dishonesty, and prejudice.
One argument is that remoulding ‘brute impulses’ via biology would not count as moral enhancement. This view claims that for an action to truly count as moral it must involve cognition – reasoning, deliberation, judgement – as a necessary part of moral behavior. Critics argue that we should be concerned more with ends rather than means, so ultimately it’s outcomes that matter most.
Another worry is that modifying one biological aspect will have adverse knock-on effects for other valuable traits. Certainly, we must be careful about the network impacts of any intervention. But all stimuli have distributed effects on the body, so it’s really a matter of weighing up the cost/benefit trade-offs as in any standard medical decision.
Is it ethical?
Our values form a big part of who we are – some bioethicists argue that altering morality would pose a threat to character and personal identity. Another claim is that moral enhancement would compromise autonomy by limiting a person’s range of choices and curbing their ‘freedom to fall.’ Any intervention must consider the potential impacts on selfhood and personal liberty, in addition to the wider social implications.
This includes the importance of social and genetic diversity, which is closely tied to considerations of fairness, equality, and opportunity. The history of psychiatry is rife with examples of systematic oppression, like ‘drapetomania’ – the spurious mental illness that was thought to cause African slaves’ desire to flee captivity. Advocates for using moral enhancement technologies to help kids with conduct problems should be mindful that they disproportionately come from low-income communities. We must ensure that any habilitative practice doesn’t perpetuate harmful prejudices by unfairly targeting marginalized people.
Then, there are concerns that morally-enhanced persons would be vulnerable to predation by those who deliberately avoid moral therapies. This relates to what’s been dubbed the ‘bootstrapping problem’: would-be moral enhancement candidates are the types of individuals that benefit from not being morally enhanced. Imagine if every senator was asked to undergo an honesty-boosting procedure prior to entering public office – would they go willingly? Then again, perhaps a technological truth-serum wouldn’t be such a bad requisite for those in positions of serious social consequence.
Advocates argue that biomedical moral betterment would simply offer another means of pursuing the same goals as established social mechanisms like religion, education, and community, and non-invasive therapies like cognitive-behavioral therapy and meditation. It’s even possible that technological efforts would be more effective. After all, human capacities are the result of environmental influences, and external conditions still coax our biology in unknown ways. Status quo bias for ‘letting nature take its course’ may actually be worse long term – failing to utilize technology for human development may do more harm than good. If we can safely improve ourselves in direct and deliberate ways, then there’s no morally significant difference whether this happens via conventional methods or new technology.
Where speculation about human enhancement has led to hype and technophilia, many bioethicists urge restraint. We can be grounded in current science while anticipating feasible medium-term prospects. It’s unlikely that moral enhancement heralds any metamorphic post-human utopia (or dystopia), but that doesn’t mean dismissing its transformative potential. In one sense, we should be wary of transhumanist fervour about the salvific promise of new technology. By the same token, we must resist technofear and alarmist efforts to stall social and scientific progress. Emerging methods will continue to shape morality in subtle and not-so-subtle ways – the critical steps are spotting these and scaffolding them with robust ethical discussion, public engagement, and reasonable policy options. Steering a bright and judicious course requires that we pilot the possibilities of morally disruptive technologies.
When I greeted Rodney Gorham, age 63, in an online chat session, he replied within seconds: “My pleasure.”
“Are you moving parts of your body as you type?” I asked.
This time, his response came about five minutes later: “I position the cursor with the eye tracking and select the same with moving my ankles.” Gorham, a former sales representative from Melbourne, Australia, is living with amyotrophic lateral sclerosis, or ALS – also known as Lou Gehrig’s disease – a rare neurodegenerative condition that attacks nerve cells in the brain and spinal cord, progressively limiting the ability to move. ALS essentially “locks” a person inside their own body. Gorham is conversing with me by typing with his mind alone – no fingers between his brain and his computer.
The brain-computer interface enabling this feat is called the Stentrode. It's the brainchild of Synchron, a company backed by Amazon’s Jeff Bezos and Microsoft cofounder Bill Gates. After Gorham’s neurologist recommended that he try it, he became one of the first volunteers to have an 8mm stent, laced with small electrodes, implanted into his jugular vein and guided by a surgeon into a blood vessel near the part of his brain that controls movement.
After arriving at their destination, these tiny sensors can detect neural activity. They relay these messages through a small receiver implanted under the skin to a computer, which then translates the information into words. This minimally invasive surgery takes a day and is painless, according to Gorham. Recovery time is typically short, about two days.
When a paralyzed patient such as Gorham thinks about trying to move their arms or legs, the motor cortex will fire patterns that are specific to the patient’s thoughts. This pattern is detected by the Stentrode and relayed to a computer that learns to associate this pattern with the patient’s physical movements. The computer recognizes thoughts about kicking, making a fist and other movements as signals for clicking a mouse or pushing certain letters on a keyboard. An additional eye-tracking device controls the movement of the computer cursor.
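The decoding loop described above – detect a neural pattern, match it against patterns learned during calibration, map the match to a keyboard or mouse action – can be sketched as a toy nearest-template classifier. Everything below is illustrative: the template values, feature dimensions, and action names are invented for this sketch, and Synchron’s actual signal processing is far more sophisticated.

```python
import math

# Hypothetical calibration data: average sensor readings recorded while
# the patient imagined each movement (all values are made up).
TEMPLATES = {
    "imagine_kick": [0.9, 0.1, 0.2],
    "imagine_fist": [0.1, 0.8, 0.3],
}

# Each recognized movement-thought is mapped to a computer action.
ACTIONS = {
    "imagine_kick": "click",
    "imagine_fist": "select_letter",
}

def decode(sample):
    """Return the action whose calibration template is closest
    (by Euclidean distance) to the incoming sensor sample."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    best_match = min(TEMPLATES, key=lambda k: dist(TEMPLATES[k], sample))
    return ACTIONS[best_match]

# A noisy reading that resembles the "kick" pattern decodes to a click.
print(decode([0.85, 0.15, 0.25]))  # → click
```

The point of the sketch is the division of labor: the implant only reports activity patterns, while software on the computer side learns which pattern means which intended movement and translates it into cursor commands.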
The process works on a letter-by-letter basis, which is why longer and more nuanced responses often involve some trial and error. “I have been using this for about two years, and I enjoy the sessions,” Gorham typed during our chat session. Zafar Faraz, a field clinical engineer at Synchron, sat next to Gorham, providing help when required. Gorham had struggled without internet access, but now he looks forward to surfing the web and playing video games.
The BCI revolution
In the summer of 2021, Synchron became the first company to receive the FDA’s Investigational Device Exemption, which allows research trials on the Stentrode in human patients. This past summer, the company, together with scientists from Icahn School of Medicine at Mount Sinai and the Neurology and Neurosurgery Department at Utrecht University, published a paper offering a framework for how to develop BCIs for patients with severe paralysis – those who can't use their upper limbs to type or use digital devices.
Three months ago, Synchron announced the enrollment of six patients in a U.S.-based study called COMMAND. The company will seek approval from the FDA next year to make the Stentrode commercially available. Meanwhile, other companies are making progress in the field of BCIs. In August, Neuralink announced a $280 million financing round, the largest in the field to date. Last December, Synchron announced a $75 million financing round. “One thing I can promise you, in five years from now, we’re not going to be where we are today. We're going to be in a very different place,” says Elad I. Levy, professor of neurosurgery and radiology at the State University of New York at Buffalo.
“The prospect of bestowing individuals with paralysis a renewed avenue for communication and motor functionality is a step forward in neurotech,” says Hayley Nelson, a neuroscientist and founder of The Academy of Cognitive and Behavioral Neuroscience. “It is an exciting breakthrough in a world of devastating, scary diseases,” says Neil McArthur, a professor of philosophy and director of the Centre for Professional and Applied Ethics at the University of Manitoba. “To connect with the world when you are trapped inside your body is incredible.”
While the benefits for the paraplegic community are promising, the Stentrode’s long-term safety, effectiveness, and overall impact need more research. “Potential risks like inflammation, damage to neural tissue, or unexpected shifts in synaptic transmission due to the implant warrant thorough exploration,” Nelson says.
There are also concerns about data privacy and about the policies companies use to safeguard information processed through BCIs. “Often, Big Tech is ahead of the regulators because the latter didn’t envisage such a turn of events...and companies take advantage of the lack of legal framework to push forward,” McArthur says. Hacking is another risk. Cybercriminals could steal sensitive personal data for financial gain or blackmail, or spread malware to other connected devices. Extremist groups could potentially hack BCIs to manipulate individuals into supporting their causes or carrying out actions on their behalf.
“We have to protect patient identity, patient safety and patient integrity,” Levy says. “In the same way that we protect our phones or computers from hackers, we have to stay ahead with anti-hacking software.” Even so, Levy thinks the anticipated benefits for the quadriplegic community outweigh the potential risks. “We are on the precipice of an amazing technology. In the future, we would be able to connect patients to peripheral devices that enhance their quality of life.”
In the near future, the Stentrode could enable patients to activate their wheelchairs, iPods or voice modulators. Synchron's focus is on using its BCI to help patients with significant mobility restrictions – not to enhance the lives of healthy people without any illnesses. Levy says we are not prepared for the implications of endowing people with superpowers.
I wondered what Gorham thought about that. “Pardon my question, but do you feel like you have sort of transcended human nature, being the first in a big line of cybernetic people doing marvelous things with their mind only?” was my last question to Gorham.
A slight smile formed on his lips. In less than a minute, he typed: “I do a little.”
A new competition by the XPRIZE Foundation is offering $101 million to researchers who discover therapies that restore function in people aged 65-80, so their bodies perform more like they did in middle age.
For today’s podcast episode, I talked with Dr. Peter Diamandis, XPRIZE’s founder and executive chairman. Under Peter’s leadership, XPRIZE has launched 27 previous competitions with over $300 million in prize purses. The latest contest aims to enhance healthspan, or the period of life when older people can play with their grandkids without any restriction, disability or disease. Such breakthroughs could help prevent chronic diseases that are closely linked to aging. These illnesses are costly to manage and threaten to overwhelm the healthcare system, as the number of Americans over age 65 is rising fast.
In this competition, called XPRIZE Healthspan, multiple awards are available, depending on what’s achieved, with support from the nonprofit Hevolution Foundation and Chip Wilson, the founder of Lululemon and nonprofit SOLVE FSHD. The biggest prize, $81 million, is for improvements in cognition, muscle and immunity by 20 years. An improvement of 15 years will net $71 million, and 10 years will net $61 million.
In our conversation for this episode, Peter talks about his plans for XPRIZE Healthspan and why exponential technologies make the current era – even with all of its challenges – the most exciting time in human history. We discuss the best mental outlook that supports a person in becoming truly innovative, as well as the downsides of too much risk aversion. We talk about how to overcome the negativity bias in ourselves and in mainstream media, how Peter has shifted his own mindset to become more positive over the years, how to inspire a culture of innovation, Peter’s personal recommendations for lifestyle strategies to live longer and healthier, the innovations we can expect in various fields by 2030, the future of education and the importance of democratizing tech and innovation.
In addition to Peter’s pioneering leadership of XPRIZE, he is also the Executive Founder of Singularity University. In 2014, he was named by Fortune as one of the “World’s 50 Greatest Leaders.” As an entrepreneur, he’s started over 25 companies in the areas of health-tech, space, venture capital and education. He’s Co-founder and Vice-Chairman of two public companies, Celularity and Vaxxinity, and Co-founder & Chairman of Fountain Life, a fully-integrated platform delivering predictive, preventative, personalized and data-driven health. He also serves as Co-founder of BOLD Capital Partners, a venture fund with a half-billion dollars under management being invested in exponential technologies and longevity companies. Peter is a New York Times bestselling author of four books, noted during our conversation and in the show notes of this episode. He has degrees in molecular genetics and aerospace engineering from MIT and holds an M.D. from Harvard Medical School.
- Peter Diamandis bio
- New XPRIZE Healthspan
- Peter Diamandis books
- Longevity Insider newsletter – AI identifies the news
- Peter Diamandis Longevity Handbook
- Hevolution funding for longevity
XPRIZE Founder Peter Diamandis speaks with Mehmoud Khan, CEO of Hevolution Foundation, at the launch of XPRIZE Healthspan.