"All fixed, fast-frozen relations, with their train of ancient and venerable prejudices and opinions, are swept away, all new-formed ones become antiquated before they can ossify. All that is solid melts into air, all that is holy is profaned…"
On July 25, 1978, Louise Brown was born in Oldham, England, the first human born through in vitro fertilization, through the work of Patrick Steptoe, a gynecologist, and Robert Edwards, a physiologist. Her birth was greeted with strong (though not universal) expressions of ethical dismay. Yet in 2016, the latest year for which we have data, nearly two percent of the babies born in the United States – and around the same percentage throughout the developed world – were the result of IVF. Few, if any, think of these children as unnatural, monsters, or freaks or of their parents as anything other than fortunate.
On November 25, 2018, news broke that a Chinese scientist, Dr. He Jiankui, claimed to have edited the genomes of embryos, two of which had recently been born as babies Lulu and Nana. The response was immediate and overwhelmingly negative.
Times change. So do views. How will Dr. He be viewed in 40 years? And, more importantly, how should we view him today, knowing that the world's eventual verdict on the ethics of biomedical technologies often changes? And when what biomedicine can do changes with vertiginous frequency?
How to determine what is and isn't ethical is above my pay grade. I'm a simple law professor – I can't claim any deeper insight into how to live a moral life than the millennia of religious leaders, philosophers, ethicists, and ordinary people trying to do the right thing. But I can point out some ways to think about these questions that may be helpful.
First, consider two different kinds of ethical commands. Some are quite specific – "thou shalt not kill," for example. Others are more general – "do unto others as you would have them do unto you," say, or "seek the greatest good for the greatest number."
Biomedicine in the last two centuries has often surprised us with new possibilities, situations that cultures, religions, and bodies of ethical thought had not previously had to consider, from vaccination to anesthesia for women in labor to genome editing. Sometimes these possibilities will violate important and deeply accepted precepts for a group or a person. The rise of blood transfusions around World War I created new problems for Jehovah's Witnesses, who believe that the Bible prohibits ingesting blood. The 20th century developments of artificial insemination and IVF both ran afoul of Catholic doctrine prohibiting methods other than "traditional" marital intercourse for conceiving children. If you subscribe to an ethical or moral code that contains prohibitions that modern biomedicine violates, the issue for you is stark – adhere to those beliefs or renounce them.
But many biomedical changes violate no clear moral teachings. Is it ethical or not to edit the DNA of embryos? Not surprisingly, the sacred texts of various religions – few of which were created after the early 19th century – say nothing specific about this. There may be hints, precedents, leanings that could argue one way or another, but no "commandments." In that case, I recommend, at least as a starting point, asking "what are the likely consequences of these actions?"
Will people be, on balance, harmed or helped by them? "Consequentialist" approaches, of various types, are a vast branch of ethical theories. Personally I find a completely consequentialist approach unacceptable – I could not accept, for example, torturing an innocent child even in order to save many lives. But, in the absence of a clear rule, looking at the consequences is a great place to start. If the harms seem to outweigh the benefits, it's easy to conclude "this is worrisome."
Let's use that starting place to look at a few bioethical issues. IVF, for example, once proven (relatively) safe seems to harm no one and to help many, notably the more than 8 million children worldwide born through IVF since 1978 – and their 16 million parents. On the other hand, giving unknowing, and unconsenting, intellectually disabled children hepatitis A harmed them, for an uncertain gain for science. And freezing the heads of the dead seems unlikely to harm anyone alive (except financially) but it also seems almost certain not to benefit anyone. (Those frozen dead heads are not coming back to life.)
Now let's look at two different kinds of biomedical advances. Some are controversial just because they are new; others are controversial because they cut close to the bone – whether or not they violate pre-established ethical or moral norms, they clearly relate to them.
Consider anesthesia during childbirth. When first used, it was controversial. After all, said critics, in Genesis the Bible says God told Eve, "I will greatly multiply your pain in childbirth; in pain you will bring forth children." But it did not clearly prohibit pain relief, and from the advent of ether on, anesthesia has been common, though not universal, in childbirth in western societies. The pre-existing ethical precepts were not clear, and the consequences weighed heavily in favor of anesthesia. Similarly, vaccination seems to violate no deep moral principle. It was, and for some people still is, just strange and unnatural. The same was true of IVF initially. Opposition to all of these has faded with time and familiarity. It has not disappeared – some people continue to find moral or philosophical problems with "unnatural" childbirth, vaccination, and IVF – but far fewer.
On the other hand, human embryonic stem cell research touches deeper issues. Human embryos are destroyed to make those stem cells. Reasonable people disagree on the moral status of the human embryo, and the moral weight of its destruction, but it does at least bring into play clear and broadly accepted moral precepts, such as "Thou shalt not kill." So, at the other end of a life, does euthanasia. More exposure to, and familiarity with, these practices will not necessarily lead to broad acceptance, as the objections involve more than novelty.
Finally, all this ethical analysis must work at two levels. The first is "what would I do?" The second is "what should my government, culture, or religion allow or forbid?" There are many things I would not do that I don't think should be banned – because I think other people may reasonably have different views from mine. I would not get cosmetic surgery, but I would not ban it – and will try not to think ill of those who choose it.
So, how should we assess the ethics of new biomedical procedures when we know that society's views may change? More specifically, what should we think of He Jiankui's experiment with human babies?
First, look to see whether the procedure in question violates, at least fairly clearly, some rule in your ethical or moral code. If so, your choice may not be difficult. But if the procedure is unmentioned in your moral code, probably because it was inconceivable to the code's creators, examine the consequences of the act.
If the procedure is just novel, and not something that touches on important moral concerns, looking at the likely consequences may be enough for your ethical analysis – though it is always worth remembering that predictions of consequences are never certain. If it does touch on morally significant issues, you need to think those issues through. The consequences may be important to your conclusions, but they may not be determinative.
And, then, if you conclude that it is not ethical from your perspective, you need to take yet another step and consider whether it should be banned for people who do not share your perspective. Sometimes the answer will be yes – that psychopaths may not view murder as immoral does not mean we have to let them kill – but sometimes it will be no.
What does this say about He Jiankui's experiment? I have no qualms in condemning it, unequivocally. The potential risks to the babies grossly outweighed any benefits to them, and to science. And his secret work, against a near universal scientific consensus, privileged his own ethical conclusions without giving anyone else a vote, or even a voice.
But if, in ten or twenty years, genome editing of human embryos is shown to be safe (enough) and it is proposed to be used for good reasons – say, to relieve human suffering that could not be treated in other good ways – and with good consents from those directly involved as well as from the relevant society and government – my answer might well change. Yours may not. Bioethics is a process for approaching questions; it is not a set of universal answers.
This article opened with a quotation from the 1848 Communist Manifesto, referring to the dizzying pace of change from industrialization and modernity. You don't need to be a Marxist to appreciate that sentiment. Change – especially in the biosciences – keeps accelerating. How should we assess the ethics of new biotechnologies? The best we can, with what we know, at the time we inhabit. And, in the face of vast uncertainty, with humility.
Astronauts at the International Space Station today depend on pre-packaged, freeze-dried food, plus some fresh produce thanks to regular resupply missions. This supply chain, however, will not be available on trips further out, such as the moon or Mars. So what are astronauts on long missions going to eat?
Going by the options available now, says Christel Paille, an engineer at the European Space Agency, a lunar expedition is likely to have only dehydrated foods. “So no more fresh product, and a limited amount of already hydrated product in cans.”
For the Mars mission, the situation is a bit more complex, she says. Prepackaged food could still constitute most of their food, “but combined with [on site] production of certain food products…to get them fresh.” A Mars mission isn’t right around the corner, but scientists are currently working on solutions for how to feed those astronauts. A number of boundary-pushing efforts are now underway.
The logistics of growing plants in space, of course, are very different from those on Earth. There is no gravity, sunlight, or atmosphere. High levels of ionizing radiation stunt plant growth. Plus, plants take up a lot of space, something that is, ironically, at a premium up there. These factors, along with the special nutritional requirements of spacefarers, have given scientists some specific and challenging problems.
To study fresh food production systems, NASA runs the Vegetable Production System (Veggie) on the ISS. Deployed in 2014, Veggie grows salad-type plants on “plant pillows” filled with growth media – including a special clay and controlled-release fertilizer – and watered by a passive wicking system. The team has had some success growing leafy greens and even flowers.
NASA also runs a larger farming facility on the ISS, the Advanced Plant Habitat (APH), to study how plants grow in space. This fully automated, closed-loop system has an environmentally controlled growth chamber and is equipped with sensors that relay real-time information about temperature, oxygen content, and moisture levels back to the ground team at Kennedy Space Center in Florida. In December 2020, the ISS crew feasted on radishes grown in the APH.
“But salad doesn’t give you any calories,” says Erik Seedhouse, a researcher at the Applied Aviation Sciences Department at Embry-Riddle Aeronautical University in Florida. “It gives you some minerals, but it doesn’t give you a lot of carbohydrates.” Seedhouse also noted in his 2020 book Life Support Systems for Humans in Space: “Integrating the growing of plants into a life support system is a fiendishly difficult enterprise.” As a case in point, he referred to the ESA’s Micro-Ecological Life Support System Alternative (MELiSSA) program, which has been running since 1989 to integrate the growing of plants into a closed life support system such as a spacecraft.
Paille, one of the scientists running MELiSSA, says that the system aims to recycle the metabolic waste produced by crew members back into the metabolic resources required by them: “The aim is…to come [up with] a closed, sustainable system which does not [need] any logistics resupply.” MELiSSA uses microorganisms to process human excretions in order to harvest carbon dioxide and nitrate to grow plants. “Ideally, we would like a system which has zero waste and, therefore, needs zero input, zero additional resources,” Paille adds.
Microorganisms play a big role as “fuel” in food production in extreme places, including in space. Last year, researchers discovered Methylobacterium strains on the ISS, including some never-seen-before species. Kasthuri Venkateswaran of NASA’s Jet Propulsion Laboratory, one of the researchers involved in the study, says, “[The] isolation of novel microbes that help to promote the plant growth under stressful conditions is very essential… Certain bacteria can decompose complex matter into a simple nutrient [that] the plants can absorb.” These microbes, which have already adapted to space conditions—such as the absence of gravity and increased radiation—boost various plant growth processes and help plants withstand the harsh physical environment.
MELiSSA, says Paille, has demonstrated that it is possible to grow plants in space. “This is important information because…we didn’t know whether the space environment was affecting the biological cycle of the plant…[and of] cyanobacteria.” With the scientific and engineering aspects of a closed, self-sustaining life support system becoming clearer, she says, the next stage is to find out if it works in space. They plan to run tests recycling human urine into useful components, including those that promote plant growth.
The MELiSSA pilot plant uses rats currently, and needs to be translated for human subjects for further studies. “Demonstrating the process and well-being of a rat in terms of providing water, sufficient oxygen, and recycling sufficient carbon dioxide, in a non-stressful manner, is one thing,” Paille says, “but then, having a human in the loop [means] you also need to integrate user interfaces from the operational point of view.”
Growing food in space comes with an additional caveat that underscores its high stakes. Barbara Demmig-Adams from the Department of Ecology and Evolutionary Biology at the University of Colorado Boulder explains, “There are conditions that actually will hurt your health more than just living here on earth. And so the need for nutritious food and micronutrients is even greater for an astronaut than for [you and] me.”
Demmig-Adams, who has worked on increasing the nutritional quality of plants for long-duration spaceflight missions, also adds that there is no need to reinvent the wheel. Her work has focused on duckweed, a rather unappealingly named aquatic plant. “It is 100 percent edible, grows very fast, it’s very small, and like some other floating aquatic plants, also produces a lot of protein,” she says. “And here on Earth, studies have shown that the amount of protein you get from the same area of these floating aquatic plants is 20 times higher compared to soybeans.”
Aquatic plants also tend to grow well in microgravity: “Plants that float on water, they don’t respond to gravity, they just hug the water film… They don’t need to know what’s up and what’s down.” On top of that, she adds, “They also produce higher concentrations of really important micronutrients, antioxidants that humans need, especially under space radiation.” In fact, duckweed, when subjected to high amounts of radiation, makes nutrients called carotenoids that are crucial for fighting radiation damage. “We’ve looked at dozens and dozens of plants, and the duckweed makes more of this radiation fighter…than anything I’ve seen before.”
Despite all the scientific advances and promising leads, no one really knows what the conditions so far out in space will be and what new challenges they will bring. As Paille says, “There are known unknowns and unknown unknowns.”
One definite “known” for astronauts is that growing their own food is the ideal scenario for long-term space travel, since “[taking] all your food along with you, for [the] best part of two years, that’s a lot of space and a lot of weight,” as Seedhouse says. That said, once they land on Mars, they’d have to think about what to eat all over again. “Then you probably want to start building a greenhouse and growing food there [as well],” he adds.
And that is a whole different challenge altogether.
We are sticking our heads into the sand of reality on Omicron, and the results may be catastrophic.
Omicron is over four times more infectious than Delta. The Pfizer two-shot vaccine offers only 33% protection from infection. A Pfizer booster raises protection to about 75%, but that protection wanes to around 30–40% by 10 weeks after the booster.
Much faster disease transmission and vaccine escape undercut Omicron’s less severe overall nature. That’s why, as the Centers for Disease Control and Prevention warned, hospitals have a large probability of being overwhelmed in this major Omicron wave.
Yet despite this very serious threat, we see little real action. The federal government tightened international travel guidelines and is promoting boosters. Certainly, it’s crucial to get booster – and initial – vaccine doses to as many people as possible, as soon as possible. But the government is not taking the steps that would be the real game-changers.
Pfizer’s anti-viral drug Paxlovid decreases the risk of hospitalization and death from COVID by 89%. Due to this effectiveness, the FDA allowed Pfizer to end its trial early, because it would be unethical to withhold the drug from people in the control group. Yet the FDA chose not to hasten the approval process when Omicron emerged in late November, only getting around to emergency authorization in late December, once Omicron had taken over. Because it takes many weeks to ramp up production, that delay meant Paxlovid was unavailable at the height of the Omicron wave, resulting in an unknown number of unnecessary deaths.
Widely available at-home testing would enable people to test themselves quickly, so that those with mild symptoms can quarantine instead of infecting others. Yet the federal government did not make tests available to patients when Omicron emerged in late November. That’s despite the obviousness of the coming wave, based on the precedent of South Africa, the UK, and Denmark, and despite the fact that the government had made vaccines freely available. Its best effort was to mandate that insurance cover reimbursements for test kits, which leaves far too high a barrier for most people. By the time Omicron took over, the federal government recognized its mistake and ordered 500 million tests to be made available in January. That, however, is far too late. And the FDA also played a harmful role here: its excessive focus on accuracy, going back to mid-2020, blocked the widespread availability of cheap at-home tests. By contrast, Europe has a much better supply of tests, thanks to its approval of quick, slightly less accurate tests.
Neither do we see meaningful leadership at the level of employers. Some are bringing out the tired old “delay the office reopening” play. For example, Google, Uber, and Ford, along with many others, have delayed the return to the office for several months. Those that have already returned are calling for stricter pandemic measures, such as more masks and social distancing, but they are not changing their work arrangements or adding sufficient ventilation to address the spread of COVID.
Despite plenty of warnings from risk management and cognitive bias experts, leaders are repeating the same mistakes we fell into with Delta. And so are regular people. For example, surveys show that Omicron has had very little impact on the willingness of unvaccinated Americans to get a first vaccine dose, or of vaccinated Americans to get a booster. That’s despite Omicron having taken over from Delta in late December.
What explains this puzzling behavior on both the individual and society level? We humans are prone to falling for dangerous judgment errors called cognitive biases. Rooted in wishful thinking and gut reactions, these mental blindspots lead to poor strategic and financial decisions when evaluating choices.
These cognitive biases stem from the more primitive, emotional, and intuitive part of our brains that ensured survival in our ancestral environment. This quick, automatic reaction of our emotions represents the autopilot system of thinking, one of the two systems of thinking in our brains. It makes good decisions most of the time but also regularly makes certain systematic thinking errors, since it’s optimized to help us survive. In modern society, our survival is much less at risk, and our gut is more likely to compel us to focus on the wrong information to make decisions.
One of the biggest challenges relevant to Omicron is the cognitive bias known as the ostrich effect. Named after the myth that ostriches stick their heads into the sand when they fear danger, the ostrich effect refers to people denying negative reality. Delta illustrated the high likelihood of additional dangerous variants, yet we failed to pay attention to and prepare for such a threat.
We want the future to be normal. We’re tired of the pandemic and just want to get back to pre-pandemic times. Thus, we greatly underestimate the probability and impact of major disruptors, like new COVID variants. That cognitive bias is called the normalcy bias.
When we learn one way of functioning in any area, we tend to stick to that way of functioning. You might have heard of this as the hammer-nail syndrome: when you have a hammer, everything looks like a nail. That syndrome is called functional fixedness. This cognitive bias causes those used to their old ways of action to reject any alternatives, including to prepare for a new variant.
Our minds naturally prioritize the present. We want what we want now, and downplay the long-term consequences of our current desires. That fallacious mental pattern is called hyperbolic discounting, where we excessively discount the benefits of orienting toward the future and focus on the present. A clear example is focusing on the short-term perceived gains of trying to return to normal over managing the risks of future variants.
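Hyperbolic discounting can be made concrete with a little arithmetic. In the standard model, a reward of amount A received after a delay D is felt today as worth roughly A / (1 + kD), where k is an individual’s discount rate. The minimal sketch below is illustrative only (the function name and the value of k are assumptions, not from this article):

```python
def hyperbolic_value(amount, delay_days, k=0.05):
    """Perceived present value of a future reward under hyperbolic
    discounting: amount / (1 + k * delay). Near-term rewards lose
    little value; far-future ones are discounted steeply."""
    return amount / (1 + k * delay_days)

# The same benefit of 100, received now versus a year from now:
now = hyperbolic_value(100, 0)          # 100.0 -- full value today
next_year = hyperbolic_value(100, 365)  # about 5.2 -- steeply discounted
```

The steep drop-off is the point: a benefit a year away can feel nearly worthless, which is why preparing for future variants loses out to the short-term comfort of “returning to normal.”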
The way forward is to defeat these cognitive biases and stop denying reality by rethinking our approach to the future.
The FDA requires a serious overhaul. It’s designed for a non-pandemic environment, where the goal is to have a highly conservative, slow-going, and risk-averse approach so that the public feels confident trusting whatever it approves. That’s simply unacceptable in a fast-moving pandemic, and we are bound to face more pandemics in the future.
The federal government needs to have cognitive bias experts weigh in on federal policy. Putting all of its eggs in one basket – vaccinations – is not a wise move when we face the risks of a vaccine-escaping variant. Its focus should also be on expediting and prioritizing anti-virals, scaling up cheap rapid testing, and subsidizing high-filtration masks.
For employers, instead of dictating a top-down approach to how employees collaborate, companies need to adopt a decentralized, team-led approach, in which each team leader determines what works best for their team. After all, team leaders tend to know much more about what their teams need. Moreover, they can respond to local emergencies like COVID surges.
At the same time, team leaders need to be trained to integrate best practices for hybrid and remote team leadership. Companies transitioned to telework abruptly as part of the March 2020 lockdowns. They fell into the cognitive bias of functional fixedness and transposed their pre-existing, in-office methods of collaboration on remote work. Zoom happy hours are a clear example: The large majority of employees dislike them, and research shows they are disconnecting, rather than connecting.
Yet supervisors continue to use them, despite the existence of much better methods of facilitating collaboration, which have been shown to work, such as virtual water cooler discussions, virtual coworking, and virtual mentoring. Leaders also need to facilitate innovation in hybrid and remote teams through techniques such as virtual asynchronous brainstorming. Finally, team leaders need to adjust performance evaluation to the needs of hybrid and remote teams.
On an individual level, people built up certain expectations during the first two years of the pandemic, and those expectations don’t apply with Omicron. For example, most people still think that a cloth mask offers fine protection. In reality, you need an N95 mask, since Omicron is so much more infectious. Another example: many people don’t realize that symptom onset is much quicker with Omicron, and they aren’t prepared for the consequences.
Remember, too, that a huge number of people are asymptomatic, often without knowing it, because Omicron cases are so much milder. About 8% of people admitted to hospitals for other reasons in San Francisco test positive for COVID without symptoms, a figure we can assume is similar in other cities. That means many people may think they’re fine when they’re actually infectious. The result is a much higher chance of one person getting many others sick.
During this time of record-breaking cases, you need to be mindful about your internalized assumptions and adjust your risk calculus accordingly. So if you can delay higher-risk activities, January and February might be the time to do it. Prepare for waves of disruptions to continue over time, at least through the end of February.
Of course, you might also choose to not worry about getting infected. If you are vaccinated and boosted, and do not have any additional health risks, you are very unlikely to have a serious illness due to Omicron. You can just take the small risk of a serious illness – which can happen – and go about your daily life. If doing so, watch out for those you care about who do have health concerns, since if you infect them, they might not have a mild case even with Omicron.
In short, instead of trying to turn back the clock to the lost world of January 2020, consider how we might create a competitive advantage in our new future. COVID will never go away: we need to learn to live with it. That means reacting appropriately and thoughtfully to new variants and being intentional about our trade-offs.