[Editor's Note: This essay is in response to our current Big Question, which we posed to experts with different perspectives: "How should DNA tests for intelligence be used, if at all, by parents and educators?"]
Imagine a world in which pregnant women could go to the doctor and obtain a simple, inexpensive genetic test of their unborn child that would allow them to predict how tall he or she would eventually be. The test might also tell them the child's risk for high blood pressure or heart disease.
Even more remarkable -- and more dangerous -- the test might predict how intelligent the child would be, or how far he or she could be expected to go in school. Or heading further out, it might predict whether he or she will be an alcoholic or a teetotaler, or straight or gay, or… you get the idea. Is this really possible? If it is, would it be a good idea? Answering these questions requires some background in a scientific field called behavior genetics.
Differences in human behavior -- intelligence, personality, mental illness, pretty much everything -- are related to genetic differences among people. Scientists have known this for 150 years, ever since Darwin's half-cousin Francis Galton first applied Shakespeare's phrase "Nature and Nurture" to the scientific investigation of human differences. We knew about the heritability of behavior before Mendel's laws of genetics were rediscovered at the turn of the 20th century, and long before the structure of DNA was discovered in the 1950s. How could discoveries about genetics be made before a science of genetics even existed?
The answer is that scientists developed clever research designs that allowed them to make inferences about genetics in the absence of biological knowledge about DNA. The best-known is the twin study: identical twins are essentially clones, sharing 100 percent of their DNA, while fraternal twins are genetically ordinary siblings, sharing half of their DNA on average. To the extent that identical twins are more similar for some trait than fraternal twins, one can infer that heredity is playing a role. Adoption studies are even more straightforward. Is the personality of an adopted child more like the biological parents she has never seen, or the adoptive parents who raised her?
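To make the twin logic concrete, here is a minimal sketch of the standard textbook calculation (Falconer's formula), which is not part of this essay and uses made-up correlations purely for illustration:

```python
# A minimal sketch of the twin-study logic, using Falconer's classic formula.
# The correlations below are made-up illustrative numbers, not real data.

def falconer_heritability(r_identical: float, r_fraternal: float) -> float:
    """Estimate heritability as twice the gap between identical- and
    fraternal-twin correlations: h^2 = 2 * (r_MZ - r_DZ)."""
    return 2 * (r_identical - r_fraternal)

# Hypothetical example: identical twins correlate 0.75 on a trait,
# fraternal twins 0.45 -> estimated heritability of about 0.60.
print(falconer_heritability(0.75, 0.45))
```

The larger the gap between the two kinds of twins, the larger the inferred role of heredity; if identical and fraternal twins were equally similar, the estimate would be zero.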
Twin and adoption studies played an important role in establishing beyond any reasonable doubt that genetic differences play a role in the development of differences in behavior, but they told us very little about how the genetics of behavior actually worked. When the human genome was finally sequenced in the early 2000s, and it became easier and cheaper to obtain actual DNA from large samples of people, scientists anticipated that we would soon find the genes for intelligence, mental illness, and all the other behaviors that were known to be "heritable" in a general way.
But to everyone's amazement, the genes weren't there. It turned out that there are thousands of genes related to any given behavior, so many that they can't be counted, and each one of them has such a tiny effect that it can't be tied to meaningful biological processes. The whole scientific enterprise of understanding the genetics of behavior seemed ready to collapse, until it was rescued -- sort of -- by a new method called polygenic scores, PGS for short. Polygenic scores abandon the old task of finding the genes for complex human behavior, replacing it with black-box prediction: can we use DNA not to understand, but to predict who is going to be intelligent or extraverted or mentally ill?
PGS are the shiny new toy of human genetics. From a technological standpoint they are truly amazing, and they are useful for some scientific applications that don't involve making decisions about individual people. We can obtain DNA from thousands of people, estimate the tiny relationships between individual bits of DNA and any outcome we want — height or weight or cardiac disease or IQ — and then add all those tiny effects together into a single bell-shaped score that can predict the outcome of interest. In theory, we could do this from the moment of conception.
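For readers who want to see the arithmetic, here is a minimal sketch of that "add all those tiny effects together" step. The variant weights and genotypes below are random placeholders, not estimates from any real study:

```python
import numpy as np

# A toy sketch of how a polygenic score is assembled: each genetic variant
# gets a tiny weight estimated in a large reference study, and an
# individual's score is just the weighted sum of their allele counts.

rng = np.random.default_rng(0)

n_variants = 10_000
effect_sizes = rng.normal(0, 0.01, n_variants)   # tiny per-variant weights
genotypes = rng.integers(0, 3, n_variants)       # 0, 1, or 2 copies of each variant

polygenic_score = float(np.dot(genotypes, effect_sizes))
print(polygenic_score)
```

Summed over thousands of people, scores built this way spread out into the bell-shaped distribution the essay describes; no single variant matters much on its own.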
Polygenic scores for height already work pretty well. Physicians are debating whether the PGS for heart disease are robust enough to be used in the clinic. For some behavioral traits -- the most data exist for educational attainment -- they work well enough to be scientifically interesting, if not practically useful. For traits like personality or sexual orientation, the prediction is statistically significant but nowhere close to practically meaningful. No one knows how much better any of these predictions are likely to get.
Without a doubt, PGS are an amazing feat of genomic technology, but the task they accomplish is something scientists have been able to do for a long time, and in fact it is something that our grandparents could have done pretty well. PGS are basically a new way to predict a trait in an individual by using the same trait in the individual's parents — a way of observing that the acorn doesn't fall far from the tree.
The children of tall people tend to be tall. Children of excellent athletes are athletic; children of smart people are smart; children of people with heart disease are at risk, themselves. Not every time, of course, but that is how imperfect prediction works: children of tall parents vary in their height like anyone else, but on average they are taller than the rest of us. Prediction from observing parents works better, and is far easier and cheaper, than anything we can do with DNA.
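To make the "acorn" point concrete, here is a minimal sketch of what prediction from parents amounts to statistically; the population mean and regression coefficient are illustrative placeholders, not values from this essay or any particular dataset:

```python
# A minimal sketch of "prediction from parents": a child's expected height
# shifts with the parents' average height, but regresses toward the
# population mean. The numbers here are illustrative only.

POPULATION_MEAN_CM = 170.0
REGRESSION_TOWARD_MEAN = 0.6   # how strongly mid-parent height predicts the child

def predicted_child_height(mother_cm: float, father_cm: float) -> float:
    midparent = (mother_cm + father_cm) / 2
    return POPULATION_MEAN_CM + REGRESSION_TOWARD_MEAN * (midparent - POPULATION_MEAN_CM)

# Two tall parents: the child is expected to be taller than average,
# but closer to the mean than the parents are.
print(predicted_child_height(178, 190))   # -> 178.4
```

The prediction is imperfect by design: individual children scatter widely around that expected value, exactly as the essay says.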
But wait a minute. Prediction from parents isn't strictly genetic. Smart parents not only pass on their genes to their kids, but they also raise them. Smart families are privileged in thousands of ways — they make more money and can send their kids to better schools. The same is true for PGS.
The ability of a genetic score to predict educational attainment depends not only on examining the relationship between certain genes and how far people go in school, but also on every personal and social characteristic that helps or hinders education: wealth, status, discrimination, you name it. The bottom line is that for any kind of prediction of human behavior, separation of genetic from environmental prediction is very difficult; ultimately it isn't possible.
This is a reminder that we really have no idea why either parents or PGS predict as well or as poorly as they do. It is easy to imagine that a PGS for educational attainment works because it is summarizing genes that code for efficient neurological development, bigger brains, and swifter problem solving, but we really don't know that. PGS could work because they are associated with being rich, or being motivated, or having light skin. It's the same for predicting from parents. We just don't know.
Still, experts are already discussing how to use PGS to make predictions for children, and even for embryos.
For example, maybe couples could fertilize multiple embryos in vitro, test their DNA, and select the one with the "best" PGS on some trait. This would be a bad idea for a lot of reasons. Such scores aren't effective enough to be very useful to parents, and to the extent they are effective, it is very difficult to know what other traits might be selected for when parents try to prioritize intelligence or attractiveness. People will no doubt try it anyway, and as a matter of reproductive freedom I can't think of any way to stop them. Fortunately, the practice probably won't have any great impact one way or another.
That brings us to the ethics of PGS, particularly in the schools. Imagine that when a child enrolls in a public school, an IQ test is given to her biological parents. Children with low-IQ parents are statistically more likely to have low IQs themselves, so they could be assigned to less demanding classrooms or vocational programs. Hopefully we agree that this would be unethical, but let's think through why.
First of all, it would be unethical because we don't know why the parents have low IQs, or why their IQs predict their children's. The parents could be from a marginalized ethnic group, recognizable by a skin color that is passed on genetically to their children, so discriminating based on a parent's IQ would just be a proxy for discriminating based on skin color. Such a system would be no more than a social scientific gloss on an old-fashioned program for perpetuating economic and cognitive privilege via the educational system.
Assigning children to classrooms based on genetic testing would be no different, although it would have the slight ethical advantage of being less effective. The PGS for educational attainment could reflect brain efficiency, but it could also depend on skin color, or economic advantage, or personality, or literally anything that is related in any way to economic success. Privileging kids with higher genetic scores would be no different than privileging children with smart parents. If schools really believe that a psychological trait like IQ is important for school placement, the sensible thing is to administer an actual IQ test to the children – not a genetic test.
IQ testing has its own issues, of course, but at least it involves making decisions about individuals based on their own observable characteristics, rather than on characteristics of their parents or their genome. If decisions must be made, if resources must be apportioned, people deserve to be judged on the basis of their own behavior, the content of their character. Since it can't be denied that people differ in all sorts of relevant ways, this is what it means for all people to be created equal.
[Editor's Note: Read another perspective in the series here.]
Astronauts aboard the International Space Station today depend on pre-packaged, freeze-dried food, plus some fresh produce delivered by regular resupply missions. This supply chain, however, will not be available on trips farther out, such as to the Moon or Mars. So what are astronauts on long missions going to eat?
Going by the options available now, says Christel Paille, an engineer at the European Space Agency, a lunar expedition is likely to have only dehydrated foods. “So no more fresh product, and a limited amount of already hydrated product in cans.”
For the Mars mission, the situation is a bit more complex, she says. Prepackaged food could still constitute most of their food, “but combined with [on site] production of certain food products…to get them fresh.” A Mars mission isn’t right around the corner, but scientists are currently working on solutions for how to feed those astronauts. A number of boundary-pushing efforts are now underway.
The logistics of growing plants in space, of course, are very different from those on Earth. There is no gravity, sunlight, or atmosphere. High levels of ionizing radiation stunt plant growth. Plus, plants take up a lot of space, something that is, ironically, at a premium up there. These constraints, along with the special nutritional requirements of spacefarers, have given scientists some specific and challenging problems.
To study fresh food production systems, NASA runs the Vegetable Production System (Veggie) on the ISS. Deployed in 2014, Veggie has been growing salad-type plants in “plant pillows” filled with growth media, including a special clay and a controlled-release fertilizer, and watered by a passive wicking system. The team has had some success growing leafy greens and even flowers.
"Ideally, we would like a system which has zero waste and, therefore, needs zero input, zero additional resources."
A larger NASA farming facility on the ISS, the Advanced Plant Habitat (APH), studies how plants grow in space. This fully automated, closed-loop system has an environmentally controlled growth chamber and is equipped with sensors that relay real-time information about temperature, oxygen content, and moisture levels back to the ground team at Kennedy Space Center in Florida. In December 2020, the ISS crew feasted on radishes grown in the APH.
“But salad doesn’t give you any calories,” says Erik Seedhouse, a researcher at the Applied Aviation Sciences Department at Embry-Riddle Aeronautical University in Florida. “It gives you some minerals, but it doesn’t give you a lot of carbohydrates.” Seedhouse also noted in his 2020 book Life Support Systems for Humans in Space: “Integrating the growing of plants into a life support system is a fiendishly difficult enterprise.” As a case in point, he referred to the ESA’s Micro-Ecological Life Support System Alternative (MELiSSA) program, which has been running since 1989 to integrate the growing of plants into a closed life-support system such as a spacecraft.
Paille, one of the scientists running MELiSSA, says that the system aims to recycle the metabolic waste produced by crew members back into the metabolic resources required by them: “The aim is…to come [up with] a closed, sustainable system which does not [need] any logistics resupply.” MELiSSA uses microorganisms to process human excretions in order to harvest carbon dioxide and nitrate to grow plants. “Ideally, we would like a system which has zero waste and, therefore, needs zero input, zero additional resources,” Paille adds.
Microorganisms play a big role as “fuel” in food production in extreme places, including in space. Last year, researchers discovered Methylobacterium strains on the ISS, including some never-seen-before species. Kasthuri Venkateswaran of NASA’s Jet Propulsion Laboratory, one of the researchers involved in the study, says, “[The] isolation of novel microbes that help to promote the plant growth under stressful conditions is very essential… Certain bacteria can decompose complex matter into a simple nutrient [that] the plants can absorb.” These microbes, which have already adapted to space conditions—such as the absence of gravity and increased radiation—boost various plant growth processes and help plants withstand the harsh physical environment.
MELiSSA, says Paille, has demonstrated that it is possible to grow plants in space. “This is important information because…we didn’t know whether the space environment was affecting the biological cycle of the plant…[and of] cyanobacteria.” With the scientific and engineering aspects of a closed, self-sustaining life support system becoming clearer, she says, the next stage is to find out if it works in space. They plan to run tests recycling human urine into useful components, including those that promote plant growth.
The MELiSSA pilot plant currently uses rats, and the work will need to be translated to human subjects for further studies. “Demonstrating the process and well-being of a rat in terms of providing water, sufficient oxygen, and recycling sufficient carbon dioxide, in a non-stressful manner, is one thing,” Paille says, “but then, having a human in the loop [means] you also need to integrate user interfaces from the operational point of view.”
Growing food in space comes with an additional caveat that underscores its high stakes. Barbara Demmig-Adams from the Department of Ecology and Evolutionary Biology at the University of Colorado Boulder explains, “There are conditions that actually will hurt your health more than just living here on earth. And so the need for nutritious food and micronutrients is even greater for an astronaut than for [you and] me.”
Demmig-Adams, who has worked on increasing the nutritional quality of plants for long-duration spaceflight missions, adds that there is no need to reinvent the wheel. Her work has focused on duckweed, a rather unappealingly named aquatic plant. “It is 100 percent edible, grows very fast, it’s very small, and like some other floating aquatic plants, also produces a lot of protein,” she says. “And here on Earth, studies have shown that the amount of protein you get from the same area of these floating aquatic plants is 20 times higher compared to soybeans.”
Aquatic plants also tend to grow well in microgravity: “Plants that float on water, they don’t respond to gravity, they just hug the water film… They don’t need to know what’s up and what’s down.” On top of that, she adds, “They also produce higher concentrations of really important micronutrients, antioxidants that humans need, especially under space radiation.” In fact, duckweed, when subjected to high amounts of radiation, makes nutrients called carotenoids that are crucial for fighting radiation damage. “We’ve looked at dozens and dozens of plants, and the duckweed makes more of this radiation fighter…than anything I’ve seen before.”
Despite all the scientific advances and promising leads, no one really knows what the conditions so far out in space will be and what new challenges they will bring. As Paille says, “There are known unknowns and unknown unknowns.”
One definite “known” for astronauts is that growing their food is the ideal scenario for space travel in the long term since “[taking] all your food along with you, for best part of two years, that’s a lot of space and a lot of weight,” as Seedhouse says. That said, once they land on Mars, they’d have to think about what to eat all over again. “Then you probably want to start building a greenhouse and growing food there [as well],” he adds.
And that is a whole different challenge altogether.
We are sticking our heads into the sand of reality on Omicron, and the results may be catastrophic.
Omicron is over 4 times more infectious than Delta. The Pfizer two-shot vaccine offers only 33% protection from infection. A Pfizer booster raises protection to about 75%, but that protection wanes to around 30-40% 10 weeks after the booster.
The much faster disease transmission and vaccine escape undercut Omicron's less severe overall nature. That’s why hospitals have a large probability of being overwhelmed in this major Omicron wave, as the Centers for Disease Control and Prevention warned.
Yet despite this very serious threat, we see a lack of real action. The federal government tightened international travel guidelines and is promoting boosters. Certainly, it’s crucial that as many people as possible get their boosters – and initial vaccine doses – as soon as possible. But the government is not taking the steps that would be the real game-changers.
Pfizer’s anti-viral drug Paxlovid decreases the risk of hospitalization and death from COVID by 89%. Due to this effectiveness, the FDA allowed Pfizer to end the trial early, because it would be unethical to withhold the drug from people in the control group. Yet the FDA chose not to hasten the approval process when Omicron emerged in late November, only getting around to emergency authorization in late December, once Omicron had taken over. Because it takes many weeks to ramp up production, that delay meant a lack of Paxlovid at the height of the Omicron wave, resulting in an unknown number of unnecessary deaths.
Widely available at-home testing would enable people to test themselves quickly, so that those with mild symptoms can quarantine instead of infecting others. Yet the federal government did not make tests widely available when Omicron emerged in late November. That’s despite the obviousness of the coming wave, based on the precedent of South Africa, the UK, and Denmark, and despite the fact that the government had made vaccines freely available. Its best effort was to mandate that insurance reimburse purchases of these kits, a process that is too much of a barrier for most people. By the time Omicron took over, the federal government recognized its mistake and ordered 500 million tests to be made available in January. However, that’s far too late. The FDA also played a harmful role here: its excessive focus on accuracy, going back to mid-2020, blocked the widespread availability of cheap at-home tests. By contrast, Europe has a much better supply of tests, thanks to its approval of quick, slightly less accurate tests.
Nor do we see meaningful leadership at the level of employers. Some are bringing out the tired old “delay the office reopening” play. For example, Google, Uber, and Ford, along with many others, have delayed the return to the office for several months. Those that have already returned are calling for stricter pandemic measures, such as more masks and social distancing, but they are not changing their work arrangements or adding sufficient ventilation to address the spread of COVID.
Despite plenty of warnings from risk management and cognitive bias experts, leaders are repeating the same mistakes we fell into with Delta. And so are regular people. For example, surveys show that Omicron has had very little impact on the willingness of unvaccinated Americans to get a first vaccine dose, or of vaccinated Americans to get a booster. That’s despite Omicron having taken over from Delta in late December.
What explains this puzzling behavior on both the individual and society level? We humans are prone to falling for dangerous judgment errors called cognitive biases. Rooted in wishful thinking and gut reactions, these mental blindspots lead to poor strategic and financial decisions when evaluating choices.
These cognitive biases stem from the more primitive, emotional, and intuitive part of our brains that ensured survival in our ancestral environment. This quick, automatic reaction of our emotions represents the autopilot system of thinking, one of the two systems of thinking in our brains. It makes good decisions most of the time but also regularly makes certain systematic thinking errors, since it’s optimized to help us survive. In modern society, our survival is much less at risk, and our gut is more likely to compel us to focus on the wrong information to make decisions.
One of the biggest challenges relevant to Omicron is the cognitive bias known as the ostrich effect. Named after the myth that ostriches stick their heads into the sand when they fear danger, the ostrich effect refers to people denying negative reality. Delta illustrated the high likelihood of additional dangerous variants, yet we failed to pay attention to and prepare for such a threat.
We want the future to be normal. We’re tired of the pandemic and just want to get back to pre-pandemic times. Thus, we greatly underestimate the probability and impact of major disruptors, like new COVID variants. That cognitive bias is called the normalcy bias.
When we learn one way of functioning in any area, we tend to stick to that way of functioning. You might have heard of this as the hammer-nail syndrome: when you have a hammer, everything looks like a nail. That syndrome is called functional fixedness. This cognitive bias causes those used to their old ways of acting to reject any alternatives, including preparing for a new variant.
Our minds naturally prioritize the present. We want what we want now, and downplay the long-term consequences of our current desires. That fallacious mental pattern is called hyperbolic discounting, where we excessively discount the benefits of orienting toward the future and focus on the present. A clear example is focusing on the short-term perceived gains of trying to return to normal over managing the risks of future variants.
The way forward is to defeat these cognitive biases and stop denying reality by rethinking our approach to the future.
The FDA requires a serious overhaul. It’s designed for a non-pandemic environment, where the goal is a highly conservative, slow-going, and risk-averse approach so that the public feels confident trusting whatever it approves. That’s simply unacceptable in a fast-moving pandemic, and we are bound to face more pandemics in the future.
The federal government needs to have cognitive bias experts weigh in on federal policy. Putting all of its eggs in one basket – vaccinations – is not a wise move when we face the risks of a vaccine-escaping variant. Its focus should also be on expediting and prioritizing anti-virals, scaling up cheap rapid testing, and subsidizing high-filtration masks.
For employers, instead of dictating a top-down approach to how employees collaborate, companies need to adopt a decentralized, team-led approach. Each team leader should determine what works best for their team. After all, team leaders tend to know much more about what their teams need, and they can respond to local emergencies like COVID surges.
At the same time, team leaders need to be trained in best practices for hybrid and remote team leadership. Companies transitioned to telework abruptly as part of the March 2020 lockdowns. They fell into the cognitive bias of functional fixedness and transposed their pre-existing, in-office methods of collaboration onto remote work. Zoom happy hours are a clear example: the large majority of employees dislike them, and research shows they leave people feeling more disconnected, not more connected.
Yet supervisors continue to use them, despite the existence of much better, proven methods of facilitating collaboration, such as virtual water cooler discussions, virtual coworking, and virtual mentoring. Leaders also need to facilitate innovation in hybrid and remote teams through techniques such as virtual asynchronous brainstorming. Finally, team leaders need to adjust performance evaluation to adapt to the needs of hybrid and remote teams.
On an individual level, people built up certain expectations during the first two years of the pandemic, and those expectations don't apply to Omicron. For example, most people still think that a cloth mask offers fine protection. In reality, you really need an N95 mask, since Omicron is so much more infectious. Another example: many people don’t realize that symptom onset is much quicker with Omicron, and they aren’t prepared for the consequences.
Remember that a huge number of people are infected but asymptomatic, often without knowing it, because Omicron tends to be milder. About 8% of people admitted to hospitals for other reasons in San Francisco test positive for COVID without symptoms, a figure we can assume roughly translates to other cities. That means many people may think they're fine when they're actually infectious. The result is a much higher chance of someone getting many other people sick.
During this time of record-breaking cases, you need to be mindful about your internalized assumptions and adjust your risk calculus accordingly. So if you can delay higher-risk activities, January and February might be the time to do it. Prepare for waves of disruptions to continue over time, at least through the end of February.
Of course, you might also choose not to worry about getting infected. If you are vaccinated and boosted, and do not have any additional health risks, you are very unlikely to have a serious illness due to Omicron. You can just accept the small risk of a serious illness – which can happen – and go about your daily life. If you do, watch out for those you care about who do have health concerns, since if you infect them, they might not have a mild case, even with Omicron.
In short, instead of trying to turn back the clock to the lost world of January 2020, consider how we might create a competitive advantage in our new future. COVID will never go away: we need to learn to live with it. That means reacting appropriately and thoughtfully to new variants and being intentional about our trade-offs.