Isaac Asimov on the History of Infectious Disease—and How Humanity Learned to Fight Back
[EDITOR'S FOREWORD: Humanity has always faced existential threats from dangerous microbes, and though this is the first pandemic in our lifetimes, it won't be the last our species will ever face. This newly relevant work by beloved sci-fi writer Isaac Asimov, an excerpt from his 1979 book, A Choice of Catastrophes, establishes that reality in its historical context and makes clear how far we have come since ancient times. But by some measures, we are still in the earliest stages of figuring out how to effectively neutralize such threats. Advancing progress as fast as we can—by leveraging all the insights of modern science—offers our best hope for containing this pandemic and those that will inevitably follow.]
An even greater danger to humanity than the effect of small, fecund pests on human beings, their food, and their possessions, is their tendency to spread some forms of infectious disease.
Every living organism is subject to disease of various sorts, where disease is defined in its broadest sense as "dis-ease," that is, as any malfunction or alteration of the physiology or biochemistry that interferes with the smooth workings of the organism. In the end, the cumulative effect of malfunctions, misfunctions, nonfunctions, even though much of it is corrected or patched up, produces irreversible damage—we call it old age—and, even with the best care in the world, brings on inevitable death.
There are some individual trees that may live five thousand years, some cold-blooded animals that may live two hundred years, some warm-blooded animals that may live one hundred years, but for each multicellular individual death comes as the end.
This is an essential part of the successful functioning of life. New individuals constantly come into being with new combinations of chromosomes and genes, and with mutated genes, too. These represent new attempts, so to speak, at fitting the organism to the environment. Without the continuing arrival of new organisms that are not mere copies of the old, evolution would come to a halt. Naturally, the new organisms cannot perform their role properly unless the old ones are removed from the scene after they have performed their function of producing the new. In short, the death of the individual is essential to the life of the species.
It is essential, however, that the individual not die before the new generation has been produced; at least, not in so many cases as to cause the population to dwindle to extinction.
The human species cannot have the relative immunity to harm from individual death possessed by the small and fecund species. Human beings are comparatively large, long-lived, and slow to reproduce, so that too rapid individual death holds within it the specter of catastrophe. The rapid death of unusually high numbers of human beings through disease can seriously dent the human population. Carried to an extreme, it is not too hard to imagine it wiping out the human species.
Most dangerous in this respect is that class of malfunction referred to as "infectious disease." There are many disorders that affect a particular human being for one reason or another and may kill him or her, too, but which will not, in themselves, offer a danger to the species, because they are strictly confined to the suffering individual. Where, however, a disease can, in some way, travel from one human being to another, and where its occurrence in a single individual may lead to the death of not that one alone but of millions of others as well, then there is the possibility of catastrophe.
And indeed, infectious disease has come closer to destroying the human species in historic times than have the depredations of any animals. Although infectious disease, even at its worst, has never yet actually put an end to human beings as a living species (obviously), it can seriously damage a civilization and change the course of history. It has, in fact, done so not once, but many times.
What's more, the situation has perhaps grown worse with the coming of civilization. Civilization has meant the development and growth of cities and the crowding of people into close quarters. Just as fire can spread much more rapidly from tree to tree in a dense forest than in isolated stands, so can infectious disease spread more quickly in crowded quarters than in sparse settlements.
To mention a few notorious cases in history:
In 431 B.C., Athens and its allies went to war with Sparta and its allies. It was a twenty-seven-year war that ruined Athens and, to a considerable extent, all of Greece. Since Sparta controlled the land, the entire Athenian population crowded into the walled city of Athens. There they were safe and could be provisioned by sea, which was controlled by the Athenian navy. Athens would very likely have won a war of attrition before long and Greece might have avoided ruin, but for disease.
In 430 B.C., an infectious plague struck the crowded Athenian population and killed 20 percent of them, including the charismatic leader, Pericles. Athens kept on fighting but it never recovered its population or its strength and in the end it lost.
Plagues very frequently started in eastern and southern Asia, where population was densest, and spread westward. In A.D. 166, when the Roman Empire was at its peak of strength and civilization under the hard-working philosopher-emperor Marcus Aurelius, the Roman armies, fighting on the eastern borders in Asia Minor, began to suffer from an epidemic disease (possibly smallpox). They brought it back with them to other provinces and to Rome itself. At its height, 2,000 people were dying in the city of Rome each day. The population began to decline and did not reach its preplague figure again until the twentieth century. There are a great many reasons advanced for the long, slow decline of Rome that followed the reign of Marcus Aurelius, but the weakening effect of the plague of 166 surely played a part.
Even after the western provinces of the empire were torn away by invasions of the German tribes, and Rome itself was lost, the eastern half of the Roman Empire continued to exist, with its capital at Constantinople. Under the capable emperor Justinian I, who came to the throne in 527, Africa, Italy, and parts of Spain were retaken and, for a while, it looked as though the empire might be reunited. In 541, however, the bubonic plague struck. It was a disease that attacked rats primarily, but one that fleas could spread to human beings by biting first a sick rat and then a healthy human being. Bubonic disease was fast-acting and often quickly fatal. It may even have been accompanied by a more deadly variant, pneumonic plague, which can leap directly from one person to another.
For two years the plague raged, and between one-third and one-half of the population of the city of Constantinople died, together with many people in the countryside outside the city. There was no hope of uniting the empire thereafter, and the eastern portion, which came to be known as the Byzantine Empire, continued to decline (with occasional rallies).
The very worst epidemic in the history of the human species came in the fourteenth century. Sometime in the 1330s, a new variety of bubonic plague, a particularly deadly one, appeared in central Asia. People began to die and the plague spread outward, inexorably, from its original focus.
Eventually, it reached the Black Sea. There on the Crimean peninsula, jutting into the north-central coast of that sea, was a seaport called Kaffa where the Italian city of Genoa had established a trading post. In October, 1347, a Genoese ship just managed to make it back to Genoa from Kaffa. The few men on board who were not dead of the plague were dying. They were carried ashore and thus the plague entered Europe and began to spread rapidly.
Sometimes one caught a mild version of the disease, but often it struck violently. In the latter case, the patient was almost always dead within one to three days after the onset of the first symptoms. Because the extreme cases were marked by hemorrhagic spots that turned dark, the disease was called the "Black Death."
The Black Death spread unchecked. It is estimated to have killed some 25 million people in Europe before it died down and many more than that in Africa and Asia. It may have killed a third of all the human population of the planet, perhaps 60 million people altogether or even more. Never before or since do we know of anything that killed so large a percentage of the population as did the Black Death.
It is no wonder that it inspired abject terror among the populace. Everyone walked in fear. A sudden attack of shivering or giddiness, a mere headache, might mean that death had marked one for its own and that no more than a couple of dozen hours were left in which to die. Whole towns were depopulated, with the first to die lying unburied while the survivors fled to spread the disease. Farms lay untended; domestic animals wandered uncared for. Whole nations—Aragon, for instance, in what is now eastern Spain—were afflicted so badly that they never truly recovered.
Distilled liquors had been first developed in Italy about 1100. Now, two centuries later, they grew popular. The theory was that strong drink acted as a preventive against contagion. It didn't, but it made the drinker less concerned which, under the circumstances, was something. Drunkenness set in over Europe and it stayed even after the plague was gone; indeed, it has never left. The plague also upset the feudal economy by cutting down on the labor supply very drastically. This did as much to destroy feudalism as did the invention of gunpowder. (Perhaps the most distressing sidelight of the Black Death is the horrible insight into human nature that it offers. England and France were in the early decades of the Hundred Years War at the time. Although the Black Death afflicted both nations and nearly destroyed each, the war continued right on. There was no thought of peace in this greatest of all crises faced by the human species.)
There have been other great plagues since, though none to match the Black Death in unrivaled terror and destruction. In 1664 and 1665, the bubonic plague struck London and killed 75,000.
Cholera, which always simmered just below the surface in India (where it is "endemic") would occasionally explode and spread outward into an "epidemic." Europe was visited by deadly cholera epidemics in 1831 and again in 1848 and 1853. Yellow fever, a tropical disease, would be spread by sailors to more northern seaports, and periodically American cities would be decimated by it. Even as late as 1905, there was a bad yellow fever epidemic in New Orleans.
The most serious epidemic since the Black Death was one of "Spanish influenza," which struck the world in 1918 and in one year killed 30 million people the world over, about 600,000 of them in the United States. In comparison, four years of World War I, just preceding 1918, had killed 8 million. However, the influenza epidemic killed less than 2 percent of the world's population, so that the Black Death remains unrivaled.
[…] Infectious disease is clearly more dangerous to human existence than any animal possibly could be, and we might be right to wonder whether it might not produce a final catastrophe before the glaciers ever have a chance to invade again and certainly before the sun begins to inch its way toward red gianthood.
What stands between such a catastrophe and us is the new knowledge we have gained in the last century and a half concerning the causes of infectious disease and methods for fighting it.
People, throughout most of history, had no defense whatever against infectious disease. Indeed, the very fact of infection was not recognized in ancient and medieval times. When people began dying in droves, the usual theory was that an angry god was taking vengeance for some reason or other. Apollo's arrows were flying, so that one death was not responsible for another; Apollo was responsible for all, equally.
The Bible tells of a number of epidemics and in each case it is the anger of God kindled against sinners, as in 2 Samuel 24. In New Testament times, the theory of demonic possession as an explanation of disease was popular, and both Jesus and others cast out devils. The biblical authority for this has caused the theory to persist to this day, as witnessed by the popularity of such movies as The Exorcist.
As long as disease was blamed on divine or demonic influences, something as mundane as contagion was overlooked. Fortunately, the Bible also contains instructions for isolating those with leprosy (a name given not only to leprosy itself, but to other, less serious skin conditions). The biblical practice of isolation was for religious rather than hygienic reasons, for leprosy has a very low infectivity. On biblical authority, lepers were isolated in the Middle Ages, while those with really infectious disease were not. The practice of isolation, however, caused some physicians to think of it in connection with disease generally. In particular, the ultimate terror of the Black Death helped spread the notion of quarantine, a name which referred originally to isolation for forty (quaranta in Italian) days.
The fact that isolation did slow the spread of a disease made it look as though contagion was a factor. The first to deal with this possibility in detail was an Italian physician, Girolamo Fracastoro (1478–1553). In 1546, he suggested that disease could be spread by direct contact of a well person with an ill one or by indirect contact of a well person with infected articles or even through transmission over a distance. He suggested that minute bodies, too small to be seen, passed from an ill person to a well one and that the minute bodies had the power of self-multiplication.
It was a remarkable bit of insight, but Fracastoro had no firm evidence to support his theory. If one is going to accept minute unseen bodies leaping from one body to another and do it on nothing more than faith, one might as well accept unseen demons.
Minute bodies did not, however, remain unseen. Already in Fracastoro's time, the use of lenses to aid vision was well established. By 1608, combinations of lenses were used to magnify distant objects and the telescope came into existence. It didn't take much of a modification to have lenses magnify tiny objects. The Italian physiologist Marcello Malpighi (1628–94) was the first to use a microscope for important work, reporting his observations in the 1650s.
The Dutch microscopist Anton van Leeuwenhoek (1632–1723) laboriously ground small but excellent lenses, which gave him a better view of the world of tiny objects than anyone else in his time had had. In 1677, he placed ditch water at the focus of one of his small lenses and found living organisms too small to see with the naked eye but each one as indisputably alive as a whale or an elephant—or as a human being. These were the one-celled animals we now call "protozoa."
In 1683, van Leeuwenhoek discovered structures still tinier than protozoa. They were at the limit of visibility with even his best lenses, but from his sketches of what he saw, it is clear that he had discovered bacteria, the smallest cellular creatures that exist.
To do any better than van Leeuwenhoek, one had to have distinctly better microscopes and these were slow to be developed. The next microscopist to describe bacteria was the Danish biologist Otto Friedrich Müller (1730–84) who described them in a book on the subject, published posthumously, in 1786.
In hindsight, it seems that one might have guessed that bacteria represented Fracastoro's infectious agents, but there was no evidence of that and even Müller's observations were so borderline that there was no general agreement that bacteria even existed, or that they were alive if they did.
The English optician Joseph Jackson Lister (1786–1869) developed an achromatic microscope in 1830. Until then, the lenses used had refracted light into rainbows so that tiny objects were rimmed in color and could not be seen clearly. Lister combined lenses of different kinds of glass in such a way as to remove the colors.
With the colors gone, tiny objects stood out sharply and in the 1860s, the German botanist Ferdinand Julius Cohn (1828–98) saw and described bacteria with the first really convincing success. It was only with Cohn's work that the science of bacteriology was founded and that there came to be general agreement that bacteria existed.
Meanwhile, even without a clear indication of the existence of Fracastoro's agents, some physicians were discovering methods of reducing infection.
The Hungarian physician Ignaz Philipp Semmelweis (1818–65) insisted that childbed fever, which killed so many mothers in childbirth, was spread by the doctors themselves, since they went from autopsies straight to women in labor. He fought to get the doctors to wash their hands before attending the women, and when he managed to enforce this, in 1847, the incidence of childbed fever dropped precipitously. The insulted doctors, proud of their professional filth, revolted at this, however, and finally managed to do their work with dirty hands again. The incidence of childbed fever climbed as rapidly as it had fallen—but that didn't bother the doctors.
The crucial moment came with the work of the French chemist Louis Pasteur (1822–95). Although he was a chemist his work had turned him more and more toward microscopes and microorganisms, and in 1865 he set to work studying a silkworm disease that was destroying France's silk industry. Using his microscope, he discovered a tiny parasite infesting the silkworms and the mulberry leaves that were fed to them. Pasteur's solution was drastic but rational. All infested worms and infested food must be destroyed. A new beginning must be made with healthy worms and the disease would be wiped out. His advice was followed and it worked. The silk industry was saved.
This turned Pasteur's interest to contagious diseases. It seemed to him that if the silkworm disease was the product of microscopic parasites other diseases might be, and thus was born the "germ theory of disease." Fracastoro's invisible infectious agents were microorganisms, often the bacteria that Cohn was just bringing clearly into the light of day.
It now became possible to attack infectious disease rationally, making use of a technique that had been introduced to medicine over half a century before. In 1798, the English physician Edward Jenner (1749–1823) had shown that people inoculated with the mild disease, cowpox, or vaccinia in Latin, acquired immunity not only to cowpox itself but also to the related but very virulent and dreaded disease, smallpox. The technique of "vaccination" virtually ended most of the devastation of smallpox.
Unfortunately, no other diseases were found to occur in such convenient pairs, with the mild one conferring immunity from the serious one. Nevertheless, with the notion of the germ theory the technique could be extended in another way.
Pasteur located specific germs associated with specific diseases, then weakened those germs by heating them or in other ways, and used the weakened germs for inoculation. Only a very mild disease was produced but immunity was conferred against the dangerous one. The first disease treated in this way was the deadly anthrax that ravaged herds of domestic animals.
Similar work was pursued even more successfully by the German bacteriologist Robert Koch (1843–1910). Antitoxins designed to neutralize bacterial poisons were also developed.
Meanwhile, the English surgeon Joseph Lister (1827–1912), the son of the inventor of the achromatic microscope, had followed up Semmelweis's work. Once he learned of Pasteur's research, he had a convincing rationale and began to insist that, before operating, surgeons wash their hands in solutions of chemicals known to kill bacteria. From 1867 on, the practice of "antiseptic surgery" spread quickly.
The germ theory also sped the adoption of rational preventive measures—personal hygiene, such as washing and bathing; careful disposal of wastes; the guarding of the cleanliness of food and water. Leaders in this were the German scientists Max Joseph von Pettenkofer (1818–1901) and Rudolf Virchow (1821–1902). They themselves did not accept the germ theory of disease, but their recommendations would not have been followed as readily had others not.
In addition, it was discovered that diseases such as yellow fever and malaria were transmitted by mosquitoes, typhus fever by lice, Rocky Mountain spotted fever by ticks, bubonic plague by fleas and so on. Measures against these small germ-transferring organisms acted to reduce the incidence of the diseases. Men such as the Americans Walter Reed (1851–1902) and Howard Taylor Ricketts (1871–1910) and the Frenchman Charles J. Nicolle (1866–1936) were involved in such discoveries.
The German bacteriologist Paul Ehrlich (1854–1915) pioneered the use of specific chemicals that would kill particular bacteria without killing the human being in which they existed. His most successful discovery came in 1910, when he found an arsenic compound that was active against the bacterium that causes syphilis.
This sort of work culminated in the discovery of the antibacterial effect of sulfanilamide and related compounds, beginning with the work of the German biochemist Gerhard Domagk (1895–1964) in 1935 and of antibiotics, beginning with the work of the French-American microbiologist René Jules Dubos (1901–) in 1939.
As late as 1955 came a victory over poliomyelitis, thanks to a vaccine prepared by the American microbiologist Jonas Edward Salk (1914–).
And yet victory is not total. Right now, the once ravaging disease of smallpox seems to be wiped out. Not one case exists, as far as we know, in the entire world. There are however infectious diseases such as a few found in Africa that are very contagious, virtually 100 percent fatal, and for which no cure exists. Careful hygienic measures have made it possible for such diseases to be studied without their spreading, and no doubt effective countermeasures will be worked out.
It would seem, then, that as long as our civilization survives and our medical technology is not shattered there is no longer any danger that infectious disease will produce catastrophe or even anything like the disasters of the Black Death and the Spanish influenza. Yet, old familiar diseases have, within them, the potentiality of arising in new forms.
The human body, like all living organisms, has natural defenses against the invasion of foreign organisms. Antibodies are developed in the bloodstream that neutralize toxins or the microorganisms themselves. White cells in the bloodstream physically attack bacteria.
Evolutionary processes generally make the fight an even one. Those organisms more efficient at self-protection against microorganisms tend to survive and pass on their efficiency to their offspring. Nevertheless, microorganisms are far smaller even than insects and far more fecund. They evolve much more quickly, with individual microorganisms almost totally unimportant in the scheme of things.
Considering the uncounted numbers of microorganisms of any particular species that are continually multiplying by cell fission, large numbers of mutations must be produced just as continually. Every once in a while such a mutation may act to make a particular disease far more infectious and deadly. Furthermore, it may sufficiently alter the chemical nature of the microorganism so that the antibodies which the host organism is capable of manufacturing are no longer usable. The result is the sudden onslaught of an epidemic. The Black Death was undoubtedly brought about by a mutant strain of the microorganism causing it.
Eventually, though, those human beings who are most susceptible die, and the relatively resistant survive, so that the virulence of the diseases dies down. In that case, is the human victory over the pathogenic microorganism permanent? Might not new strains of germs arise? They might and they do. Every few years a new strain of flu rises to pester us. It is possible, however, to produce vaccines against such a new strain once it makes an appearance. Thus, when a single case of "swine flu" appeared in 1976, a full scale mass-vaccination was set in action. It turned out not to be needed, but it showed what could be done.
Copyright © 1979 by Isaac Asimov, A Choice of Catastrophes: The Disasters That Threaten Our World, originally published by Simon & Schuster. Reprinted with permission from the Asimov estate.
[This article was originally published on June 8th, 2020 as part of a standalone magazine called GOOD10: The Pandemic Issue. Produced as a partnership among LeapsMag, The Aspen Institute, and GOOD, the magazine is available for free online.]
Telehealth offers a vast improvement in access and convenience for all sorts of medical services, and online therapy for mental health is one of the most promising case studies for telehealth. With many online therapy options available, you can choose whatever works best for you. Yet many people are hesitant about using online therapy. Even if they do give it a try, they often don't know how to make the most effective use of this treatment modality.
Why do so many feel uncertain about online therapy? A major reason stems from its novelty. Humans are creatures of habit, prone to falling for what behavioral scientists like myself call the status quo bias, a predisposition to stick to traditional practices and behaviors. Many people reject innovative solutions even when they would be helpful. Thus, while teletherapy was available long before the pandemic, and might have fit the needs of many potential clients, relatively few took advantage of this option.
Even when we do try new methodologies, we often don’t do so effectively, because we cling to the same approaches that worked in previous situations. Scientists call this behavior functional fixedness. It’s kind of like the saying about the hammer-nail syndrome: “when you have a hammer, everything looks like a nail.”
These two mental blindspots, the status quo bias and functional fixedness, impact decision making in many areas of life. Fortunately, recent research has shown effective and pragmatic strategies to defeat these dangerous errors in judgment. The nine tips below will help you make the best decisions to get effective online therapy, based on the latest research.
For instance, a 2014 study in the Journal of Affective Disorders reported that online treatment proved just as effective as face-to-face treatment for depression. A 2018 study, published in the Journal of Psychological Disorders, found that online cognitive behavioral therapy, or CBT, was just as effective as face-to-face treatment for major depression, panic disorder, social anxiety disorder, and generalized anxiety disorder. And a 2014 study in Behaviour Research and Therapy discovered that online CBT proved effective in treating anxiety disorders, and helped lower costs of treatment.
During the forced teletherapy of COVID, therapists worried that those with serious mental health conditions would be less likely to convert to teletherapy. Yet research published in Counselling Psychology Quarterly has helped to alleviate that concern. It found that those with schizophrenia, bipolar disorder, severe depression, PTSD, and even suicidality converted to teletherapy at about the same rate as those with less severe mental health challenges.
Yet teletherapy may not be for everyone. For example, adolescents had the most varied response to teletherapy, according to a 2020 study in Family Process. Some adapted quickly and easily, while others found it awkward and anxiety-inducing. On the whole, children with trauma respond worse to online therapy, per a 2020 study in Child Abuse & Neglect. The treatment of mental health issues can sometimes require in-person interactions, such as the use of eye movement desensitization and reprocessing to treat post-traumatic stress disorder. And according to a 2020 study from the Journal of Humanistic Psychology, online therapy may not be as effective for those suffering from loneliness.
Online therapy is much more accessible than in-person therapy for those with a decent internet connection, webcam, mic, and digital skills. You don't have to commute to your therapist's office, wasting money and time. You can take much less medical leave from work, saving you money and hassle with your boss. If you live in a sparsely populated area, online therapy could allow you to access many specialized kinds of therapy that aren't available locally.
Online options are much quicker compared to the long waiting lists for in-person therapy. You also have much more convenient scheduling options. And you won't have to worry about running into someone you know in the waiting room. Online therapy is easier to conceal from others and reduces stigma. Many patients may feel more comfortable and open to sharing in the privacy and comfort of their own home.
You can use a variety of communication tools suited to your needs at any given time. Video can be used to start a relationship with a therapist and have more intense and nuanced discussions, but can be draining, especially for those with social anxiety. Voice-only may work well for less intense discussions. Email offers a useful option for long-form, well-thought-out messages. Texting is useful for quick, real-time questions, answers, and reinforcement.
Plus, online therapy is often cheaper than in-person therapy. In the midst of COVID, many insurance providers have decided to cover online therapy.
One weakness is the requirement for appropriate technology and skills to engage in online therapy. Another is the difficulty of forming a close therapeutic relationship with your therapist. You won’t be able to communicate non-verbals as fully and the therapist will not be able to read you as well, requiring you to be more deliberate in how you express yourself.
Another important issue is that online therapy is subject to less government oversight compared to the in-person approach, which is regulated in each state, providing a baseline of quality control. As a result, you have to do more research on the providers that offer online therapy to make sure they’re reputable, use only licensed therapists, and have a clear and transparent pay structure.
Figure out what kind of goals you want to achieve. Consider how, within the context of your goals, you can leverage the benefits of online therapy while addressing the weaknesses. Write down and commit to achieving your goals. Remember, you need to be your own advocate, especially in the less regulated space of online therapy, so focus on being proactive in achieving your goals.
Because online therapy can occur at various times of day through video calls, emails, and texts, it might feel more open-ended and less organized, which can have advantages and disadvantages. One way you can give it more structure is to ground these interactions in the story of your self-improvement. Our minds perceive the world through narratives. Create a story of how you'll get from where you are to where you want to go, meaning your goals.
A good template to use is the Hero's Journey. Start the narrative with where you are, and what caused you to seek therapy. Write about the obstacles you will need to overcome, and the kind of help from a therapist that you'll need in the process. Then, describe the final end state: how you will be better off after this journey, and what you will have learned.
Especially in online therapy, you need to be on top of things. Too many people let the therapist manage the treatment plan. As you pursue your Hero’s Journey, another way to organize for success is to take notes on your progress and reevaluate how you’re doing every month with your therapist.
Since it’s more difficult to be confident about the quality of service providers in an online setting, you should identify in advance the traits of your desired therapist. Every Hero’s Journey involves a mentor figure who guides the protagonist through this journey. So who’s your ideal mentor? Write out their top 10 characteristics, from most to least important.
For example, you might want someone who is:
- Good listener
That’s my list. Depending on what challenge you’re facing and your personality and preferences, you should make your own. Then, when you are matched with a therapist, evaluate how well they fit your ideal list.
When you first match with a therapist, try to fail fast. That means, instead of focusing on getting treatment, focus on figuring out if the therapist is a good match based on the traits you identified above. That will enable you to move on quickly if they’re not, and it’s very much worth it to figure that out early.
Tell them your goals, your story, and your vision of your ideal mentor. Ask them whether they think they are a match, and what kind of a treatment plan they would suggest based on the information you provided. And observe them yourself in your initial interactions, focusing on whether they’re a good match. Often, you’ll find that your initial vision of your ideal mentor is incomplete, and you’ll learn through doing therapy what kind of a therapist is the best fit for you.
As part of that initial plan, agree on a small first subgoal. It should be meaningful and impactful enough to improve your mental health, but not a big stretch for you to achieve. Use this subgoal to evaluate whether the therapist is indeed a good fit for you, and whether the treatment plan makes sense or needs to be revised.
As you approach the end of your planned work and you see you’re reaching your goals, talk to the therapist about how to wrap up rather than letting things drag on for too long. You don’t want to become dependent on therapy: it’s meant to be a temporary intervention. Some less scrupulous therapists will insist that therapy should never end and we should all stay in therapy forever, and you want to avoid falling for this line. When you reach your goals, end your therapy, unless you discover a serious new reason to continue it. Still, it may be wise to set up occasional check-ins once every three to six months to make sure you’re staying on the right track.
This is part 2 of a three-part series on a new generation of doctors leading the charge to make the health care industry more sustainable, for the benefit of their patients and the planet. Read part 1 here.
After finishing her engineering degree, Nora Stroetzel ticked off the top item on her bucket list and traveled the world for a year. She loved remote places: the Indonesian rain forest she reached only after several days of hiking, mountain villages in the Himalayas, and reefs she could dive only from local fishing boats.
“But no matter how far from civilization I ventured, one thing was already there: plastic,” Stroetzel says. “Plastic that would stay there for centuries, on 12,000 foot peaks and on beaches several hundred miles from the nearest city.” She saw “wild orangutans that could be lured by rustling plastic and hermit crabs that used plastic lids as dwellings instead of shells.”
While traveling she started volunteering for beach cleanups and helped build a recycling station in Indonesia. But the pivotal moment for her came after she returned to her hometown of Kiel, Germany. “At the dentist, they gave me a plastic cup to rinse my mouth. I used it for maybe ten seconds before it was tossed out,” Stroetzel says. “That made me really angry.”
She decided to research alternatives for plastic in the medical sector and learned that cups could be reused and easily disinfected. All dentists routinely disinfect their tools anyway and, Stroetzel reasoned, it wouldn’t be too hard to extend that practice to cups.
It's a good example of how often plastic is used unnecessarily in medical practice, she says. The health care sector is the fifth biggest source of pollution and trash in industrialized countries. In the U.S., hospitals generate an estimated 6,000 tons of waste per day, including an average of 400 grams of plastic per patient per day, and the sector produces 8.5 percent of greenhouse gas emissions nationwide.
“Sustainable alternatives exist,” Stroetzel says, “but you have to painstakingly look for them; they are often not offered by the big manufacturers, and all of this takes way too much time [that] medical staff simply does not have during their hectic days.”
When Stroetzel spoke with medical staff in Germany, she found they were often frustrated by all of this waste, especially as they took care to avoid single-use plastic at home. Doctors in other countries share this frustration. In a recent poll, nine out of ten doctors in Germany said they’re aware of the urgency to find sustainable solutions in the health industry but don’t know how to achieve this goal.
After a year of researching more sustainable alternatives, Stroetzel founded a social enterprise startup called POP, short for Practice Without Plastic, together with IT expert Nicolai Niethe, to offer well-researched solutions.
In addition to reusable dentist cups, other good options for the health care sector include washable N95 face masks and gloves made from nitrile, which waste less water and energy in their production. But Stroetzel admits that truly making a medical facility more sustainable is a complex task. “This includes negotiating with manufacturers who often package medical materials in double and triple layers of extra plastic.”
While initiatives such as Stroetzel’s provide much-needed information, other experts argue that a wholesale rethinking of healthcare is needed: voluntary action won’t be enough, and government should set the right example. Kari Nadeau, a Stanford physician who has spent 30 years researching the effects of environmental pollution on the immune system, and Kenneth Kizer, the former undersecretary for health in the U.S. Department of Veterans Affairs, wrote in JAMA last year that the medical industry and federal agencies that provide health care should be required to measure and make public their carbon footprints. “Government health systems do not disclose these data (and very rarely do private health care organizations), unlike more than 90% of the Standard & Poor’s top 500 companies and many nongovernment entities," they explained. "This could constitute a substantial step toward better equipping health professionals to confront climate change and other planetary health problems.”
Kizer and Nadeau look to the U.K. National Health Service (NHS), which created a Sustainable Development Unit in 2008 and began that year to assess the NHS’s carbon footprint. The NHS also identified its biggest culprits: of the 2019 footprint, which totaled 25 megatons of carbon dioxide equivalent, 62 percent came from the supply chain, 24 percent from the direct delivery of care, 10 percent from staff commutes and patient and visitor travel, and 4 percent from private health and care services commissioned by the NHS. Between 1990 and 2019, the NHS reduced its emissions of carbon dioxide equivalents by 26 percent, mostly due to the switch to renewable energy for heat and power. Meanwhile, the NHS has encouraged health clinics in the U.K. to install wind generators or photovoltaics that convert light to electricity, relatively quick ways to decarbonize buildings in the health sector.
Compared to the U.K., the U.S. healthcare industry lags behind in measuring and managing its carbon footprint, and hospitals are the second highest energy user of any sector in the U.S. “We are already seeing patients with symptoms from climate change, such as worsened respiratory symptoms from increased wildfires and poor air quality in California,” write Thomas B. Newman, a pediatrician at the University of California, San Francisco, and UCSF clinical research coordinator Daisy Valdivieso. “Because of the enormous health threat posed by climate change, health professionals should mobilize support for climate mitigation and adaptation efforts.” They believe “the most direct place to start is to approach the low-lying fruit: reducing healthcare waste and overuse.”
In addition to creating waste, the plastic in hospitals ultimately harms patients, who may be especially vulnerable to its effects because of their health conditions. Microplastics have been detected in most humans, and on average a person ingests five grams of microplastic per week. Newman and Valdivieso point to the American Board of Internal Medicine's Choosing Wisely program as one of many initiatives that identify and publicize options for “safely doing less” as a strategy to reduce unnecessary healthcare practices and, in turn, cost, resource use, and ultimately medical harm.
A few U.S. clinics are pioneers in transitioning to clean energy sources. In Wisconsin, the nonprofit Gundersen Health network became the first hospital to cut its reliance on petroleum by switching to locally produced green energy in 2015, saving $1.2 million per year in the process. Kaiser Permanente eliminated its 800,000-ton carbon footprint through energy efficiency and purchased carbon offsets, reaching a balance between emitting carbon and removing it from the atmosphere in 2020, the first U.S. health system to do so.
Cleveland Clinic has pledged to join Kaiser in becoming carbon neutral by 2027. Realizing that 80 percent of its 2008 carbon emissions came from electricity consumption, the Clinic started switching to renewable energy and installing solar panels, and it has invested in researching recyclable products and packaging. The Clinic’s sustainability report outlines several strategies for producing less waste, such as reusing cases for sterilizing instruments, cutting back on materials that can’t be recycled, and putting pressure on vendors to reduce product packaging.
The Charité Berlin, Europe’s biggest university hospital, has also announced its goal to become carbon neutral. Its sustainability managers have begun to identify the biggest carbon culprits in its operations. “We’ve already reduced CO2 emissions by 21 percent since 2016,” says Simon Batt-Nauerz, the director of infrastructure and sustainability.
The hospital still emits 100,000 tons of CO2 every year, as much as a city of 10,000 residents, but it’s making progress through ride-share and bicycle programs for its staff of 20,000 employees, who can get their bikes repaired for free in one of the Charité-operated bike workshops. Another program targets doctors’ and nurses’ scrubs, whose manufacturing and cleaning generate more than 200 tons of CO2. The staff is currently testing lighter, more sustainable scrubs made from recycled cellulose that is grown regionally and requires 80 percent less land and 30 percent less water.
Anesthesiologist Susanne Koch spearheads sustainability efforts in anesthesiology at the Charité. She says that up to a third of hospital waste comes from surgery rooms. To reduce medical waste, she recommends what she calls the 5 Rs: Reduce, Reuse, Recycle, Rethink, Research. “In medicine, people don’t question the use of plastic because of safety concerns,” she says. “Nobody wants to be sued because something is reused. However, it is possible to reduce plastic and other materials safely.”
For instance, she says, typical surgery kits are single-use and contain more supplies than are actually needed, and the entire kit is routinely thrown out after the surgery. “Up to 20 percent of materials in a surgery room aren’t used but will be discarded,” Koch says. One solution could be smaller kits, she explains; another would be to recycle the plastic. Another example is breathing tubes. “When they became scarce during the pandemic, studies showed that they can be used for seven days instead of 24 hours without increased bacteria load when we change the filters regularly,” Koch says, and wonders, “What else can we reuse?”
In the Netherlands, TU Delft researchers Tim Horeman and Bart van Straten designed a method to melt down the blue polypropylene wrap that keeps medical instruments sterile, so that the material can be turned into new medical devices. Currently, more than a million kilos of the blue wrap are used in Dutch hospitals every year, and a growing number of Dutch hospitals are adopting this approach.
Another common practice that’s ripe for improvement is the use of PVC, a plastic found in hospital equipment such as blood bags, tubes, and masks. Because of its toxic components, PVC is almost never recycled in the U.S., but University of Michigan researchers Danielle Fagnani and Anne McNeil have discovered a chemical process that can break it down into material that could be incorporated back into production. This could be a step toward a circular economy “that accounts for resource inputs and emissions throughout a product’s life cycle, including extraction of raw materials, manufacturing, transport, use and reuse, and disposal,” as medical experts have proposed. “It’s a failure of humanity to have created these amazing materials which have improved our lives in many ways, but at the same time to be so shortsighted that we didn’t think about what to do with the waste,” McNeil said in a press release.
Susanne Koch puts it more succinctly: “What’s the point if we save patients while killing the planet?”