The Voice Behind Some of Your Favorite Cartoon Characters Helped Create the Artificial Heart
In June, a team of surgeons at Duke University Hospital implanted the latest model of an artificial heart in a 39-year-old man with severe heart failure, a condition in which the heart doesn't pump properly. The man's mechanical heart, made by the French company Carmat, is a new-generation artificial heart and the first of its kind to be implanted in the United States. It connects to a portable external power supply and is designed to keep the patient alive until a replacement organ becomes available.
Many patients die while waiting for a heart transplant, but artificial hearts can bridge the gap. Though not a permanent solution for heart failure, artificial hearts have saved countless lives since their first implantation in 1982.
What might surprise you is that the origin of the artificial heart dates back decades before, when an inventive television actor teamed up with a famous doctor to design and patent the first such device.
A man of many talents
Paul Winchell was an entertainer in the 1950s and 60s, rising to fame as a ventriloquist and guest-starring as an actor on programs like "The Ed Sullivan Show" and "Perry Mason." When children's animation boomed in the 1960s, Winchell made a name for himself as a voice actor on shows like "The Smurfs," "Winnie the Pooh," and "The Jetsons." He eventually became famous for originating the voices of Tigger from "Winnie the Pooh" and Gargamel from "The Smurfs," among many others.
But Winchell wasn't just an entertainer: He also had a quiet passion for science and medicine. Between television gigs, Winchell busied himself working as a medical hypnotist and acupuncturist, treating the same Hollywood stars he performed alongside. When he wasn't doing that, Winchell threw himself into engineering and design, building not only the ventriloquism dummies he used in his television appearances but also a host of products he'd dreamed up himself. Winchell spent hours tinkering with his own inventions, such as a set of battery-powered gloves and something called a "flameless lighter." Over the course of his life, Winchell designed and patented more than 30 of these products – mostly novelties, but also serious medical devices, such as a portable blood plasma defroster.
Ventriloquist Paul Winchell with Jerry Mahoney, his dummy, in 1951
A meeting of the minds
In the early 1950s, Winchell appeared on a variety show called the "Arthur Murray Dance Party" and faced off in a dance competition with the legendary Ricardo Montalban (Winchell won). At a cast party for the show later that same night, Winchell met Dr. Henry Heimlich – the doctor who would later become famous for inventing the Heimlich maneuver, and who was married to Murray's daughter. The two hit it off immediately, bonding over their shared interest in medicine. Before long, Heimlich invited Winchell to come observe him in the operating room at the hospital where he worked. Winchell jumped at the opportunity, and not long after he became a frequent guest in Heimlich's surgical theatre, fascinated by the mechanics of the human body.
One day while Winchell was observing at the hospital, he witnessed a patient die on the operating table after undergoing open-heart surgery. He was suddenly struck with an idea: If there was some way doctors could keep blood pumping temporarily throughout the body during surgery, patients who underwent risky operations like open-heart surgery might have a better chance of survival. Winchell rushed to Heimlich with the idea – and Heimlich agreed to advise Winchell and look over any design drafts he came up with. So Winchell went to work.
As it turned out, building ventriloquism dummies wasn't that different from building an artificial heart, Winchell noted later in his autobiography – the shifting valves and chambers of the mechanical heart were similar to the moving eyes and opening mouths of his puppets. After each design, Winchell would go back to Heimlich and the two would confer, making adjustments along the way.
By 1956, Winchell had perfected his design: The "heart" consisted of a bag that could be placed inside the human body, connected to a battery-powered motor outside of the body. The motor enabled the bag to pump blood throughout the body, similar to a real human heart. Winchell received a patent for the design in 1963.
At the time, Winchell never quite got the credit he deserved. Years later, researchers at the University of Utah, working on their own artificial heart, came across Winchell's patent and got in touch with Winchell to compare notes. Winchell ended up donating his patent to the team, which included Dr. Robert Jarvik. Jarvik expanded on Winchell's design and created the Jarvik-7 – which in 1982 became the world's first artificial heart to be successfully implanted in a human being.
The Jarvik-7 has since been replaced with newer, more efficient models made up of different synthetic materials, allowing patients to live for longer stretches without the heart clogging or breaking down. With each new generation of hearts, heart failure patients have been able to live relatively normal lives for longer periods of time and with fewer complications than before – and it never would have been possible without the unsung genius of a puppeteer and his love of science.
Sarah Watts is a health and science writer based in Chicago. Follow her on Twitter at @swattswrites.
The recent explosion of generative artificial intelligence tools like ChatGPT and Dall-E enabled anyone with internet access to harness AI’s power for enhanced productivity, creativity, and problem-solving. With their ever-improving capabilities and expanding user base, these tools proved useful across disciplines, from the creative to the scientific.
But beneath the technological wonders of human-like conversation and creative expression lies a dirty secret—an alarming environmental and human cost. AI has an immense carbon footprint. Systems like ChatGPT take months to train in high-powered data centers, which demand huge amounts of electricity, much of which is still generated with fossil fuels, as well as water for cooling. “One of the reasons why OpenAI needs investments [to the tune of] $10 billion from Microsoft is because they need to pay for all of that computation,” says Kentaro Toyama, a computer scientist at the University of Michigan. There’s also an ecological toll from mining rare minerals required for hardware and infrastructure. This environmental exploitation pollutes land, triggers natural disasters and causes large-scale human displacement. Finally, for data labeling needed to train and correct AI algorithms, the Big Data industry employs cheap and exploitative labor, often from the Global South.
Generative AI tools are based on large language models (LLMs), with the most well-known being various versions of GPT. LLMs can perform natural language processing tasks, including translating, summarizing and answering questions. They use artificial neural networks, a technique known as deep learning, a branch of machine learning. Inspired by the human brain, neural networks are made of millions of artificial neurons. “The basic principles of neural networks were known even in the 1950s and 1960s,” Toyama says, “but it’s only now, with the tremendous amount of compute power that we have, as well as huge amounts of data, that it’s become possible to train generative AI models.”
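To make the idea of an artificial neuron concrete, here is a minimal sketch in Python. It is illustrative only and not drawn from any system discussed here: the weights, biases and inputs are invented values, and real LLMs learn billions of such parameters from data rather than using hand-picked numbers.

```python
# Minimal sketch of an artificial neuron and a tiny two-layer network.
# All weights, biases and inputs are made up for illustration.
import math

def neuron(inputs, weights, bias):
    """One artificial neuron: weighted sum of inputs plus a bias, squashed by a sigmoid."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))

def tiny_network(inputs):
    # Hidden layer: two neurons with arbitrary example weights.
    h1 = neuron(inputs, weights=[0.5, -0.2], bias=0.1)
    h2 = neuron(inputs, weights=[-0.3, 0.8], bias=0.0)
    # Output layer: one neuron combining the hidden activations.
    return neuron([h1, h2], weights=[1.0, 1.0], bias=-0.5)

print(tiny_network([0.7, 0.1]))  # prints a single value between 0 and 1
```

A large language model stacks millions to billions of such units and tunes all of their weights during the months-long training runs described above, which is where the computational (and energy) cost comes from.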
In recent months, much attention has gone to the transformative benefits of these technologies. But it’s important to consider that these remarkable advances may come at a price.
AI’s carbon footprint
In its latest annual report, 2023 Landscape: Confronting Tech Power, the AI Now Institute, an independent policy research organization focused on the concentration of power in the tech industry, says: “The constant push for scale in artificial intelligence has led Big Tech firms to develop hugely energy-intensive computational models that optimize for ‘accuracy’—through increasingly large datasets and computationally intensive model training—over more efficient and sustainable alternatives.”
Though there aren’t any official figures about the power consumption or emissions from data centers, experts estimate that they use one percent of global electricity—more than entire countries. In 2019, Emma Strubell, then a graduate researcher at the University of Massachusetts Amherst, estimated that training a single LLM resulted in over 280,000 kg of CO2 emissions—the equivalent of driving almost 1.2 million km in a gas-powered car. A couple of years later, David Patterson, a computer scientist from the University of California Berkeley, and colleagues estimated GPT-3’s carbon footprint at over 550,000 kg of CO2. In 2022, the tech company Hugging Face estimated the carbon footprint of its own language model, BLOOM, at 25,000 kg of CO2 emissions. (BLOOM’s footprint is lower because Hugging Face uses renewable energy, but it doubled when other life-cycle processes like hardware manufacturing and use were added.)
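As a rough back-of-the-envelope check on how the reported training emissions translate into driving distance, here is a small calculation sketch. The per-kilometer emission factor is an assumption (roughly 0.24 kg of CO2 per km for an average gas-powered car), not a figure taken from the article or from the cited studies.

```python
# Rough sanity check: convert reported training emissions into an equivalent
# driving distance. The emission factor is an assumed average for a
# gas-powered car, not a number from the studies cited above.
KG_CO2_PER_KM = 0.24  # assumed ~0.24 kg CO2 per km driven

estimates_kg = {
    "Strubell et al., single LLM (2019)": 280_000,
    "Patterson et al., GPT-3": 550_000,
    "Hugging Face, BLOOM (2022)": 25_000,
}

for name, kg in estimates_kg.items():
    km = kg / KG_CO2_PER_KM
    print(f"{name}: ~{km:,.0f} km of driving")
# 280,000 kg / 0.24 kg per km ≈ 1.2 million km, consistent with the comparison above.
```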
Luckily, even as data centers grow in size and number, their energy demands and emissions have not risen proportionately, thanks to renewable energy sources and more energy-efficient hardware.
But emissions don’t tell the full story.
AI’s hidden human cost
“If historical colonialism annexed territories, their resources, and the bodies that worked on them, data colonialism’s power grab is both simpler and deeper: the capture and control of human life itself through appropriating the data that can be extracted from it for profit.” So write Nick Couldry and Ulises Mejias, authors of the book The Costs of Connection.
Technologies we use daily inexorably gather our data. “Human experience, potentially every layer and aspect of it, is becoming the target of profitable extraction,” Couldry and Mejias write. This feeds data capitalism, the economic model built on the extraction and commodification of data. While we are dispossessed of our data, Big Tech commodifies it for its own benefit. The result is a consolidation of power structures that reinforces existing race, gender, class and other inequalities.
“The political economy around tech and tech companies, and the development in advances in AI contribute to massive displacement and pollution, and significantly changes the built environment,” says technologist and activist Yeshi Milner, who founded Data For Black Lives (D4BL) to create measurable change in Black people’s lives using data. The energy requirements, hardware manufacture and the cheap human labor behind AI systems disproportionately affect marginalized communities.
AI’s recent explosive growth spiked the demand for manual, behind-the-scenes tasks, creating an industry described by Mary Gray and Siddharth Suri as “ghost work” in their book. This invisible human workforce that lies behind the “magic” of AI is overworked and underpaid, and very often based in the Global South. For example, workers in Kenya who made less than $2 an hour were behind the mechanism that trained ChatGPT to properly talk about violence, hate speech and sexual abuse. And, according to an article in Analytics India Magazine, in some cases these workers may not have been paid at all, a case of wage theft. An exposé by the Washington Post describes “digital sweatshops” in the Philippines, where thousands of workers experience low wages, delays in payment, and wage theft by Remotasks, a platform owned by Scale AI, a $7 billion American startup. Rights groups and labor researchers have flagged Scale AI as one company that flouts basic labor standards for workers abroad.
It is possible to draw a parallel with chattel slavery—the most significant economic event that continues to shape the modern world—to see the business structures that allow for the massive exploitation of people, Milner says. Back then, people got chocolate, sugar, cotton; today, they get generative AI tools. “What’s invisible through distance—because [tech companies] also control what we see—is the massive exploitation,” Milner says.
“At Data for Black Lives, we are less concerned with whether AI will become human…[W]e’re more concerned with the growing power of AI to decide who’s human and who’s not,” Milner says. As a decision-making force, AI becomes a “justifying factor for policies, practices, rules that not just reinforce, but are currently turning the clock back generations on people’s civil and human rights.”
Nuria Oliver, a computer scientist, and co-founder and vice-president of the European Laboratory for Learning and Intelligent Systems (ELLIS), says that instead of focusing on the hypothetical existential risks of today’s AI, we should talk about its real, tangible risks.
“Because AI is a transverse discipline that you can apply to any field [from education, journalism, medicine, to transportation and energy], it has a transformative power…and an exponential impact,” she says.
“At the core of what we were arguing about data capitalism [is] a call to action to abolish Big Data,” says Milner. “Not to abolish data itself, but the power structures that concentrate [its] power in the hands of very few actors.”
A comprehensive AI Act currently being negotiated in the European Parliament aims to rein in Big Tech. It plans to introduce a rating of AI tools based on the harm they cause to humans, while remaining as technology-neutral as possible. The act sets standards for safe, transparent, traceable, non-discriminatory, and environmentally friendly AI systems that are overseen by people, not automation. The regulations also call for transparency about the content used to train generative AIs, particularly copyrighted data, and for disclosure when content is AI-generated. “This European regulation is setting the example for other regions and countries in the world,” Oliver says. But, she adds, such transparency is hard to achieve.
Ironically, AI plays an important role in mitigating its own harms—by plowing through mountains of data about weather changes, extreme weather events and human displacement. “The only way to make sense of this data is using machine learning methods,” Oliver says.
Milner believes that the best way to expose AI-caused systemic inequalities is through people's stories. “In these last five years, so much of our work [at D4BL] has been creating new datasets, new data tools, bringing the data to life. To show the harms but also to continue to reclaim it as a tool for social change and for political change.” This change, she adds, will depend on whose hands it is in.
When David M. Kurtz was doing his clinical fellowship at Stanford University Medical Center in 2009, specializing in lymphoma treatments, he found himself grappling with a question no one could answer. A typical regimen for these blood cancers prescribed six cycles of chemotherapy, but no one knew why. "The number seemed to be drawn out of a hat," Kurtz says. Some patients felt much better after just two doses but had to endure the toxic effects of the entire course. For some elderly patients, the side effects of chemo are so harsh, they alone can kill. Others appeared cancer-free on CT scans after the requisite six cycles but then succumbed to the disease months later.
"Anecdotally, one patient decided to stop therapy after one dose because he felt it was so toxic that he opted for hospice instead," says Kurtz, now an oncologist at the center. "Five years down the road, he was alive and well. For him, just one dose was enough." Others would return for their one-year check up and find that their tumors grew back. Kurtz felt that while CT scans and MRIs were powerful tools, they weren't perfect ones. They couldn't tell him if there were any cancer cells left, stealthily waiting to germinate again. The scans only showed the tumor once it was back.
Blood cancers claim about 68,000 people a year, with a new diagnosis made about every three minutes, according to the Leukemia Research Foundation. For patients with B-cell lymphoma, which Kurtz focuses on, the survival chances are better than for some others. About 60 percent are cured, but the remaining 40 percent will relapse—possibly because they have a negative CT scan but still harbor malignant cells. "You can't see this on imaging," says Michael Green, who also treats blood cancers at the University of Texas MD Anderson Cancer Center.
Kurtz wanted a better diagnostic tool, so he started working on a blood test that could capture the circulating tumor DNA, or ctDNA. For that, he needed to identify the specific mutations typical of B-cell lymphomas. Working together with Jake Chabon, then a PhD student, Kurtz finally zeroed in on the tumor's genetic "appearance" in 2017—a pair of specific mutations sitting in close proximity to each other—a rare and telling sign. The human genome contains about 3 billion base pairs of nucleotides—molecules that compose genes—and in the case of B-cell lymphoma cells, these two mutations were only a few base pairs apart. "That was the moment when the light bulb went on," Kurtz says.
The duo formed a company named Foresight Diagnostics, focusing on taking the blood test to the clinic. But knowing the tumor's mutational signature was only half the process. The other half was fishing the tumor's DNA out of the patient's bloodstream, which contains millions of other DNA molecules, explains Chabon, now Foresight's CEO. It would be like looking for an escaped criminal in a large crowd. Kurtz and Chabon solved the problem by taking the tumor's "mug shot" first. Doctors would take a biopsy pre-treatment and sequence the tumor, as if taking the criminal's photo. After treatment, they would match the "mug shot" against all DNA molecules derived from the patient's blood sample to see if any molecular criminals had managed to escape the chemo.
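Conceptually, the "mug shot" comparison works like checking a list of tumor-specific mutations, recorded from the pre-treatment biopsy, against every DNA fragment sequenced from a later blood draw. The sketch below is a toy illustration of that idea in Python; the sequences, positions and variants are made up, and this is not Foresight's actual algorithm, which works on real sequencing data with far more sophisticated error correction.

```python
# Toy illustration of the "mug shot" idea: look for tumor-specific variants
# (recorded from the pre-treatment biopsy) in DNA fragments from a blood draw.
# All sequences, positions and variants are invented for illustration.

# Tumor "mug shot": reference position -> mutated base seen in the biopsy.
tumor_signature = {1042: "T", 1045: "A"}  # two nearby mutations, a few bases apart

def fragment_matches(fragment_start, fragment_seq, signature):
    """Return True if a sequenced fragment carries every mutation in the signature."""
    for pos, mutant_base in signature.items():
        offset = pos - fragment_start
        if not (0 <= offset < len(fragment_seq)):
            return False  # fragment doesn't cover this position
        if fragment_seq[offset] != mutant_base:
            return False  # covers the position but shows the normal base
    return True

# Hypothetical fragments from a post-treatment blood sample: (start position, sequence).
blood_fragments = [
    (1030, "ACGTACGTACGTTGCAGGTACC"),  # carries T at 1042 and A at 1045 -> tumor-derived
    (1030, "ACGTACGTACGTCGCAGCTACC"),  # normal bases at both positions
]

hits = sum(fragment_matches(start, seq, tumor_signature) for start, seq in blood_fragments)
print(f"{hits} of {len(blood_fragments)} fragments match the tumor signature")
```

The practical challenge, as the next section describes, is that tumor-derived fragments are vanishingly rare among normal DNA in the blood, so the real test has to remain accurate at roughly one-in-a-million concentrations.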
Foresight isn't the only company working on blood-based tumor detection tests, which are dubbed liquid biopsies—other companies such as Natera or ArcherDx developed their own. But in a recent study, the Foresight team showed that their method is significantly more sensitive in "fishing out" the cancer molecules than existing tests. Chabon says that this test can detect circulating tumor DNA in concentrations that are nearly 100 times lower than other methods. Put another way, it's sensitive enough to spot one cancerous perpetrator amongst one million other DNA molecules.
"It increases the sensitivity of detection and really catches most patients who are going to progress," says Green, the University of Texas oncologist who wasn't involved in the study, but is familiar with the method. It would also allow monitoring patients during treatment and making better-informed decisions about which therapy regimens would be most effective. "It's a minimally invasive test," Green says, and "it gives you a very high confidence about what's going on."
Having shown that the test works well, Kurtz and Chabon are planning a new trial in which oncologists would rely on their method to decide when to stop or continue chemo. They also aim to extend their test to detect other malignancies such as lung, breast or colorectal cancers. The latest genome sequencing efforts have sequenced and catalogued over 2,500 different tumor specimens, and the Foresight team is analyzing this data, says Chabon, which gives the team the opportunity to create more molecular "mug shots."
The team hopes that their blood cancer test will become available to patients within about five years, making doctors' jobs easier, and not only at the biological level. "When I tell patients, 'Good news, your cancer is in remission,' they ask me, 'Does it mean I'm cured?'" Kurtz says. "Right now I can't answer this question because I don't know—but I would like to." His company's test, he hopes, will enable him to reply with certainty. He'd very much like to have the power of that foresight.
This article is republished from our archives to coincide with Blood Cancer Awareness Month, which highlights progress in cancer diagnostics and treatment.
Lina Zeldovich has written about science, medicine and technology for Popular Science, Smithsonian, National Geographic, Scientific American, Reader’s Digest, the New York Times and other major national and international publications. A Columbia J-School alumna, she has won several awards for her stories, including the ASJA Crisis Coverage Award for Covid reporting, and has been a contributing editor at Nautilus Magazine. In 2021, Zeldovich released her first book, The Other Dark Matter, published by the University of Chicago Press, about the science and business of turning waste into wealth and health. You can find her on http://linazeldovich.com/ and @linazeldovich.