digital medicine

Artificial intelligence in medicine, still in an early phase, stands to transform how doctors and nurses spend their time.


There's a quiet revolution going on in medicine. It's driven by artificial intelligence, but paradoxically, new technology may put a more human face on healthcare.

AI's usefulness in healthcare ranges far and wide.

Artificial intelligence is software that can process massive amounts of information and learn over time, arriving at decisions with striking accuracy and efficiency. It offers greater accuracy in diagnosis, exponentially faster genome sequencing, the mining of medical literature and patient records at breathtaking speed, a dramatic reduction in administrative bureaucracy, personalized medicine, and even the democratization of healthcare.

The algorithms that bring these advantages won't replace doctors. Rather, by offloading some of healthcare's most time-consuming tasks, they will free providers to focus on personal interactions with patients—listening, empathizing, educating and generally putting the care back in healthcare. The relationship can center on the alleviation of suffering, both physical and emotional.

Challenges of Getting AI Up and Running

The AI revolution in medicine, though still in its early phase—and, some experts say, overhyped—is already spurring remarkable advances. IBM's Watson Health program is a case in point. IBM capitalized on Watson's ability to process natural language by designing algorithms that devour data like medical articles and analyze images like MRIs and medical slides. The algorithms help diagnose diseases and recommend treatment strategies.

But MIT Technology Review reported that a heavily hyped partnership with the MD Anderson Cancer Center in Houston fell apart in 2017 because of a lack of data in the proper format. The data existed, just not in a form the data-hungry AI could use to train itself.

The hiccup certainly hasn't dampened the enthusiasm for medical AI among other tech giants, including Google and Apple, both of which have invested billions in their own healthcare projects. At this point, the main challenge is the need for algorithms to interpret a huge diversity of data mined from medical records. This can include everything from CT scans, MRIs, electrocardiograms, x-rays, and medical slides, to millions of pages of medical literature, physicians' notes, and patient histories. It can even include data from implantables and wearables such as the Apple Watch and blood sugar monitors.

None of this information is in anything resembling a standard format across and even within hospitals, clinics, and diagnostic centers. Once the algorithms are trained, however, they can crunch massive amounts of data at blinding speed, with an accuracy that matches and sometimes even exceeds that of highly experienced doctors.

Genome sequencing, for example, took years to accomplish as recently as the early 2000s. The Human Genome Project, the first sequencing of the human genome, was an international effort that took 13 years to complete. In April 2019, Rady Children's Institute for Genomic Medicine in San Diego used an AI-powered genome sequencing algorithm to diagnose rare genetic diseases in infants in about 20 hours, according to ScienceDaily.

"Patient care will always begin and end with the doctor."

Dr. Stephen Kingsmore, the lead author of an article published in Science Translational Medicine, emphasized that even though the algorithm helped guide the treatment strategies of neonatal intensive care physicians, the doctor was still an indispensable link in the chain. "Some people call this artificial intelligence, we call it augmented intelligence," he says. "Patient care will always begin and end with the doctor."

One existing trend is helping to supply a great amount of valuable data to algorithms—the electronic health record. Initially blamed for exacerbating the already crushing workload of many physicians, the EHR is emerging as a boon for algorithms because it consolidates all of a patient's data in one record.

Examples of AI in Action Around the Globe

If you're a parent who has ever taken a child to the doctor with flulike symptoms, you know the anxiety of wondering if the symptoms signal something serious. Kang Zhang, M.D., Ph.D., the founding director of the Institute for Genomic Medicine at the University of California, San Diego, and colleagues developed an AI natural language processing model that used deep learning to analyze the EHRs of 1.3 million pediatric visits to a clinic in Guangzhou, China.

The AI identified common childhood diseases with about the same accuracy as human doctors, and it was even able to split the diagnoses into two categories—common conditions such as flu, and serious, life-threatening conditions like meningitis. Zhang has emphasized that the algorithm didn't replace the human doctor, but it did streamline the diagnostic process and could be used in a triage capacity when emergency room personnel need to prioritize the seriously ill over those suffering from common, less dangerous ailments.

AI's usefulness in healthcare ranges far and wide. In Uganda and several other African nations, AI is bringing modern diagnostics to remote villages that have no access to traditional technologies such as x-rays. The New York Times recently reported that there, doctors are using a pocket-sized, hand-held ultrasound machine that works in concert with a cell phone to image and diagnose everything from pneumonia (a common killer of children) to cancerous tumors.

The beauty of the highly portable, battery-powered device is that ultrasound images can be uploaded to computers, so that physicians anywhere in the world can review them and weigh in with advice. And the images are instantly incorporated into the patient's EHR.

Jonathan Rothberg, the founder of Butterfly Network, the Connecticut company that makes the device, told The New York Times that "Two thirds of the world's population gets no imaging at all. When you put something on a chip, the price goes down and you democratize it." The Butterfly ultrasound machine, which sells for $2,000, promises to be a game-changer in remote areas of Africa, South America, and Asia, as well as at the bedsides of patients in developed countries.

AI algorithms are rapidly emerging in healthcare across the U.S. and the world. China has become a major international player, set to surpass the U.S. this year in AI capital investment, the translation of AI research into marketable products, and even the number of often-cited research papers on AI. So far the U.S. is still the leader, but some experts describe the relationship between the U.S. and China as an AI cold war.

"The future of machine learning isn't sentient killer robots. It's longer human lives."

The U.S. Food and Drug Administration expanded its approval of medical algorithms from two in all of 2017 to about two per month throughout 2018. One of the first fields to be impacted is ophthalmology.

One algorithm, developed by the British AI company DeepMind (owned by Alphabet, the parent company of Google), instantly scans patients' retinas and is able to diagnose diabetic retinopathy without needing an ophthalmologist to interpret the scans. This means diabetics can get the test every year from their family physician without having to see a specialist. The Financial Times reported in March that the technology is now being used in clinics throughout Europe.

In Copenhagen, emergency service dispatchers are using a new voice-processing AI called Corti to analyze the conversations in emergency phone calls. The algorithm analyzes the verbal cues of callers, searches its huge database of medical information, and provides dispatchers with onscreen diagnostic information. Freddy Lippert, the CEO of EMS Copenhagen, notes that the algorithm has already saved lives by expediting accurate diagnoses in high-pressure situations where time is of the essence.

Researchers at the University of Nottingham in the UK have even developed a deep learning algorithm that predicts death more accurately than human clinicians. The algorithm incorporates data from a huge range of factors in a chronically ill population, including how many fruits and vegetables a patient eats on a daily basis. Dr. Stephen Weng, lead author of the study, published in PLOS ONE, said in a press release, "We found machine learning algorithms were significantly more accurate in predicting death than the standard prediction models developed by a human expert."

New digital technologies are allowing patients to participate in their healthcare as never before. A feature of the new Apple Watch is an app that detects cardiac arrhythmias and even produces an electrocardiogram if an abnormality is detected. The technology, approved by the FDA, is helping cardiologists monitor heart patients and design interventions for those who may be at higher risk of a serious event such as a stroke.

If having an algorithm predict your death sends a shiver down your spine, consider that algorithms may keep you alive longer. In 2018, technology reporter Tristan Greene wrote for Medium that "…despite the unending deluge of panic-ridden articles declaring AI the path to apocalypse, we're now living in a world where algorithms save lives every day. The future of machine learning isn't sentient killer robots. It's longer human lives."

The Risks of AI Compiling Your Data

To be sure, the advent of AI-infused medical technology is not without risks. One is that AI wearables constantly monitoring our vital signs could turn us into a nation of hypochondriacs, racing to our doctors every time there's a blip in some reading. Such a development could stress an already overburdened system that suffers from, among other things, a shortage of doctors and nurses. Another concerns the privacy protections for the massive repository of intimately personal information that AI will hold on us.

In an article recently published in JAMA Network Open, Australian researcher Kit Huckvale and colleagues examined how 36 smartphone apps that assisted people with either depression or smoking cessation handled user data—two areas that could lend themselves to stigmatization if the information fell into the wrong hands.

Of the 36 apps, 33 shared their data with third parties—even though only 25 of the apps had a privacy policy at all, and of those, only 23 stated that data would be shared with third parties. The recipients of all that data? Almost exclusively Facebook and Google, for advertising and marketing purposes. But there's nothing to stop it from ending up in the hands of insurers, background-check databases, or any other entity.

Even when data isn't voluntarily shared, any digital information can be hacked. EHRs and even wearable devices share the same vulnerability as any other digital record or device. Still, the promise of AI to radically improve efficiency and accuracy in healthcare is hard to ignore.

AI Can Help Restore Humanity to Medicine

Eric Topol, director of the Scripps Research Translational Institute and author of the new book Deep Medicine, says that AI gives doctors and nurses the most precious gift of all: time.

Topol welcomes his patients' use of the Apple Watch cardiac feature and is optimistic about the ways that AI is revolutionizing medicine. He says that the watch helps doctors monitor how well medications are working and has already helped to prevent strokes. But in addition to that, AI will help bring the humanity back to a profession that has become as cold and hard as a stainless steel dissection table.

"When I graduated from medical school in the 1970s," he says, "you had a really intimate relationship with your doctor." Over the decades, he has seen that relationship steadily erode as medical organizations demanded that doctors see more and more patients within ever-shrinking time windows.

"Doctors have no time to think, to communicate. We need to restore the mission in medicine."

In addition, EHRs have buried doctors and nurses in paperwork and administrative tasks. This is no doubt one reason a recent World Health Organization study found that about 50 percent of doctors worldwide suffer from burnout. People who are utterly exhausted make more mistakes, and medical clinicians are no different from the rest of us. But medical mistakes have unacceptably high stakes: a Johns Hopkins University study estimates that in the U.S. alone, 250,000 people die from medical errors each year.

"Doctors have no time to think, to communicate," says Topol. "We need to restore the mission in medicine." AI is giving doctors more time to devote to the thing that attracted them to medicine in the first place—connecting deeply with patients.

There is a real danger at this juncture, though, that administrators aware of the time-saving aspects of AI will simply push doctors to see more patients, read more tests, and embrace an even more crushing workload.

"We can't leave it to the administrators to just make things worse," says Topol. "Now is the time for doctors to advocate for a restoration of the human touch. We need to stand up for patients and for the patient-doctor relationship."

AI could indeed be a game changer, he says, but rather than squander the huge benefits of more time, "We need a new equation going forward."

Eve Herold
Eve Herold is a science writer specializing in issues at the intersection of science and society. She has written and spoken extensively about stem cell research and regenerative medicine and the social and bioethical aspects of leading-edge medicine. Her 2007 book, Stem Cell Wars, was awarded a Commendation in Popular Medicine by the British Medical Association. Her 2016 book, Beyond Human, has been nominated for the Kirkus Prize in Nonfiction, and a forthcoming book, Robots and the Women Who Love Them, will be released in 2019.

A close up of a doctor pointing at a smart phone, heralding the new era of prescription digital therapeutics.


You may be familiar with Moore's Law, Intel co-founder Gordon Moore's prediction that the number of transistors on a chip would double roughly every two years, making computers ever faster and cheaper. That's been borne out by the explosive growth of the tech industry, but you may not know that there is an inverse Moore's Law for drug development.

What if there were a way to apply the fast-moving, low-cost techniques of software development to drug discovery?

Eroom's Law—yes that's "Moore" spelled backward—is the observation that drug discovery has become slower and more expensive over time, despite technological improvements. And just like Moore's Law, it's been borne out by experience—from the 1950s to today, the number of drugs that can be developed per billion dollars in spending has steadily decreased, contributing to the continued growth of health care costs.
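Eroom's Law is often quantified as the number of new drugs per billion (inflation-adjusted) R&D dollars halving roughly every nine years since 1950. A small sketch makes the compounding decline concrete; the starting rate below is illustrative, not an actual approval count:

```python
def drugs_per_billion(year, base_year=1950, base_rate=30.0, halving_years=9):
    """Approximate new drugs per $1B of R&D spending, assuming the rate
    halves every nine years. base_rate is an illustrative placeholder."""
    return base_rate * 0.5 ** ((year - base_year) / halving_years)

# Six decades of halving erodes productivity by roughly a factor of 100.
for year in (1950, 1980, 2010):
    print(year, round(drugs_per_billion(year), 2))
```

Under those assumptions, the same billion dollars that once yielded dozens of drugs yields a fraction of one drug today, which is the cost pressure digital therapeutics hope to sidestep.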

But what if there were a way to apply the fast-moving, low-cost techniques of software development to drug discovery? That's what a group of startups in the new field of digital therapeutics is promising. They develop apps that are used—either on their own or in conjunction with conventional drugs—to treat chronic conditions like addiction, diabetes and mental illness that have so far resisted a purely pharmaceutical approach. Unlike the thousands of wellness and health apps that can be downloaded to your phone, digital therapeutics are developed and meant to be used like drugs, complete with clinical trials, FDA approval and doctor prescriptions.

The field is hot—in 2017 global investment in digital therapeutics jumped to $11.5 billion, a fivefold increase from 2012, and major pharma companies like Novartis are developing their own digital products or partnering with startups. One such startup is the bicoastal Pear Therapeutics. Last month, Pear's reSET-O product became the first digital therapeutic to be approved for use by the millions of Americans who struggle with opioid use disorder, and the company has other products addressing addiction and mental illness in the pipeline.

I spoke with Dr. Corey McCann, Pear's CEO, about the company's efforts to meld software and medicine, designing clinical trials for an entirely new kind of treatment, and the future of digital therapeutics.

The interview has been edited and condensed for clarity and length.

"We're looking at conditions that currently can't be cured with drugs."

BRYAN WALSH: What makes a digital therapeutic different than a wellness app?

COREY MCCANN: What we do is develop therapeutics that are designed to be used under the auspices of a physician, just as a drug developed under good manufacturing practices would be. We do clinical studies for both safety and efficacy, and then they go through the development process you'd expect for a drug. We look at the commercial side, at the role of doctors. Everything we do is what would be done with a traditional medical product. It's a piece of software developed like a drug.

WALSH: What kind of conditions are you first aiming to treat with digital therapeutics?

MCCANN: We're looking at conditions that currently can't be cured with drugs. A good example is our reSET product, which is designed to treat addiction to alcohol, cannabis, stimulants, cocaine. There really aren't pharmaceutical products that are approved to treat people addicted to these substances. What we're doing is functional therapy, the standard of care for addiction treatment, but delivered via software. But we can also work with medication—our reSET-O product is a great example. It's for patients struggling with opioid addiction, and it's delivered in concert with the drug buprenorphine.

WALSH: Walk me through what the patient experience would be like for someone on a digital therapeutic like reSET.

MCCANN: Imagine you're a patient who has been diagnosed with cocaine addiction by a doctor. You would then receive a prescription for reSET during the same office visit. Instead of a pharmacy, the script is sent to the reSET Connect Patient Service Center, where you are onboarded and given an access code that is used to unlock the product after downloading it onto your device. The product has 60 different modules—each one requiring about a 10 to 15-minute interaction—all derived from a form of cognitive behavioral therapy called community reinforcement approach. The treatment takes place over 90 days.

"The patients receiving the digital therapeutic were more than twice as likely to remain abstinent as those receiving standard care."

Patients report their substance abuse, cravings and triggers, and they are also tested on core proficiencies throughout the therapy. Physicians have access to all of their data, which helps facilitate their one-on-one meetings. We know from regular urine tests how effective the treatment is.

WALSH: What kind of data did you find when you did clinical studies on reSET?

MCCANN: We had 399 patients in 10 centers taking part in a randomized clinical trial run by the National Institute on Drug Abuse. Every patient enrolled in the study had an active substance abuse disorder. The study was randomized so that patients either received the best current standard of care, which is three hours a week of face-to-face therapy, or they received the digital therapeutic. The primary endpoint was abstinence in weeks 9 to 12—if the patient had a single dirty urine screen in the last month, they counted as a failure.

In the end, the patients receiving the digital therapeutic were more than twice as likely to remain abstinent as those receiving standard care—40 percent versus 17 percent. Those receiving reSET were also much more likely to remain in treatment through the entire trial.
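A quick check of the reported figures shows where "more than twice as likely" comes from (the rates are as quoted; the calculation is just a back-of-the-envelope ratio, not part of the trial's statistics):

```python
# Abstinence rates reported for weeks 9-12 of the reSET trial.
reset_rate = 0.40     # digital therapeutic arm
control_rate = 0.17   # standard face-to-face therapy arm

# Ratio of the two rates: roughly 2.35, i.e. "more than twice as likely".
risk_ratio = reset_rate / control_rate
print(f"reSET patients were about {risk_ratio:.1f}x as likely to remain abstinent")
```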

WALSH: Why start by focusing your first digital therapeutics on addiction?

MCCANN: We have tried to build a company that is poised to make a difference in medicine. If you look at addiction, there is little to nothing in the drug pipeline to address this. More than 30 million people in the U.S. suffer from addiction disorders, and not only is efficacy a concern, but so is access. Many patients aren't able to receive anything like the kind of face-to-face therapy our control group received. So we think digital therapeutics can make a difference there as well.

WALSH: reSET was the first digital therapeutic approved by the FDA to treat a specific disorder. What has the approval process been like?

MCCANN: It's been a learning process for all involved, including the FDA. Our philosophy is to work within the clinical trials structure, which has specific disease targets and endpoints, and develop quality software, and bring those two strands together to generate digital therapeutics. We now have two products that have been FDA-approved, and four more in development. The FDA is appropriately cautious about all of this, balancing the tradeoff between patient risk and medical value. As we see it, our company is half tech and half biotech, and we follow regulatory trials that are as rigorous as they would be with any drug company.

"This is a new space, but when you look back in 10 years there will be an entire industry of prescription digital therapeutics."

WALSH: How do you balance those two halves, the tech side and the biology side? Tech companies are known for iterating rapidly and cheaply, while pharma companies develop drugs slowly and expensively.

MCCANN: This is a new space, but when you look back in 10 years there will be an entire industry of prescription digital therapeutics. Right now we're combining the rigor of the pharmaceutical model with the speed and agility of a tech company. Our product takes longer to develop than an unverified health app, but takes less time—and carries less clinical risk—than a new molecular entity. This is still a work in progress, and not a day goes by where we don't notice the difference between those disciplines.

WALSH: Who's going to pay for these treatments? Insurers are traditionally slow to accept new innovations in the therapeutic space.

MCCANN: This is just like any drug launch. We need to show medical quality and value, and we need to get clinician demand. We want to focus on demonstrating as many scripts as we can in 2019. And we know we'll need to be persistent—we live in a world where payers will say no to anything three times before they say yes. Demonstrating value is how you get there.

WALSH: Is part of that value the possibility that digital therapeutics could be much cheaper than paying someone for multiple face-to-face therapy sessions?

MCCANN: I believe the cost model is very compelling here, especially when you can treat diseases that were not treatable before. That is something that creates medical value. Then you have the data aspect, which makes our product fundamentally different from a drug. We know everything about every patient that uses our product. We know engagement, and we can push patient self-reports to clinicians. We can measure efficacy out in the real world, not just in a controlled clinical trial. That is the holy grail in the pharma world—to understand compliance in practice.

WALSH: What's the future of digital therapeutics?

MCCANN: In 10 years, what we think of as digital medicine will just be medicine. This is something that will absolutely become standard of care. We are working on education to help partners and payers figure out where to go from here, and to incorporate digital therapeutics into standard care. It will start in 2019 and 2020 with addiction medicine, and then in three to five years you'll see treatments designed to address disorders of the brain. And past the decade horizon you'll see plenty of products aimed at every facet of medicine.

Bryan Walsh
Bryan Walsh is the former international editor at TIME magazine. He spent several years as a foreign correspondent for TIME in Hong Kong and Tokyo, and also covered climate change and energy for the magazine. He has written cover stories on subjects ranging from psychology to infectious disease to fracking. He is now at work on a book for the publisher Hachette about existential risk, emerging technologies and the end of the world.

A woman prepares to swallow a digital pill that can track whether she has taken her medication.


Dr. Sara Browne, an associate professor of clinical medicine at the University of California, San Diego, is a specialist in infectious diseases and, less formally, "a global health person." She often travels to southern Africa to meet with colleagues working on the twin epidemics of HIV and tuberculosis.

"This technology, in my opinion, is an absolute slam dunk for tuberculosis."

Lately she has asked them to name the most pressing things she can help with as a researcher based in a wealthier country. "Over and over and over again," she says, "the only thing they wanted to know is whether their patients are taking the drugs."

Tuberculosis is one of the world's deadliest diseases; every year there are 10 million new infections and more than a million deaths. When a patient with tuberculosis is prescribed medicine to combat the disease, adherence to the regimen is important not just for the individual's health, but also for the health of the community. Poor adherence can lead to lengthier and more costly treatment and, perhaps more importantly, to drug-resistant strains of the disease—an increasing global threat.

Browne is testing a new method to help healthcare workers track their patients' adherence with far greater precision—close to exact precision, even. The method uses digital pills: a patient swallows medicine as they normally would, but the capsule contains a sensor that, on contact with stomach acid, transmits a signal to a small device worn on or near the body. That device in turn relays the signal to the patient's phone or tablet and on to a cloud-based database. The fact that the pill has been swallowed is thus recorded almost in real time and is available to whoever has access to the database.
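The chain described above—sensor to wearable to phone to cloud—amounts to a simple event pipeline. A minimal sketch of what the phone-side step might look like follows; every name here is hypothetical and does not reflect any vendor's actual software:

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# Hypothetical record of one "pill swallowed" event, as the phone
# might forward it to a cloud database. Illustrative names only.
@dataclass
class IngestionEvent:
    patient_id: str
    medication: str
    detected_at: str  # UTC timestamp reported by the wearable

def record_ingestion(patient_id: str, medication: str) -> str:
    """Build the JSON payload for a detected ingestion."""
    event = IngestionEvent(
        patient_id=patient_id,
        medication=medication,
        detected_at=datetime.now(timezone.utc).isoformat(),
    )
    return json.dumps(asdict(event))

payload = record_ingestion("patient-001", "isoniazid")
print(payload)
```

The key property is that the record is created the moment the sensor fires, so a missed dose is visible as the absence of an expected event rather than relying on the patient's self-report.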

"This technology, in my opinion, is an absolute slam dunk for tuberculosis," Browne says. TB is much more prevalent in poorer regions of the world—in Sub-Saharan Africa, for example—than in richer places like the U.S., where Browne's studies thus far have taken place. But when someone is diagnosed in the U.S., because of the risk to others if it spreads, they will likely have to deal with "directly observed therapy" to ensure that they take their medicines correctly.

DOT, as it's called, requires the patient to meet with a healthcare worker several days a week, or every day, so that the medicine intake can be observed in person—an expensive and time-consuming process. Still, the Centers for Disease Control and Prevention website says (emphasis theirs), "DOT should be used for ALL patients with TB disease, including children and adolescents. There is no way to accurately predict whether a patient will adhere to treatment without this assistance."

Digital pills can help with both the cost and time involved, and potentially improve adherence in places where DOT is impossibly expensive. With the sensors, you can monitor a patient's adherence without a healthcare worker physically being in the room. Patients can live their normal lives and if they miss a pill, they can receive a reminder by text or a phone call from the clinic or hospital. "They can get on with their lives," said Browne. "They don't need the healthcare system to interrupt them."

A 56-year-old patient who participated in one of Browne's studies when he was undergoing TB treatment says that before he started taking the digital pills, he would go to the clinic at least once every day, except weekends. Once he switched to digital pills, he could go to work and spend time with his wife and children instead of fighting traffic every day to get to the clinic. He just had to wear a small patch on his abdomen, which would send the signal to a tablet provided by Browne's team. When he returned from work, he could see the results—that he'd taken the pill—in a database accessed via the tablet. (He could also see his heart rate and respiratory rate.) "I could do my daily activities without interference," he said.

Dr. Peter Chai, a medical toxicologist and emergency medicine physician at Brigham and Women's Hospital in Boston, is studying digital pills in a slightly different context, to help fight the country's opioid overdose crisis. Doctors like Chai prescribe pain medicine, he says, but then immediately put the onus on the patient to decide when to take it. This lack of guidance can lead to abuse and addiction. Patients are often told to take the meds "as needed." Chai and his colleagues wondered, "What does that mean to patients? And are people taking more than they actually need? Because pain is such a subjective experience."

The patients "liked the fact that somebody was watching them."

They wanted to see what "take as needed" actually led to, so they designed a study with patients who had broken a bone and come to the hospital's emergency department to get it fixed. Those who were prescribed oxycodone—a pharmaceutical opioid for pain relief—got enough digital pills to last one week. They were supposed to take the pills as needed, or as many as three pills per day. When the pills were ingested, the sensor sent a signal to a card worn on a lanyard around the neck.

Chai and his colleagues were able to see exactly when the patients took the pills and how many, and to detect patterns of ingestion more precisely than ever before. They talked to the patients after the seven days were up, and Chai said most were happy to be taking digital pills. The patients saw it as a layer of protection from afar. "They liked the fact that somebody was watching them," Chai said.

Both doctors, Browne and Chai, are in the early stages of studies with patients taking pre-exposure prophylaxis, medicines that can protect people at high risk of contracting HIV, such as injection drug users. Without good adherence, patients leave themselves open to getting the virus. If a patient is supposed to take a pill at 2 p.m. but the digital pill sensor isn't triggered, the healthcare provider can have an automatic reminder sent—to the patient, or to one of the patient's friends or loved ones.

"Like Swallowing Your Phone"?

Deven Desai, an associate professor of law and ethics at Georgia Tech, says that digital pills sound like a great idea for helping with patient adherence, a big issue that self-reporting doesn't fully solve. He likes the idea of a physician you trust having better information about whether you're taking your medication on time. "On the surface that's just cool," he says. "That's a good thing." But Desai, who formerly worked as academic research counsel at Google, said that some of the same questions that have come up in recent years with social media and the Internet in general also apply to digital pills.

"Think of it like your phone, but you swallowed it," he says. "At first it could be great, simple, very much about the user—in this case, the patient—and the data is going between you and your doctor and the medical people it ought to be going to. Wonderful. But over time, phones change. They become 'smarter.'" And when phones and other technologies become smarter, he says, the companies behind them tend to expand the type of data they collect, because they can. Desai says it will be crucial that prescribers be completely transparent about who is getting the patients' data and for what purpose.

"We're putting stuff in our body in good faith with our medical providers, and what if it turned out later that all of a sudden someone was data mining or putting in location trackers and we never knew about that?" Desai asks. "What science has to realize is if they don't start thinking about this, what could be a wonderful technology will get killed."

Leigh Turner, an associate professor at the University of Minnesota's Center for Bioethics, agrees with Desai that digital pills have great promise, and also that there are clear reasons to be concerned about their use. Turner compared the pills to credit cards and social media, in that the data from them can potentially be stolen or leaked. One question he would want answered before the pills were normalized: "What kind of protective measures are in place to make sure that personal information isn't spilling out and being acquired by others or used by others in unexpected and unwanted ways?"

If digital pills catch on, some experts worry that they may one day not be a voluntary technology.

Turner also wonders who will have access to the pills themselves. Only those who can afford both the medicine and the smartphones currently required for their use? Or will people from all economic classes have access? If digital pills catch on, he also worries they may one day not be a voluntary technology.

"When it comes to digital pills, it's not something that's really being foisted on individuals. It's more something that people can be informed of and can choose to take or not to take," he says. "But down the road, I can imagine a scenario where we move away from purely voluntary agreements to it becoming more of an expectation."

He says it's easy to picture a scenario in which insurance companies demand that patients' medication-intake data be tracked and collected: refuse to have your adherence tracked and you risk higher rates or even losing coverage altogether. Patients who decline the digital pills might suffer dire consequences, financially or medically. "Maybe it becomes beneficial as much to health insurers and payers as it is to individual patients," Turner says.

In November 2017, the FDA approved the first-ever digital pill that includes a sensor, a drug called Abilify MyCite, made by Otsuka Pharmaceutical Company. The drug, which is yet to be released, is used to treat schizophrenia, bipolar disorder, and depression. With a built-in sensor developed by Proteus Digital Health, patients can give their doctors permission to see exactly when they are taking, or not taking, their meds. For patients with mental illness, anything that helps them stick to their prescribed regimen can be life-saving.

But Turner wonders whether Abilify is the best drug to be a forerunner for digital pills. Some people with schizophrenia suffer from paranoia, and a pill developed by a large corporation that sends data from inside their bodies to be tracked by other people could well exacerbate that paranoia.

The Bottom Line: Protect the Data

We all have relatives who have pillboxes with separate compartments for each day of the week, or who carry pillboxes that beep when it's time to take the meds. But that's not always good enough for people with dementia, mental illness, drug addiction, or other life situations that make it difficult to remember to take their pills. Digital pills can play an important role in helping these people.

"The absolute principle here is that the data has to belong to the patient."

The one time the patient from Browne's study forgot to take his pills, he got a beeping reminder from his tablet that he'd missed a dose. "Taking a medication on a daily basis, sometimes we just forget, right?" he admits. "With our very accelerated lives nowadays, it helps us to remember that we have to take the medications. So patients are able to be on top of their own treatment."

Browne is convinced that digital pills can help people in developing countries with high rates of TB and HIV, though like Turner and Desai she cautions that patients' data must be protected. "I think it can be a tremendous technology for patient empowerment and I also think if properly used it can help the medical system to support patients that need it," she said. "But the absolute principle here is that the data has to belong to the patient."

Shaun Raviv
Shaun Raviv is a freelance journalist based in Atlanta. You can read his work at

Artificial neurons in a concept of artificial intelligence.

(© ktsdesign/Fotolia)

Artificial intelligence is everywhere, just not in the way you think it is.

These networks, loosely modeled on the human brain, are layers of interconnected computational units that have the ability to "learn."

"There's the perception of AI in the glossy magazines," says Anders Kofod-Petersen, a professor of Artificial Intelligence at the Norwegian University of Science and Technology. "That's the sci-fi version. It resembles the small guy in the movie AI. It might be benevolent or it might be evil, but it's generally intelligent and conscious."

"And this is, of course, as far from the truth as you can possibly get."

What Exactly Is Artificial Intelligence, Anyway?

Let's start with how you got to this piece. You likely came to it through social media: your Facebook account, Twitter feed, or perhaps a Google search. AI influences all of those things, with machine learning helping to run the algorithms that decide what you see, when, and where. AI isn't the little humanoid figure; it's the system that controls the figure.

"AI is being confused with robotics," Eleonore Pauwels, Director of the Anticipatory Intelligence Lab with the Science and Technology Innovation Program at the Wilson Center, says. "What AI is right now is a data optimization system, a very powerful data optimization system."

The revolution in recent years hasn't come from the methods scientists and other researchers use; the general ideas and philosophies have been around since the late 1960s. Instead, the big change has been the dramatic increase in computing power, which has made large neural networks practical. These networks, loosely modeled on the human brain, are layers of interconnected computational units that have the ability to "learn." An AI, for example, can be taught to spot a picture of a cat by looking at hundreds of thousands of pictures labeled "cat" and "learning" what a cat looks like. Or an AI can beat a human at Go, an achievement that just five years ago Kofod-Petersen thought wouldn't come for decades.

"It's very difficult to argue that something is intelligent if it can't learn, and these algorithms are getting pretty good at learning stuff. What they are not good at is learning how to learn."

Medicine is the field where this expertise in perception tasks might have the most influence. It's already having an impact: iPhones use AI to detect cancer, Apple Watches alert wearers to heart problems, and AI spots tuberculosis and the spread of breast cancer with higher accuracy than human doctors. Every few months, another study demonstrates new possibilities. (The New Yorker published an article about medicine and AI last year, so you know it's a serious topic.)

But this is only the beginning. "I personally think genomics and precision medicine is where AI is going to be the biggest game-changer," Pauwels says. "It's going to completely change how we think about health, our genomes, and how we think about our relationship between our genotype and phenotype."

The Fundamental Problem That Must Be Solved

To get there, however, researchers will need to make another breakthrough, and there's debate about how long that will take. Kofod-Petersen explains: "If we want to move from this narrow intelligence to this broader intelligence, that's a very difficult problem. It basically boils down to that we haven't got a clue about what intelligence actually is. We don't know what intelligence means in a biological sense. We think we might recognize it but we're not completely sure. There isn't a working definition. We kind of agree with the biologists that learning is an aspect of it. It's very difficult to argue that something is intelligent if it can't learn, and these algorithms are getting pretty good at learning stuff. What they are not good at is learning how to learn. They can learn specific tasks but we haven't approached how to teach them to learn to learn."

In other words, current AI is very, very good at identifying that a picture of a cat is, in fact, a cat – and getting better at an incredibly rapid pace – but the system only knows what a "cat" is because a programmer told it that's what a furry thing with whiskers and two pointy ears is called. If the programmer instead labeled the training images "dogs," the AI wouldn't object that they're cats; it would simply call a furry thing with whiskers and two pointy ears a dog. AI systems lack the common-sense inference that humans perform effortlessly, almost without thinking.
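For readers who want to see the point in miniature, here is a sketch in plain Python, using a bare-bones nearest-centroid classifier and hypothetical toy data (the feature numbers and labels are invented for illustration): a supervised model only ever echoes back the labels it was trained on, so relabeling the identical training images flips its answer.

```python
def train(examples):
    """Compute the average feature vector (centroid) for each label."""
    sums, counts = {}, {}
    for features, label in examples:
        counts[label] = counts.get(label, 0) + 1
        prev = sums.get(label, [0.0] * len(features))
        sums[label] = [p + f for p, f in zip(prev, features)]
    return {label: [s / counts[label] for s in total]
            for label, total in sums.items()}

def predict(centroids, features):
    """Return the label whose centroid is closest to the input."""
    def dist(center):
        return sum((a - b) ** 2 for a, b in zip(center, features))
    return min(centroids, key=lambda label: dist(centroids[label]))

# Features: made-up (whiskers, pointy_ears) scores for imaginary images.
furry_things = [([0.9, 0.8], "cat"), ([0.8, 0.9], "cat"), ([0.1, 0.2], "fish")]
model = train(furry_things)
print(predict(model, [0.85, 0.85]))  # → cat

# Relabel the identical images "dog" and the model happily agrees.
relabeled = [(f, "dog" if lbl == "cat" else lbl) for f, lbl in furry_things]
model2 = train(relabeled)
print(predict(model2, [0.85, 0.85]))  # → dog
```

The model never "knows" what a cat is; it only learns which label sits near which region of feature space.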

Pauwels believes that the next step is for AI to transition from supervised to unsupervised learning. In the latter, the AI isn't answering questions a programmer poses ("Is this a cat?"). Instead, it looks at the data it has, comes up with its own questions and hypotheses, and puts them to the test. Combining this ability with the frankly insane processing power of modern computer systems could result in game-changing discoveries.
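The supervised/unsupervised contrast can also be sketched in a few lines of plain Python. The example below uses a bare-bones k-means clustering loop and invented toy data: the algorithm is handed unlabeled points and, with no labels at all, discovers on its own that they fall into two groups.

```python
import random

def kmeans(points, k, iterations=20, seed=0):
    """Group unlabeled points into k clusters by iteratively refining centers."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iterations):
        # Assign each point to its nearest current center.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda i: sum((a - b) ** 2
                                            for a, b in zip(p, centers[i])))
            clusters[nearest].append(p)
        # Move each center to the mean of the points assigned to it.
        centers = [
            [sum(c) / len(cluster) for c in zip(*cluster)] if cluster
            else centers[i]
            for i, cluster in enumerate(clusters)
        ]
    return clusters

# Two obvious blobs of 2-D points -- but the algorithm is never told that.
data = [[0.1, 0.2], [0.2, 0.1], [0.15, 0.15],
        [5.0, 5.1], [5.2, 4.9], [5.1, 5.0]]
groups = kmeans(data, k=2)
print(sorted(len(g) for g in groups))  # → [3, 3]
```

This is only a crude stand-in for what Pauwels describes; real unsupervised systems work on vastly larger data, but the principle is the same: structure emerges from the data rather than from a programmer's labels.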

In the not-too-distant future, a doctor could run diagnostics on a digital avatar, watching which medical conditions present themselves before the person gets sick in real life.

One company in China plans to develop a way to create a digital avatar of an individual person, then simulate that person's health and medical information into the future. In the not-too-distant future, a doctor could run diagnostics on the avatar, watch which medical conditions present themselves – cancer or a heart condition or anything, really – and help the real-life version prevent those conditions or treat them before they become life-threatening.

That, obviously, would be an incredibly powerful technology, and it's just one of the many possibilities that unsupervised AI presents. It's also terrifying in its potential for misuse. Even the term "unsupervised AI" brings to mind a dystopian landscape where machines take over and enslave humanity. (Pick your favorite movie. There are dozens.) This is a real concern, something for developers, programmers, and scientists to consider as they build the systems of the future.

The Ethical Problem That Deserves More Attention

But the more immediate concern about AI is much more mundane. We tend to think of AI as an unbiased system. That's incorrect. Algorithms, after all, are designed by people, and people have explicit or implicit biases. Intentionally, or more likely not, they introduce these biases into the very code that forms the basis for the AI. Some current systems have shown bias against people of color; Facebook tried to rectify the situation and failed. These are just two small examples of a larger, potentially systemic problem.

It's vital and necessary for the people developing AI today to be aware of these issues. And, yes, to avoid sending us to the brink of a James Cameron movie. But AI is too powerful a tool to ignore. Today, it's identifying cats and on the verge of detecting cancer. In not too many tomorrows, it will be on the forefront of medical innovation. If we are careful, aware, and smart, it will help simulate results, create designer drugs, and revolutionize individualized medicine. "AI is the only way to get there," Pauwels says.

Noah Davis
Noah Davis is a writer living in Brooklyn. Visit his website at