People with life-threatening allergies live in constant fear of coming into contact with deadly allergens. Researchers estimate that about 32 million Americans have food allergies, with the most common triggers being milk, egg, peanut, tree nuts, wheat, soy, fish, and shellfish.
"It is important to understand that just several years ago, this would not have been possible."
Every three minutes, a food allergy reaction sends someone to the emergency room, and 200,000 people in the U.S. require emergency medical care each year for allergic reactions, according to Food Allergy Research and Education.
But what if there were a way to easily detect whether something you were about to eat contains a harmful allergen? Thanks to Israeli scientists, this will soon be the case, at least for peanuts. The team has been developing a handheld device called Allerguard, which analyzes the vapors rising from your meal and can detect allergens in 30 seconds.
Leapsmag spoke with the founder and CTO of Allerguard, Guy Ayal, about the groundbreaking technology, how it works, and when it will be available to purchase.
What prompted you to create this device? Do you have a personal connection with severe food allergies?
Guy Ayal: My eldest daughter's best friend suffers from a severe food allergy, and I experienced first-hand the effect it has on the person and their immediate surroundings. Most notable for me was the effect on the quality of life – the experience of living in constant fear. Everything we do at Allerguard is basically to alleviate some of that fear.
How exactly does the device work?
The device is built on two main pillars. The first is the nano-chemical stage, in which we developed specially attuned nanoparticles that selectively adhere only to the specific molecules that we are looking for. Those molecules, once bound to the nanoparticles, induce a change in their electrical behavior, which is measured and analyzed by the second main pillar: highly advanced machine learning algorithms, which can surmise which molecules were collected, and thus whether or not peanuts (or, in the future, other allergens) were detected.
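Ayal does not spell out the algorithms, but the two-stage design he describes, a sensor whose electrical readings are interpreted by a trained model, can be sketched in a few lines of Python. Everything below is illustrative: the channel count, the random training data, and the choice of classifier are assumptions, not Allerguard's actual implementation.

```python
# Illustrative sketch only: a nanoparticle sensor array produces electrical
# readings, and a trained classifier decides whether the vapor signature
# indicates peanut. The data and model here are placeholders, not Allerguard's.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def read_sensor_array(n_channels: int = 16) -> np.ndarray:
    """Stand-in for reading resistance changes from the sensor's channels."""
    return np.random.rand(n_channels)  # real readings would come from hardware

# Hypothetical training set: vapor signatures labeled peanut (1) / no peanut (0).
X_train = np.random.rand(200, 16)
y_train = np.random.randint(0, 2, size=200)

model = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)

signature = read_sensor_array()
label = model.predict(signature.reshape(1, -1))[0]
print("peanut detected" if label == 1 else "no peanut detected")
```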
It is important to understand that just several years ago, this would not have been possible: both the nano-chemistry and, especially, the entire world of machine learning, big data, and what is commonly known as AI only began to emerge in the '90s and reached applicability for handheld devices only in the past few years.
Where are you in the development process, and when will the device be available to consumers?
We have concluded the proof-of-concept and proof-of-capability phase, in which we demonstrated successful detection of the minimal known clinical amount that may cause the slightest effect in the most severely allergic person: less than 1 mg of peanut (0.7 mg, to be exact). The next 18 months will be devoted to productization, qualification, and validation of our device, which should be ready for market in the latter half of 2021. The sensor will be available first in the U.S., and a year later in Europe and Canada.
How much will it cost?
Our target price is about $200 for the device, with a disposable SenseCard that will run for at least a full day and cost about $1. That card is for a specific allergen and will work for multiple scans in a day, not just one time.
[At a later stage, the company will have sensors for other allergens like tree nuts, eggs, and milk, and they'll develop a multi-SenseCard that works for a few allergens at once.]
Are there any other devices on the market that do something similar to Allerguard?
No other devices are even close to supplying the level of service that we promise. All known methods for allergen detection rely on sampling the food, which is a viable solution for homogeneous foodstuffs, such as a factory testing its raw ingredients, but not for something as heterogeneous as an actual dish, and especially not for solid allergens such as peanuts, tree nuts, or sesame.
If there is a single peanut on your plate, and you sample from anywhere on that plate other than where the peanut is located, you will find that your sample is perfectly clean, because it is. But the dish is not. That dish is a death trap for an allergic person. Allerguard is the only proposed solution that could indeed detect that peanut, no matter where on the plate it is hiding.
Anything else readers should know?
Our first-generation product will be for peanuts only. You have to understand, we are still a start-up company, and if we don't concentrate our limited resources on one specific goal, we will not be able to achieve anything at all. Once we are ready to market our first device, the peanut detector, we will be able to start the R&D for the second product, which will be for another allergen, most likely tree nuts and/or sesame, but that will probably remain under debate until we actually start it.
One day in the recent past, scientists at Columbia University’s Creative Machines Lab set up a robotic arm inside a circle of five streaming video cameras and let the robot watch itself move, turn and twist. For about three hours the robot did exactly that: it looked at itself this way and that, like a toddler exploring itself in a room full of mirrors. By the time the robot stopped, its internal neural network had finished learning the relationship between the robot’s motor actions and the volume it occupied in its environment. In other words, the robot had built a spatial self-awareness, just as humans do. “We trained its deep neural network to understand how it moved in space,” says Boyuan Chen, one of the scientists who worked on it.
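The article doesn’t describe the network’s architecture, but the core idea, learning a mapping from motor actions to the volume the body occupies, can be sketched as a small model that takes a joint configuration plus a 3D query point and predicts whether that point lies inside the robot’s body. The layer sizes and placeholder data below are assumptions for illustration, not the Columbia team’s actual model.

```python
# Illustrative self-model: given joint angles and a 3D query point, predict
# whether the robot's body occupies that point. Placeholder data stands in for
# what the cameras would provide; this is not the Columbia lab's actual code.
import torch
import torch.nn as nn

class SelfModel(nn.Module):
    def __init__(self, n_joints: int = 4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_joints + 3, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, 1),  # score: is the query point inside the body?
        )

    def forward(self, joint_angles, query_point):
        return torch.sigmoid(self.net(torch.cat([joint_angles, query_point], dim=-1)))

model = SelfModel()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

# One placeholder training step: random poses, query points and occupancy labels.
angles = torch.rand(64, 4)      # joint configuration for each sample
points = torch.rand(64, 3)      # 3D points to test for occupancy
occupied = torch.randint(0, 2, (64, 1)).float()  # camera-derived labels in reality

optimizer.zero_grad()
loss = loss_fn(model(angles, points), occupied)
loss.backward()
optimizer.step()
```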
For decades robots have been doing helpful tasks that are too hard, too dangerous, or physically impossible for humans to carry out themselves. Robots are ultimately superior to humans in complex calculations, following rules to a tee and repeating the same steps perfectly. But even the biggest successes for human-robot collaborations—those in manufacturing and automotive industries—still require separating the two for safety reasons. Hardwired for a limited set of tasks, industrial robots don't have the intelligence to know where their robo-parts are in space, how fast they’re moving and when they can endanger a human.
Over the past decade or so, humans have begun to expect more from robots. Engineers have been building smarter versions that can avoid obstacles, follow voice commands, respond to human speech and make simple decisions. Some of them proved invaluable in many natural and man-made disasters like earthquakes, forest fires, nuclear accidents and chemical spills. These disaster recovery robots helped clean up dangerous chemicals, looked for survivors in crumbled buildings, and ventured into radioactive areas to assess damage.
Now roboticists are going a step further, training their creations to do even better: understand their own image in space and interact with humans like humans do. Today, there are already robot-teachers like KeeKo, robot-pets like Moffin, robot-babysitters like iPal, and robotic companions for the elderly like Pepper.
But even these reasonably intelligent creations still have huge limitations, some scientists think. “There are niche applications for the current generations of robots,” says professor Anthony Zador at Cold Spring Harbor Laboratory—but they are not “generalists” who can do varied tasks all on their own, as they mostly lack the abilities to improvise, make decisions based on a multitude of facts or emotions, and adjust to rapidly changing circumstances. “We don’t have general purpose robots that can interact with the world. We’re ages away from that.”
Robotic spatial self-awareness – the achievement by the team at Columbia – is an important step toward creating more intelligent machines. Hod Lipson, the professor of mechanical engineering who runs the Columbia lab, says that future robots will need this ability to assist humans better. Knowing how you look and where your parts are in space decreases the need for human oversight. It also helps the robot detect and compensate for damage and keep up with its own wear and tear. And it allows robots to realize when something is wrong with them or their parts. “We want our robots to learn and continue to grow their minds and bodies on their own,” Chen says. That’s what Zador wants too, and on a much grander level. “I want a robot who can drive my car, take my dog for a walk and have a conversation with me.”
Columbia scientists have trained a robot to become aware of its own "body," so it can map the right path to touch a ball without running into an obstacle, in this case a square.
Jane Nisselson and Yinuo Qin/ Columbia Engineering
Today’s technological advances are making some of these leaps of progress possible. One of them is deep learning, a method that trains artificial intelligence systems to learn and use information in a way similar to how humans do it. Described as a machine learning method based on neural network architectures with multiple layers of processing units, deep learning has been used to successfully teach machines to recognize images, understand speech and even write text.
Trained by Google, one of these machine-learning language geniuses, BERT, can finish sentences. Another one, called GPT-3 and designed by the San Francisco-based company OpenAI, can write little stories. Yet both of them still make funny mistakes in their linguistic exercises that even a child wouldn’t. According to a paper published by Stanford’s Center for Research on Foundation Models, BERT seems not to understand the word “not.” When asked to fill in the word after “A robin is a __,” it correctly answers “bird.” But try inserting the word “not” into that sentence (“A robin is not a __”) and BERT still completes it the same way. Similarly, in one of its stories, GPT-3 wrote that if you mix a spoonful of grape juice into your cranberry juice and drink the concoction, you die. It seems that robots, and artificial intelligence systems in general, are still missing some rudimentary facts of life that humans and animals grasp naturally and effortlessly.
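Readers who want to try the robin example themselves can query a publicly available BERT checkpoint with a masked-language-model pipeline; the snippet below is a generic illustration using the Hugging Face transformers library, and the exact completions and scores will vary by model version rather than exactly replicating the Stanford result.

```python
# Reproduce the flavor of the "not" example with a public BERT checkpoint.
# Exact predictions and scores depend on the model; output is not guaranteed.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

for sentence in ["A robin is a [MASK].", "A robin is not a [MASK]."]:
    top = fill(sentence)[0]  # highest-scoring completion for the masked word
    print(f"{sentence} -> {top['token_str']} (score {top['score']:.2f})")
```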
It's not exactly the robots’ fault. Compared to humans, and all other organisms that have been around for thousands or millions of years, robots are very new. They are missing out on eons of evolutionary data-building. Animals and humans are born with the ability to do certain things because those abilities are pre-wired in them. Flies know how to fly, fish know how to swim, cats know how to meow, and babies know how to cry. Yet flies don’t really learn to fly, fish don’t learn to swim, cats don’t learn to meow, and babies don’t learn to cry; they are born able to execute such behaviors because they’re preprogrammed to do so. All that happens thanks to the millions of years of evolution wired into their respective genomes, which give rise to the brain’s neural networks responsible for these behaviors. Robots are the newbies, missing out on that trove of information, Zador argues.
A neuroscience professor who studies how brain circuitry generates various behaviors, Zador has a different approach to developing the robotic mind. Until their creators figure out a way to imbue the bots with that evolutionary information, robots will remain quite limited in their abilities: each model will only be able to do the things it was programmed to do, and it will never go above and beyond its original code. So Zador argues that we have to start giving robots a genome.
How does one do that? Zador has an idea. We can’t really equip machines with real biological nucleotide-based genes, but we can mimic the neuronal blueprint those genes create. Genomes lay out rules for brain development. Specifically, the genome encodes blueprints for wiring up our nervous system—the details of which neurons are connected, the strength of those connections and other specs that will later hold the information learned throughout life. “Our genomes serve as blueprints for building our nervous system and these blueprints give rise to a human brain, which contains about 100 billion neurons,” Zador says.
If you think about what a genome is, he explains, it is essentially a very compact and compressed form of information storage. Conceptually, genomes are similar to CliffsNotes and other study guides. When students read these short summaries, they know roughly what happened in a book without actually reading the book itself. And that’s how we should be designing the next generation of robots if we ever want them to act like humans, Zador says. “We should give them a set of behavioral CliffsNotes, which they can then unwrap into brain-like structures.” Robots that have such brain-like structures will acquire a set of basic rules to generate basic behaviors and use them to learn more complex ones.
Currently Zador is in the process of developing algorithms that function like simple rules that generate such behaviors. “My algorithms would write these CliffsNotes, outlining how to solve a particular problem,” he explains. “And then, the neural networks will use these CliffsNotes to figure out which ones are useful and use them in their behaviors.” That’s how all living beings operate. They use the pre-programmed info from their genetics to adapt to their changing environments and learn what’s necessary to survive and thrive in these settings.
For example, a robot’s neural network could draw from CliffsNotes with “genetic” instructions for how to be aware of its own body or learn to adjust its movements. And other, different sets of CliffsNotes may imbue it with the basics of physical safety or the fundamentals of speech.
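As a concrete, heavily simplified illustration of the compressed-blueprint idea, the toy sketch below expands a small “genome” vector into the initial weights of a much larger layer, the kind of innate starting point that lifetime learning would then adjust. The expansion rule, sizes, and code are assumptions made for illustration; this is not Zador’s actual algorithm.

```python
# Toy sketch of a compressed "genome": a compact vector is expanded by a fixed
# developmental rule into the innate wiring of a much larger network layer.
# Purely illustrative; not Zador's algorithms.
import torch
import torch.nn as nn

GENOME_SIZE = 32   # compact description, the "CliffsNotes"
HIDDEN = 256       # size of the network layer it wires up

genome = torch.randn(GENOME_SIZE)

# Fixed rule: tile the two halves of the genome and take their outer product,
# producing a structured, low-information initial weight matrix.
pre, post = genome[:GENOME_SIZE // 2], genome[GENOME_SIZE // 2:]
blueprint = torch.outer(post.repeat(HIDDEN // (GENOME_SIZE // 2)),
                        pre.repeat(HIDDEN // (GENOME_SIZE // 2)))

layer = nn.Linear(HIDDEN, HIDDEN)
with torch.no_grad():
    layer.weight.copy_(blueprint)  # "innate" wiring decoded from the genome

# From here, ordinary gradient-based learning would refine these weights as the
# agent interacts with its environment: learning on top of the innate rules.
```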
At the moment, Zador is working on algorithms that try to mimic the neuronal blueprints of very simple organisms, such as the worm C. elegans, which has only 302 neurons and about 7,000 synapses, compared with the billions of neurons we have. That’s how evolution worked, too, expanding brains from simple creatures to more complex ones and on to Homo sapiens. But if it took millions of years to arrive at modern humans, how long would it take scientists to forge a robot with human intelligence? That’s a billion-dollar question. Yet Zador is optimistic. “My hypothesis is that if you can build simple organisms that can interact with the world, then the higher-level functions will not be nearly as challenging as they currently are.”