How Roadside Safety Signs Backfire—and Why Policymakers Don’t Notice

Interventions in health and safety often yield results that are the opposite of what policymakers hoped for. Officials can take a science-based approach by measuring what actually works instead of relying on gut intuition.

You are driving along the highway and see an electronic sign that reads: “3,238 traffic deaths this year.” Do you think this reminder of roadside mortality would change how you drive? According to a recent peer-reviewed study in Science, seeing that sign would make you more likely to crash. That’s ironic, given that the sign’s creators assumed it would make you safer.

The study, led by a pair of economists at the University of Toronto and the University of Minnesota, examined seven years of traffic accident data from 880 electronic highway sign locations in Texas, a state that recorded 4,480 traffic fatalities in 2021. For one week of each month, the Texas Department of Transportation posts the latest fatality count on signs along select traffic corridors as part of a safety campaign. The department’s logic is simple: remind people of the dangers on the road, and they will drive with more care.

But when the researchers looked at the data, they found that the number of crashes increased by 1.52 percent within three miles of these signs, compared with the same locations during the same month in previous years, when the signs did not show fatality information. That effect is comparable to raising the speed limit by four miles per hour or cutting the number of highway troopers by 10 percent.
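The core of the comparison is straightforward: crash counts during message weeks are measured against a baseline built from the same location and the same month in earlier years without messages. A minimal sketch of that arithmetic, using invented numbers (not the study's data), might look like this:

```python
# Hypothetical illustration of the study's comparison: crash counts near one
# sign location during a fatality-message week vs. a baseline built from the
# same month in prior years when no message was shown.
# All counts below are invented for demonstration; they are not the study's data.

def percent_change(treated: float, baseline: float) -> float:
    """Percent change of the treated count relative to the baseline count."""
    return (treated - baseline) / baseline * 100

# Invented monthly crash counts within three miles of a single sign.
baseline_years = [132, 128, 136]                      # same month, earlier years, no message
baseline = sum(baseline_years) / len(baseline_years)  # average baseline = 132.0
message_week_count = 134                              # same month, message displayed

change = percent_change(message_week_count, baseline)
print(f"Crash change near sign: {change:+.2f}%")      # prints: Crash change near sign: +1.52%
```

The actual paper controls for far more (weather, traffic volume, location fixed effects); this sketch only shows why "same location, same month, prior years" is the relevant baseline for isolating the signs' effect.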

Gleb Tsipursky
Dr. Gleb Tsipursky is an internationally recognized thought leader on a mission to protect leaders from dangerous judgment errors known as cognitive biases by developing the most effective decision-making strategies. A best-selling author, he wrote Resilience: Adapt and Plan for the New Abnormal of the COVID-19 Coronavirus Pandemic and Pro Truth: A Practical Plan for Putting Truth Back Into Politics. His expertise comes from over 20 years of consulting, coaching, speaking, and training as the CEO of Disaster Avoidance Experts, and over 15 years in academia as a behavioral economist and cognitive neuroscientist. He co-founded the Pro-Truth Pledge project.

Scientists Want to Make Robots with Genomes that Help Grow their Minds

Giving robots self-awareness as they move through space, and perhaps even gene-like methods for storing rules of behavior, could be important steps toward creating more intelligent machines.

Recently, scientists at Columbia University’s Creative Machines Lab set up a robotic arm inside a circle of five streaming video cameras and let the robot watch itself move, turn and twist. For about three hours the robot did exactly that, looking at itself this way and that, like a toddler exploring itself in a room full of mirrors. By the time the robot stopped, its internal neural network had finished learning the relationship between the robot’s motor actions and the volume it occupied in its environment. In other words, the robot had built a spatial self-awareness, much as humans do. “We trained its deep neural network to understand how it moved in space,” says Boyuan Chen, one of the scientists who worked on it.

For decades robots have been doing helpful tasks that are too hard, too dangerous, or physically impossible for humans to carry out themselves. Robots are superior to humans at complex calculations, following rules to a tee and repeating the same steps perfectly. But even the biggest successes for human-robot collaboration—those in the manufacturing and automotive industries—still require separating the two for safety reasons. Hardwired for a limited set of tasks, industrial robots don’t have the intelligence to know where their robo-parts are in space, how fast they’re moving, or when they might endanger a human.

Over the past decade or so, humans have begun to expect more from robots. Engineers have been building smarter versions that can avoid obstacles, follow voice commands, respond to human speech and make simple decisions. Some have proved invaluable in natural and man-made disasters such as earthquakes, forest fires, nuclear accidents and chemical spills. These disaster recovery robots helped clean up dangerous chemicals, looked for survivors in crumbled buildings, and ventured into radioactive areas to assess damage.

Lina Zeldovich
Lina Zeldovich has written about science, medicine and technology for Scientific American, Reader’s Digest, Mosaic Science and other publications. She’s an alumna of the Columbia University School of Journalism and the author of the upcoming book The Other Dark Matter: The Science and Business of Turning Waste into Wealth, from the University of Chicago Press. You can find her at http://linazeldovich.com/ and @linazeldovich.