Look at the car in the photo below. If the headlights are eyes, and the upper grille with the car’s emblem is its nose, and the lower grille is a toothy smile, then you know this car is pretty goddamn happy to be chauffeuring you around town.
Before you laugh at me, it’s actually totally normal to see faces in inanimate objects. It’s a kind of anthropomorphism called pareidolia, and while pareidolia has been heavily researched in general, little work has yet examined it in the context of transportation.
That’s going to change dramatically with self-driving cars, predicts Kerry Gould, senior design project manager at Punchcut, a San Francisco-based digital agency that helps auto manufacturers imagine future experiences for their vehicles. Autonomous vehicles are scary in a way normal cars aren’t; conceptually, the idea of getting into a driverless car feels unfamiliar and even unsafe. As a result, automakers should consider making AVs look friendlier right out of the gate as a way to win customers—though there’s some danger in making them a little too friendly.
We anthropomorphize things—that is, attribute human features to non-human objects—because we have to, Gould told me. “It is the lens we measure risk by. ‘Is this thing/person going to kill me?’ It is literally the first response our brain has to encountering an unfamiliar thing, and quickly analyzing a face is the first bit of data we have to support that decision.”
Gould has spent the past 16 years or so studying car “faces” and their impacts on driving behavior. In that time, she’s noticed more and more cars have adopted angrier-looking front-end styling. Consider a 2000 Mazda vs. a 2018 model.
This 2000 Mazda Protege sedan appears fairly neutral, with big “eyes” (headlights) and a polite half-smile. But the 2018 Mazda 3 sedan’s “eyes” (below) have been narrowed into slits, and it’s no longer smiling. Rather, the corners of its “mouth” are turned down and it appears to be gritting its teeth in preparation for road battle.
A big reason cars are styled this way, Gould explained over Google Hangouts, is because car designers—most of whom are men—design with a sense of machismo. Lone driver aggression also contributes to the problem, she added. “No one sitting on the 101 is commiserating with the plight of their fellow drivers. They see traffic (i.e. other people in cars) as the enemy.”
Gould pointed to some connected-car designs moving to a faceless front end, including those seen on futuristic concept cars by Faraday Future, Lucid Motors, and Mercedes. “They are scary as shit, like Cylons or RoboCop,” she noted. “I think they are designed for our current mentality, where the road is just going to be a fiercer battleground.”
Going forward, automakers will have to be mindful of their car designs’ first impressions. Google’s Waymo has strayed from the pack, making a cute-as-a-button AV with big “eyes” and what look like dimples.
According to Adam Waytz, a psychologist and associate professor at Northwestern University’s Kellogg School of Management in Illinois, we tend to anthropomorphize technology when we don’t understand it—so humanizing AVs may be key to their adoption.
“As technology looks and acts more human-like, it [could] lead you to humanize it more,” he said. In a paper he wrote for the Journal of Experimental Social Psychology, Waytz argued that anthropomorphizing AVs helps us trust them.
Looks are just one component of that humanization. In fact, Gould said it’s likely the user interface that will make the biggest difference.
In her vision for the future, we’ll all have one “agent,” like Siri or Alexa, for multiple devices—including AVs. “The vehicle will just be like a coat that your preferred agent wears when it leaves the house. Like, Alexa slaps on a Waymo vehicle skin to move to a new destination,” she explained. This will be particularly useful in car-sharing, where the agent can create a sense of familiarity in an autonomous vehicle we’ve never ridden in before.
If that’s the case, we may grow to trust AVs more than we would if they remained cold and impersonal. That could be dangerous.
In a recent Science article, Wendy Ju—formerly the executive director of Stanford’s Center for Design Research and soon-to-be professor at Cornell—floated the idea that if we trust a car (or its human interface) to the point that we think it likes us, we might assume it will try harder to save us in the event of an impending crash.
“The danger of anthropomorphizing AVs could be close to something called ‘GPS death,’ where people follow their GPS to the middle of the desert or into the ocean,” said Waytz. “This over-reliance on technology would lead people to trust it to their peril.”
That said, Waytz and Gould are both convinced AVs will be far safer than human-controlled cars.
“People greatly overestimate their control over their own cars,” said Gould. “Even an AV that fails a fraction of the time is still safer than human drivers who fail regularly.”