Understanding and Escaping the Uncanny Valley
How do people respond to Siri, Cortana, or chatbots? These days, we see an increasing number of interfaces in which we as consumers meet digital counterparts and interact with them socially. That is, as humans, we interact with what we must still regard as non-sentient beings. We speak, write, and otherwise engage with these entities as if they were almost human.
Interestingly, at the same time, these digital solutions are becoming increasingly sophisticated. You talk to Siri and Cortana, and they talk back (Google offers a similar feature, but it is not as personified as what Apple and Microsoft have chosen to do).
However, there is a liability in this Human-Computer Interaction (HCI). Hidden among these incremental improvements is the chance that something becomes too human-like without actually being human. This "almost human" quality does not produce a positive emotional response in humans. Instead, it creates an odd, eerie feeling that something is not entirely right. Take, for example, the almost human-like figures in movies like Beowulf and The Polar Express. The 2001 movie Final Fantasy: The Spirits Within was an early, ambitious attempt at photorealistic human characters and scenes. However, the movie flopped, losing Columbia Pictures $52 million. Indeed, as Peter Travers wrote in Rolling Stone magazine, "At first it's fun to watch the characters, (…) but then you notice a coldness in the eyes, a mechanical quality in the movements."
What is causing this? Intuitively, we would expect that the more human-like something is, the better we will like it. However, researchers have demonstrated what has been labeled the Uncanny Valley (UV) response, which can be defined as the phenomenon whereby "a computer-generated figure or humanoid robot bearing a near-identical resemblance to a human being arouses a sense of unease or revulsion in the person viewing it."
In other words, the UV response is a significant drop in emotional response, or a switch from a positive to a negative emotional response. What is causing this, and how can it be avoided? The reason this matters is straightforward: for those who design new HCIs like Cortana, Siri, and robots, the UV effect has to be dealt with. Understanding the drivers of this unwanted response will be increasingly crucial for these companies, as a drop in emotion reduces the likelihood that consumers will use the solution, and may even lead them to display avoidance behaviors.
The Uncanny Valley response is a significant drop in emotional response, or a switch from a positive to a negative emotional response, in Human-Computer Interaction.
At Neurons, we have recently been collaborating with Advanced Brain Monitoring and Lowe's Innovation Labs on a DARPA-funded project to deepen our understanding of the brain bases of the UV response. In a series of studies, we tested how emotional, cognitive, and empathetic responses change across the phases of HCI. Our test case was OSHbot (since launched as LoweBot), and we measured how consumers responded to interacting with the robot.
Below is a video demonstrating some of the results of the study, which was presented at the annual IEEE RO-MAN conference.