One of the first pieces I wrote for this site finished with a whimsical suggestion that autistic people were somewhat native to what is known as the uncanny valley, a term that refers to the revulsion many people feel toward sophisticated robots or other entities that approach but do not attain the appearance of being human. I was reminded of this in Lili Marlene’s “you might be” list, recently reprinted here, when she wryly suggested that you might be an autistic person if “you have such an odd style of conversation that you would fail the Turing Test if you weren’t obviously human.” The Turing Test, of course, being a way of measuring how convincingly an automated computer chat program can impersonate an actual human being.
Autism and the uncanny valley is a subject I’ve been wanting to revisit all along; I’ve long had a suspicion that much of society’s seemingly willful ignorance regarding autism, not to mention its fear, stigma, and outright revulsion, is a matter of what I’ve thought of as false positives triggering the uncanny valley response. The speculation on the part of the originator of the phrase, Japanese robotics scientist Masahiro Mori, was that the response is an instinctive one having to do with self-preservation. Those who are quick to pounce on circular, self-justifying logic might well see this speculation as validation of their prejudices against autism, so presentation of these ideas has seemed a tricky matter.
Forty years on from Mori’s coinage of the term, it turns out that speculation is indeed the key word, though I’m not the only one to be wondering about the connection between autism and the uncanny valley response. An excellent recent article which surveys the current state of thought and research regarding the uncanny valley finishes with the thought that “human-looking robots could be valuable tools in psychology and neuroscience, helping researchers study human behavior and even disorders like autism.”
If you’d like to test your own uncanny valley response, the article is headed by a wide-ranging, snappily-paced, and yes, beautiful slideshow of various robots bearing varying degrees of human likeness; the meat, though, is in the quotes from various robotics experts who are intimate with the notion of the uncanny valley, yet find it difficult to define satisfactorily, let alone make use of as a theory. Much, need it be said, like autism itself.
I’ve no further comparisons to draw right now, but for those who are intrigued, I want to highlight some quotes from Erico Guizzo’s “Who’s Afraid of the Uncanny Valley,” which appeared earlier this month at IEEE’s Spectrum. Guizzo leads us into the thicket with this handful of questions:
As a kind of benchmark, the uncanny valley could in principle help us understand why some robots are more likable than others. In that way roboticists would be able to create better designs and leap over the creepiness chasm. But what if there’s no chasm? What if you ask a lot of people in controlled experiments how they feel about a wide variety of robots and when you plot the data it doesn’t add up to the uncanny valley graph? What if you can’t even collect meaningful data because terms like “familiarity” and “human likeness” are too vague?
He quotes the conclusion to the 1970 paper by Mori, a roboticist, not an evolutionary biologist:
Why do we humans have such a feeling of strangeness? Is this necessary? I have not considered it deeply, but it may be important to our self-preservation.
We must complete the map of the uncanny valley to know what is human or to establish the design methodology for creating familiar devices through robotics research.
Guizzo also quotes a recent Popular Mechanics article by Erik Sofge that discusses some of the problems with the theory:
Despite its fame, or because of it, the uncanny valley is one of the most misunderstood and untested theories in robotics. While researching this month’s cover story (“Can Robots Be Trusted?” on stands now) about the challenges facing those who design social robots, we expected to spend weeks sifting through an exhaustive supply of data related to the uncanny valley—data that anchors the pervasive, but only loosely quantified sense of dread associated with robots. Instead, we found a theory in disarray. The uncanny valley is both surprisingly complex and, as a shorthand for anything related to robots, nearly useless.
From the same article:
Cynthia Breazeal, director of the Personal Robots Group at MIT, told [Sofge] that the uncanny valley is “not a fact, it’s a conjecture,” and that there’s “no detailed scientific evidence” to support it.
Sofge also talked to Karl MacDorman, director of the Android Science Center at Indiana University, in Indianapolis, who has long been investigating the uncanny valley. MacDorman’s own view is that there’s something to the idea, but it’s clearly not capturing all the complexity and nuances of human-robot interaction. In fact, MacDorman believes there might be more than one uncanny valley, because many different factors—in particular, odd combinations like a face with realistic skin and cartoonish eyes, for example—can be disconcerting.
From a study by Japanese roboticist Hiroshi Ishiguro and collaborator Christoph Bartneck, Guizzo offers:
The results of this study cannot confirm Mori’s hypothesis of the Uncanny Valley. The robots’ movements and their level of anthropomorphism may be complex phenomena that cannot be reduced to two factors. Movement contains social meanings that may have direct influence on the likeability of a robot. The robot’s level of anthropomorphism does not only depend on its appearance but also on its behavior. A mechanical-looking robot with appropriate social behavior can be anthropomorphized for different reasons than a highly human-like android. Again, Mori’s hypothesis appears to be too simplistic.
The paragraph following this one is troubling if applied to autistics in that it assumes, correctly, that robots are infinitely malleable, endlessly adjustable in terms of behavior modification, while autistics—yea, though these limits and boundaries are tested endlessly—are of course not. Still though, there’s common ground here in terms of challenges faced, potentials for new insights, and more. Elsewhere in his conclusion Guizzo relates that “Ishiguro recently told me that the possibility that his creations might result in revulsion won’t stop him from ‘trying to build the robots of the future as I imagine them.’ I for one admire his conviction.”
Autistics of both the future and the present could do worse than to maintain such resolve—and self-esteem—in the face of societal revulsion that seems at times to be all too similar.
related: The Dwellers on the Plain
related: Notes on Five Spectrums
related: Mountain Goats of the Uncanny Valley