
Would you risk a life to save a robot?

It seems like a no-brainer. To all but the most hardcore misanthropes, it doesn't make any sense to risk the lives or well-being of humans to preserve a robot. But a new study suggests that many of us would - provided the robot seems human enough. So where do we draw the line?

One of the most desirable applications of automation is the use of AI robots for dangerous tasks that pose a risk of injury or death to human beings - for example, disarming mines, underwater rescue, mining, and industrial work in extreme temperatures or environments. The 'unliving' robot can take on the risk, and at the end of the day, all that's lost is a machine rather than a precious human life.

The cynics among you may already suspect that the owners of a sufficiently expensive piece of equipment would rather risk the lives of an underprivileged class of humans, but that's not what we're talking about here.

What measure is a robot?

A study led by Sari Nijssen of Radboud University in Nijmegen in the Netherlands and Markus Paulus, Professor of Developmental Psychology at Ludwig-Maximilians-Universitaet (LMU) in Munich, aimed to find out the degree to which people show concern for robots and where machines land in terms of our moral reckoning.

The study, published in the journal Social Cognition, posed the question: "Under what circumstances and to what extent would adults be willing to sacrifice robots to save human lives?"

This involved posing a hypothetical moral dilemma to the participants based on the classic trolley problem, which also has some traction in the research around self-driving cars: would they be prepared to put a single individual at risk in order to save a group of injured persons? In the scenarios presented, the intended sacrificial victim was either a human, a humanoid robot with an anthropomorphic physiognomy that had been humanized to various degrees, or a non-humanoid robot that was obviously a machine.

How easy is it to give a robot emotions? In Star Trek, just a simple chip.

The results showed a predictable tendency that still went further than I expected: the more the robot was humanized, the less likely participants were to sacrifice it. This makes sense: we're more likely to feel a robot has value worth preserving if it looks something like Star Trek's Data, Andrew from Bicentennial Man, or Blade Runner's replicants than if it looks like a toaster or a loudspeaker, no matter how intelligent it gets.

Some scenarios included priming stories in which the robot was depicted as having its own thoughts, feelings, perceptions, and experiences. In these cases, participants were more likely to put anonymous humans at risk than to harm the robot. Many subjects were even prepared to sacrifice injured humans to spare the robot - despite the fact that the thoughts and feelings of the humans involved should go without saying.

According to Paulus: "This result indicates that our study group attributed a certain moral status to the robot. One possible implication of this finding is that attempts to humanize robots should not go too far. Such efforts could come into conflict with their intended function - to be of help to us."

Empathy is the key, but it's also a trap

The development of artificial intelligence won't necessarily result in robots with consciousness as we understand it. As I've written before, even the most sophisticated AIs don't think anything like our organic brains do. But AI isn't so much about making a machine that thinks and feels like a human as about making one that seems to do so. As AI becomes capable of more and more tasks, we'll have to be more careful about how we apply the term 'intelligence', lest we fall into the trap of over-anthropomorphizing robots.

The idea of making an artificial human is an age-old dream of our species, from myth and legend to contemporary science fiction movies with android protagonists who have their own inner emotional lives. The emerging wave of virtual beings includes not just VR and holographic representations, but also convincingly human-looking android robots in physical space.

The trouble with humanoid robots is that, although there's no evidence that AI can have consciousness, emotion, or anything approaching them, it can get pretty convincing, especially to the layperson. Advances in natural language processing (NLP) mean that we will more often speak to machines in our own voices and hear them talk back in a natural-sounding way, with emotional inflections. The AI 'remembers' our past interactions, picks up on the emotional tone of our voices, and mimics emotions back to us - after all, it has a huge amount of video and audio data from which to learn our mannerisms.

Human-seeming AI is attractive, of course. It triggers our sense of wonder to see such a technological marvel in the first place, and it engages the natural empathy we have for our own kind. In a future where robots could be our butlers, doctors, caregivers, and therapists, this could be a fantastic asset to their function, too.

The trouble, of course, is that it remains emotional manipulation by something that can never truly reciprocate our empathy. And such AIs don't necessarily act without bias: bias from their training data, but also from the corporations or governments that manufacture and sell them. The AI butler of the future may be made by Amazon, for example - a more advanced Alexa biased to charm you into buying products from Amazon rather than from its competitors.

Creating human-seeming AIs may well be a laudable goal, and certainly something primeval seems to drive us towards it, but, as the study shows, it could be a trap. So long as there's a point where we're willing to sacrifice humans in the out-group (strangers, nameless people) to preserve the robot 'friend' that's more familiar to us, we need to tread carefully and be mindful of the difference between reality and illusion, even if marketing hype will do its best to blur it.

What do you think about humanoid AIs? Should we develop them at all?
