As a fan of science fiction from a young age, I have long been fascinated by tales of artificial intelligence. Isaac Asimov’s extensive use of robots as core characters made famous the Three Laws of Robotics:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
This was the simplistic, human-friendly translation of the highly sophisticated programming of ‘intelligent’ robots in Asimov’s world, and the stories are an exploration of ways in which robots appear to defy or transcend them. Asimov’s Three Laws have been used by other authors since, and while they may not be the best approach to designing a safe and ethical robot (see, e.g., Why Asimov’s Three Laws Of Robotics Can’t Protect Us), an intelligent, humanoid robot will inevitably need to be programmed with something very similar.
Intelligent, humanoid robots have been popular in film and television for decades – e.g., Michael Crichton’s Westworld (1973), Data in Star Trek: The Next Generation, the T-800 in Terminator 2: Judgment Day (1991), Blade Runner (1982), A.I. Artificial Intelligence (2001) – and in the past couple of years we’ve had the very interesting Ex Machina (2015) and the ongoing series Westworld, both of which focus on human nature in contrast to machine nature.
In both Ex Machina and Westworld, we see the conflict of essential humanity. Is it amoral to treat a machine like a machine, something to be used and disposed of? Is the amorality any different if the machine appears to be human? If the machine appears to be human, is it absurd to treat it as such?

The real tragedy, of course, is how easy it is for the human mind to dehumanise humans. Human history is full of the horrors of slavery and genocide, horrors that seem to grow worse every day.
Inevitably, one of the modern drivers for humanoid robots is sex.
“Sex robots are coming,” Leilani says in Sex Robots Are Being Made To Replace Men By 2025. Life-like sex dolls are already on the market [and while they may look plastic and unnatural, the same could be said of many desperate socialites] and RealDoll creator Matt McMullen is working on adding artificial intelligence. (Interestingly, he “believes in crafting the dolls with a slightly unrealistic quality because that would help prevent a case of uncanny valley, when a person responds with unease or revulsion toward a computer generated product, such as a robot, that too closely resembles a human.”)
[Sexbots are a recurring curiosity of mine – see, for example, the villanelle Sexbot and the dark rhyme The Patience of a Saint – mostly from the perspective of how a sentient sexbot would experience the world. Most stories with this theme are one-dimensional and focus on the erotic, but I have difficulty seeing beyond the horror of abuse and non-consensual sex.]
As a once-mythical future in which sexbots are normalised grows ever closer, concerns are already being raised. What, for instance, if people make personal robot copies of celebrities? Many of these concerns are being voiced by the Campaign Against Sex Robots.
It’s an interesting site. I don’t fully agree with the arguments made, e.g.: “Rape is acting on a body that is not engaged in simultaneous sex. Rape can be forced, or bought in a brothel in Germany or Holland. Sex is mutually simultaneous, rape is not.” This is the idea that sex without enthusiastic consent is rape, with the implication that sex work is neither fully enthusiastic nor fully consensual.
Enthusiastic consent is a problematic model, however. Charlotte Shane says, “Labeling all of those experiences ‘rape’ erases the truth, my reality, and my agency. It also means … that my ‘yes’ and my ‘no’ while I’m working are equally meaningless, so there would be no difference between my experience with a client who respects my boundaries and one who doesn’t.” (“Getting Away” With Hating It: Consent in the Context of Sex Work)
And it’s largely irrelevant. As long as sexbots are non-sentient machines, consent is not an issue – but nor is rape technically an issue; the sexbot is merely a sex toy. On the other hand, if sexbots ever do achieve sentience, then consent will be a major issue. And it raises an interesting question that I’ve asked before in other contexts:
If you modify a person (using genetics, chemicals, mind control, whatever) so that their desire for sex overrides any thought of denial, so that no matter what is done to them the pleasure will outweigh the pain and humiliation… can sex with such a person ever be consensual?
(“A robot must obey orders given it by human beings” takes priority over “A robot must protect its own existence”.)
We are a long way from sentient sexbots, but should we fear non-sentient sexbots? Should we fear them for the same reason we fear immersive video games full of sexual violence? What if paedophiles were given child-sexbots to play with? Would the ability to act out rape fantasies with sexbots encourage rape of real people? These are all important questions that will no doubt lead to new legislation one day.
Maybe in 99% of cases the scenario will be simpler. A lonely person enjoying the company and pleasure of a sexbot. (“Not only is ze great in bed, ze cooks, ze cleans, and ze will even laugh at your jokes – no matter how many times you tell them!”)
“Key to all this,” Rowan Pelling says (Sex robots would give us only what we think we want, and not what we truly desire), “is the desire to be chosen. We want the elusive beloved to say of their own free will ‘Yes, it is you I want,’ against all our previous expectation. This is the miracle most of us seek – not a pre-programmed machine.”
But in the absence of such a miracle, how many of us would be willing to accept the illusion of it? A pre-programmed machine that says, convincingly, “Yes, it is you I want…”