Let’s face it, futurists mostly love robots. The word (from the Czech robota, meaning servitude or drudgery, coined in the 1920s), the history of the idea (back to the Greeks, through Leonardo, to Frankenstein), the associations. So maybe it’s not surprising that one of the most intriguing stories I’ve read recently – and meant to blog before now – is about the rapidly emerging issue of robot ethics, courtesy of SEED magazine.

The SEED article notes that there have been no fewer than three separate government reports published worldwide on robot ethics this year – one from Japan, one from South Korea, and one from the European Union’s EURON project, whose ‘roboethics roadmap’ (which is still open to comments) can be downloaded from here.

As SEED suggests:

“The close timing of these three developments reflects a sudden upswing in international awareness that the pace of progress in robotics is rapidly propelling these fields into uncharted ethical realms. Gianmarco Veruggio, the Genoa University roboticist who organized the first international roboethics conference in 2004, says, “We are close to a robotics invasion.” Across the technologically developed world, we’re building progressively more human-like machines, in part as a result of a need for functional, realistic prosthetics, but also because we just seem to be attracted to the idea of making them.”

Evidence of the pleasure we take in the challenge of building them comes from recent coverage of July's robot soccer world cup on the Core77 blog, and not only because some large German robots defeated the previously Federer-esque Japanese Team Osaka in the final. (There are also pictures). The goal of the RoboCup Federation, as it were, is to field, by 2050, a team of fully autonomous humanoid soccer players able to defeat the human World Cup winners.

The robot ethics most famous from science fiction are Isaac Asimov's Three Laws, which are concerned with ensuring robots' continuing service to humans (there's a sketch of them as code after the list):

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given to it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
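Read as a specification, the Laws are a strict priority ordering over possible actions: the First Law vetoes the Second, the Second vetoes the Third. Here's a minimal Python sketch of just that ordering – the Action type and its yes/no fields are hypothetical placeholders, since actually computing judgements like "would this harm a human?" is precisely the unsolved part:

```python
# A toy encoding of Asimov's Three Laws as a lexicographic priority
# ordering over candidate actions. All of the fields below are
# hypothetical placeholders: deciding whether an action "harms a human"
# is exactly the hard part, and nothing here attempts it.

from dataclasses import dataclass
from typing import List

@dataclass
class Action:
    name: str
    harms_human: bool        # First Law: would this action injure a human?
    allows_human_harm: bool  # First Law: would it let a human come to harm?
    disobeys_order: bool     # Second Law: does it conflict with a human order?
    endangers_self: bool     # Third Law: does it risk the robot's existence?

def choose_action(candidates: List[Action]) -> Action:
    # Python compares tuples left to right, and False sorts before True,
    # so this key enforces the Laws' strict priority: the First Law
    # dominates the Second, and the Second dominates the Third.
    return min(
        candidates,
        key=lambda a: (
            a.harms_human or a.allows_human_harm,  # First Law first...
            a.disobeys_order,                      # ...then the Second...
            a.endangers_self,                      # ...then self-preservation.
        ),
    )

# The self-sacrifice dilemma: standing by lets a human come to harm,
# so the ordering prefers the self-endangering alternative.
stand_by = Action("stand by", False, True, False, False)
shield = Action("shield the human", False, False, False, True)
assert choose_action([stand_by, shield]).name == "shield the human"
```

Note that the ordering itself is trivial to implement; everything interesting is hidden in the predicates. Which is, roughly, the problem the rest of this post is about.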

(There’s an amusing parody in Michael Frayn’s 1965 satire The Tin Men, set in the “William Morris Institute of Automation Research”, where the researchers struggle to get their robots to decide when to sacrifice themselves for humans).

The South Korean charter, in contrast, is more concerned with the risk of people becoming emotionally attached to their robots. It seems that as a species we’re easily fooled by displays of empathetic behaviour, even when they’re entirely simulated – as the response to the Sony AIBO, and to its demise, goes to show.

At the same time, one of the most famous lines in futures work looks at this from the robots’ point of view: “robots will have rights”. As Sohail Inayatullah writes in an article reviewing this projection, this is not an issue about technology, but about power.

And this is the lesson we perhaps need to draw from the news that robots are now being deployed by the US Army in Iraq. It’s not about the technology. The US isn’t the first, although it has the biggest research programme: Israel and South Korea are already deploying robot border guards, and China, Singapore and the UK are also using military robots.

The line in the sand, however, is the switch from remote control to fully autonomous machines. That is the direction of US research. As Professor Noel Sharkey wrote recently,

The US National Research Council advises “aggressively exploiting the considerable warfighting benefits offered by autonomous vehicles”. They are cheap to manufacture, require less personnel and, according to the navy, perform better in complex missions. … This is dangerous new territory for warfare, yet there are no new ethical codes or guidelines in place.

The US Army’s response has been to fund a project – shades of both Asimov and Michael Frayn – to equip robot soldiers with a conscience. Sharkey’s view?

In reality, a robot could not pinpoint a weapon without pinpointing the person using it or even discriminate between weapons and non-weapons. I can imagine a little girl being zapped because she points her ice cream at a robot to share.