Killing machines

There is rising concern about machines such as drones and battlefield robots that could soon be given the decision on whether to kill someone. Since I first posted this a couple of weeks ago, the UN has published its own thoughts, as the Daily Mail reports today:

http://www.dailymail.co.uk/news/article-2318713/U-N-report-warns-killer-robots-power-destroy-human-life.html 

At the moment, drones and robots are essentially just remote-controlled devices, and a human makes the important decisions. In the sense that a human uses them to dispense death from a distance, they aren’t all that different from a spear or a rifle, apart from the scale of destruction and the distance from which death can be dealt. Without consciousness, a missile is no different from a spear or bullet, nor is the remote-controlled machine it is launched from. It is the act of hitting the fire button that is most significant, but proximity is important too. If an operator is thousands of miles away and isn’t physically threatened, or perhaps has never even met people from the target population, other ethical issues start emerging. But those are ethical issues for the people, not the machine.

Adding artificial intelligence to let a machine decide whether a human is to be killed isn’t difficult per se. If you don’t care about killing innocent people, it is pretty easy. It is only made difficult because civilised countries value human lives, and because they distinguish between combatants and civilians.

Personally, I don’t fully understand the distinction between combatants and civilians. In wars, combatants often have no real choice but to fight, or are conscripted, and they are usually told what to do, often by civilian politicians hiding in faraway bunkers, with strong penalties for disobeying. If a country goes to war on the basis of a democratic mandate, then surely everyone in the electorate is guilty, even pacifists, who accept the benefits of living in the host country but would prefer to avoid the costs. Children are the only innocents.

In my analysis, soldiers in a democratic country are public sector employees like any other, just doing a job on behalf of the electorate. But that depends to some degree on them keeping their personal integrity and human judgement. The many military personnel who take pride in following orders could be thought of as dehumanised, reduced to killing machines. Many would actually be proud to be thought of as killing machines. A soldier like that, who merely follows orders, deliberately abdicates human responsibility. Having the capability for good judgement but refusing to use it, such a soldier sinks to a lower moral level than a drone. At least a drone doesn’t know what it is doing.

On the other hand, disobeying a direct order may soothe issues of conscience but incur huge personal costs, anything from shaming and peer disapproval to execution. Balancing that is a personal matter, but it is the act of balancing it that is important, not necessarily the outcome. Giving some thought to the matter and wrestling at least a bit with conscience before acting makes all the difference. That is something a drone can’t yet do.

So even at the start, the difference between a drone and at least some soldiers is not always as big as we might want it to be; for other soldiers it is huge. A killing machine is competing against a grey scale of judgement and morality, not a black-and-white equation. In those circumstances, in a military that highly values following orders, human judgement is already no longer an essential requirement at the front line. In that case, the leaders might send the drones into combat with a defined objective, the human decision already taken by them, the local judgement of who or what to kill assigned to adaptive AI, algorithms and sensor readings. For a military such as that, drones are no different to soldiers who do what they’re told.

However, if the distinction between combatant and civilian is required, then someone has to decide the relative value of different classes of lives. Then they either have to teach it to the machines so they can make the decision locally, or build the costs of potential collateral damage into the equations at head office. Or thirdly, and most likely in practice, a compromise can be found where some judgement is made in advance and some locally. Finally, it is even possible for killing machines to make decisions in the easier cases and refer difficult ones to remote operators, as sketched below.
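To make that compromise concrete, here is a minimal Python sketch of such a tiered decision rule, assuming a hypothetical on-board classifier that reports a confidence score. Every name and threshold here is invented purely for illustration; this is not a description of any real system:

```python
from dataclasses import dataclass
from enum import Enum


class Decision(Enum):
    ABORT = "abort"    # confidence too low: do not engage
    REFER = "refer"    # hard case: hand it to a remote human operator
    ENGAGE = "engage"  # pre-set rules and local confidence both satisfied


@dataclass
class Assessment:
    combatant_confidence: float  # 0.0-1.0, from hypothetical on-board sensors/AI
    civilians_nearby: int        # estimated bystanders within the weapon's effect radius


def decide(assessment: Assessment,
           engage_threshold: float = 0.95,
           refer_threshold: float = 0.60,
           max_bystanders: int = 0) -> Decision:
    # The thresholds encode the judgement made in advance at head office;
    # the assessment is the judgement made locally from sensor data.
    if assessment.civilians_nearby > max_bystanders:
        return Decision.REFER  # any collateral risk escalates to a human
    if assessment.combatant_confidence >= engage_threshold:
        return Decision.ENGAGE
    if assessment.combatant_confidence >= refer_threshold:
        return Decision.REFER
    return Decision.ABORT


# Example: high confidence but one bystander nearby -> referred, not engaged
print(decide(Assessment(combatant_confidence=0.97, civilians_nearby=1)))
```

The point of the sketch is where the moral weight sits: the thresholds are the decision taken in advance by the leaders, and everything the machine does locally is just arithmetic against them.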

We live in an electronic age, with face recognition, friend-or-foe electronic ID, web searches, social networks, location and diaries, mobile phone signals and lots of other clues that might give some knowledge of a target and potential casualties. How important is it to kill or protect this particular individual or group, or take that particular objective? How many innocent lives are an acceptable cost, and from which groups – how many babies, kids, adults, old people? Should physical attractiveness or the victim’s profession be considered? What about race or religion, or nationality, or sexuality, or anything else that could possibly be found out about the target before killing them? How much should people’s personal value be considered, or should everyone be treated equally at the point of potential death? These are tough questions, but the means of getting hold of the data are improving fast and we will be forced to answer them. By the time truly intelligent drones are capable of making human-like decisions, they may well know who they are killing.

In some ways this far future, with a smart or even conscious drone or robot making informed decisions before killing people, isn’t as scary as the time between now and then. Terminator and Robocop may be nightmare scenarios, but at least in those there is clarity about who the enemy is. Machines don’t yet have anywhere near that capability. However, if an objective is considered valuable enough, military leaders could already set a machine to kill people even when there is little certainty about the role or identity of the victims. They may put in some algorithms and crude AI to improve performance or reduce errors, but the algorithmic uncertainty and the callous, uncaring dispatch of potentially innocent people are very worrying.

Increasing desperation could be expected to lower the barriers to use. So could a lower regard for the value of human life; in tribal conflicts, people often don’t consider the lives of the opposition to have much value. This is especially true in terrorism, where the objective is often to kill innocent people. It might not matter that the drone doesn’t know who it is killing, as long as it might be killing the right target as part of the mix. I think it is reasonable to expect a lot of battlefield use, and certainly terrorist use, of semi-smart robots and drones that kill relatively indiscriminately. Even when truly smart machines arrive, they might be set to malicious goals.

Then there is the possibility of rogue drones and robots – the Terminator/Robocop scenario. If machines are allowed to make their own decisions and then to kill, can we be certain that safeguards are in place so that they can always be safely deactivated? Could they be hacked? Hijacked? Sabotaged by having their fail-safes and shut-offs deactivated? Have their ‘minds’ corrupted? As an engineer, I’d say these are realistic concerns.
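As one illustration of what such a safeguard might look like, here is a minimal Python sketch of a dead-man’s-switch pattern: the machine stays armed only while fresh, cryptographically authenticated heartbeats keep arriving from its operator. The key, the timeout and the class names are all hypothetical simplifications for illustration only:

```python
import hashlib
import hmac
import time

# Hypothetical parameters, purely for illustration
SHARED_KEY = b"not-a-real-key"  # real systems would use managed, rotated keys
HEARTBEAT_TIMEOUT = 5.0         # seconds of silence before the machine disarms


def verify_heartbeat(message: bytes, signature: bytes) -> bool:
    """Accept only heartbeats signed with the operator's shared key."""
    expected = hmac.new(SHARED_KEY, message, hashlib.sha256).digest()
    return hmac.compare_digest(expected, signature)


class FailSafe:
    """Dead-man's switch: armed only while valid heartbeats keep arriving."""

    def __init__(self) -> None:
        self.last_valid = time.monotonic()

    def on_heartbeat(self, message: bytes, signature: bytes) -> None:
        if verify_heartbeat(message, signature):
            self.last_valid = time.monotonic()

    def weapons_enabled(self) -> bool:
        # Fail safe, not fail deadly: lost contact means disarm.
        return (time.monotonic() - self.last_valid) < HEARTBEAT_TIMEOUT


# Example: a forged heartbeat is ignored, so the timeout eventually disarms
fs = FailSafe()
fs.on_heartbeat(b"tick", b"wrong-signature")
print(fs.weapons_enabled())  # True only while within the timeout window
```

Of course, a design like this only helps while the code on board is the code that was installed. The sabotage worry above is precisely that an attacker quietly removes or bypasses the check, which is why the deactivation path needs to be protected as carefully as the weapons themselves.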

All in all, it is a good thing that concern is rising and we are seeing more debate. It is late, but not too late, to make good progress in limiting and controlling the future damage killing machines might do – not just directly, in the loss of innocent life, but to our fundamental humanity, as armies get increasingly used to delegating responsibility to machines to deal with a remote, dehumanised threat. Drones and robots are not the end of warfare technology; there are far scarier things coming later. It is time to get a grip before it is too late.

When people fought with sticks and stones, at least they were personally involved. We must never allow personal involvement to disappear from the act of killing someone.
