Debate heats up over Killer Robot technology

By Ann Rogers, co-author with John Hill of Unmanned: Drone Warfare and Global Security


One example of what’s being developed right now: a robot built by Boston Dynamics, a company recently acquired by Google.

Sci-fi writer William Gibson said, “The future is already here. It’s just unevenly distributed.” While the idea of killer robots fighting wars or maintaining law and order sounds futuristic, the race to ban autonomous weapons systems before they are distributed to the world’s arsenals is about to get underway. The issue will be taken up later this year at the Convention on Conventional Weapons, an arms control regime that governs the use of controversial technologies considered “excessively injurious” or having “indiscriminate effects.” Any slowdown in the development of “lethal autonomous robotic systems” (LARS) threatens an industry potentially worth billions – advocates are out in force suggesting that these technologies will make war safer and more ethical. Critics counter that these systems will profoundly alter the dynamics of global security: they will spur a new arms race, lower the threshold for using force, violate international humanitarian law and undermine the basic ethical precepts that govern how wars should be fought.

The concept of autonomy essentially describes the extent to which a machine is able to reason its way towards solving a problem when confronted with uncertainty. In weapons systems, different levels of autonomy exist along a continuum. At one end, “human in the loop” systems are controlled – albeit remotely – by someone somewhere. For example, the RAF’s Reaper drones that flew over Afghanistan were piloted from Waddington or from the US facility in Nevada, and it was RAF personnel who chose targets and decided when to fire on them. Further along the continuum are “human on the loop” systems, in which the machine identifies and selects the targets but a human makes the decision whether to launch an attack. On the path towards fully autonomous systems, the Rubicon is crossed when the decision to use lethal force is delegated to the machine itself, without any human oversight or intervention. There is a clear military advantage in removing the human entirely from the loop: a human weighing life-and-death decisions takes time, whereas a machine will simply act.

While experts suggest that we are still perhaps 20 to 30 years away from developing systems that can identify and kill targets independently of human control, some killer robots are already operating in the field. For example, South Korea has trialled a robot system to guard its border with North Korea. The Samsung SGR-A1 “sentry bot” is a robotic machine gun and grenade launcher system equipped with cameras as well as a microphone and speaker. It can identify and shoot a target from two miles away. How does it distinguish between friend and foe? That’s easy: “When you cross the line, you’re automatically an enemy,” Myung Ho Yoo, principal research engineer at Samsung’s Optics & Digital Imaging Division, explained in a 2007 interview with IEEE Spectrum. Thus far the system has operated with a “human on the loop,” that is, a South Korean soldier sitting somewhere watching the live feed from the bot. It is the soldier who decides whether something is a threat, but having a human make the call is not an operational requirement – it is a political one.

The good drone?

Any military commander will tell you that soldiers have drawbacks: you have to train them and pay them. They need to eat and sleep. They blink. They get bored, angry, scared or irrational. They make mistakes, defy orders, and run away rather than stand and fight. At times, they engage in atrocities and commit war crimes. LARS proponents argue that it just makes sense to replace these fallible and mortal human cogs of the war machine with advanced technologies that can fly longer, see further, process and store information more effectively and fire weapons faster and more accurately than any human can. And best of all, robots don’t figure in anyone’s body counts. Drones offer countries the ability to “put the warhead on the forehead” of individual enemies without ever risking the lives of their soldiers. This line of reasoning accounts for the current enthusiastic embrace of unmanned systems by military and political leaders, and it has also swayed the general public: both British and American public opinion is already tilted favourably towards existing drone programmes. For example, a 2013 RUSI/YouGov poll showed that 55 per cent of Britons would support a drone strike on a “known terrorist” abroad, with support falling off depending on how much collateral damage might be caused.

The low political cost associated with drone warfare is exactly what makes it so dangerously attractive to policy-makers. The US had a longstanding ban on assassinations, but the hunt for “terrorists” after 9/11 proved reason enough to begin violating it, and the Predator drone became the body-bagless method by which such operations could take place without arousing much public ire, or even interest. Drone strikes in the non-war zones of Pakistan, Yemen and Somalia have led to an estimated 1,000 civilian deaths, yet rather than ceasing a practice begun under the Bush presidency, Obama dramatically increased the number of strikes when he took office. As the saying goes, we shape our tools and then our tools shape us. When governments have the tools to attack “enemies” with little political or military fall-out, they seem very willing to do so, destabilizing global security in the process.

Robo-phobic?

The main obstacle to developing and deploying LARS is a skittish public that still isn’t comfortable with the idea of outsourcing life-and-death decisions to computers. To assuage public concerns in the UK, the Ministry of Defence has clearly stated that its remotely piloted combat missions will “always” keep highly trained and accountable human operators on the loop, a stance reiterated by the Commons Defence Committee in its March report on unmanned aerial systems. This stance is more than likely to crumble, however, when the UK faces enemies operating autonomous systems.

The single biggest incentive for adopting killer robots will be that “everyone else is doing it.” Currently some 76 countries are developing unmanned systems, including China and Russia, of course, but also Iran, not to mention groups such as Hamas and Hezbollah. Israel, which operates the Iron Dome, a quasi-autonomous defensive system that identifies and shoots down enemy rockets, is a world leader in drone technology. The US Air Force argues that greater autonomy will become a military necessity because taking humans out of the decision-making loop shortens decision time, a factor that could be critical in a confrontation with an autonomous enemy. It has asked for specific guidance from political leaders about how to square such tactical requirements with the moral and legal problems associated with delegating death to machines. The role of the military is to win: it is up to policy-makers to decide at what cost.

The window is closing

Serious problems lie behind the easy claim that, thanks to technology, we are being gifted systems that will make war better and safer. LARS are unlikely to be as accurate and efficient as their supporters claim. Whatever the advances in computer programming, robots will never be able to make nuanced ethical decisions about friends and foes, threats and non-threats. Deciding who will be held accountable when a killer robot makes a mistake also raises thorny questions for international law. And the lowering of the political costs associated with military action offends the basic just-war tenet that the use of force should always be a last resort.

Getting states to place limits on their ability to wage war has never been easy, but it’s not impossible. The Campaign to Stop Killer Robots hopes to follow in the path of the Nobel Prize-winning International Campaign to Ban Landmines, an initiative that greatly reduced and delegitimized the use of weapons that visited indiscriminate violence on civilian populations, putting constraints on weapons that had killed and maimed generations of civilians. This year at the Convention on Conventional Weapons, governments will have an opportunity to stop such indiscriminate carnage before it ever takes place, by supporting efforts to implement a legally binding international treaty prohibiting the research, development and use of autonomous weapons. With such a treaty in hand, a dangerous and disruptive global arms race in autonomous weapons technology can still be avoided.

Ann Rogers teaches International Relations and Media Studies at Royal Roads University, Canada. She is the author of Unmanned: Drone Warfare and Global Security (Pluto 2014) and Secrecy and Power in the British State (Pluto, 1997).

John Hill was formerly the China Watch editor for Jane’s Intelligence Review, and has reported widely on security matters for a range of Jane’s publications.

Unmanned is available to buy on the Pluto Press website, with a 10% discount.
