Drone Kills Libyan Rebel Autonomously: Why ‘Killer Robots’ Need A Ban

Shortly after midnight on 26 September 1983, a warning algorithm alerted Stanislav Petrov that five missiles were bound for the Soviet Union. Petrov alerted his officers at the Kremlin, and a salvo of missiles was sent out to New York, Los Angeles, Washington DC, San Francisco, and Seattle. That is how nuclear war began in the 1980s. Of course, this is an alternative history. Here is what really happened: Petrov chose to wait and see. After 23 minutes, no missiles arrived. The nuclear war that a cutting-edge algorithm’s error nearly initiated never happened, thanks to Stanislav Petrov’s intuition. He later recounted that five missiles was an illogical number to send if you wished to destroy your enemy; it had to be a computer error.

The moral of this story is that human discretion in the use of dangerous technology matters. If the missiles and the response algorithms had been fully automated back then, nuclear war would have broken out. Fast-forward nearly forty years, however, and the USA, Russia, China, and the UK are building ultra-advanced weapons that act and respond of their own accord. Among the public, who do not work on or with them, they are known as ‘killer robots’. The name unfortunately invites images of terminators and science fiction (often in the banners of news stories). Yet it also belittles the real-world threat these devices, lethal autonomous weapons, pose to the world.

The STM Kargu drone, for example, can home in on targets and kill them without piloted guidance. Think of a tiny flying robot, armed with an explosive charge, able to target, fly, and strike without direction from its masters. Confirming the worries of artificial-intelligence and robotics experts such as Stuart Russell (professor at the University of California, Berkeley) and Noel Sharkey (professor at the University of Sheffield), a United Nations report concluded that a Kargu killed a man in Libya despite having no instruction to pursue him.

Turkish-made Kargu rotary-wing autonomous attack drone. Credit: STM

This report is no surprise to researchers in science and technology. STM’s website openly advertises the Kargu’s capabilities, such as its autonomous “guided elimination modes against targets selected on images or coordinates”. And if autonomous weapons are deployed in warfare, then those weapons will assuredly kill people autonomously. More nightmarishly, the Kargu has a self-destruct function, so resistance fighters can never turn the weapons against their masters. The implications for authoritarian states that wish to keep their people in line through technological means are chilling.

Non-state actors are already exploiting such technology. In September 2019, Houthi rebels pre-programmed 18 bomb-carrying drones and seven cruise missiles that struck Saudi Arabian oil facilities. The attack halved Saudi Arabia’s oil output and inflated world oil prices. Most worryingly, these drones were guided by GPS and made it past a richly endowed defence system. Given what non-state actors, often terrorists, can already do by co-opting existing technologies like drones, the capability of fully autonomous weapons is astoundingly frightening. Imagine autonomous drones in the hands of civil-war militias and self-styled freedom fighters!

Stuart Russell, who advises the UN, laments that the use of autonomous weapons like Kargu drones by non-state actors, including terrorists, is practically “inevitable” as the weapons become cheaper and more commonplace through market competition. Turkey has bought 500 of these Kargu drones; they fly in Libyan skies and were behind the automated killing of the anonymous rebel featured in the United Nations’ report.

Mark Esper, then the US Secretary of Defense, asserted at a conference in November 2019 that “the Chinese government is already exporting some of its most advanced military aerial drones to the Middle East, as it prepares to export its next-generation stealth UAVs when those come online”. Developments in lethal autonomous weapons just keep coming; here are some of them.

The US Navy has the X-47B, an unmanned combat jet that can refuel mid-air, land, and take off without a pilot. The Chinese military has its ‘Dark Sword’ jet, which manoeuvres in patterns humans could never tolerate; with no pilot aboard to suffer g-force side-effects, ‘Dark Sword’ has more manoeuvring capability than any human-piloted plane has ever had. Russia, meanwhile, is automating its cutting-edge T-14 Armata tank, and Kalashnikov, the Russian arms manufacturer, has created automated combat modules that allow artillery guns and tanks to perceive, select, and fire on targets autonomously.

The Dark Sword fighter jet. Credit: Nationalinterest.org

The competition between nations amounts to an arms race. Each country looks to develop weapons able to overwhelm and neutralise the others’, which means constant redevelopment and reinvestment to outcompete rivals in capability. Countries at a military disadvantage, whether in soldier numbers or air superiority, can make up the difference with disruptive, levelling technologies. Defence spending on these advanced killing machines is rising. One big investment is in drone swarms equipped with both piloting and kill functions, coordinating through machine-learning algorithms to identify weaknesses in an enemy’s defences and eliminate targets as efficiently as possible. Such developments inspired Professor Russell to present to the United Nations a satirical video, ‘Slaughterbots’, depicting their hypothetical use, complete with a mock TED-style product launch and tragic news reporting.

Despite such illustrations coming true, the United Nations struggles even to set parameters for the ‘ethical’ use of these weapons. As Ozlem Ulgen, an international lawyer, ethicist, and adviser to the UN, outlines, the creation of these weapons can be argued to be illegal under international humanitarian law because the machines cannot recognise a surrender or be sure to target correctly. Under international law, soldiers must respond appropriately to those who surrender: taking them prisoner, giving medical assistance to the injured, and so on.

Nonetheless, the law on paper and the law in practice are different. Soldiers sometimes forgo following those rules; likewise, without more explicit wording against lethal autonomous weapons in legislation, countries such as China and Turkey sidestep the ethical issues and feel free to flaunt their weapons’ capabilities. The Russian ambassador to the UN, according to Stuart Russell, claims the weapons do not exist yet, so nothing needs to be done. The British Home Office is also against a ban, telling The Guardian in 2015 that “At present, we do not see the need for a prohibition on the use of LAWS [lethal autonomous weapons systems], as international humanitarian law already provides sufficient regulation for this area.” The confidence of that claim sits uneasily with Ozlem Ulgen’s analysis, and with the consensus within the robotics community.

Some commentators claim, however, that introducing lethal autonomous weapons to the battlefield would be a good thing, because they could replace real soldiers on the ground, who are themselves imperfect. The idea is broached by Melanie Phillips, The Times commentator, on the BBC Radio 4 programme The Moral Maze, where she suggests robots fighting robots is a better state of affairs. But that is nonsense. Real war in Syria and Libya, and terrorism in Sub-Saharan Africa, shows robots targeting humans. Ronald Arkin, a roboticist, concurs with Phillips that autonomous weapons could be more accurate and less likely to go haywire than humans; however, how these weapons would actually be used is far too unpredictable to safely make such claims about relative safety, since there is no deployment at comparable scale against which a reasonable comparison, least of all one favouring autonomous weapons, could be drawn.

Indeed, the flaw in lethal autonomous weapon (LAW) systems is precisely that they are too efficient: they will execute their commands, and their victims, without conscience, and with zero intuition for when refusing an order would be the right thing to do.

Those in the know, such as roboticist Noel Sharkey and artificial-intelligence professor Stuart Russell, want lethal autonomous weapons banned. LAW are, in Russell’s words, “weapons of mass destruction”. At CogX, an AI festival, in 2020, Russell said a “vanful of lethal autonomous weapons are capable of causing more deaths than nuclear weapons” and at “a cheaper price”. Yet there has been seemingly little public uproar about these killing machines. I for one have petitioned the Commons and lobbied my MP; everyone should.

Campaigns like Stop Killer Robots are supported by worried figures in academia and industry, such as the late Stephen Hawking. One open letter, signed by Hawking alongside 4,502 AI and robotics researchers and 26,215 other concerned signatories, pleaded for firmer regulation of the weapons. It is remarkable that the people most in the know about the technology are at the forefront of efforts to regulate it. Given how enthused technologists usually are about their innovations and their implications, it is a red flag that so many of those closest to developing these algorithms wish to close down this particular application of them.

Just as physicists worked to curb nuclear weapons (Treaty on the Prohibition of Nuclear Weapons, 2017), biologists to ban biowarfare (Biological Weapons Convention, 1972), and chemists to ban poison gas (Geneva Protocol, 1925), AI and robotics experts are advocating far more action than politicians are currently taking or the public is lobbying for.

The lives these weapons put at risk are too many for citizens to leave the matter in the hands of government officials and military contractors alone. Lethal autonomous weapons are not merely robot pitted against robot but competing networked algorithms working in chaotic coordination. Because ‘friendly’ algorithms would be competing against ‘enemy’ algorithms and learning from each other’s behaviour, the risks of escalation are too great.

Video demo from Military Technology debating WMD status

The algorithms are too complex for humans to fully understand; many programmers are building systems whose behaviour they themselves cannot explain. But opacity in how algorithms reach their conclusions is not the only danger; speed is the most worrying. Humans simply cannot process events at the pace at which autonomous weapon swarms would operate.

The narrative expounded by the British Home Office, that existing regulation already covers lethal autonomous weapons and that humans retain control, is therefore decidedly weak. As Peter Lee, a computer scientist and ethicist at Birmingham City University, puts it, “artificial intelligence is definitely a great tool if we want to help a human but again and again we see it’s a very poor tool when trying to replace a human”.

The discretion Stanislav Petrov exercised in ignoring the algorithm that mistakenly warned of a nonexistent missile strike would be absent in an autonomous weapons scenario. Given Russell’s claim that these weapons could cause more deaths than nuclear weapons, the implications of their unchecked proliferation are grave. A narrative of risk reduction and management for such devices is therefore arguably misplaced: without the hazard, there is no risk to manage. Without lethal autonomous weapons there would be no weapon to use, regulate, and renegotiate for different battlefields, and no opportunity for terrorists to seize. The real discretion must come collectively, now, by campaigning to ban these weapons before they get out of hand. For Libyan victims of Turkey’s Kargu drones, lethal autonomous weapons already have.
