Ex-Google Engineer: ‘Killer Robots’ Could Start War

As governments ramp up the AI arms race to direct autonomous robots with a ‘license to kill’, the risk of horrible mistakes rises as well, including the possibility of starting a robot war.  ⁃ TN Editor

A new generation of autonomous weapons or “killer robots” could accidentally start a war or cause mass atrocities, a former top Google software engineer has warned.

Laura Nolan, who resigned from Google last year in protest at being sent to work on a project to dramatically enhance US military drone technology, has called for all AI killing machines not operated by humans to be banned.

Nolan said killer robots not guided by human remote control should be outlawed by the same type of international treaty that bans chemical weapons.

Unlike drones, which are controlled by military teams often thousands of miles away from where the flying weapon is deployed, killer robots, Nolan said, have the potential to do “calamitous things that they were not originally programmed for”.

There is no suggestion that Google is involved in the development of autonomous weapons systems. Last month a UN panel of government experts debated autonomous weapons and found Google to be eschewing AI for use in weapons systems and engaging in best practice.

Nolan, who has joined the Campaign to Stop Killer Robots and has briefed UN diplomats in New York and Geneva over the dangers posed by autonomous weapons, said: “The likelihood of a disaster is in proportion to how many of these machines will be in a particular area at once. What you are looking at are possible atrocities and unlawful killings even under laws of warfare, especially if hundreds or thousands of these machines are deployed.

“There could be large-scale accidents because these things will start to behave in unexpected ways. Which is why any advanced weapons systems should be subject to meaningful human control, otherwise they have to be banned because they are far too unpredictable and dangerous.”

Google recruited Nolan, a computer science graduate from Trinity College Dublin, to work on Project Maven in 2017 after she had been employed by the tech giant for four years, becoming one of its top software engineers in Ireland.

She said she became “increasingly ethically concerned” over her role in the Maven programme, which was devised to help the US Department of Defense drastically speed up drone video recognition technology.

Instead of using large numbers of military operatives to spool through hours and hours of drone video footage of potential enemy targets, Nolan and others were asked to build a system where AI machines could differentiate people and objects at an infinitely faster rate.

Google allowed the Project Maven contract to lapse in March this year after more than 3,000 of its employees signed a petition in protest against the company’s involvement.

“As a site reliability engineer my expertise at Google was to ensure that our systems and infrastructures were kept running, and this is what I was supposed to help Maven with. Although I was not directly involved in speeding up the video footage recognition I realised that I was still part of the kill chain; that this would ultimately lead to more people being targeted and killed by the US military in places like Afghanistan.”

Although she resigned over Project Maven, Nolan has predicted that autonomous weapons being developed pose a far greater risk to the human race than remote-controlled drones.

She outlined how external forces ranging from changing weather systems to machines being unable to work out complex human behaviour might throw killer robots off course, with possibly fatal consequences.

“You could have a scenario where autonomous weapons that have been sent out to do a job confront unexpected radar signals in an area they are searching; there could be weather that was not factored into its software or they come across a group of armed men who appear to be insurgent enemies but in fact are out with guns hunting for food. The machine doesn’t have the discernment or common sense that the human touch has.

“The other scary thing about these autonomous war systems is that you can only really test them by deploying them in a real combat zone. Maybe that’s happening with the Russians at present in Syria, who knows? What we do know is that at the UN, Russia has opposed any treaty, let alone a ban, on these weapons.

“If you are testing a machine that is making its own decisions about the world around it then it has to be in real time. Besides, how do you train a system that runs solely on software how to detect subtle human behaviour or discern the difference between hunters and insurgents? How does the killing machine out there on its own flying about distinguish between the 18-year-old combatant and the 18-year-old who is hunting for rabbits?”

Read full story here…

Comment from steve: “This reminds me of that scene from War Games – a bit old fashioned to watch now – but highly relevant. https://www.youtube.com/watch?v=NHWjlCaIrQo”