Goodbye World: AI Arms Race Headed Toward Autonomous Killer Robots

The rapidly emerging AI arms race to create autonomous killing machines is Technocrat insanity at its highest peak. To the Technocrat mind, every problem has a scientific solution; so why not let an armed and lethal AI robot do all the work of human soldiers? ⁃ TN Editor

When it comes to deciding to kill a human in a time of war, should a machine make that decision or should another human?

The question is a moral one, brought to the foreground by the techniques and incentives of modern technology. It is a question whose scope falls squarely under the auspices of international law, and one which nations have debated for years. Yet it’s also a collective action problem, one that requires not just states, but also companies and the workers within them, to agree to forgo a perceived advantage. The danger is not so much in making a weapon, but in making a weapon that can choose targets independently of the human responsible for initiating its action.

In a May 8 report by Pax — a nonprofit with the explicit goal of protecting civilians from violence, reducing armed conflict, and building a just peace — authors look at the existing state of artificial intelligence in weaponry and urge nations, companies and workers to think about how to prevent an AI arms race, instead of thinking about how to win one. Without corrective action, the report warns, the status quo could lead all participants into a no-win situation, with any advantage gained from developing an autonomous weapon temporary and limited.

“We see this emerging AI arms race and we think if nothing happens that that is a major threat to humanity,” said Frank Slijper, one of the authors on the report. “There is a window of opportunity to stop an AI arms race from happening. States should try to prevent an AI arms race and work toward international regulation. In the meantime, companies and research institutes have a major responsibility themselves to make sure that that work in AI and related fields is not contributing to potential lethal autonomous weapons.”

The report is written with a specific eye toward the seven leading AI powers. These include the five permanent members of the UN Security Council: China, France, Russia, the United Kingdom and the United States. In addition, the report details the artificial intelligence research of Israel and South Korea, both countries whose geographic and political postures have encouraged the development of military AI.

“We identified the main players in terms of use and research and development efforts on both AI and military use of AI in increasingly autonomous weapons. I couldn’t think of anyone, any state we would have missed out from these seven,” says Slijper. “Of course, there’s always a number eight and the number nine.”

For each covered AI power, the report examines the state of AI, the role of AI in the military, and what is known of cooperation between AI developers in the private sector or universities and the military. For countries like the United States, where military AI programs are named, governing policies can be cited, and debates over the relationship between commercial AI and military use are public, the report details that process. The thoroughness of the research is used to underscore Pax’s explicitly activist mission, though it also provides a valuable survey of the state of AI in the world.

As the report maintains throughout, this role of AI in weaponry isn’t just a question for governments. It’s a question for the people in charge of companies, and a question for the workers creating AI for companies.

“Much of it has to do with the rather unique character of AI-infused weapons technology,” says Slijper. “Traditionally, a lot of the companies now working on AI were working on it from a purely civilian perspective to do good and to help humanity. These companies weren’t traditionally military producers or dominant suppliers to the military. If you work for an arms company, you know what you’re working for.”

In the United States, there has been expressed resistance among tech-sector workers to contributing to Pentagon contracts. After Google workers protested upon learning of the company’s commitment to Project Maven, which developed a drone-footage processing AI for the military, the company’s leadership agreed to sunset the project. (Project Maven is now managed by the Peter Thiel-backed Anduril.)

Microsoft, too, experienced worker resistance to military use of its augmented reality tool HoloLens, with some workers writing a letter stating that in the Pentagon’s hands, the sensors and processing of the headset made it dangerously close to a weapon component. The workers specifically noted that they had built HoloLens “to help teach people how to perform surgery or play the piano, to push the boundaries of gaming, and to connect with the Mars Rover,” all of which is a far cry from aiding the military in threat identification on patrol.

“And I think it is for a lot of people working in the tech sector quite disturbing that, while initially, that company was mainly or only working on civilian applications of that technology, now more and more they see these technologies also being used for military projects or even lethal weaponry,” said Slijper.

Slijper points to the Protocol on Blinding Laser Weapons as a way the international community regulated a technology with both civilian and military applications to ensure its use fell within the laws of war.

Read full story here…