Ex-Google Engineer: ‘Killer Robots’ Could Start War

As governments ramp up the AI arms race to field autonomous robots with a ‘licence to kill’, the risk of horrible mistakes rises as well, including the possibility of starting a robot war. ⁃ TN Editor

A new generation of autonomous weapons or “killer robots” could accidentally start a war or cause mass atrocities, a former top Google software engineer has warned.

Laura Nolan, who resigned from Google last year in protest at being sent to work on a project to dramatically enhance US military drone technology, has called for all AI killing machines not operated by humans to be banned.

Nolan said killer robots not guided by human remote control should be outlawed by the same type of international treaty that bans chemical weapons.

Unlike drones, which are controlled by military teams often thousands of miles away from where the flying weapon is being deployed, Nolan said killer robots have the potential to do “calamitous things that they were not originally programmed for”.

There is no suggestion that Google is involved in the development of autonomous weapons systems. Last month a UN panel of government experts debated autonomous weapons and found Google to be eschewing AI for use in weapons systems and engaging in best practice.

Nolan, who has joined the Campaign to Stop Killer Robots and has briefed UN diplomats in New York and Geneva over the dangers posed by autonomous weapons, said: “The likelihood of a disaster is in proportion to how many of these machines will be in a particular area at once. What you are looking at are possible atrocities and unlawful killings even under laws of warfare, especially if hundreds or thousands of these machines are deployed.

“There could be large-scale accidents because these things will start to behave in unexpected ways. Which is why any advanced weapons systems should be subject to meaningful human control, otherwise they have to be banned because they are far too unpredictable and dangerous.”

Google recruited Nolan, a computer science graduate from Trinity College Dublin, to work on Project Maven in 2017 after she had been employed by the tech giant for four years, becoming one of its top software engineers in Ireland.

She said she became “increasingly ethically concerned” over her role in the Maven programme, which was devised to help the US Department of Defense drastically speed up drone video recognition technology.

Instead of using large numbers of military operatives to spool through hours and hours of drone video footage of potential enemy targets, Nolan and others were asked to build a system where AI machines could differentiate people and objects at an infinitely faster rate.
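Maven’s actual system has never been made public, but the general technique the passage describes, running a pretrained object detector over each video frame instead of having an analyst watch the footage, can be sketched in a few lines. The snippet below is a hedged illustration using the open-source torchvision detector; none of it reflects the real Maven code.

```python
# Hedged sketch of the general technique described above: a pretrained
# object detector classifying people and objects in each video frame in
# place of a human analyst. Illustrative only; Maven's real system is not
# public, and this uses the open-source torchvision detector instead.
import torch
import torchvision
from torchvision.transforms.functional import to_tensor

model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def detect_objects(frame, score_threshold=0.8):
    """Return (label_id, score, box) tuples for one RGB video frame."""
    with torch.no_grad():
        prediction = model([to_tensor(frame)])[0]
    return [
        (int(label), float(score), box.tolist())
        for label, score, box in zip(
            prediction["labels"], prediction["scores"], prediction["boxes"]
        )
        if float(score) >= score_threshold
    ]
```

Looped over decoded frames, a detector like this tags people and vehicles at machine speed rather than analyst speed, which is the throughput gain the programme was after.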

Google allowed the Project Maven contract to lapse in March this year after more than 3,000 of its employees signed a petition in protest against the company’s involvement.

“As a site reliability engineer my expertise at Google was to ensure that our systems and infrastructures were kept running, and this is what I was supposed to help Maven with. Although I was not directly involved in speeding up the video footage recognition I realised that I was still part of the kill chain; that this would ultimately lead to more people being targeted and killed by the US military in places like Afghanistan.”

Although she resigned over Project Maven, Nolan has predicted that autonomous weapons being developed pose a far greater risk to the human race than remote-controlled drones.

She outlined how external forces ranging from changing weather systems to machines being unable to work out complex human behaviour might throw killer robots off course, with possibly fatal consequences.

“You could have a scenario where autonomous weapons that have been sent out to do a job confront unexpected radar signals in an area they are searching; there could be weather that was not factored into its software or they come across a group of armed men who appear to be insurgent enemies but in fact are out with guns hunting for food. The machine doesn’t have the discernment or common sense that the human touch has.

“The other scary thing about these autonomous war systems is that you can only really test them by deploying them in a real combat zone. Maybe that’s happening with the Russians at present in Syria, who knows? What we do know is that at the UN, Russia has opposed any treaty, let alone a ban, on these weapons, by the way.

“If you are testing a machine that is making its own decisions about the world around it then it has to be in real time. Besides, how do you train a system that runs solely on software how to detect subtle human behaviour or discern the difference between hunters and insurgents? How does the killing machine out there on its own flying about distinguish between the 18-year-old combatant and the 18-year-old who is hunting for rabbits?”

Read full story here…




NPR Exposes CIA’s MK-Ultra, Mind Control, Torture

It is amazing that NPR would print anything on the CIA’s infamous and horrific MK-Ultra program from the Cold War era. Technocrat scientists brought over from Nazi Germany after WWII played a key role. ⁃ TN Editor
 

During the early period of the Cold War, the CIA became convinced that communists had discovered a drug or technique that would allow them to control human minds. In response, the CIA began its own secret program, called MK-ULTRA, to search for a mind control drug that could be weaponized against enemies.

MK-ULTRA, which operated from the 1950s until the early ’60s, was created and run by a chemist named Sidney Gottlieb. Journalist Stephen Kinzer, who spent several years investigating the program, calls the operation the “most sustained search in history for techniques of mind control.”

Some of Gottlieb’s experiments were covertly funded at universities and research centers, Kinzer says, while others were conducted in American prisons and in detention centers in Japan, Germany and the Philippines. Many of his unwitting subjects endured psychological torture ranging from electroshock to high doses of LSD, according to Kinzer’s research.

“Gottlieb wanted to create a way to seize control of people’s minds, and he realized it was a two-part process,” Kinzer says. “First, you had to blast away the existing mind. Second, you had to find a way to insert a new mind into that resulting void. We didn’t get too far on number two, but he did a lot of work on number one.”

Kinzer notes that the top-secret nature of Gottlieb’s work makes it impossible to measure the human cost of his experiments. “We don’t know how many people died, but a number did, and many lives were permanently destroyed,” he says.

Ultimately, Gottlieb concluded that mind control was not possible. After MK-ULTRA shut down, he went on to lead a CIA program that created poisons and high-tech gadgets for spies to use.

Kinzer writes about Gottlieb and MK-ULTRA in his new book, Poisoner in Chief.


On how the CIA brought LSD to America

As part of the search for drugs that would allow people to control the human mind, CIA scientists became aware of the existence of LSD, and this became an obsession for the early directors of MK-ULTRA. Actually, the MK-ULTRA director, Sidney Gottlieb, can now be seen as the man who brought LSD to America. He was the unwitting godfather of the entire LSD counterculture.

In the early 1950s, he arranged for the CIA to pay $240,000 to buy the world’s entire supply of LSD. He brought this to the United States, and he began spreading it around to hospitals, clinics, prisons and other institutions, asking them, through bogus foundations, to carry out research projects and find out what LSD was, how people reacted to it and how it might be able to be used as a tool for mind control.

Now, the people who volunteered for these experiments and began taking LSD, in many cases, found it very pleasurable. They told their friends about it. Who were those people? Ken Kesey, the author of One Flew Over the Cuckoo’s Nest, got his LSD in an experiment sponsored by the CIA, by MK-ULTRA, by Sidney Gottlieb. So did Robert Hunter, the lyricist for the Grateful Dead, which went on to become a great purveyor of LSD culture. Allen Ginsberg, the poet who preached the value of the great personal adventure of using LSD, got his first LSD from Sidney Gottlieb. Although, of course, he never knew that name.

So the CIA brought LSD to America unwittingly, and actually it’s a tremendous irony that the drug that the CIA hoped would be its key to controlling humanity actually wound up fueling a generational rebellion that was dedicated to destroying everything that the CIA held dear and defended.

Whitey Bulger was one of the prisoners who volunteered for what he was told was an experiment aimed at finding a cure for schizophrenia. As part of this experiment, he was given LSD every day for more than a year. He later realized that this had nothing to do with schizophrenia and that he was a guinea pig in a government experiment aimed at seeing what people’s long-term reactions to LSD were. Essentially, could we make a person lose his mind by feeding him LSD every day over such a long period?

Read full story here…




Army Developing AI Missiles That Identify Their Own Targets

Technocrats at defense contractors have developed a hybrid targeting system using drones and AI that finds its own targets, then coordinates with artillery-launched munitions for destruction.

There has never been a weapon created in the history of mankind that was not used in battle. ⁃ TN Editor

The U.S. Army is working on a new artillery shell capable of locating enemy targets, including moving tanks and armored vehicles. The shell, called Cannon-Delivered Area Effects Munition (C-DAEM), is designed to replace older weapons that leave behind unexploded cluster bomblets on the battlefield that might pose a threat to civilians. The shell is designed to hit targets even in situations where GPS is jammed and friendly forces are not entirely sure where the enemy is.

In the 1980s, the U.S. Army fielded dual purpose improved conventional munition (DPICM) artillery rounds. DPICM was basically the concept of cluster bombs applied to artillery, with a single shell packing dozens of tennis ball-sized grenades or bomblets. DPICM shells were designed to eject the bomblets over the battlefield, dispersing them over a wide area. The bomblets were useful against unprotected infantry troops and could knock out a tank or armored vehicle’s treads, weapons, or sensors, disabling it.

DPICM made artillery more lethal than ever, but there was a cost nobody foresaw: unexploded dud bomblets often littered battlefields, becoming a danger to civilians long after the war was over. An international movement to ban cluster munitions came about, and though the U.S. isn’t a signatory, it has pledged not to use munitions with a dud rate greater than one percent. Dud rates for such weapons often reach five percent or more.

Hitting tanks and armored vehicles with artillery from long range is hard, but DPICM made it easy. Now that DPICM is gone, the Army wants something new to replace it, something that trades showering an area with bomblets for an artillery round that intelligently seeks out enemy targets on its own. That new weapon is C-DAEM.

C-DAEM is a development of the Army’s Excalibur 155-millimeter artillery round. Excalibur is a GPS-guided artillery round, capable of hitting targets dozens of miles away using the Global Positioning System. Defense contractor Raytheon, maker of the Excalibur, claims it can land within 6.5 feet of the intended target—close enough to hit or damage a stationary armored vehicle.

C-DAEM will be able to hit moving tanks and other armored vehicles—something existing artillery shells can’t do. It will also be able to seek and destroy vehicle targets when their precise location isn’t known. As New Scientist explains, “The weapons will have a range of up to 60 kilometres, taking more than a minute to arrive, and will be able to search an area of more than 28 square kilometres for their targets. They will have a method for slowing down, such as a parachute or small wings, which they will use while scanning and classifying objects below.”

The new artillery round will also be capable of operating in so-called GPS-denied environments, where enemy forces may attempt to locally interfere with the Global Positioning System. Although U.S. forces lean heavily on GPS they are also training to operate without it. Russia, one potential adversary, is developing GPS jamming and spoofing capabilities that could make battlefield GPS useless or unreliable.

Read full story here…




Pentagon: Lasers That Beam Messages Into Your Head

Talking lasers can send audible messages directly into your head from up to hundreds of miles away. When perfected, this technology will be used in military and civilian applications to control crowds and individuals. ⁃ TN Editor

Military scientists at the Pentagon are developing ‘talking’ lasers which can beam warnings straight into the enemy’s head from hundreds of miles away.

Weapons researchers at the Department of Defense say the hi-tech weapon will be able to send brief messages – in the form of audible speech – across combat zones.

The aircraft-, ship- and truck-mounted devices are being developed by the military’s Joint Non-Lethal Weapons Directorate.

The scientists plan to use a phenomenon of physics called laser-induced plasma formation to make the laser a reality.

First, they fire a powerful laser that creates a ball of plasma. Then, a second laser oscillates the plasma, creating sound waves.

These intense laser bursts can then perfectly mimic human language, chief scientist Dave Law told the Military Times.
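The article stays qualitative, but the second step amounts to amplitude-modulating the follow-up laser with the desired audio so that the plasma’s oscillation reproduces the waveform. The toy sketch below, with entirely arbitrary numbers and none of the real laser physics, shows what preparing such a modulation envelope could look like.

```python
# Toy illustration of the modulation step described above: an audio signal
# becomes an intensity envelope for the second laser, so the plasma's
# oscillation (and hence the emitted sound) tracks the waveform.
# All numbers are arbitrary; the real laser physics is far more involved.
import numpy as np

SAMPLE_RATE = 48_000                          # audio samples per second
t = np.arange(0, 0.5, 1 / SAMPLE_RATE)        # half a second of signal
audio = 0.5 * np.sin(2 * np.pi * 440 * t)     # a 440 Hz test tone

# Laser intensity must stay non-negative: bias the audio around a baseline
# power level, then clip. The plasma converts intensity changes into
# pressure waves, so this envelope is what "speaks".
baseline = 1.0
intensity_envelope = np.clip(baseline + audio, 0.0, None)
```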

He added that the technology could be ready for battle in just five years.

A video shared to publicise the Pentagon project shows the weapon saying ‘Stop or we’ll be forced to fire upon you.’

Scientists say these laser-grams will soon be able to beam messages from hundreds of miles away.

The news will send shudders through the conspiracy theorist community, who have long claimed the US government uses radio waves as part of a thought-control programme.

The Pentagon has revealed it is ploughing tens of millions into developing state-of-the-art laser weapons – to ensure it doesn’t lag behind Russia and China.

Read full story here…





Navy Unleashed: Laser Weapons Will Change Warfare Forever

Equipped with AI for instantaneous targeting, directed-energy weapons are the future of warfare. The Navy is the first adopter, but as weapon sizes shrink, they will become ubiquitous. ⁃ TN Editor

If swarms of enemy small attack boats armed with guns and explosives approached a Navy ship, alongside missile-armed drones and helicopters closing into strike range, ship commanders would instantly begin weighing defensive options – including interceptor missiles, electronic warfare, deck-mounted guns or area weapons such as the Close-In Weapon System.

Now, attacks such as these can also be countered with laser weapons, bringing new dimensions to maritime warfare on the open sea.

By 2021, U.S. Navy destroyers will be armed with new ship-fired lasers able to sense and incinerate enemy drones, low-flying aircraft and small boat attacks — all while firing at the speed of light.

Lasers have existed for many years, but the Navy is now adjusting emerging Tactics, Techniques and Procedures to account for how new high-powered, ship-fired lasers will change ship defenses and attack options.

Lockheed Martin and the Navy have been working on ground attack tests against mock enemy targets to prepare high-energy lasers for war. The weapon, called High-Energy Laser and Integrated Optical-Dazzler with Surveillance – or HELIOS – is engineered to surveil, track and destroy targets from an integrated ship system consisting of advanced radar, fire control technology and targeting sensors.

Working with the Navy, Lockheed has recently completed its Systems Design Review for HELIOS, a process which examines weapon requirements and prepares subsystems and designs. The intent is to engineer an integrated tactical laser system able to receive “real time operating feedback well in advance, before the system hits the ship,” said Brendan Scanlon, HELIOS Program Director, Lockheed.

The farther away an incoming attack can be detected, the more time commanders have to make time-sensitive combat decisions regarding a possible response. Therefore, having one system that synthesizes sensing and shooting changes the equation for maritime warfare.

Connecting HELIOS’ fire control with ship-based Aegis Radar, used for missile defense, enables a combined system to gather surveillance data from the radar while preparing to destroy the targets.

“Sensors provide cues to laser weapons, with the Aegis operator in the loop. You can use optical sensors to decide what else you are going to do, because the weapon tracks between Aegis and the laser subsystem,” Scanlon added.
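Lockheed has not published HELIOS’s interfaces, so purely as an illustration of the pattern Scanlon describes (a radar cue, optical refinement, and an operator kept in the loop), a notional control-flow sketch might look like the following. Every name in it is invented; nothing below reflects the real HELIOS or Aegis systems.

```python
# Hypothetical sketch of the cueing pattern described above: a radar track
# cues the optical tracker, and a human operator must approve before the
# laser engages. All names are invented for illustration only.
from dataclasses import dataclass

@dataclass
class Track:
    track_id: int
    bearing_deg: float
    range_m: float
    classification: str   # e.g. "drone", "small_boat", "unknown"

def refine_with_optics(track: Track) -> Track:
    """Stand-in for the optical subsystem sharpening a coarse radar cue."""
    return track  # a real system would update bearing/range/classification

def decide(track: Track, operator_approves) -> str:
    """Route a radar cue through optical refinement and a human decision."""
    if track.classification == "unknown":
        return "track_only"           # keep watching, never engage blind
    refined = refine_with_optics(track)
    if operator_approves(refined):    # the Aegis operator stays in the loop
        return "engage"
    return "track_only"

# Example: an operator policy that only approves close-in drones.
approve = lambda t: t.classification == "drone" and t.range_m < 5_000
print(decide(Track(1, 42.0, 3_200.0, "drone"), approve))  # -> "engage"
```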

This technical range enables some new mission possibilities for the laser weapon, such as the ability to “obscure adversaries’ optical sensors.” This can bring a number of advantages, such as throwing incoming drone fire, helicopter attacks or even anti-ship missiles off course.

Developers are now working on a handful of technical challenges that make it difficult for mobile lasers to operate on platforms that cannot accommodate large amounts of power. The Navy’s Program Manager for the Zumwalt-class destroyers, Capt. Kevin Smith, addressed this recently at Sea Air Space, explaining that a “power surge” is needed to operate lasers on a ship.

“For directed energy weapons you need a surge. There is technology we are looking at right now to assess how the ship can have the energy storage that would facilitate that surge capacity,” Smith said.

Read full story here…





Geoengineering Could Start WWIII As Nations React

Weaponizing weather has been a military goal since WWI, but as global warming hysteria grew, geoengineering was repurposed to ‘cool’ the earth. However, weather does not respect national borders, and what one country does can radically affect the weather of its neighbors. ⁃ TN Editor

Climate change may end up causing World War 3 if individual countries start trying to save themselves by hacking the weather with a process called geoengineering.

Many experts are in favour of geoengineering, which involves manipulating the atmosphere by blocking sunlight or capturing excess carbon, but weather hacking in one region could have negative impacts in another and lead to global conflict, according to scientists.

Solar geoengineering appears to be the most problematic approach, more so than carbon capture, because it would involve spraying chemicals into the air to block some sunlight.

Speaking on the sun-blocking topic, geoengineering researcher Juan Moreno-Cruz told Business Insider: “The threat of war never is out of the question.”

If geoengineering is going to happen, then all countries would have to be informed and agree, because some areas may be more negatively affected than others.

Andrea Flossmann, a scientist at the World Meteorological Organization, explained in a WMO report: “The atmosphere has no walls. What you add may not have the desired effect in your vicinity, but by being transported along might have undesired effects elsewhere.”

Earth’s temperatures are set to soar to dangerous levels, so many scientists think the unknown consequences of geoengineering are worth the risk.

The worst-case scenario is that Earth’s atmospheric chemistry is irreversibly altered, causing freak weather conditions such as monsoons, hurricanes and heatwaves that could kill thousands and increase global tensions.

Read full story here…





Vicious Cycle: Pentagon Buys Services From Tech Giants It Created

Dwight D. Eisenhower warned America about the danger posed by the military-industrial complex; in the same speech, he also warned of a technological elite. It is now more apparent than ever that the two forces have merged into a single column. ⁃ TN Editor

The US Department of Defense’s bloated budget, along with CIA venture capital, helped to create tech giants, including Amazon, Apple, Facebook, Google and PayPal. The government then contracts those companies to help its military and intelligence operations. In doing so, it makes the tech giants even bigger.

In recent years, the traditional banking, energy and industrial Fortune 500 companies have been losing ground to tech giants like Apple and Facebook. But the technology on which they rely emerged from the taxpayer-funded research and development of bygone decades. The internet started as ARPANET, built under a Department of Defense (DoD) contract by BBN (now part of Raytheon) on Honeywell hardware. The same satellites that enable modern internet communications also enable US jets to bomb their enemies, as does the GPS that enables online retailers to deliver products with pinpoint accuracy. Apple’s touchscreen technology originated as a US Air Force tool. The same drones that record breath-taking video are modified versions of Reapers and Predators.

Tax-funded DoD research is the backbone of the modern, hi-tech economy. But these technologies are dual-use. The companies that many of us take for granted–including Amazon, Apple, Facebook, Google, Microsoft and PayPal–are connected indirectly and sometimes very directly to the US military-intelligence complex.

A recent report by Open the Government, a bipartisan advocate of transparency, reveals the extent of Amazon’s contracts with the Pentagon. Founded in 1994 by Jeff Bezos, the company is now valued at $1 trillion, giving Bezos a personal fortune of $131 billion. Open the Government’s report notes that much of the US government “now runs on Amazon,” so much so that the tech giant is opening a branch near Washington, DC. Services provided by Amazon include cloud contracts, machine learning and biometric data systems. But more than this, Amazon is set to enjoy a lucrative Pentagon IT contract under the $10bn Joint Enterprise Defense Infrastructure (JEDI) program. The Pentagon says that it hopes Amazon technology will “support lethality and enhanced operational efficiency.”

The report reveals what it can, but much is protected from public scrutiny under the twin veils of national security and corporate secrecy. For instance, all prospective host cities for Amazon’s second headquarters were asked to sign non-disclosure agreements.

But it doesn’t end there. According to the report, Amazon supplied its Rekognition surveillance and facial recognition software to the police and FBI, and it has pitched the reportedly inaccurate and race/gender-biased technology to the Department of Homeland Security for its counter-immigration operations. Ten percent of the subsidiary Amazon Web Services’ profits come from government contracts. Departments include the State Department, NASA, the Food and Drug Administration and the Centers for Disease Control and Prevention. In 2013, Amazon won a $600m Commercial Cloud Services (C2S) contract with the CIA. C2S will enable deep learning and data fingerprinting. Amazon’s second headquarters will be built in Virginia, the CIA’s home state. Despite repeated requests, the company refuses to disclose how its personal devices, like Amazon Echo, connect with the CIA.

But Amazon is just the tip of the iceberg.

According to one thorough research article: in the mid-1990s, future Google founders Larry Page and Sergey Brin used indirect Pentagon and other government funding to develop web crawlers and page-ranking applications. Around the same time, the CIA, the Directorate of Intelligence and the National Security Agency, under the auspices of the National Science Foundation, funded the Massive Data Digital Systems (MDDS) program. A publication by Sergey Brin acknowledges that he received funding from the MDDS program. According to Professor Bhavani Thuraisingham, who worked on the project, “The intelligence community … essentially provided Brin seed-funding, which was supplemented by many other sources, including the private sector.” The Query Flocks component of Google’s patented PageRank system was developed as part of the MDDS program. Two entrepreneurs, Andreas Bechtolsheim (who set up Sun Microsystems) and David Cheriton, both of whom had previously received Pentagon money, were early investors in Google.
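For readers unfamiliar with the algorithm at the centre of this history: PageRank scores pages by the stationary distribution of a “random surfer” who follows links and occasionally jumps to a random page. A minimal power-iteration version, far simpler than Google’s patented production system or its Query Flocks extension, looks like this:

```python
# Minimal power-iteration sketch of the PageRank idea mentioned above.
# Illustrative only; Google's production ranking is far more elaborate.
import numpy as np

def pagerank(adjacency, damping=0.85, iterations=100):
    """Rank pages by the stationary distribution of a random surfer."""
    n = adjacency.shape[0]
    out_degree = adjacency.sum(axis=1, keepdims=True)
    out_degree[out_degree == 0] = 1          # avoid dividing by zero for sinks
    transition = adjacency / out_degree      # row-stochastic link matrix
    rank = np.full(n, 1.0 / n)               # start from a uniform distribution
    for _ in range(iterations):
        rank = (1 - damping) / n + damping * transition.T @ rank
    return rank

# Tiny example: page 0 links to 1 and 2, page 1 links to 2, page 2 links to 0.
links = np.array([[0, 1, 1],
                  [0, 0, 1],
                  [1, 0, 0]], dtype=float)
print(pagerank(links))   # page 2 collects the most rank in this toy graph
```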

Like Bezos, Brin and Page became billionaires.

Read full story here…





DARPA: Funding Wearable Brain-Machine Interfaces

Technocrats at DARPA are intent on creating a non-surgical brain-machine interface as a force multiplier for soldiers. The research will require “Investigational Device Exemptions” from the Food and Drug Administration. ⁃ TN Editor

DARPA has awarded funding to six organizations to support the Next-Generation Nonsurgical Neurotechnology (N3) program, first announced in March 2018. Battelle Memorial Institute, Carnegie Mellon University, Johns Hopkins University Applied Physics Laboratory, Palo Alto Research Center (PARC), Rice University, and Teledyne Scientific are leading multidisciplinary teams to develop high-resolution, bidirectional brain-machine interfaces for use by able-bodied service members. These wearable interfaces could ultimately enable diverse national security applications such as control of active cyber defense systems and swarms of unmanned aerial vehicles, or teaming with computer systems to multitask during complex missions.

“DARPA is preparing for a future in which a combination of unmanned systems, artificial intelligence, and cyber operations may cause conflicts to play out on timelines that are too short for humans to effectively manage with current technology alone,” said Al Emondi, the N3 program manager. “By creating a more accessible brain-machine interface that doesn’t require surgery to use, DARPA could deliver tools that allow mission commanders to remain meaningfully involved in dynamic operations that unfold at rapid speed.”

Over the past 18 years, DARPA has demonstrated increasingly sophisticated neurotechnologies that rely on surgically implanted electrodes to interface with the central or peripheral nervous systems. The agency has demonstrated achievements such as neural control of prosthetic limbs and restoration of the sense of touch to the users of those limbs, relief of otherwise intractable neuropsychiatric illnesses such as depression, and improvement of memory formation and recall. Due to the inherent risks of surgery, these technologies have so far been limited to use by volunteers with clinical need.

For the military’s primarily able-bodied population to benefit from neurotechnology, nonsurgical interfaces are required. In fact, similar technology could greatly benefit clinical populations as well. By removing the need for surgery, N3 systems seek to expand the pool of patients who can access treatments such as deep brain stimulation to manage neurological illnesses.

The N3 teams are pursuing a range of approaches that use optics, acoustics, and electromagnetics to record neural activity and/or send signals back to the brain at high speed and resolution. The research is split between two tracks. Teams are pursuing either completely noninvasive interfaces that are entirely external to the body or minutely invasive interface systems that include nanotransducers that can be temporarily and nonsurgically delivered to the brain to improve signal resolution.

  • The Battelle team, under principal investigator Dr. Gaurav Sharma, aims to develop a minutely invasive interface system that pairs an external transceiver with electromagnetic nanotransducers that are nonsurgically delivered to neurons of interest. The nanotransducers would convert electrical signals from the neurons into magnetic signals that can be recorded and processed by the external transceiver, and vice versa, to enable bidirectional communication.
  • The Carnegie Mellon University team, under principal investigator Dr. Pulkit Grover, aims to develop a completely noninvasive device that uses an acousto-optical approach to record from the brain and interfering electrical fields to write to specific neurons. The team will use ultrasound waves to guide light into and out of the brain to detect neural activity. The team’s write approach exploits the non-linear response of neurons to electric fields to enable localized stimulation of specific cell types.
  • The Johns Hopkins University Applied Physics Laboratory team, under principal investigator Dr. David Blodgett, aims to develop a completely noninvasive, coherent optical system for recording from the brain. The system will directly measure optical path-length changes in neural tissue that correlate with neural activity.
  • The PARC team, under principal investigator Dr. Krishnan Thyagarajan, aims to develop a completely noninvasive acousto-magnetic device for writing to the brain. Their approach pairs ultrasound waves with magnetic fields to generate localized electric currents for neuromodulation. The hybrid approach offers the potential for localized neuromodulation deeper in the brain.
  • The Rice University team, under principal investigator Dr. Jacob Robinson, aims to develop a minutely invasive, bidirectional system for recording from and writing to the brain. For the recording function, the interface will use diffuse optical tomography to infer neural activity by measuring light scattering in neural tissue. To enable the write function, the team will use a magneto-genetic approach to make neurons sensitive to magnetic fields.
  • The Teledyne team, under principal investigator Dr. Patrick Connolly, aims to develop a completely noninvasive, integrated device that uses micro optically pumped magnetometers to detect small, localized magnetic fields that correlate with neural activity. The team will use focused ultrasound for writing to neurons.

Throughout the program, the research will benefit from insights provided by independent legal and ethical experts who have agreed to provide insights on N3 progress and consider potential future military and civilian applications and implications of the technology. Additionally, federal regulators are cooperating with DARPA to help the teams better understand human-use clearance as research gets underway. As the work progresses, these regulators will help guide strategies for submitting applications for Investigational Device Exemptions and Investigational New Drugs to enable human trials of N3 systems during the last phase of the four-year program.

“If N3 is successful, we’ll end up with wearable neural interface systems that can communicate with the brain from a range of just a few millimeters, moving neurotechnology beyond the clinic and into practical use for national security,” Emondi said. “Just as service members put on protective and tactical gear in preparation for a mission, in the future they might put on a headset containing a neural interface, use the technology however it’s needed, then put the tool aside when the mission is complete.”

Read full story here…




Experts: The Only Defense Against Killer AI Is Not Developing It

Out-of-control killer AI in warfare is inevitable because it will become too complex for human management and control. The only real answer is not to develop it in the first place. ⁃ TN Editor

A recent analysis on the future of warfare indicates that countries that continue to develop AI for military use risk losing control of the battlefield. Those that don’t risk eradication. Whether you’re for or against the AI arms race: it’s happening. Here’s what that means, according to a trio of experts.

Researchers from ASRC Federal, a private company that provides support for the intelligence and defense communities, and the University of Maryland recently published a paper on the pre-print server arXiv discussing the potential ramifications of integrating AI systems into modern warfare.

The paper – read here – focuses on the near-future consequences of the AI arms race, under the assumption that AI will not somehow run amok or take over. In essence it’s a short, sober, and terrifying look at how all this machine learning technology will play out, based on analysis of current cutting-edge military AI technologies and their predicted integration at scale.

The paper begins with a warning about impending catastrophe, explaining there will almost certainly be a “normal accident” concerning AI: an expected incident of a nature and scope we cannot predict. Basically, the militaries of the world will break some civilian eggs making the AI arms-race omelet:

Study of this field began with accidents such as Three Mile Island, but AI technologies embody similar risks. Finding and exploiting these weaknesses to induce defective behavior will become a permanent feature of military strategy.

If you’re thinking killer robots duking it out in our cities while civilians run screaming for shelter, you’re not wrong – but robots as a proxy for soldiers isn’t humanity’s biggest concern when it comes to AI warfare. This paper discusses what happens after we reach the point at which it becomes obvious humans are holding machines back in warfare.

According to the researchers, the problem isn’t one we can frame as good and evil. Sure it’s easy to say we shouldn’t allow robots to murder humans with autonomy, but that’s not how the decision-making process of the future is going to work.

The researchers describe it as a slippery slope:

If AI systems are effective, pressure to increase the level of assistance to the warfighter would be inevitable. Continued success would mean gradually pushing the human out of the loop, first to a supervisory role and then finally to the role of a “killswitch operator” monitoring an always-on LAWS.

LAWS, or lethal autonomous weapons systems, will almost immediately scale beyond humans’ ability to work with computers and machines — and probably sooner than most people think. Hand-to-hand combat between machines, for example, will be entirely autonomous by necessity:

Over time, as AI becomes more capable of reflective and integrative thinking, the human component will have to be eliminated altogether as the speed and dimensionality become incomprehensible, even accounting for cognitive assistance.

And, eventually, the tactics and responsiveness required to trade blows with AI will be beyond the ken of humans altogether:

Given a battlespace so overwhelming that humans cannot manually engage with the system, the human role will be limited to post-hoc forensic analysis, once hostilities have ceased, or treaties have been signed.

If this sounds a bit grim, it’s because it is. As Import AI’s Jack Clark points out, “This is a quick paper that lays out the concerns of AI+War from a community we don’t frequently hear from: people that work as direct suppliers of government technology.”

It might be in everyone’s best interest to pay careful attention to how both academics and the government continue to frame the problem going forward.

Read full story here…




DARPA: AI Mosaic Warfare And Multi-Domain Battle Strategy

Technocrats at DARPA are racing to apply Artificial Intelligence to warfare, integrating all battlefield components into a coordinated killing machine. Success depends on engineers and computer programmers. ⁃ TN Editor

DARPA is automating air-to-air combat, enabling reaction times at machine speeds and freeing pilots to concentrate on the larger air battle and directing an air wing of drones.

Dogfighting will still be rare in the future but it is part of AI and automation taking over all high-end fighting. New human fighter pilots learn to dogfight because it represents a crucible where pilot performance and trust can be refined. To accelerate the transformation of pilots from aircraft operators to mission battle commanders — who can entrust dynamic air combat tasks to unmanned, semi-autonomous airborne assets from the cockpit — the AI must first prove it can handle the basics.

The vision is AI handles the split-second maneuvering during within-visual-range dogfights and pilots become orchestra conductors or higher level managers over large numbers of unmanned systems.

DARPA wants mosaic warfare. Mosaic warfare shifts from expensive manned systems to a mix of manned and less-expensive unmanned systems that can be rapidly developed, fielded, and upgraded with the latest technology to address changing threats. Linking together manned aircraft with significantly cheaper unmanned systems creates a “mosaic” where the individual “pieces” can easily be recomposed to create different effects or quickly replaced if destroyed, resulting in a more resilient warfighting capability.
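DARPA has published no interfaces for this, but the “recomposable pieces” idea can be made concrete with a purely notional sketch: a mission is a set of roles, any capable asset can fill a role, and a destroyed asset is simply swapped for another from the pool. Everything below is hypothetical and invented for illustration.

```python
# Notional sketch of the "mosaic" concept described above: when a cheap
# unmanned asset is lost, another is recomposed into its role. Entirely
# hypothetical; this reflects no real DARPA interface.
from dataclasses import dataclass, field

@dataclass
class Asset:
    name: str
    capabilities: set

@dataclass
class Mission:
    required_roles: list                     # e.g. ["sense", "jam", "strike"]
    assignments: dict = field(default_factory=dict)

    def compose(self, pool):
        """Fill each unfilled role with any available asset that can do it."""
        for role in self.required_roles:
            if role not in self.assignments:
                for asset in pool:
                    if role in asset.capabilities and asset not in self.assignments.values():
                        self.assignments[role] = asset
                        break

    def report_loss(self, lost, pool):
        """Drop a destroyed asset and recompose the mosaic from survivors."""
        pool.remove(lost)
        self.assignments = {r: a for r, a in self.assignments.items() if a is not lost}
        self.compose(pool)

pool = [Asset("drone-1", {"sense"}), Asset("drone-2", {"sense", "jam"}),
        Asset("drone-3", {"strike"}), Asset("fighter", {"strike", "sense"})]
mission = Mission(["sense", "jam", "strike"])
mission.compose(pool)
mission.report_loss(pool[0], pool)   # drone-1 is shot down; the mosaic recomposes
print({role: a.name for role, a in mission.assignments.items()})
```

The design point the sketch tries to capture is that no single piece is load-bearing: losing one inexpensive node degrades the mosaic only until the next compose pass, which is the resilience argument DARPA makes for the approach.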

Read full story here…