Navy

Navy Unleashed: Laser Weapons Will Change Warfare Forever

Equipped with AI for instantaneous targeting, directed energy weapons are the future of warfare. The Navy is the first adopter, but as weapon size shrinks, these weapons will become ubiquitous. ⁃ TN Editor

If swarms of enemy small attack boats armed with guns and explosives approached a Navy ship, alongside missile-armed drones and helicopters closing into strike range, ship commanders would instantly begin weighing defensive options, including interceptor missiles, electronic warfare, deck-mounted guns or area weapons such as the Close-In Weapon System.

Now, laser weapons are being added to the equation to counter attacks such as these, bringing new dimensions to maritime warfare on the open sea.

By 2021, U.S. Navy destroyers will be armed with new ship-fired lasers able to sense and incinerate enemy drones, low-flying aircraft and small boat attacks — all while firing at the speed of light.

Lasers have existed for many years, but the Navy is now adjusting its emerging Tactics, Techniques and Procedures to account for how new high-powered, ship-fired lasers will change ship defenses and attack options.

Lockheed Martin and the Navy have been working on ground attack tests against mock enemy targets to prepare high-energy lasers for war. The weapon, called High Energy Laser with Integrated Optical-dazzler and Surveillance – or HELIOS – is engineered to surveil, track and destroy targets from an integrated ship system consisting of advanced radar, fire control technology and targeting sensors.

Working with the Navy, Lockheed recently completed its Systems Design Review for HELIOS, a process which examines weapon requirements and prepares subsystems and designs. The intent is to engineer an integrated tactical laser system able to receive “real time operating feedback well in advance, before the system hits the ship,” said Brendan Scanlon, HELIOS program director at Lockheed.

The farther away an incoming attack can be detected, the more time commanders have to make time-sensitive combat decisions regarding a possible response. Therefore, having one system that synthesizes sensing and shooting changes the equation for maritime warfare.

Connecting HELIOS’ fire control with ship-based Aegis Radar, used for missile defense, enables a combined system to gather surveillance data from the radar while preparing to destroy the targets.

“Sensors provide cues to laser weapons, with the Aegis operator in the loop. You can use optical sensors to decide what else you are going to do, because the weapon tracks between Aegis and the laser subsystem,” Scanlon added.

This technical range enables new mission possibilities for the laser weapon, such as an ability to “obscure adversaries’ optical sensors.” This can bring a number of advantages, such as throwing incoming drone fire, helicopter attacks or even anti-ship missiles off course.

Developers are now working through a handful of technical challenges known to make it difficult for mobile lasers to operate on certain platforms unless a way can be found to accommodate large amounts of power. The Navy’s program manager for the Zumwalt-class destroyers, Capt. Kevin Smith, addressed this recently at Sea Air Space, explaining that a “power surge” is needed to operate lasers on a ship.

“For directed energy weapons you need a surge. There is technology we are looking at right now to assess how the ship can have the energy storage that would facilitate that surge capacity,” Smith said.

Read full story here…




Geoengineering

Geoengineering Could Start WWIII As Nations React

Weaponizing weather has been a military goal since WWI, but as global warming hysteria proceeded, it was used to ‘cool’ the earth. However, weather does not respect national borders and what one country does can radically affect the weather in its neighbors. ⁃ TN Editor

Climate change may end up causing World War III if individual countries try to save themselves by hacking the weather with a process called geoengineering.

Many experts are in favour of geoengineering, which involves manipulating the atmosphere by blocking sunlight or isolating excess carbon, but weather hacking in one region could have negative impacts in another and lead to global conflict, according to scientists.

Solar geoengineering appears to be the most problematic approach, more so than carbon capture, because it would involve spraying chemicals into the air to block some sunlight.

Speaking on the sun-blocking topic, geoengineering researcher Juan Moreno-Cruz told Business Insider: “The threat of war never is out of the question.”

If geoengineering is going to happen, then all countries would have to be informed and agree, because some areas may be more negatively affected than others.

Andrea Flossmann, a scientist at the World Meteorological Organization, explained in a WMO report: “The atmosphere has no walls. What you add may not have the desired effect in your vicinity, but by being transported along might have undesired effects elsewhere.”

Earth’s temperatures are set to soar to dangerous levels, so many scientists think the unknown consequences of geoengineering are worth the risk.

The worst-case scenario is that Earth’s atmospheric chemistry is irreversibly altered, causing freak weather conditions like monsoons, hurricanes and heatwaves that could kill thousands and increase global tensions.

Read full story here…




Pentagon

Vicious Cycle: Pentagon Buys Services From Tech Giants It Created

Dwight D. Eisenhower warned America about the danger posed by the military-industrial complex, and in the same address he also warned of a technological elite. It is now more apparent than ever that the two forces have merged into a single column. ⁃ TN Editor

The US Department of Defense’s bloated budget, along with CIA venture capital, helped to create tech giants, including Amazon, Apple, Facebook, Google and PayPal. The government then contracts those companies to help its military and intelligence operations. In doing so, it makes the tech giants even bigger.

In recent years, the traditional banking, energy and industrial Fortune 500 companies have been losing ground to tech giants like Apple and Facebook. But the technology on which they rely emerged from the taxpayer-funded research and development of bygone decades. The internet started as ARPANET, an invention of Honeywell-Raytheon working under a Department of Defense (DoD) contract. The same satellites that enable modern internet communications also enable US jets to bomb their enemies, as does the GPS that enables online retailers to deliver products with pinpoint accuracy. Apple’s touchscreen technology originated as a US Air Force tool. The same drones that record breathtaking video are modified versions of Reapers and Predators.

Tax-funded DoD research is the backbone of the modern, hi-tech economy. But these technologies are dual-use. The companies that many of us take for granted – including Amazon, Apple, Facebook, Google, Microsoft and PayPal – are connected indirectly and sometimes very directly to the US military-intelligence complex.

A recent report by Open the Government, a bipartisan advocate of transparency, reveals the extent of Amazon’s contracts with the Pentagon. Founded in 1994 by Jeff Bezos, the company is now valued at $1 trillion, giving Bezos a personal fortune of $131 billion. Open the Government’s report notes that much of the US government “now runs on Amazon,” so much so that the tech giant is opening a branch near Washington, DC. Services provided by Amazon include cloud contracts, machine learning and biometric data systems. But more than this, Amazon is set to enjoy a lucrative Pentagon IT contract under the $10bn Joint Enterprise Defense Infrastructure program, or JEDI. The Pentagon says that it hopes Amazon technology will “support lethality and enhanced operational efficiency.”

The report reveals what it can, but much is protected from public scrutiny under the twin veils of national security and corporate secrecy. For instance, all prospective host cities for Amazon’s second headquarters were asked to sign non-disclosure agreements.

But it doesn’t end there. According to the report, Amazon supplied surveillance and facial recognition software, Rekognition, to the police and FBI, and it has pitched the reportedly inaccurate and race/gender-biased technology to the Department of Homeland Security for its counter-immigration operations. Ten percent of the subsidiary Amazon Web Services’ profits come from government contracts. Departments include the State Department, NASA, the Food and Drug Administration and the Centers for Disease Control and Prevention. In 2013, Amazon won a $600m Commercial Cloud Services (C2S) contract with the CIA. C2S will enable deep learning and data fingerprinting. Amazon’s second headquarters will be built in Virginia, the CIA’s home state. Despite repeated requests, the company refuses to disclose how its personal devices, like Amazon Echo, connect with the CIA.

But Amazon is just the tip of the iceberg.

According to one thorough research article: In the mid-1990s, future Google founders Larry Page and Sergey Brin used indirect Pentagon and other government funding to develop web crawlers and page ranking applications. Around the same time, the CIA’s Directorate of Intelligence and the National Security Agency – under the auspices of the National Science Foundation – funded the Massive Digital Data Systems (MDDS) program. A publication by Sergey Brin acknowledges that he received funding from the MDDS program. According to Professor Bhavani Thuraisingham, who worked on the project, “The intelligence community … essentially provided Brin seed-funding, which was supplemented by many other sources, including the private sector.” The Query Flocks part of Google’s patented PageRank system was developed as part of the MDDS program. Two entrepreneurs, Andreas Bechtolsheim (who set up Sun Microsystems) and David Cheriton, both of whom had previously received Pentagon money, were early investors in Google.

Like Bezos, Brin and Page became billionaires.

Read full story here…




DARPA

DARPA: Funding Wearable Brain-Machine Interfaces

Technocrats at DARPA are intent on creating a non-surgical brain-machine interface as a force-multiplier for soldiers. The research will require “Investigational Device Exemptions” from the U.S. Food and Drug Administration. ⁃ TN Editor

DARPA has awarded funding to six organizations to support the Next-Generation Nonsurgical Neurotechnology (N3) program, first announced in March 2018. Battelle Memorial Institute, Carnegie Mellon University, Johns Hopkins University Applied Physics Laboratory, Palo Alto Research Center (PARC), Rice University, and Teledyne Scientific are leading multidisciplinary teams to develop high-resolution, bidirectional brain-machine interfaces for use by able-bodied service members. These wearable interfaces could ultimately enable diverse national security applications such as control of active cyber defense systems and swarms of unmanned aerial vehicles, or teaming with computer systems to multitask during complex missions.

“DARPA is preparing for a future in which a combination of unmanned systems, artificial intelligence, and cyber operations may cause conflicts to play out on timelines that are too short for humans to effectively manage with current technology alone,” said Al Emondi, the N3 program manager. “By creating a more accessible brain-machine interface that doesn’t require surgery to use, DARPA could deliver tools that allow mission commanders to remain meaningfully involved in dynamic operations that unfold at rapid speed.”

Over the past 18 years, DARPA has demonstrated increasingly sophisticated neurotechnologies that rely on surgically implanted electrodes to interface with the central or peripheral nervous systems. The agency has demonstrated achievements such as neural control of prosthetic limbs and restoration of the sense of touch to the users of those limbs, relief of otherwise intractable neuropsychiatric illnesses such as depression, and improvement of memory formation and recall. Due to the inherent risks of surgery, these technologies have so far been limited to use by volunteers with clinical need.

For the military’s primarily able-bodied population to benefit from neurotechnology, nonsurgical interfaces are required. Yet, in fact, similar technology could greatly benefit clinical populations as well. By removing the need for surgery, N3 systems seek to expand the pool of patients who can access treatments such as deep brain stimulation to manage neurological illnesses.

The N3 teams are pursuing a range of approaches that use optics, acoustics, and electromagnetics to record neural activity and/or send signals back to the brain at high speed and resolution. The research is split between two tracks. Teams are pursuing either completely noninvasive interfaces that are entirely external to the body or minutely invasive interface systems that include nanotransducers that can be temporarily and nonsurgically delivered to the brain to improve signal resolution.

  • The Battelle team, under principal investigator Dr. Gaurav Sharma, aims to develop a minutely invasive interface system that pairs an external transceiver with electromagnetic nanotransducers that are nonsurgically delivered to neurons of interest. The nanotransducers would convert electrical signals from the neurons into magnetic signals that can be recorded and processed by the external transceiver, and vice versa, to enable bidirectional communication.
  • The Carnegie Mellon University team, under principal investigator Dr. Pulkit Grover, aims to develop a completely noninvasive device that uses an acousto-optical approach to record from the brain and interfering electrical fields to write to specific neurons. The team will use ultrasound waves to guide light into and out of the brain to detect neural activity. The team’s write approach exploits the non-linear response of neurons to electric fields to enable localized stimulation of specific cell types.
  • The Johns Hopkins University Applied Physics Laboratory team, under principal investigator Dr. David Blodgett, aims to develop a completely noninvasive, coherent optical system for recording from the brain. The system will directly measure optical path-length changes in neural tissue that correlate with neural activity.
  • The PARC team, under principal investigator Dr. Krishnan Thyagarajan, aims to develop a completely noninvasive acousto-magnetic device for writing to the brain. Their approach pairs ultrasound waves with magnetic fields to generate localized electric currents for neuromodulation. The hybrid approach offers the potential for localized neuromodulation deeper in the brain.
  • The Rice University team, under principal investigator Dr. Jacob Robinson, aims to develop a minutely invasive, bidirectional system for recording from and writing to the brain. For the recording function, the interface will use diffuse optical tomography to infer neural activity by measuring light scattering in neural tissue. To enable the write function, the team will use a magneto-genetic approach to make neurons sensitive to magnetic fields.
  • The Teledyne team, under principal investigator Dr. Patrick Connolly, aims to develop a completely noninvasive, integrated device that uses micro optically pumped magnetometers to detect small, localized magnetic fields that correlate with neural activity. The team will use focused ultrasound for writing to neurons.

Throughout the program, the research will benefit from insights provided by independent legal and ethical experts who have agreed to provide insights on N3 progress and consider potential future military and civilian applications and implications of the technology. Additionally, federal regulators are cooperating with DARPA to help the teams better understand human-use clearance as research gets underway. As the work progresses, these regulators will help guide strategies for submitting applications for Investigational Device Exemptions and Investigational New Drugs to enable human trials of N3 systems during the last phase of the four-year program.

“If N3 is successful, we’ll end up with wearable neural interface systems that can communicate with the brain from a range of just a few millimeters, moving neurotechnology beyond the clinic and into practical use for national security,” Emondi said. “Just as service members put on protective and tactical gear in preparation for a mission, in the future they might put on a headset containing a neural interface, use the technology however it’s needed, then put the tool aside when the mission is complete.”

Read full story here…




Experts: The Only Defense Against Killer AI Is Not Developing It

Out of control killer AI in warfare is inevitable because it will become too complex for human management and control. The only real answer is to not develop it in the first place. ⁃ TN Editor

A recent analysis on the future of warfare indicates that countries that continue to develop AI for military use risk losing control of the battlefield. Those that don’t risk eradication. Whether you’re for or against the AI arms race: it’s happening. Here’s what that means, according to a trio of experts.

Researchers from ASRC Federal, a private company that provides support for the intelligence and defense communities, and the University of Maryland recently published a paper on the preprint server arXiv discussing the potential ramifications of integrating AI systems into modern warfare.

The paper – read here – focuses on the near-future consequences of the AI arms race, under the assumption that AI will not somehow run amok or take over. In essence it’s a short, sober, and terrifying look at how all this machine learning technology will play out, based on analysis of current cutting-edge military AI technologies and their predicted integration at scale.

The paper begins with a warning about impending catastrophe, explaining there will almost certainly be a “normal accident” concerning AI – an expected incident of a nature and scope we cannot predict. Basically, the militaries of the world will break some civilian eggs making the AI arms-race omelet:

Study of this field began with accidents such as Three Mile Island, but AI technologies embody similar risks. Finding and exploiting these weaknesses to induce defective behavior will become a permanent feature of military strategy.

If you’re thinking killer robots duking it out in our cities while civilians run screaming for shelter, you’re not wrong – but robots as a proxy for soldiers isn’t humanity’s biggest concern when it comes to AI warfare. This paper discusses what happens after we reach the point at which it becomes obvious humans are holding machines back in warfare.

According to the researchers, the problem isn’t one we can frame as good and evil. Sure it’s easy to say we shouldn’t allow robots to murder humans with autonomy, but that’s not how the decision-making process of the future is going to work.

The researchers describe it as a slippery slope:

If AI systems are effective, pressure to increase the level of assistance to the warfighter would be inevitable. Continued success would mean gradually pushing the human out of the loop, first to a supervisory role and then finally to the role of a “killswitch operator” monitoring an always-on LAWS.

LAWS, or lethal autonomous weapons systems, will almost immediately scale beyond humans’ ability to work with computers and machines — and probably sooner than most people think. Hand-to-hand combat between machines, for example, will be entirely autonomous by necessity:

Over time, as AI becomes more capable of reflective and integrative thinking, the human component will have to be eliminated altogether as the speed and dimensionality become incomprehensible, even accounting for cognitive assistance.

And, eventually, the tactics and responsiveness required to trade blows with AI will be beyond the ken of humans altogether:

Given a battlespace so overwhelming that humans cannot manually engage with the system, the human role will be limited to post-hoc forensic analysis, once hostilities have ceased, or treaties have been signed.

If this sounds a bit grim, it’s because it is. As Import AI’s Jack Clark points out, “This is a quick paper that lays out the concerns of AI+War from a community we don’t frequently hear from: people that work as direct suppliers of government technology.”

It might be in everyone’s best interest to pay careful attention to how both academics and the government continue to frame the problem going forward.

Read full story here…




DARPA: AI Mosaic Warfare And Multi-Domain Battle Strategy

Technocrats at DARPA are racing to apply Artificial Intelligence to engaged warfare, coordinating all battlefield components into a unified killing machine. Success depends on engineers and computer programmers. ⁃ TN Editor

DARPA is automating air-to-air combat, enabling reaction times at machine speeds and freeing pilots to concentrate on the larger air battle and directing an air wing of drones.

Dogfighting will still be rare in the future but it is part of AI and automation taking over all high-end fighting. New human fighter pilots learn to dogfight because it represents a crucible where pilot performance and trust can be refined. To accelerate the transformation of pilots from aircraft operators to mission battle commanders — who can entrust dynamic air combat tasks to unmanned, semi-autonomous airborne assets from the cockpit — the AI must first prove it can handle the basics.

The vision is that AI handles the split-second maneuvering during within-visual-range dogfights while pilots become orchestra conductors, or higher-level managers, over large numbers of unmanned systems.

DARPA wants mosaic warfare. Mosaic warfare shifts from expensive manned systems to a mix of manned and less-expensive unmanned systems that can be rapidly developed, fielded, and upgraded with the latest technology to address changing threats. Linking together manned aircraft with significantly cheaper unmanned systems creates a “mosaic” where the individual “pieces” can easily be recomposed to create different effects or quickly replaced if destroyed, resulting in a more resilient warfighting capability.

Read full story here…




Goodbye World: AI Arms Race Headed Toward Autonomous Killer Robots

The rapidly emerging AI arms race to create autonomous killing machines is Technocrat insanity at its highest peak. To the Technocrat mind, every problem has a scientific solution; so why not let an armed and lethal AI robot do all the work of human soldiers? ⁃ TN Editor

When it comes to deciding to kill a human in a time of war, should a machine make that decision or should another human?

The question is a moral one, brought to the foreground by the techniques and incentives of modern technology. It is a question whose scope falls squarely under the auspices of international law, and one which nations have debated for years. Yet it’s also a collective action problem, one that requires not just states, but also companies and workers within those companies, to come to an agreement to forgo a perceived advantage. The danger is not so much in making a weapon, but in making a weapon that can choose targets independently of the human responsible for initiating its action.

In a May 8 report by Pax — a nonprofit with the explicit goal of protecting civilians from violence, reducing armed conflict, and building a just peace — the authors look at the existing state of artificial intelligence in weaponry and urge nations, companies and workers to think about how to prevent an AI arms race, instead of thinking about how to win one. Without corrective action, the report warns, the status quo could lead all participants into a no-win situation, with any advantage gained from developing an autonomous weapon being temporary and limited.

“We see this emerging AI arms race and we think if nothing happens that that is a major threat to humanity,” said Frank Slijper, one of the authors on the report. “There is a window of opportunity to stop an AI arms race from happening. States should try to prevent an AI arms race and work toward international regulation. In the meantime, companies and research institutes have a major responsibility themselves to make sure that that work in AI and related fields is not contributing to potential lethal autonomous weapons.”

The report is written with a specific eye toward the seven leading AI powers. These include the five permanent members of the UN security council: China, France, Russia, the United Kingdom and the United States. In addition, the report details the artificial intelligence research of Israel and South Korea, both countries whose geographic and political postures have encouraged the development of military AI.

“We identified the main players in terms of use and research and development efforts on both AI and military use of AI in increasingly autonomous weapons. I couldn’t think of anyone, any state we would have missed out from these seven,” says Slijper. “Of course, there’s always a number eight and the number nine.”

For each covered AI power, the report examines the state of AI, the role of AI in the military, and what is known of cooperation between AI developers in the private sector or universities and the military. For countries like the United States, where military AI programs are named, governing policies can be pointed to, and debates over the relationship of commercial AI to military use are public, the report details that process. The thoroughness of the research is used to underscore Pax’s explicitly activist mission, though it also provides a valuable survey of the state of AI in the world.

As the report maintains throughout, this role of AI in weaponry isn’t just a question for governments. It’s a question for the people in charge of companies, and a question for the workers creating AI for companies.

“Much of it has to do with the rather unique character of AI-infused weapons technology,” says Slijper. “Traditionally, a lot of the companies now working on AI were working on it from a purely civilian perspective to do good and to help humanity. These companies weren’t traditionally military producers or dominant suppliers to the military. If you work for an arms company, you know what you’re working for.”

In the United States, there has been expressed resistance to contributing to Pentagon contracts from laborers in the tech sector. After Google workers’ outcry upon learning of the company’s commitment to Project Maven, which developed a drone-footage processing AI for the military, the company’s leadership agreed to sunset the project. (Project Maven is now managed by the Peter Thiel-backed Anduril.)

Microsoft, too, experienced worker resistance to military use of its augmented reality tool HoloLens, with some workers writing a letter stating that in the Pentagon’s hands, the sensors and processing of the headset made it dangerously close to a weapon component. The workers specifically noted that they had built HoloLens “to help teach people how to perform surgery or play the piano, to push the boundaries of gaming, and to connect with the Mars Rover,” all of which is a far cry from aiding the military in threat identification on patrol.

“And I think it is for a lot of people working in the tech sector quite disturbing that, while initially, that company was mainly or only working on civilian applications of that technology, now more and more they see these technologies also been used for military projects or even lethal weaponry,” said Slijper.

Slijper points to the Protocol on Blinding Laser Weapons as a way the international community regulated a technology with both civilian and military applications to ensure its use fell within the laws of war.

Read full story here…




If Seeing Is Believing, Get Ready To Be Deceived

When the eco-world gets its hands on ‘deep fake’ AI software, it can make earth images that cannot be detected as false, inserting things that are not there and removing things that are there. Forget ‘lying with statistics’; now it’s lying with images. ⁃ TN Editor

Step 1: Use AI to make undetectable changes to outdoor photos. Step 2: Release them into the open-source world and enjoy the chaos.

Worries about deep fakes—machine-manipulated videos of celebrities and world leaders purportedly saying or doing things that they really didn’t—are quaint compared to a new threat: doctored images of the Earth itself.

China is the acknowledged leader in using an emerging technique called generative adversarial networks to trick computers into seeing objects in landscapes or in satellite images that aren’t there, says Todd Myers, automation lead and Chief Information Officer in the Office of the Director of Technology at the National Geospatial-Intelligence Agency.

“The Chinese are well ahead of us. This is not classified info,” Myers said Thursday at the second annual Genius Machines summit, hosted by Defense One and Nextgov. “The Chinese have already designed; they’re already doing it right now, using GANs—which are generative adversarial networks—to manipulate scenes and pixels to create things for nefarious reasons.”

For example, Myers said, an adversary might fool your computer-assisted imagery analysts into reporting that a bridge crosses an important river at a given point.

“So from a tactical perspective or mission planning, you train your forces to go a certain route, toward a bridge, but it’s not there. Then there’s a big surprise waiting for you,” he said.

First described in 2014, GANs represent a big evolution in the way neural networks learn to see and recognize objects and even detect truth from fiction.

Say you ask your conventional neural network to figure out which objects are what in satellite photos. The network will break the image into multiple pieces, or pixel clusters, calculate how those broken pieces relate to one another, and then make a determination about what the final product is, or whether the photos are real or doctored. It’s all based on the experience of looking at lots of satellite photos.

GANs reverse that process by pitting two networks against one another—hence the word “adversarial.” A conventional network might say, “The presence of x, y, and z in these pixel clusters means this is a picture of a cat.” But a GAN network might say, “This is a picture of a cat, so x, y, and z must be present. What are x, y, and z and how do they relate?” The adversarial network learns how to construct, or generate, x, y, and z in a way that convinces the first neural network, or the discriminator, that something is there when, perhaps, it is not.
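To make that adversarial setup concrete, here is a minimal GAN training loop written in PyTorch. It is an illustrative sketch only: the “real” satellite patches are random tensors standing in for an actual imagery dataset, and the tiny fully connected networks, patch size, and training hyperparameters are assumptions chosen for brevity, not anything used by the researchers described in this article.

```python
# Minimal GAN sketch (PyTorch). The "real" patches below are random tensors
# standing in for an actual satellite-imagery dataset -- purely illustrative.
import torch
import torch.nn as nn

PATCH = 32    # side length of a square image patch (assumed)
LATENT = 64   # size of the noise vector fed to the generator (assumed)

# Generator: learns to turn noise into patches the discriminator accepts as real.
generator = nn.Sequential(
    nn.Linear(LATENT, 256), nn.ReLU(),
    nn.Linear(256, PATCH * PATCH), nn.Tanh(),
)

# Discriminator: learns to score patches as real (1) or generated (0).
discriminator = nn.Sequential(
    nn.Linear(PATCH * PATCH, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(200):
    # Discriminator update: separate real patches from generated ones.
    real = torch.randn(16, PATCH * PATCH)               # stand-in for real imagery
    fake = generator(torch.randn(16, LATENT)).detach()  # generated patches, no grad to G
    d_loss = loss_fn(discriminator(real), torch.ones(16, 1)) + \
             loss_fn(discriminator(fake), torch.zeros(16, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update: push the discriminator to score fakes as real.
    fake = generator(torch.randn(16, LATENT))
    g_loss = loss_fn(discriminator(fake), torch.ones(16, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

In a real system both networks would be convolutional and trained on large imagery corpora, but the adversarial structure is the same: the discriminator learns to tell real patches from generated ones, while the generator learns to produce patches the discriminator can no longer distinguish from the real thing.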

A lot of scholars have found GANs useful for spotting objects and sorting valid images from fake ones. In 2017, Chinese scholars used GANs to identify roads, bridges, and other features in satellite photos.

The concern, as AI technologists told Quartz last year, is that the same technique that can discern real bridges from fake ones can also help create fake bridges that AI can’t tell from the real thing.

Myers worries that as the world comes to rely more and more on open-source images to understand the physical terrain, just a handful of expertly manipulated data sets entered into the open-source image supply line could create havoc. “Forget about the [Department of Defense] and the [intelligence community]. Imagine Google Maps being infiltrated with that, purposefully? And imagine five years from now when the Tesla [self-driving] semis are out there routing stuff?” he said.

When it comes to deep fake videos of people, biometric indicators like pulse and speech can defeat the fake effect. But faked landscapes aren’t vulnerable to the same techniques.

Read full story here…




Pentagon Pursues AI-Powered Weapons Despite Public Outcry

Technocrats in the military who build simply because they can are a threat to the entire world. Technology that serves and helps humans, rather than killing or controlling them, is what mankind desires. ⁃ TN Editor

The controversy surrounding military artificial intelligence is rooted in “grave misperceptions” about what the department is actually trying to do, according to current and former Defense officials.

Protecting the U.S. in the decades ahead will require the Pentagon to make “substantial, sustained” investments in military artificial intelligence, and critics need to realize it doesn’t take that task lightly, according to current and former Defense Department officials.

Efforts to expand the department’s use of AI systems have been met with public outcry among many in the tech and policy communities who worry the U.S. will soon entrust machines to make life-and-death decisions on the battlefield. Last year, employee protests led Google to pull out of an Air Force project that used machine learning to sort through surveillance footage.

On Wednesday, officials said the Pentagon is going to great lengths to ensure any potential applications of AI adhere to strict ethical standards and international norms. Even if the U.S. military balks on deploying the tech, they warned, global adversaries like Russia and China certainly will not, and their ethical framework will likely be lacking.

“The Department of Defense is absolutely unapologetic about pursuing this new generation of AI-enabled weapons,” former Deputy Defense Secretary Robert Work said Wednesday at an event hosted by AFCEA’s Washington, D.C. chapter. “If we’re going to succeed against a competitor like China that’s all in on this competition … we’re going to have to grasp the inevitability of AI.”

Released in February, the Pentagon’s AI strategy explicitly requires human operators to have the ability to override any decisions made by a military AI system and ensures the tech abides by the laws of armed conflict.

“I would argue the U.S. military is the most ethical military force in the history of warfare, and we think the shift to AI-enabled weapons will continue this trend,” Work said. And despite the criticism, he added, the tech could potentially save lives by reducing friendly fire and avoiding civilian casualties.

Lt. Gen. Jack Shanahan, who leads the department’s newly minted Joint Artificial Intelligence Center, told the audience much of the criticism he’s heard directed at military AI efforts is rooted in “grave misperceptions about what [the department] is actually working on.” While some may envision a general AI system “that’s going to roam indiscriminately across the battlefield,” he said, the tech will only be narrowly applied, and humans will always stay in the loop.

If anything, the outcry shows the Pentagon isn’t engaging enough with industry about the projects it’s pursuing, according to Shanahan.

Read full story here…




Drone Swarms

Killer Robots: Russia Races To Build ‘Ground Force’ Of Self-Driving Tanks And ‘Drone Swarms’

Talk of a ban on autonomous killer robots is useless chatter as the arms race between superpowers progresses. AI and networking are enabling fearsome weapons systems that can kill without human intervention. ⁃ TN Editor

A terrifying new video showcases some of Russia’s latest killer robot technology.

AI-controlled mini-tanks and swarms of autonomous cat-sized drones obliterate targets in the propaganda clip released by the Kremlin.

The robots are designed to assist Russian infantry, and are currently controlled remotely by a human.

However, in the future the tech will be fully autonomous, meaning it can target and kill enemies without needing help from a human.

Russia’s Advanced Research Foundation (ARF), the military research division behind the new technology, said the ultimate goal is to have an army of robots controlled by AI.

https://www.youtube.com/watch?v=7ZM8HqjmCgE

“The evolution of combat robots is on the path of increasing the ability to perform tasks in autonomous mode with a gradual reduction in the role of the operator,” a spokesperson told C4ISRNET.

The video, uploaded to YouTube by the ARF, shows off the terrifying capabilities of the killer tech.

In it, a mini-tank is shown dashing over snow while acquiring and firing at targets.

The deadly vehicle lines up with a soldier and follows his line of sight, pointing its huge guns in whatever direction his rifle turns.

It seems to suggest that Russia’s AI tanks will one day autonomously follow their handler’s aim to dish out extra firepower.

As well as mini-tanks, the video also shows off Russia’s military drone technology.

A swarm of quadrocopters rises in a coordinated formation and whizzes across the firing range.

They appear to drop explosives on targets, leaving them smouldering in the snow.

Russia is not the only country developing autonomous weapons.

Read full story here…