Google Providing AI Technology To Defense Department’s Algorithmic Warfare

A U.S. Air Force pilot, left, and a sensor operator, right, prepare to launch an MQ-1B Predator unmanned aerial vehicle from a ground control station at a secret air base in the Persian Gulf region on Jan. 7, 2016.
Google censors everyone not sympathetic to its Technocrat ideology, but has no problem working with fellow Technocrats at the Department of Defense. In his farewell speech, President Dwight Eisenhower warned America about the Military-Industrial Complex and then about the Scientific Elite exerting power over society. This story indicates that they have merged together quite naturally. ⁃ TN Editor

Google has quietly secured a contract to work on the Defense Department’s new algorithmic warfare initiative, providing assistance with a pilot project to apply its artificial intelligence solutions to drone targeting.

The military contract with Google is routed through a Northern Virginia technology staffing company called ECS Federal, obscuring the relationship from the public.

The contract, first reported Tuesday by Gizmodo, is part of a rapid push by the Pentagon to deploy state-of-the-art artificial intelligence technology to improve combat performance.

Google, which has made strides in applying its proprietary deep learning tools to improve language translation and image recognition, has formed a cross-team collaboration within the company to work on the AI drone project.

The team, The Intercept has learned, is working to develop deep learning technology to help drone analysts interpret the vast image data vacuumed up from the military’s fleet of 1,100 drones to better target bombing strikes against the Islamic State.

The race to adopt cutting-edge AI technology was announced in April 2017 by then-Deputy Defense Secretary Robert Work, who unveiled an ambitious plan called the Algorithmic Warfare Cross-Functional Team, code-named Project Maven. The initiative, Work wrote in an agency-wide memo, is designed to “accelerate DoD’s integration of big data and machine learning” and “turn the enormous volume of data available to DoD into actionable intelligence and insights at speed.”

The first phase of Project Maven, which incorporates multiple teams from across the Defense Department, is an effort to automate the identification and classification of images taken by drones — cars, buildings, people — providing analysts with increased ability to make informed decisions on the battlefield.

“The technology flags images for human review, and is for non-offensive uses only,” a Google spokesperson told Bloomberg. “Military use of machine learning naturally raises valid concerns. We’re actively discussing this important topic internally and with others as we continue to develop policies and safeguards around the development and use of our machine learning technologies.”

The idea is essentially to provide a recommendation tool: the AI program quickly singles out points of interest around a given type of event or target, so that drone analysts can work more efficiently.
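Neither Google nor the Pentagon has published details of the Maven pipeline, but the workflow described above — an automated scorer that flags candidate frames for a human analyst rather than acting on them itself — is a familiar pattern in applied machine learning. The sketch below is purely illustrative: the classifier is a random stand-in, and every name in it is hypothetical, not drawn from any published Maven code.

```python
# Illustrative sketch only: a generic "flag for human review" loop of the kind
# described in the article. The classifier here is a random stand-in; it does
# not represent Google's or the Pentagon's actual Project Maven system.
import random
from dataclasses import dataclass
from typing import Iterable, List

LABELS = ["vehicle", "building", "person"]  # object classes mentioned in the article


@dataclass
class Detection:
    frame_id: int
    label: str
    confidence: float


def classify_frame(frame_id: int) -> List[Detection]:
    """Hypothetical stand-in for a trained image model scoring one drone frame."""
    return [Detection(frame_id, label, random.random()) for label in LABELS]


def flag_for_review(frame_ids: Iterable[int], threshold: float = 0.8) -> List[Detection]:
    """Collect detections above a confidence threshold for a human analyst to review.

    The system only recommends; the decision remains with the analyst.
    """
    flagged = []
    for fid in frame_ids:
        for det in classify_frame(fid):
            if det.confidence >= threshold:
                flagged.append(det)
    return flagged


if __name__ == "__main__":
    for det in flag_for_review(range(10)):
        print(f"frame {det.frame_id}: possible {det.label} "
              f"({det.confidence:.0%} confidence) -> queued for analyst review")
```

The point of the design, as the Google spokesperson's statement suggests, is that the software narrows the search space while a human makes every final call.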

The department said last year that, just over six months after its launch, the AI initiative was already being used by intelligence analysts in drone strikes against ISIS at an undisclosed location in the Middle East.

Gregory C. Allen, an adjunct fellow with the Center for a New American Security, says the initiative has a number of unusual characteristics, from its rapid development to its level of integration with contractors.

“The developers had access to the end-users very early on in the process. They recognized that [with] AI systems … you had to understand what your end-user was going to do with them,” Allen said. “The military has an awful lot of experts in analyzing drone imagery: ‘These are the parts of my job I hate, here’s what I’d like to automate.’ There was this iterative development process that was very familiar in the commercial software world, but unfamiliar in the defense world.”

“They were proud of how fast the development went, they were proud of the quality they were getting,” added Allen, co-author of “Artificial Intelligence and National Security,” a report on behalf of the U.S. Intelligence Advanced Research Projects Activity.

Read full story here…
