The Pentagon made public for the first time on Feb. 12 the outlines of its master plan for speeding the injection of artificial intelligence (AI) into military equipment, including advanced technologies destined for the battlefield.
By declassifying key elements of a strategy it had adopted last summer, the Defense Department appeared to be trying to answer criticism from two opposing camps: that it was paying too little heed to the risks of using AI in its weaponry, and that it was not moving aggressively enough to match rival nations' efforts to embrace AI.
The 17-page strategy summary said that AI — a shorthand term for machine-driven learning and decision-making — held out great promise for military applications, and that it “is expected to impact every corner of the Department, spanning operations, training, sustainment, force protection, recruiting, healthcare, and many others.”
It depicted AI’s embrace in solely positive terms, asserting that “with the application of AI to defense, we have an opportunity to improve support for and protection of U.S. service members, safeguard our citizens, defend our allies and partners, and improve the affordability and speed of our operations.”
Stepping back from AI in the face of aggressive AI research efforts by potential rivals would have dire — even apocalyptic — consequences, it further warned. It would “result in legacy systems irrelevant to the defense of our people, eroding cohesion among allies and partners, reduced access to markets that will contribute to a decline in our prosperity and standard of living, and growing challenges to societies that have been built upon individual freedoms.”
The publication of the Pentagon strategy's core concepts comes eight months after a Silicon Valley revolt against the military's premier AI research program. Thousands of Google employees signed a petition protesting the company's involvement in Project Maven, an effort meant to speed up the analysis of drone video so that military personnel could more readily identify potential targets, and on June 1 Google announced that it would back out of the project.
But the release of the strategy makes clear that the Trump administration isn't having second thoughts about the utility of AI. It says the focus of the Defense Department's Joint Artificial Intelligence Center (JAIC), created last June, will be on "near-term execution and AI adoption." And in a section describing image analysis, the document suggests there are some things machines can do better than humans can. It says that "AI can generate and help commanders explore new options so that they can select courses of action that best achieve mission outcomes, minimizing risks to both deployed forces and civilians."