Artificial Intelligence Arms Race Is Inevitable, But Can It Be Controlled?

The World Economic Forum understands that AI has sparked an international arms race, but the outcome is still unclear. To the Technocrat mind, AI is the solution to all of mankind’s societal and personal problems. – TN Editor

The machines rise, subjugating humanity. It’s a science fiction trope that’s almost as old as machines themselves. The doomsday scenarios spun around this theme are so outlandish – like The Matrix, in which human-created artificial intelligence plugs humans into a simulated reality to harvest energy from their bodies – it’s difficult to visualize them as serious threats.

Meanwhile, artificially intelligent systems continue to develop apace. Self-driving cars are beginning to share our roads; pocket-sized devices respond to our queries and manage our schedules in real time; algorithms beat us at Go; robots become better at getting up when they fall over. It’s obvious how developing these technologies will benefit humanity. But then – don’t all the dystopian sci-fi stories start out this way?

Any discussion about the dystopian potential of AI risks gravitating towards one of two extremes. One is overly credulous scare-mongering. Of course, Siri isn’t about to transmogrify into murderous HAL from 2001: A Space Odyssey. But the other extreme is equally dangerous – complacency that we don’t need to think about these issues, because humanity-threatening AI is decades or more away.

It is true that the artificial “superintelligence” beloved of sci-fi may be many decades in the future, if it is possible at all. However, a recent survey of leading AI researchers by TechEmergence found a wide variety of concerns about the security dangers of AI in a much more realistic, 20-year timeframe – including financial system meltdown as algorithms interact unexpectedly, and the potential for AI to help malicious actors optimize biotechnological weapons.

These examples show how, alongside technological progress on many fronts, the Fourth Industrial Revolution is promising a rapid and massive democratization of the capacity to wreak havoc on a very large scale. On the dark side of the “deep web”, where information is hidden from search engines, destructive tools across a range of emerging technologies already exist for sale, from 3D-printed weapons to fissile material and equipment for genetic engineering in home laboratories. In each case, AI exacerbates the potential for harm.

Consider another possibility mentioned in the TechEmergence survey. If we combine a gun, a quadrocopter drone, a high-resolution camera, and a facial recognition algorithm that wouldn’t need to be much more advanced than the current best in class, we could in theory make a machine we can program to fly over crowds, seeking particular faces and assassinating targets on sight.

Such a device would require no superintelligence. It is conceivable using current, “narrow” AI that cannot yet make the kind of creative leaps of understanding across distinct domains that humans can. When “artificial general intelligence”, or AGI, is developed – as seems likely, sooner or later – it will significantly increase both the potential benefits of AI and, in the words of Jeff Goodell, its security risks, “forcing a new kind of accounting with the technological genie”.

But not enough thinking is being done about the weaponizable potential of AI. “Navigating the future of technological possibilities is a hazardous venture,” Wendell Wallach observes. “It begins with learning to ask the right questions – questions that reveal the pitfalls of inaction, and more importantly, the passageways available for plotting a course to a safe harbor.”

Non-proliferation challenges

Prominent scholars including Stuart Russell have issued a call for action to avoid “potential pitfalls” in the development of AI, a call backed by leading technologists including Elon Musk, Demis Hassabis, Steve Wozniak and Bill Gates. One high-profile pitfall could be “lethal autonomous weapons systems” (LAWS) – or, more colloquially, “killer robots”. Technological advances in robotics and the digital transformation of security have already changed the fundamental paradigm of warfare. According to Christopher Coker, “21st-century technology is changing our understanding of war in deeply disturbing ways.” Fully developed LAWS will likely transform modern warfare as dramatically as gunpowder and nuclear arms.

The U.N. Human Rights Council has called for a moratorium on the further development of LAWS, while other activist groups and campaigns have advocated a full ban, drawing an analogy with chemical and biological weapons, which the international community considers beyond the pale. For the third year in a row, United Nations Member States met last month to debate the call for a ban, and how to ensure that any further development of LAWS stays within international humanitarian law. However, when groundbreaking weapons technology is no longer confined to a few large militaries, non-proliferation efforts become much more difficult.

The debate is complicated by the fact that definitions remain mired in confusion. Platforms, such as drones, are commonly confused with the weapons that can be loaded onto them. And the idea of systems being asked to execute narrowly defined tasks, such as identifying and eliminating armoured vehicles moving in a specific geographical area, is not always distinguished from the idea of systems being given discretionary scope to interpret more general missions, such as “win the war”.

Read full story here…
