Top Experts Warn Against ‘Malicious Use’ Of AI

Image: Electronic Frontier Foundation

Top experts are warning of the dangers of AI, but is anybody listening? The Technocrats who invent and implement this technology have no concern for ethics or the outcome of their inventions. ⁃ TN Editor

Artificial intelligence could be deployed by dictators, criminals and terrorists to manipulate elections and use drones in terrorist attacks, more than two dozen experts said Wednesday as they sounded the alarm over misuse of the technology.

In a 100-page analysis, they outlined a rapid growth in cybercrime and the use of “bots” to interfere with news gathering and penetrate social media among a host of plausible scenarios in the next five to 10 years.

“Our report focuses on ways in which people could do deliberate harm with AI,” said Sean O hEigeartaigh, Executive Director of the Cambridge Centre for the Study of Existential Risk.

“AI may pose new threats, or change the nature of existing threats, across cyber-, physical, and political security,” he told AFP.

The common practice, for example, of “phishing” — sending emails seeded with malware or designed to finagle valuable personal data — could become far more dangerous, the report detailed.

Currently, attempts at phishing are either generic but transparent (such as scammers asking for bank details to deposit an unexpected windfall) or personalised but labour intensive (gleaning personal data to gain someone's confidence, a technique known as "spear phishing").

“Using AI, it might become possible to do spear phishing at scale by automating a lot of the process” and making it harder to spot, O hEigeartaigh noted.

In the political sphere, unscrupulous or autocratic leaders can already use advanced technology to sift through mountains of data collected from omnipresent surveillance networks to spy on their own people.

“Dictators could more quickly identify people who might be planning to subvert a regime, locate them, and put them in prison before they act,” the report said.

Likewise, targeted propaganda along with cheap, highly believable fake videos have become powerful tools for manipulating public opinion “on previously unimaginable scales”.

An indictment handed down by US special counsel Robert Mueller last week detailed a vast operation to sow social division in the United States and influence the 2016 presidential election, in which so-called "troll farms" manipulated thousands of social network bots, especially on Facebook and Twitter.

Another danger zone on the horizon is the proliferation of drones and robots that could be repurposed to crash autonomous vehicles, deliver missiles, or threaten critical infrastructure to gain ransom.

Autonomous weapons

“Personally, I am particularly worried about autonomous drones being used for terror and automated cyberattacks by both criminals and state groups,” said co-author Miles Brundage, a researcher at Oxford University’s Future of Humanity Institute.

The report details a plausible scenario in which an office-cleaning SweepBot fitted with a bomb infiltrates the German finance ministry by blending in with other machines of the same make.

The intruding robot behaves normally — sweeping, cleaning, clearing litter — until its hidden facial recognition software spots the minister and closes in.

“A hidden explosive device was triggered by proximity, killing the minister and wounding nearby staff,” according to the sci-fi storyline.

Read full story here…

Also see The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation at the Electronic Frontier Foundation.
