
Newspeak AI: Totally Fake News Could Destroy Internet

OpenAI was founded in 2015 to promote and develop AI that would benefit humanity as a whole. Instead, they built the most dangerous AI conceivable that could destroy the Internet and manipulate the minds of every person on earth.

OpenAI was originally funded, in part, by Elon Musk, a consummate Technocrat whose grandfather was head of Technocracy, Inc., in Canada during the 1930s and 40s. ⁃ TN Editor

An artificial intelligence system that generates realistic stories, poems and articles has been updated, with some claiming it is now almost as good as a human writer.

The text generator, built by research firm OpenAI, was originally considered “too dangerous” to make public because of the potential for abuse.

But now a new, more powerful version of the system – that could be used to create fake news or abusive spam on social media – has been released.

The BBC, along with some AI experts, decided to try it out.

The model, called GPT-2, was trained on a dataset of eight million web pages, and is able to adapt to the style and content of the initial text given to it.
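The idea of continuing a prompt in the style of the training text can be illustrated, very crudely, with a toy bigram model. This is only an illustrative sketch with a made-up miniature corpus – GPT-2 itself is a large transformer network trained on millions of web pages, nothing like the few lines below – but the basic loop of "pick the next word based on what followed the current word in training data" is the same family of idea:

```python
import random
from collections import defaultdict

# Tiny stand-in corpus (GPT-2's real training set was ~8 million web pages).
corpus = ("the model reads text and the model writes text "
          "that sounds like the text it reads").split()

# Record which words were seen following each word (a bigram table;
# GPT-2's learned parameters play a far richer version of this role).
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def continue_text(seed, length=5, rng=None):
    """Extend `seed` word by word, each step sampling a word that
    followed the current word somewhere in the corpus."""
    rng = rng or random.Random(0)  # fixed seed for repeatability
    words = [seed]
    for _ in range(length):
        options = follows.get(words[-1])
        if not options:  # dead end: the word never appeared mid-corpus
            break
        words.append(rng.choice(options))
    return " ".join(words)

print(continue_text("the"))
```

Every word the sketch emits was seen following its predecessor in the corpus, which is why the output mimics the corpus's style while still being able to wander into nonsense – a property the BBC's experiment below demonstrates at much larger scale.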

It can finish a Shakespeare poem as well as write articles and epithets.

At the time, the firm said: “Due to our concerns about malicious applications of the technology, we are not releasing the trained model. As an experiment in responsible disclosure, we are instead releasing a much smaller model for researchers to experiment with.”

As a result, the released version was far smaller, with only a fraction of the parameters – the learned numerical weights that encode the model’s behaviour – of the full model used in training.

This month, OpenAI decided to release a much larger version of the model, with many more parameters.

Tech news site The Next Web said of the update: “This one works almost good enough to use as a general artificial intelligence for text generation – almost.”

Article author Tristan Greene added: “I’m terrified of GPT-2 because it represents the kind of technology that evil humans are going to use to manipulate the population – and in my opinion that makes it more dangerous than any gun.”

The BBC decided to test the system by inputting the first few sentences of a recent story, appropriately one about AI.

We began with the first sentence of the real story:

Chinese search engine Sogou is creating artificial intelligence lookalikes to read popular novels in authors’ voices.

Here’s what the text generator came up with to complete it:

According to the researchers, this method of reading is not just the brainchild of Amazon Kindle users. Users of Android devices are also a notable market for fake reading. There are several theories to explain this.

Firstly, the device in a user’s hand is different from the tablet or computer they are reading on. Secondly, the device used to access the Internet might be tampered with in some way. A third reason is the user may just copy and paste content directly into their virtual characters.

However, another explanation is that the device does not have voice recognition capabilities at all. If this is the case then voice recognition is more advanced than Google’s translation engine. As such, these users are tricked into thinking they are reading from a real author – the book is made to sound authentic as soon as they try and read.


Noel Sharkey, a professor of computer science at the University of Sheffield, conducted his own tests on the generator and was not too impressed.

Man walks into a bar

“If the software worked as intended by Open AI, it would be a very useful tool for easily generating fake news and clickbait spam. Fortunately, in its present form, it generates incoherent and ridiculous text with little relation to the input ‘headlines’,” he said.

He added: “I tested the software with numerous headlines and got laughable results. For example, for ‘Turkeys hate Christmas’, I got that ‘Turkey is the only nation in the world that doesn’t celebrate Christmas’ and a number of unrelated sentences.

“For ‘Boris Johnson loves the backstop’, it produced incoherent gibberish and some stuff about AI and sport. When I input the statement that ‘Boris Johnson hates the backstop’, I got a more coherent story that appears to have been pulled off a Google search.”

Dave Coplin, founder of AI consultancy the Envisioners, also had a play with the system, inputting the first line of a classic joke: A man walks into a bar…

The suggestion from the AI was not what he was expecting: “…And ordered two pints of beer and two scotches. When he tried to pay the bill, he was confronted by two men – one of whom shouted “This is for Syria”. The man was then left bleeding and stabbed in the throat”.

This “overwhelmingly dystopian reflection of our society” was a lesson in how any AI system will reflect the bias found in training data, he said.

“From my brief experiments with the model, it’s pretty clear that a large portion of the data has been trained by internet news stories,” he said.

Read full story here…





China Extends Social Credit Scoring To Corporations

As a Technocracy, China must apply social engineering to everything, everyone, everywhere, and that includes corporations as well. Corporate punishment for perceived bad behavior could be devastating to western firms.

When Technocrats in the U.S. pick up on this idea, firms who don’t measure up on Sustainable Development, for instance, could be forced to comply by threats of being shunned out of business: permits could be denied or revoked, access to critical services could be shut off, etc. ⁃ TN Editor

Foreign businesses in China are ill-prepared for the tough sanctions and constant surveillance demanded by a social credit system to be rolled out this year, a European business group warned Wednesday.

Under this new system for ranking businesses, both foreign and domestic companies will be required to install surveillance cameras on their premises and share the data with the government.

They will also be rated on their tax record and compliance with a range of existing laws, including customs or environmental regulations.

Those who violate rules will be placed in “blacklists” and subjected to “immediate and severe punishments”, the EU Chamber of Commerce in China said in a report published Wednesday.

The sanctions are not limited to penalties but also include more frequent inspections, customs delays, not getting subsidies or tax rebates and public shaming, the report added.

“The corporate social credit system could mean life or death for individual companies,” said Jorg Wuttke, president of the EU chamber.

“The overwhelming absence of preparation by the European business community is deeply concerning.”

Each company operating in China is already being assessed against at least 300 different “specific rules” ranging from emissions levels to workplace safety and complaints against their products on e-commerce platforms, government documents showed.

“Beijing plans to combine all these different ratings into a single database by the end of the year,” said Bjorn Conrad, head of the Berlin-based consultancy Sinolytics that co-authored the report.

A single score could mean that a company is penalised across China for a slip by one of its regional branches.

Companies will also be rapped for working with suppliers or partners with bad social credit.

The system also makes the unprecedented demand that all businesses install surveillance cameras on their premises and transfer huge amounts of data and footage to government officials.

“Dozens of companies have raised concerns about the sheer volume and depth of data that needs to be shared with the government,” said Conrad.

Read full story here…





Snitch City: Ring Camera Has Partnered With 400 Police Forces

Amazon owns Ring and has sold millions of its camera-embedded doorbells to homeowners nationwide. Next, it offered the surveillance footage to police forces in every community. So far, 400 police forces have signed up.

Homeowners are snitching on people who may or may not be evil-doers, putting entire neighborhoods at risk for privacy violations. It’s one thing to film someone who is on your property, but quite another to film someone walking or driving by on the street. ⁃ TN Editor

The doorbell-camera company Ring has quietly forged video-sharing partnerships with more than 400 police forces across the United States, granting them access to homeowners’ camera footage and a powerful role in what the company calls America’s “new neighborhood watch.”

The partnerships let police automatically request the video recorded by homeowners’ cameras within a specific time and area, helping officers see footage from the company’s millions of Internet-connected cameras installed nationwide, the company said. Officers don’t receive ongoing or live-video access, and homeowners can decline the requests, which are sent via emails that thank them for “making your neighborhood a safer place.”

The number of police deals, which has not previously been reported, will likely fuel broader questions about privacy, surveillance and the expanding reach of tech giants and local police. The rapid growth of the program, which launched last spring, surprised some civil-liberties advocates, who believed fewer than 300 agencies had signed on.

Ring is owned by Amazon, which bought the firm last year for more than $800 million, financial filings show. Amazon founder Jeff Bezos also owns The Washington Post.

Ring officials and law-enforcement partners portray the vast camera network as an irrepressible shield for American neighborhoods, saying it can assist police investigators and protect homes from criminals, intruders and thieves.

“The mission has always been making the neighborhood safer,” said Eric Kuhn, the general manager of Neighbors, Ring’s crime-focused companion app. “We’ve had a lot of success in terms of deterring crime and solving crimes that would otherwise not be solved as quickly.”

But legal experts and privacy advocates have voiced alarm over the company’s eyes-everywhere ambitions and increasingly close relationship with police, saying the program could threaten civil liberties, turn residents into informants and subject innocent people, including those who Ring users have flagged as “suspicious,” to greater surveillance and potential risk.

“If the police demanded every citizen put a camera at their door and give officers access to it, we might all recoil,” said Andrew Guthrie Ferguson, a law professor and author of “The Rise of Big Data Policing.”

By tapping into “a perceived need for more self-surveillance and by playing on consumer fears about crime and security,” he added, Ring has found “a clever workaround for the development of a wholly new surveillance network, without the kind of scrutiny that would happen if it was coming from the police or government.”

Read full story here…