Facebook Shareholders Slam Zuckerberg As The ‘Dictator’

The head of Facebook, Mark Zuckerberg, is actually a Technocrat, but Technocrats are essentially dictators, as in China. Inwardly, Technocrats are motivated and guided by their Utopian goals of transforming the world for its own good, even if it requires forcing compliance. ⁃ TN Editor

Facebook is run like a “dictatorship,” according to some of the company’s shareholders, who slammed CEO Mark Zuckerberg at the annual shareholder meeting in Menlo Park, California, Thursday.

In the last few months Zuckerberg has faced criticism from users and regulators, as well as lawmakers in the Senate, the House and the European Parliament. The U.N. has even accused his company of facilitating genocide in Myanmar.

This week it was the turn of Facebook’s own investors, who laid into Zuckerberg for his poor leadership and the myriad failings at the world’s biggest social network.

The omens were bad from the start: a plane flew overhead bearing the sign: “YOU BROKE DEMOCRACY,” courtesy of the anti-monopoly activist group Freedom From Facebook, which wants the network broken up.

David Kling, Facebook’s deputy general counsel, chaired the meeting. Three minutes in, while explaining that shareholders would not be able to speak until the voting was over, Kling was interrupted by a shareholder who shouted, “Shareholder democracy is already lacking at Facebook.” Kling repeatedly told her to sit down, and she was eventually ejected from the meeting.

It didn’t get much better for Facebook’s leadership.

Christine Jantz from NorthStar Asset Management was first up, proposing to change the voting structure so that one share equals one vote, an issue she said was more important now given the “highly concerning issues the company is currently facing.”

“If privacy is a human right — as stated by Microsoft’s CEO — then we contend that Facebook’s poor stewardship of customer data is tantamount to a human rights violation,” Jantz said.

“The shareholder revolt is yet another sign that Facebook can’t get ahead of the curve in terms of trying to address the wider issues that its business faces,” Andy Barr, founder of 10 Yetis, a digital media agency, told VICE News.

“At a time where global businesses are fighting to try and demonstrate transparency and democracy, Facebook, seemingly driven by Zuckerberg himself, is trying to shy away from any kind of accountability, especially with rules like his shares carrying 10 times more weight than normal shareholders,” Barr said.

Read full story here…




Hate: Google Listed ‘Nazism’ As The Ideology Of California Republican Party

Don’t blame it on Wikipedia, either.

The Google culture absolutely detests Republicans and conservative thought. Its own employees are shunned, redlined and even fired for their conservative beliefs.

When Vice News discovered that searching for “California Republican Party” on Google displayed an infobox listing its number one ideology as ‘Nazism’, even before Conservatism, Market Liberalism and Fiscal Conservatism, the Internet caught wind and was immediately enraged.

Google yanked the entire Ideology text area faster than the blink of an eye, but the picture was already posted all over the Internet.

Incredibly, Google blamed the incident on Wikipedia, calling it ‘vandalism’:

We regret that vandalism on Wikipedia briefly appeared on our search results. This was not the result of a manual change by Google. We have systems in place that catch vandalism before it impacts search results, but occasionally errors get through, and that happened here.

While there is no proof that a Google employee edited the Wikipedia page for the California Republican Party, how is it that the global leader in AI could not spot such a glaring misstatement?

Here’s how: I have stated several times that AI has been demonstrated to take on the same biases as its creators. If Google’s AI missed this, it was because of the bias inherited from its programmers.

The level of vitriol in Silicon Valley against conservatives is hard to fathom. Recently, my friend Michael Shaw erected an electronic billboard on the northbound 101 at the 880 interchange in San Jose. It was an expression of free speech, and the sign rotated through several images. Within hours of showing a pro-Trump message, the sign was vandalized with eggs, causing extensive damage, visible as the white dots in the photo below.

A new electronic billboard on northbound 101 at the 880 interchange in San Jose, Calif., shows signs of vandalism, Monday, April 9, 2018. The sign has been displaying political ads including one last week that supported the reelection of President Trump. (Karl Mondon/Bay Area News Group)


If you are thinking, ‘What’s a few eggs?’, just consider that this sign is on one of the busiest freeways in California. The vandals risked life and limb to stop and launch these calcified missiles.

Of course, there is no proof that any Google employees participated in destroying this sign, but the point about the culture stands. The Technocrats in Silicon Valley hate conservative thought.




Tommy Robinson: Free Speech Is Critically Wounded In London

Free Speech has been dealt a lethal blow in England, where an activist reporter was railroaded into prison after covering a trial while broadcasting to social media. His hour-long livestream to Facebook was watched over 250,000 times within hours of posting. The judge was enraged that Robinson encouraged people to share it: “I regard it as a serious aggravating feature that he was encouraging others to share it and it had been shared widely. That is the nature of the contempt.” ⁃ TN Editor

The rule of law is fragile, and relies on the self-restraint of the majority. In a just society, the majority obey the law because they believe it represents universal values – moral absolutes. They obey the law not for fear of punishment, but for fear of the self-contempt that comes from doing wrong.

As children, we are told that the law is objective, fair and moral. As we grow up, though, it becomes increasingly impossible to avoid the feeling that the actual law has little to do with the Platonic stories we were told as children. We begin to suspect that the law may in fact – or at least at times – be a coercive mechanism designed to protect the powerful, appease the aggressive, and bully the vulnerable.

The arrest of Tommy Robinson is a hammer-blow to the fragile base of people’s respect for British law. The reality that he could be grabbed off the street and thrown into a dangerous jail – in a matter of hours – is deeply shocking.

Tommy was under a suspended sentence for filming on courthouse property in the past. On May 25, 2018, while live-streaming his thoughts about the sentencing of alleged Muslim child rapists, Tommy very consciously stayed away from the court steps, constantly used the word “alleged,” and checked with the police to ensure that he was not breaking the law.

Tommy yelled questions at the alleged criminals on their way into court – so what? How many times have you watched reporters shouting questions at people going in and out of courtrooms? You can find pictures of reporters pointing cameras and microphones at Rolf Harris and Gary Glitter, who were accused of similar crimes against children.

Tommy Robinson was arrested for “breaching the peace,” which is a civil proceeding that requires proof beyond a reasonable doubt. Was imminent violence about to erupt from his reporting? How can Tommy Robinson have been “breaching the peace” while wandering around in the rain on a largely empty street, sharing his thoughts on criminal proceedings? There were several police officers present during his broadcast; why did they allow him to break the law for so long?

Was Tommy wrong to broadcast the names of the alleged criminals? The mainstream media, including the state broadcaster, the BBC, had already named them. Why was he punished, but not them?

These are all questions that demand answers.

Even if everything done by the police or the court was perfectly legitimate and reasonable, the problem is that many people in England believe that Tommy Robinson is being unjustly persecuted by his government. The fact that he was arrested so shortly after his successful Day for Freedom event, where he gathered thousands of people in support of free speech, strikes many as a little bit more than a coincidence.

Is the law being applied fairly? Tommy Robinson has received countless death threats over the years, and has reported many of them. Did the police leap into action to track down and prosecute anyone sending those threats?

If the British government truly believes that incarcerating Tommy Robinson is legitimate, then they should call a press conference, and answer as many questions as people have, explaining their actions in detail.

As we all know, there has been no press conference. Instead of transparency, the government has imposed a publication ban – not just on the trial of the alleged child rapists, but on the arrest and incarceration of Tommy Robinson. Not only are reporters unable to ask questions, they are forbidden from even reporting the bare facts about Tommy Robinson’s incarceration.

Why? British law strains – perhaps too hard – to prevent publication of information that might influence a jury, but Tommy’s incarceration was on the order of a judge. He will not get a jury trial for his 13 months of imprisonment. Since there is no jury to influence, why ban reports on his arrest and punishment?

Do these actions strike you as the actions of a government with nothing to hide?

Free societies can only function with a general respect for the rule of law. If the application of the law appears selective, unjust, or political, people begin to believe that the law no longer represents universal moral values. If so, what is their relationship to unjust laws? Should all laws be blindly obeyed, independent of conscience or reason? The moral progress of mankind has always manifested as resistance to injustice. Those who ran the Underground Railroad that helped escaped slaves get from America to Canada were criminals according to the law of their day. We now think of them as heroes defying injustice, because the law was morally wrong.

The inescapable perception that various ethnic and religious groups are accorded different treatment under Western law is one of the most dangerous outcomes of the cult of diversity.

Diversity of thought, opinion, arguments and culture can be beneficial – diversity of treatment under the law fragments societies.

The blind mantra that “diversity is a strength” is an attempt to ignore the most fundamental challenge of multiculturalism, which is: if diversity is a value, what is our relationship to belief systems which do not value diversity?

If tolerance of homosexuality is a virtue, what is our relationship to belief systems that are viciously hostile to homosexuality? If equality of opportunity for women is a virtue, what about cultures and religions which oppose such equality?

And if freedom of speech is a value, what is our relationship to those who violently oppose freedom of speech?

Diversity is a value only if moral values remain constant. We need freedom of speech in part because robust debate in a free arena of ideas is our best chance of approaching the truth.

You need a team with diverse skills to build a house, but everything must rest on a strong foundation. Diversity is only a strength if it rests on universal moral values.

Is Tommy Robinson being treated fairly? If gangs of white men had spent decades raping and torturing little Muslim girls, and a justly outraged Muslim reporter was covering the legal proceedings, would he be arrested?

We all know the answer to that question. And we all know why.

Diversity of opinion is the path to truth – diversity of legal systems is the path to ruin.

If the arrest and incarceration of Tommy Robinson is just, then the government must throw open the doors and invite cross-examination from sceptics. Honestly explain what happened, and why.

Explain why elderly white men accused of pedophilia are allowed to be photographed and questioned by reporters on court steps, while Pakistani Muslims are not.

Explain why a police force that took three decades to start dealing with Muslim rape gangs was able to arrest and incarcerate a journalist within a few scant hours.

Explain why a man can be arrested for breaching the peace when no violence has taken place – or appears about to take place.

To the British government: explain your actions, or open Tommy Robinson’s cell and let him walk free.

Read full story here…




Scientists: Data And AI Can Tell Who Is Lying

As AI facial analysis algorithms proliferate, they will be implemented in every conceivable application and circumstance. However, such software will never be ‘certified’ as 100% effective, creating social chaos as accusations fly. Technocrat scientists who invent this stuff have no view of ethics or societal implications. ⁃ TN Editor

Someone is fidgeting in a long line at an airport security gate. Is that person simply nervous about the wait?

Or is this a passenger who has something sinister to hide?

Even highly trained Transportation Security Administration (TSA) airport security officers still have a hard time telling whether someone is lying or telling the truth – despite the billions of dollars and years of study that have been devoted to the subject.

Now, University of Rochester researchers are using data science and an online crowdsourcing framework called ADDR (Automated Dyadic Data Recorder) to further our understanding of deception based on facial and verbal cues.

They also hope to minimize instances of racial and ethnic profiling that TSA critics contend occurs when passengers are pulled aside under the agency’s Screening of Passengers by Observation Techniques (SPOT) program.

“Basically, our system is like Skype on steroids,” says Tay Sen, a PhD student in the lab of Ehsan Hoque, an assistant professor of computer science. Sen collaborated closely with Kamrul Hasan, another PhD student in the group, on two papers in IEEE Automated Face and Gesture Recognition and the Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies. The papers describe the framework the lab has used to create the largest publicly available deception dataset so far – and why some smiles are more deceitful than others.

Game reveals the truth behind a smile

Here’s how ADDR works: Two people sign up on Amazon Mechanical Turk, the crowdsourcing internet marketplace that matches people to tasks that computers are currently unable to do. A video assigns one person to be the describer and the other to be the interrogator.

The describer is then shown an image and is instructed to memorize as many of the details as possible. The computer instructs the describer to either lie or tell the truth about what they’ve just seen. The interrogator, who has not been privy to the instructions to the describer, then asks the describer a set of baseline questions not relevant to the image. This is done to capture individual behavioral differences which could be used to develop a “personalized model.” The routine questions include “what did you wear yesterday?” — to provoke a mental state relevant to retrieving a memory — and “what is 14 times 4?” — to provoke a mental state relevant to analytical memory.

“A lot of times people tend to look a certain way or show some kind of facial expression when they’re remembering things,” Sen said. “And when they are given a computational question, they have another kind of facial expression.”

These are also questions that the witness would have no incentive to lie about and that provide a baseline of that individual’s “normal” responses when answering honestly.

And, of course, there are questions about the image itself, to which the witness gives either a truthful or dishonest response.

The entire exchange is recorded on a separate video for later analysis.

1 million faces

An advantage of this crowdsourcing approach is that it allows researchers to tap into a far larger pool of research participants – and gather data far more quickly – than would occur if participants had to be brought into a lab, Hoque says. Not having a standardized and consistent dataset with reliable ground truth has been the major setback for deception research, he says. With the ADDR framework, the researchers gathered 1.3 million frames of facial expressions from 151 pairs of individuals playing the game, in a few weeks of effort. More data collection is underway in the lab.

Data science is enabling the researchers to quickly analyze all that data in novel ways. For example, they used automated facial feature analysis software to identify which action units were being used in a given frame, and to assign a numerical weight to each.

The researchers then used an unsupervised clustering technique — a machine learning method that can automatically find patterns without being assigned any predetermined labels or categories.
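For readers curious what that clustering step looks like in practice, here is a rough Python sketch of the general idea. The researchers’ actual code is not published, so the array shape, the use of k-means from scikit-learn, the function name, and the cluster count are illustrative assumptions only.

# Rough illustration only: the study's code is not published, so the array shape,
# the k-means algorithm, and the cluster count below are assumptions for demonstration.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

def cluster_expression_frames(au_weights, n_clusters=5, seed=0):
    """Group video frames by their facial action-unit weights, with no labels supplied."""
    scaled = StandardScaler().fit_transform(au_weights)  # put all action units on a comparable scale
    model = KMeans(n_clusters=n_clusters, random_state=seed, n_init=10)
    labels = model.fit_predict(scaled)                   # one cluster id per frame
    return labels, model.cluster_centers_

# Stand-in data: 1,000 frames, each described by 17 action-unit intensities.
rng = np.random.default_rng(0)
frames = rng.random((1000, 17))
labels, centers = cluster_expression_frames(frames)
print(np.bincount(labels))  # how many frames fall into each expression cluster

Each frame ends up assigned to one of a handful of expression groups, which is roughly how the smile-related “faces” described below can emerge from the data without anyone labeling them in advance.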

“It told us there were basically five kinds of smile-related ‘faces’ that people made when responding to questions,” Sen said. The one most frequently associated with lying was a high-intensity version of the so-called Duchenne smile involving both cheek/eye and mouth muscles. This is consistent with the “Duping Delight” theory that “when you’re fooling someone, you tend to take delight in it,” Sen explained.

More puzzling was the discovery that honest witnesses would often contract their eyes, but not smile at all with their mouths. “When we went back and replayed the videos, we found that this often happened when people were trying to remember what was in an image,” Sen said. “This showed they were concentrating and trying to recall honestly.”

Read full story here…




EU Data ‘Trade War’ Knocks Out Access For Hundreds Of Millions

While Internet data privacy is something that everyone wants, Technocrats in Europe have legislated doom for whole segments of global information. The penalties for non-conforming websites are so high that many have simply dropped out by shuttering EU access altogether.

According to Bloomberg,

The Los Angeles Times, the Chicago Tribune, and The New York Daily News are just some of the outlets telling visitors that, “Unfortunately, our website is currently unavailable in most European countries.”

With about 500 million people living in the European Union, that’s a hard ban on one-and-a-half times the population of the U.S.

Blanket blocking EU internet connections — which will include any U.S. citizens visiting Europe — isn’t limited to newspapers. Popular read-it-later service Instapaper says on its website that it’s “temporarily unavailable for residents in Europe as we continue to make changes in light of the General Data Protection Regulation.”

Google and Facebook have already been ‘accused’ and lawsuits are being prepared.

The only way that Internet companies can protect themselves is to require users to accept their terms of service and, if they refuse, deny them access to the website. Thus, users across Europe are being inundated with ‘forced-compliance’ policy screens popping up everywhere.

In short, European citizens now have data privacy, but with thousands of major sites essentially going dark, their leaders have managed to cut off their noses to spite their faces.





How Facebook And Google Provide Massive Funding To Journalism Around The World

Google and Facebook have committed over $500 million to various journalism programs around the world. There will be no discussion about what fake news actually is, only that it is whatever disagrees with Technocracy and societal engineering. This gives these two companies almost total control over public opinion and behavior. ⁃ TN Editor

In March, Google announced with much fanfare the launch of the Google News Initiative, a $300 million program aimed at “building a strong future for journalism,” as the company put it. That came on top of the previous Digital News Initiative, which was set up by Google in 2015 and included a $170 million innovation fund aimed at the European media industry.

Facebook, too, has been funneling money into journalism projects, including the News Integrity Initiative—a $14 million investment in a project run by City University of New York—and the Facebook Journalism Project, a wide-ranging venture the company says is designed to help media companies develop new storytelling tools and ways of promoting news literacy.

Taken together, Facebook and Google have now committed more than half a billion dollars to various journalistic programs and media partnerships over the past three years, not including the money spent internally on developing media-focused products like Facebook’s Instant Articles and Google’s competing AMP mobile project. The result: These mega-platforms are now two of the largest funders of journalism in the world.

The irony is hard to miss. The dismantling of the traditional advertising model—largely at the hands of the social networks, which have siphoned away the majority of industry ad revenue—has left many media companies and journalistic institutions in desperate need of a lifeline. Google and Facebook, meanwhile, are happy to oblige, flush with cash from their ongoing dominance of the digital ad market.

The result is a somewhat dysfunctional alliance. People in the media business (including some on the receiving end of the cash) see the tech donations as guilt money, something journalism deserves because Google and Facebook wrecked their business. The tech giants, meanwhile, are desperate for some good PR and maybe even a few friends in a journalistic community that—especially now—can seem openly antagonistic.

Given that tangled backstory, it’s no surprise the funding issue is contentious. Should media companies really be involved in rehabbing the images of two of the wealthiest companies on earth, especially when they are fundamentally competitors? Yet, given the financial state of journalism, wouldn’t it be irresponsible not to take the funds?


The reality is that even if the money achieves some good, and even if there are no strings attached (which both companies insist is the case), accepting the largesse of Facebook and Google inevitably pulls the media even further into their orbit. It may not have a direct effect on what someone writes about or how a topic is covered, but it will undoubtedly have a long-term effect on the media and journalism. Are the tradeoffs worth it?

Even some of the people who benefit from the money say they are torn between the desire for badly needed funding that can be put towards a positive purpose, and the sinking feeling that they are being drawn deeper into a relationship with a tech company that has a huge amount of power, and may ultimately use it in ways that are antithetical to journalism. In other words, they worry about being pawns in a PR game.

A former Google staffer who worked on the company’s media programs says even he feels conflicted about the practice. While many of the funded projects are worthwhile, he says, the result is that “a bunch of well-meaning people with good intentions get the money, and slowly they get sucked into a corporate machine that doesn’t have their best interests at heart.”

University of Virginia media studies professor Siva Vaidhyanathan says both Facebook and Google may care about journalism and want it to be healthy, “but they want that to happen on their own terms, and they want that to happen within an ecosystem dominated by these two companies. The British Empire wanted trains in Kenya and India to run well, too. So their concerns are sincere, but the effect is more often than not a deeper immersion in and dependence on these platforms.”

Vaidhyanathan sees an inherent conflict in accepting money from Facebook or Google because “these are two companies that directly compete with major publications for advertising revenues. So you’re basically going into a partnership with a competitor—a competitor that has a significant competitive advantage in terms of price, in terms of scale, in terms of technological expertise. So is that a good business decision? Increasingly, journalistic institutions are feeding the beasts that are starving them.”

According to one estimate by a media research firm, Google and Facebook will account for close to 85 percent of the global digital ad market this year and will take most of the growth in that market—meaning other players will be forced to shrink. That includes many of the traditional publishers and media outlets who now work with them.

Read full story here…




Cornell University Course Examines ‘Derangement’ Of ‘Climate Denialism’

Prestigious institutions of higher education are full of Technocrats who have hijacked the system to promote their pseudo-science climate belief system. These Technocrats are a tiny minority but they have risen to positions of influence. ⁃ TN Editor

A new seminar at Cornell University is determined to shut down “climate denialism,” claiming that there is “mounting evidence” that “global warming is real.”

Deranged Authority: The Force of Culture in Climate Change, worth four academic credits, is set to be taught in the Fall 2018 semester by cultural anthropologist Jennifer Carlson.


The course description asserts that “climate denialism is on the rise,” suggesting the increase is related to the rise of “reactionary, rightwing [sic] politics in the United States, UK, and Germany.” The proposed solution to combat such denialism and assumed ignorance is “climate justice,” even though over 30,000 scientists reject global warming alarmism.

Richard Lindzen, MIT emeritus professor of meteorology and a senior fellow at the Cato Institute, found the course “an insult to the intelligence of the students.”

He clarified to Campus Reform that many scientists do not argue against slight warming of the Earth after the Little Ice Age (the unusually cool period of the Earth around the 1700s A.D.), nor do those critical of anthropogenic climate change argue that humans have made no impact on the planet, merely that the effect has been small and largely beneficial.

“The point of such courses as are proposed for Cornell, is to replace science with belief,” Lindzen argued, adding that students are “encouraged to replace understanding with virtue signaling.”

Course readings will focus on the question of “authority” in the field of climate science, exploring “climate research, popular environmentalist texts, and industry campaigns aimed at obfuscating evidence of ecological collapse.”

Read full story here…




Eric Holder’s Old Law Firm To ‘Advise’ Facebook On Anti-Conservative Bias

The radical left is swarming Facebook to help protect its liberal bias and censorship of conservative thought. Facebook has achieved what left-wing politicians could not accomplish in over 40 years of attempts to stamp out conservative thought in the media. ⁃ TN Editor

Facebook has enlisted a team from the law firm Covington & Burling to advise it on combating perceptions of bias against conservatives. One minor detail: Covington & Burling is the firm of Barack Obama’s left-wing former attorney general, Eric Holder.

According to Axios, the team will be led by former Senate Republican Whip Jon Kyl. Kyl was a vocal critic of Holder during his time in the Senate, and was rated highly by conservative organizations. After Kyl retired from the Senate, he joined Covington & Burling, where Holder had previously been a high-profile partner.

The former attorney general is still a partner at Covington & Burling, and was recently retained by the state of California for their expected legal showdowns with the Trump Administration. Holder has publicly flirted with the idea of running for president in 2020, telling reporters earlier this year that he would make a decision by the end of 2018.

So, to sum up: Facebook, a California-based company, has enlisted the same firm that is providing legal advice to their state against the Trump administration, through none other than Eric Holder, to advise them on combating perceptions of bias against conservatives.

The Heritage Foundation will also be working with Facebook on the same issue.

According to Axios, the conservative think-tank “will convene meetings on these issues with Facebook executives.” Klon Kitchen, a former adviser to Sen. Ben Sasse who now works as a tech policy expert at Heritage, has reportedly hosted an event with Facebook’s head of global policy management.

Facebook has recently been engaged in outreach to conservative organizations, though its focus has not been on censorship concerns but rather on securing free-market allies against the threat of regulation.

One of those allies, Berin Szóka of TechFreedom, told a congressional hearing on social media censorship that regulating social media companies to protect free speech was contrary to conservative values. He did not explain how a market dominated by monopolies like Facebook and Google is still “free.”

Szóka, along with other free-market conservatives, had previously been invited to a meeting with Facebook representatives aimed at fending off the so-called “rush to regulate” the platform, which holds a dominant position in the social media market with over 2 billion users.

Read full story here…




Facebook Creates Special Ethics Team To Examine Bias In AI Software

AI programs inherit the biases of their creators. Thus, the censorship programs that Facebook has created already contain the seeds of technocrat thought and ethics. Are those same people capable of examining their own creations? ⁃ TN Editor

More than ever, Facebook is under pressure to prove that its algorithms are being deployed responsibly.

On Wednesday, at its F8 developer conference, the company revealed that it has formed a special team and developed discrete software to ensure that its artificial intelligence systems make decisions as ethically as possible, without biases.

Facebook, like other big tech companies with products used by large and diverse groups of people, is more deeply incorporating AI into its services. Facebook said this week it will start offering to translate messages that people receive via the Messenger app. Translation systems must first be trained on data, and the ethics push could help ensure that Facebook’s systems are taught to give fair translations.

 “We’ll look back and we’ll say, ‘It’s fantastic that we were able to be proactive in getting ahead of these questions and understand before we launch things what is fairness for any given product for a demographic group,'” Isabel Kloumann, a research scientist at Facebook, told CNBC. She declined to say how many people are on the team.

Facebook said these efforts are not the result of any changes that have taken place in the seven weeks since it was revealed that data analytics firm Cambridge Analytica misused personal data of the social network’s users ahead of the 2016 election. But it’s clear that public sentiment toward Facebook has turned dramatically negative of late, so much so that CEO Mark Zuckerberg had to sit through hours of congressional questioning last month.

Every announcement is now under a microscope.

Facebook stopped short of forming a board focused on AI ethics, as Axon (formerly Taser) did last week. But the moves align with a broader industry recognition that AI researchers have to work to make their systems inclusive. Alphabet’s DeepMind AI group formed an ethics and society team last year. Before that, Microsoft’s research organization established a Fairness Accountability Transparency and Ethics group.

The field of AI has had its share of embarrassments, like when Google Photos was spotted three years ago categorizing black people as gorillas.

Last year, Kloumann’s team developed a piece of software called Fairness Flow, which has since been integrated into Facebook’s widely used FBLearner Flow internal software for more easily training and running AI systems. The software analyzes the data, taking its format into consideration, and then produces a report summarizing it.
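Facebook has not published Fairness Flow’s internals, so the following Python sketch is only a guess at what such a bias report might boil down to: compute a model’s accuracy separately for each demographic group and flag large gaps. The function name, the metric, and the threshold are assumptions for illustration, not Facebook’s actual code.

# Illustrative sketch only: Fairness Flow itself is not public, so everything here
# (function name, accuracy metric, 5% gap threshold) is an assumption for demonstration.
from collections import defaultdict

def group_bias_report(y_true, y_pred, groups, max_gap=0.05):
    """Summarize prediction accuracy per demographic group and flag large disparities."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for truth, pred, group in zip(y_true, y_pred, groups):
        total[group] += 1
        correct[group] += int(truth == pred)
    accuracy = {g: correct[g] / total[g] for g in total}
    gap = max(accuracy.values()) - min(accuracy.values())
    return {
        "per_group_accuracy": accuracy,
        "max_gap": gap,
        "flagged": gap > max_gap,  # a large gap suggests the model treats groups unevenly
    }

# Tiny usage example with made-up predictions for two groups.
print(group_bias_report(
    y_true=[1, 0, 1, 1, 0, 1],
    y_pred=[1, 0, 0, 1, 0, 0],
    groups=["a", "a", "a", "b", "b", "b"],
))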

Kloumann said she’s been working on the subject of fairness since joining Facebook in 2016, just when people in the tech industry began talking about it more openly and as societal concerns emerged about the power of AI.

Read full story here…





AI-Created Fake News: Personalized, Optimized And Even Harder To Stop

Artificial Intelligence is being harnessed to create personalized propaganda, aka fake news, to drive you to decisions that you might not have otherwise made. It will be nearly impossible to tell the difference between real, modified real and outright fake stories. ⁃ TN Editor

Fake news may have already influenced politics in the US, but it’s going to get a lot worse, warns an AI consultant to the CIA.

Sean Gourley, founder and CEO of Primer, a company that uses software to mine data sources and automatically generate reports for the CIA and other clients, told a conference in San Francisco that the next generation of fake news would be far more sophisticated thanks to AI.

“The automation of the generation of fake news is going to make it very effective,” Gourley told the audience at EmTech Digital, organized by MIT Technology Review.

The warning should cause concern at Facebook. The social network has been embroiled in a scandal after failing to prevent fake news, some of it created by Russian operatives, from reaching millions of people in the months before the 2016 presidential election. More recently, the company has been hit by the revelation that it let Cambridge Analytica, a company tied to the Trump presidential campaign, mine users’ personal data.

In recent interviews, Facebook’s CEO, Mark Zuckerberg, suggested that the company would use AI to spot fake news. According to Gourley, AI could be used in the service of the opposite goal as well.

Gourley noted that the fake news seen to date has been relatively simple, consisting of crude, hand-crafted stories posted to social media at regular intervals. Technology such as Primer’s could easily be used to generate convincing fake stories automatically, he said, and that could mean fake reports tailored to an individual’s interests and sympathies and carefully tested before being released, to maximize their impact. “I can generate a million stories, see which ones get the most traction, double down on those,” Gourley said.
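Stripped of its content, the loop Gourley describes is the familiar test-and-amplify pattern from A/B testing. The short Python sketch below shows that pattern in the abstract; the traction() function and the numbers are placeholders, not anything Primer has described building.

# Abstract sketch of a "generate variants, measure traction, double down" loop.
# This is the generic A/B-style selection pattern; traction() is a placeholder
# for whatever engagement signal (clicks, shares) a real system would measure.
import random

def traction(variant):
    """Placeholder engagement score for a content variant."""
    return random.random()

def select_and_amplify(variants, rounds=3, keep_fraction=0.1):
    """Repeatedly keep only the best-performing fraction of variants."""
    pool = variants
    for _ in range(rounds):
        scored = sorted(pool, key=traction, reverse=True)
        keep = max(1, int(len(scored) * keep_fraction))
        pool = scored[:keep]  # "double down" on whatever got the most traction
    return pool

survivors = select_and_amplify(["story_%d" % i for i in range(1000)])
print(survivors[:5])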

Gourley added that fake news has so far been fed into social-media platforms like Facebook essentially at random. A more sophisticated understanding of network dynamics, as well as the mechanisms used to judge the popularity of content, could amplify a post’s effect.

Read full story here…