Google’s Dystopian Research Censorship, Twisting Knowledge

Google’s AI is broken because it reflects the biases of its creators. That’s dangerous. Worse, its legal mob has circled the wagons to protect Google’s reputation and hide its shortcomings. The result? Research papers are being censored by lawyers who know little about what they are censoring. ⁃ TN Editor

In the wake of the firing of Timnit Gebru and other notable AI researchers at Google, Alphabet’s circled the wagons and lawyered up. Reports flow out of Mountain View depicting teams of lawyers censoring scientific research and acting as unnamed collaborators and peer-reviewers.

Most recently, Business Insider managed to interview several researchers who painted a startling and bleak picture of what it’s like to try to conduct research under such an anti-scientific regime.

Per the article, one researcher said:

You’ve got dozens of lawyers — no doubt, highly trained lawyers — who nonetheless actually know very little about this technology … and they’re working their way through your research like English undergrads reading a poem.

The problem here is that Google isn’t censoring research to avoid, say, its secrets getting out. Its lawyers are targeting scientific research that makes the company look bad.

The person quoted above added that they were specifically talking about crossing out references to “fairness” and “bias” and scientists being told to change the results of their work. It’s not only unethical, it’s incredibly dangerous.

The tea: Google’s AI is broken. It might be a trillion-dollar company and the most cutting-edge AI outfit on Earth, but its algorithms are biased. And that’s dangerous.

No matter how you slice it, Google’s AI doesn’t work as well for people who don’t look like the vast majority of Google’s employees (white dudes) as it does for people who do. From Search’s conflation of Black people and animals to the algorithms running the camera on the Pixel 6’s inability to properly process non-white skin tones, Google’s machine-learning woes are well-documented.

This is a big problem and it isn’t easy to fix. Imagine building a car that didn’t work as well for Black people and women as it did for white guys, selling 200 million of them, and then having people slowly learn their automobiles were racist.

There’d be a lot of strong feelings about what that would mean.

Google’s current situation is a lot like that. Its products are everywhere. It can’t just recall Search or put Google Ads on hold for a few days while it rethinks the entire world of deep learning to exclude bias. Why not fix world hunger and make puppies immortal while they’re at it?

So what do you do when you’re one of the richest companies in the world and you come up against a truth so awful that its existence makes your model seem evil?

You do what big tobacco did. You find people willing to say what’s in your company’s best interests and you use them to stop the people telling the truth from sharing their research.

The National Institutes of Health released research in 2007 describing the role of lawyers during the big tobacco legal battles of the previous decades.

In the paper, which is titled “Tobacco industry lawyers as a disease vector,” the researchers attribute the spread of diseases associated with long-term tobacco use to the tactics employed by industry lawyers.

Some key takeaways from the paper include:

  • Despite their obligation to do so, tobacco companies often failed to conduct product safety research or, when research was conducted, failed to disseminate the results to the medical community and to the public.
  • Tobacco company lawyers have been involved in activities having little or nothing to do with the practice of law, including gauging and attempting to influence company scientists’ beliefs, vetting in-house scientific research, and instructing in-house scientists not to publish potentially damaging results.
  • Additionally, company lawyers have taken steps to manufacture attorney-client privilege and work-product cover to assist their clients in protecting sensitive documents from disclosure, have been involved in the concealment of such documents, and have employed litigation tactics that have largely prevented successful lawsuits against their client companies.

Read full story here…

About the Author

Patrick Wood
Patrick Wood is a leading and critical expert on Sustainable Development, Green Economy, Agenda 21, 2030 Agenda and historic Technocracy. He is the author of Technocracy Rising: The Trojan Horse of Global Transformation (2015) and co-author of Trilaterals Over Washington, Volumes I and II (1978-1980) with the late Antony C. Sutton.
Comments


Jim Reinhart

An AI or heuristic will always be a reflection of its creator’s personal biases and opinions. I was a Decision Scientist and Systems Analyst, and the basis of any model is an art form in itself. From risk analysis and the demographics of the population, to level of confidence and variance, cyclicals, trends, and hypothesis testing, these are the factors that carry a certain weight behind the variables in the creation of the model. Einstein did not understand wave theory PERIOD. His hypotheses are wrong from the beginning at a micro level of infinite wave and no understanding of density, string… Read more »

elle

F* Google–a tyrannical corporation of criminals in the globalist cartel.
