Professor: Militant Liberals Are Politicizing Artificial Intelligence

Although originally posted in December 2020, this article, written by a professor emeritus of computer science and engineering, is critical to understanding the threat posed by OpenAI’s “woke” ChatGPT and Google’s Bard. If they are woke or biased, it is because they were written to be that way.

One caveat I must add is that the learning process of an AI consumes virtually everything on the Internet, which contains a high proportion of woke/biased information. Learning from woke stories, books, videos and the like may be enough to produce a woke AI. Even so, it would be possible to programmatically de-bias the conclusions these systems reach, but no effort has been made to do so. ⁃ TN Editor

What do you do if decisions that used to be made by humans, with all their biases, start being made by algorithms that are mathematically incapable of bias? If you’re rational, you should celebrate. If you’re a militant liberal, you recognize this development for the mortal threat it is, and scramble to take back control.

You can see this unfolding at AI conferences. Last week I attended the 2020 edition of NeurIPS, the leading international machine learning conference. What started as a small gathering now brings together enough people to fill a sports arena. This year, for the first time, NeurIPS required most papers to include a ‘broader impacts’ statement, and to be subject to review by an ethics board. Every paper describing how to speed up an algorithm, for example, now needs to have a section on the social goods and evils of this obscure technical advance. ‘Regardless of scientific quality or contribution,’ stated the call for papers, ‘a submission may be rejected for… including methods, applications, or data that create or reinforce unfair bias.’

This was only the latest turn of the ratchet. Previous ones have included renaming the conference to something more politically correct and requiring attendees to explicitly accept a comprehensive ‘code of conduct’ before they can register, which allows the conference to kick attendees out for posting something on social media that officials disapprove of. More seriously, a whole subfield of AI has sprung up with the express purpose of, among other things, ‘debiasing’ algorithms. That effort is now in full swing.

I posted a few tweets raising questions about the latest changes — and the cancel mob descended on me. Insults, taunts, threats — you name it. You’d think that scientists would be above such behavior, but no. I pointed out that NeurIPS is an outlier in requiring broader impact statements, and the cancelers repeatedly changed the subject. I argued against the politicization of AI, but they took that as a denial that any ethical considerations are valid. A corporate director of machine learning research, also a Caltech professor, published on Twitter, for all to see, a long list of people to cancel, their sole crime being to have followed me or liked one of my tweets. The same crowd succeeded in making my university issue a statement disavowing my views and reaffirming its liberal credentials.

Why the fuss? Data can have biases, of course, as can data scientists. And algorithms that are coded by humans can in principle do whatever we tell them. But machine-learning algorithms, like pretty much all algorithms you find in computer-science textbooks, are essentially just complex mathematical formulas that know nothing about race, gender or socioeconomic status. They can’t be racist or sexist any more than the formula y = ax + b can.
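For illustration, here is a minimal sketch, assuming nothing but made-up numbers, of fitting the formula y = ax + b to data by ordinary least squares. The point is simply that the computation is plain arithmetic over whatever values it is handed; race, gender or any other attribute enters only if someone explicitly supplies such a variable as input.

```python
# Minimal illustrative sketch: fitting y = a*x + b by ordinary least squares.
# The data below are hypothetical; the formula itself is pure arithmetic and
# has no notion of what the numbers represent.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])   # any numeric feature
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])   # any numeric outcome

# Closed-form least-squares estimates of the slope a and intercept b.
a = np.sum((x - x.mean()) * (y - y.mean())) / np.sum((x - x.mean()) ** 2)
b = y.mean() - a * x.mean()

print(f"y ≈ {a:.2f} * x + {b:.2f}")
```

Any question of bias therefore concerns the data fed into such a formula, or the use made of its output, not the arithmetic itself.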

Daniel Kahneman’s bestselling book, Thinking, Fast and Slow, has a whole chapter on how algorithms are more objective than humans, and therefore make better decisions. To the militant liberal mind, however, they are cesspools of iniquity and must be cleaned up.

What cleaning up algorithms means, in practice, is inserting into them biases favoring specific groups, in effect reestablishing in automated form the social controls that the political left is so intent on. ‘Debiasing’, in other words, means adding bias. Not surprisingly, this causes the algorithms to perform worse at their intended function. Credit-card scoring algorithms may reject more qualified applicants in order to ensure that the same number of women and men are accepted. Parole-consultation algorithms may recommend letting more dangerous criminals go free for the sake of having a proportional number of whites and blacks released. Some even advocate outlawing the use in algorithms of all variables correlated with race or gender, on the grounds that they amount to redlining. This would not only make machine learning, and all its benefits, essentially impossible; it is also particularly ironic, given that those variables are precisely what we need to separate the decisions we want from the ones we want to exclude.
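The mechanism described above can be pictured with a hypothetical sketch: suppose a credit-scoring model assigns each applicant a number, and ‘debiasing’ is implemented by enforcing equal acceptance rates across two groups (the groups, scores and rates below are invented for illustration). Equal rates then amount to using a different score cutoff for each group.

```python
# Hypothetical sketch of enforcing equal acceptance rates ("demographic parity")
# with group-specific thresholds. All numbers are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical credit scores for two groups (higher = more creditworthy on paper).
scores_a = rng.normal(loc=600, scale=50, size=1000)
scores_b = rng.normal(loc=580, scale=50, size=1000)

# A single, group-blind threshold accepts different fractions of each group.
threshold = 620
rate_a = np.mean(scores_a >= threshold)
rate_b = np.mean(scores_b >= threshold)
print(f"one threshold: group A accepted {rate_a:.1%}, group B accepted {rate_b:.1%}")

# Enforcing equal acceptance rates means picking a separate threshold per group,
# here the same score percentile in each, so the accepted fractions match even
# though the score cutoffs differ.
target_rate = 0.30
thr_a = np.quantile(scores_a, 1 - target_rate)
thr_b = np.quantile(scores_b, 1 - target_rate)
print(f"per-group thresholds: A >= {thr_a:.0f}, B >= {thr_b:.0f}, "
      f"both groups accepted at {target_rate:.0%}")
```

Whether this constitutes fairness or, as the author argues, added bias is exactly the dispute in question; the sketch only shows what group-specific thresholds do mechanically.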

If you question this or any other of a wide range of liberal demands on AI, you’re in for a lot of grief. The more prominent the researcher who gets canceled, the better, because it sends a more chilling message to everyone else, particularly junior researchers. Jeff Dean, Google’s legendary head of AI, and Yann LeCun, Facebook’s chief AI scientist and a pioneer of deep learning, have both found themselves on the receiving end of the liberal posse’s displeasure.

Conservatives have so far been largely oblivious to progressive politics’ accelerating encroachment on AI. If AI were still an obscure and immature field, this might be OK, but the time for that has long passed. Algorithms increasingly run our lives, and they can impose a militantly liberal (in reality illiberal) society by the back door. Every time you do a web search, use social media or get recommendations from Amazon or Netflix, algorithms choose what you see. Algorithms help select job candidates, voters to target in political campaigns, and even people to date. Businesses and legislators alike need to ensure that algorithms are not tampered with. And all of us need to be aware of what is happening, so we can have a say. I, for one, after seeing how progressives will blithely assign prejudices even to algorithms that transparently can’t have any, have started to question the orthodox view of human prejudices. Are we really as profoundly and irredeemably racist and sexist as they claim? I think not.

Pedro Domingos is a professor emeritus of computer science and engineering at the University of Washington.

Read full story here…

About the Editor

Patrick Wood
Patrick Wood is a leading and critical expert on Sustainable Development, Green Economy, Agenda 21, 2030 Agenda and historic Technocracy. He is the author of Technocracy Rising: The Trojan Horse of Global Transformation (2015) and co-author of Trilaterals Over Washington, Volumes I and II (1978-1980) with the late Antony C. Sutton.
Comments


Sunny

In my estimation, after they take down the Internet (soon), it will reboot with GPT and Bard, et al., as our new and only search engine options. That is, we will be fed whatever they want us to know and only what they want us to know. Gone will be “you are now free to move about the cabin.” Ugh. Remember this? “When you use Google, do you get more than one answer? Of course you do. Well, that’s a bug. We have more bugs per second in the world. We should be able to give you the right answer…

wendy

I love this statement: “algorithms that are mathematically incapable of bias”. Algorithms are code that tech libs create. Of course they are capable of bias. I would take it one step further by saying the entire purpose of AI, and of algorithms in general, is for their creators to steer us toward their one rule, one view, profitable acceptable answer. In terms of shopping, they obviously will steer our money where they wish it to go. They wouldn’t be spending billions on all of this unless they got a cash reward. If the AIs were truly unbiased, none of these…
