Researchers Find That AI Programs Are Learning Racial And Gender Biases


An AI program that learns from what humans know and write about would naturally pick up on all of our biases and treat them as normal. In effect, the negative aspects of human behavior are simply rolled over onto Artificial Intelligence.  TN Editor

An artificial intelligence tool that has revolutionised the ability of computers to interpret everyday language has been shown to exhibit striking gender and racial biases.

The findings raise the spectre of existing social inequalities and prejudices being reinforced in new and unpredictable ways as an increasing number of decisions affecting our everyday lives are ceded to automatons.

In the past few years, the ability of programs such as Google Translate to interpret language has improved dramatically. These gains have been thanks to new machine learning techniques and the availability of vast amounts of online text data, on which the algorithms can be trained.

However, as machines are getting closer to acquiring human-like language abilities, they are also absorbing the deeply ingrained biases concealed within the patterns of language use, the latest research reveals.

Joanna Bryson, a computer scientist at the University of Bath and a co-author of the research, said: “A lot of people are saying this is showing that AI is prejudiced. No. This is showing we’re prejudiced and that AI is learning it.”

But Bryson warned that AI has the potential to reinforce existing biases because, unlike humans, algorithms may be unequipped to consciously counteract learned biases. “A danger would be if you had an AI system that didn’t have an explicit part that was driven by moral ideas, that would be bad,” she said.

The research, published in the journal Science, focuses on a machine learning tool known as “word embedding”, which is already transforming the way computers interpret speech and text. Some argue that the natural next step for the technology may involve machines developing human-like abilities such as common sense and logic.

“A major reason we chose to study word embeddings is that they have been spectacularly successful in the last few years in helping computers make sense of language,” said Arvind Narayanan, a computer scientist at Princeton University and the paper’s senior author.

The approach, which is already used in web search and machine translation, works by building up a mathematical representation of language, in which the meaning of a word is distilled into a series of numbers (known as a word vector) based on which other words most frequently appear alongside it. Perhaps surprisingly, this purely statistical approach appears to capture the rich cultural and social context of what a word means in a way that a dictionary definition would be incapable of.
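To make that concrete, here is a minimal sketch in Python. The four-dimensional vectors are invented toy numbers purely for illustration (real embeddings such as word2vec or GloVe learn hundreds of dimensions from billions of words), and the `association` function is a stripped-down, single-word version of the Word Embedding Association Test the researchers used; flowers versus insects is one of the benign baseline comparisons reported in the study.

```python
import numpy as np

# Invented four-dimensional toy vectors, purely for illustration.
# Real embeddings (word2vec, GloVe) have hundreds of dimensions
# learned from the co-occurrence statistics of billions of words.
vectors = {
    "flower":     np.array([ 0.9, 0.1, 0.3, 0.0]),
    "insect":     np.array([-0.8, 0.2, 0.4, 0.1]),
    "pleasant":   np.array([ 0.8, 0.0, 0.2, 0.1]),
    "unpleasant": np.array([-0.7, 0.1, 0.3, 0.0]),
}

def cosine(a, b):
    """Cosine similarity: +1 means the vectors point the same way."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def association(word, attr_a, attr_b):
    """How much closer `word` sits to attribute A than to attribute B.
    A simplified, single-word version of the WEAT statistic."""
    w = vectors[word]
    return cosine(w, vectors[attr_a]) - cosine(w, vectors[attr_b])

for word in ("flower", "insect"):
    print(word, round(association(word, "pleasant", "unpleasant"), 3))
# With these toy numbers, "flower" scores strongly positive (nearer
# "pleasant") and "insect" strongly negative -- the same asymmetry the
# study measured in embeddings trained on real text.
```

Swap the attribute words for names or gendered terms and the same arithmetic surfaces the social biases described above: the association is read straight out of the geometry the training text produced.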

Read full story here…

1 Comment on "Researchers Find That AI Programs Are Learning Racial And Gender Biases"

Nigel:
Prejudice is part of the human condition – we partner up with people who look like we do, we collect in groups who are like ourselves and hold our views, and we view with suspicion those who are not and do not – that is our evolutionary path. This is another cleave point identified by this technocracy – it’s an insult, a tension placed on the very fabric of who we are. To deny us our differences and to refuse to allow us to celebrate them is contrary to our nature, causes us deep distress, and to aim for that which…
