Facebook Creates Special Ethics Team To Examine Bias In AI Software

AI programs inherit the biases of their creators. Thus, the censorship programs that Facebook has created already contain the seeds of technocrat thought and ethics. Are those same people capable of examining their own creations? ⁃ TN Editor

More than ever, Facebook is under pressure to prove that its algorithms are being deployed responsibly.

On Wednesday, at its F8 developer conference, the company revealed that it has formed a special team and developed dedicated software to ensure that its artificial intelligence systems make decisions as ethically as possible, without biases.

Facebook, like other big tech companies with products used by large and diverse groups of people, is more deeply incorporating AI into its services. Facebook said this week it will start offering to translate messages that people receive via the Messenger app. Translation systems must first be trained on data, and the ethics push could help ensure that Facebook’s systems are taught to give fair translations.
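To see why the training data matters, consider a toy illustration: a system that simply memorizes the most common translation for an ambiguous phrase will reproduce whatever skew exists in its corpus. The corpus, phrases, and 80/20 split below are invented for illustration; real neural translation systems are far more sophisticated, but they depend on their training data in the same way.

```python
from collections import Counter

# Invented parallel corpus: a gender-neutral source pronoun ("o", as in
# Turkish) paired with the English translations seen in training data.
# The 80/20 skew is hypothetical.
corpus = ([("o bir doktor", "he is a doctor")] * 80
          + [("o bir doktor", "she is a doctor")] * 20)

def train(pairs):
    """A naive 'model' that memorizes the most frequent translation
    observed for each source sentence."""
    tables = {}
    for src, tgt in pairs:
        tables.setdefault(src, Counter())[tgt] += 1
    return {src: counts.most_common(1)[0][0] for src, counts in tables.items()}

model = train(corpus)
print(model["o bir doktor"])  # "he is a doctor" -- the majority skew wins outright
```

Auditing and rebalancing training data is one of the levers an ethics effort like Facebook's can pull before a system reaches users.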

“We’ll look back and we’ll say, ‘It’s fantastic that we were able to be proactive in getting ahead of these questions and understand before we launch things what is fairness for any given product for a demographic group,’” Isabel Kloumann, a research scientist at Facebook, told CNBC. She declined to say how many people are on the team.

Facebook said these efforts are not the result of any changes that have taken place in the seven weeks since it was revealed that data analytics firm Cambridge Analytica misused personal data of the social network’s users ahead of the 2016 election. But it’s clear that public sentiment toward Facebook has turned dramatically negative of late, so much so that CEO Mark Zuckerberg had to sit through hours of congressional questioning last month.

Every announcement is now under a microscope.

Facebook stopped short of forming a board focused on AI ethics, as Axon (formerly Taser) did last week. But the moves align with a broader industry recognition that AI researchers have to work to make their systems inclusive. Alphabet’s DeepMind AI group formed an ethics and society team last year. Before that, Microsoft’s research organization established a Fairness, Accountability, Transparency, and Ethics (FATE) group.

The field of AI has had its share of embarrassments, like when Google Photos was spotted three years ago categorizing black people as gorillas.

Last year, Kloumann’s team developed a piece of software called Fairness Flow, which has since been integrated into FBLearner Flow, Facebook’s widely used internal software for training and running AI systems. Fairness Flow analyzes a system’s data, taking its format into consideration, and then produces a report summarizing potential biases.
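Facebook has not published Fairness Flow's internals, so the sketch below is only a guess at the general shape of such a report: compare a model's behavior across demographic groups and flag large gaps. The metric choices, group labels, data, and tolerance threshold are all hypothetical.

```python
from collections import defaultdict

def fairness_report(y_true, y_pred, groups, tolerance=0.05):
    """Summarize per-group accuracy and positive-prediction rate, flagging
    groups whose positive rate deviates from the overall rate by more than
    `tolerance`. A hypothetical stand-in for a Fairness Flow-style report."""
    by_group = defaultdict(list)
    for actual, predicted, group in zip(y_true, y_pred, groups):
        by_group[group].append((actual, predicted))

    overall_rate = sum(y_pred) / len(y_pred)
    report = {}
    for group, pairs in by_group.items():
        accuracy = sum(a == p for a, p in pairs) / len(pairs)
        positive_rate = sum(p for _, p in pairs) / len(pairs)
        report[group] = {
            "accuracy": round(accuracy, 3),
            "positive_rate": round(positive_rate, 3),
            "flagged": abs(positive_rate - overall_rate) > tolerance,
        }
    return report

# Toy data: the model predicts positives far more often for group "a".
y_true = [1, 0, 1, 0, 1, 0, 1, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(fairness_report(y_true, y_pred, groups))
```

A report like this only surfaces disparities; deciding which gaps count as unfair for a given product is exactly the kind of question Kloumann's team is meant to answer.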

Kloumann said she’s been working on the subject of fairness since joining Facebook in 2016, just as people in the tech industry began talking about it more openly and societal concerns about the power of AI were emerging.

Read full story here…

 
