A viral study that revealed artificial intelligence could accurately guess whether a person is gay or straight based on their face is receiving harsh backlash from LGBTQ rights groups.
The study, conducted by researchers at Stanford University, reported that the AI correctly distinguished gay men from straight men 81 percent of the time, and gay women from straight women 74 percent of the time.
Advocates called the research “junk science,” claiming that the technology could not only out people but also put their lives at risk – especially under brutal regimes that treat homosexuality as a punishable offense.
“At a time where minority groups are being targeted, these reckless findings could serve as a weapon to harm both heterosexuals who are inaccurately outed, as well as gay and lesbian people who are in situations where coming out is dangerous,” Jim Halloran, GLAAD’s Chief Digital Officer, wrote in a joint statement from GLAAD and The Human Rights Campaign.
However, the study’s author rejects the criticism, arguing that this type of technology already exists and that the purpose of his research was to expose its security risks and spur the development of protections, so that no one could use it for ill.
“One of my obligations as a scientist is that if I know something that can potentially protect people from falling prey to such risks, I should publish it,” Michal Kosinski, co-author of the study, told the Guardian. He added that discrediting his research would do nothing to protect LGBTQ people from the potentially life-threatening implications of this kind of technology.
Advocates also faulted the study for excluding bisexual and transgender people as well as people of color. The researchers gathered 130,741 images of men and women from public profiles on a dating site for the AI to analyze, all of them of white subjects. While Kosinski and his co-author acknowledged that the lack of diversity was a limitation, they declined to name the dating site they drew from and claimed they could not find enough non-white gay profiles to include.