Fifty-two floors below the top of Salesforce Tower, I meet Paula Goldman in a glass-paneled conference room where the words EQUALITY OFFICE are spelled out on a patchwork bunting banner, the kind of decoration you might buy for a child’s birthday party.
Goldman has a master’s degree from Princeton and a Ph.D. from Harvard, where she studied how controversial ideas become mainstream. She arrived at Salesforce just over a year ago to become its first-ever Chief Ethical and Humane Use Officer, taking on a decidedly ambiguous title created specifically for an unprecedented yet highly specific job: see to it that Salesforce makes the world better, not worse.
“I think we’re at a moment in the industry where we’re at this inflection point,” Goldman tells me. “I think the tech industry was here before, with security in the ’80s. All of a sudden there were viruses and worms, and there needed to be a whole new way of thinking about it and dealing with it. And you saw a security industry grow up after that. And now it’s just standard protocol. You wouldn’t ship a major product without red-teaming it or making sure the right security safeguards are in it.”
“I think we’re at a similar moment with ethics,” she says. “It requires not only having a set of tools by which to do the work, but also a set of norms, that it’s important. So how do you scale those norms?”
I ask her how those norms are decided in the first place.
“In some sense, it’s the billion-dollar question,” she says. “All of these issues are extremely complicated, and there’s very few of them where the answer is just absolutely clear. Right? A lot of it does come down to, which values are you holding up highest in your calculus?”
In the wake of the Cambridge Analytica scandal, employee walkouts, and other political and privacy incidents, tech companies faced a wave of calls to hire what researchers at the Data & Society Research Institute call “ethics owners,” people responsible for operationalizing “the ancient, domain-jumping, and irresolvable debates about human values that underlie ethical inquiry” in practical and demonstrable ways.
Salesforce hired Goldman away from the Omidyar Network as the culmination of a seven-month crisis-management process that came after Salesforce employees protested the company’s involvement in the Trump administration’s immigration work. Other companies, responding to their own respective crises and concerns, have hired a small cadre of similar professionals — philosophers, policy experts, linguists and artists — all to make sure that when they promise not to be evil, they actually have a coherent idea of what that entails.
So then what happened?
While some tech firms have taken concrete steps to insert ethical thinking into their processes, Catherine Miller, interim CEO of the ethical consultancy Doteveryone, says there’s also been a lot of “flapping round” the subject.
Critics dismiss it as “ethics-washing,” the practice of merely kowtowing in the direction of moral values in order to stave off government regulation and media criticism. The term belongs to the growing lexicon around technology ethics, or “tethics,” an abbreviation that began as satire on the TV show “Silicon Valley,” but has since crossed over into occasionally earnest usage.
“If you don’t apply this stuff in actual practices and in your incentive structures, if you don’t have review processes, well, then, it becomes like moral vaporware,” says Shannon Vallor, a philosopher of technology at the Markkula Center for Applied Ethics at Santa Clara University. “It’s something that you’ve promised and you meant to deliver, but it never actually arrived.”
Google, infamously, created an AI Council and then, in April of last year, disbanded it after employees protested the inclusion of an anti-LGBTQ advocate. Today, Google’s approach to ethics includes the use of “Model Cards” that aim to explain its AI.
“That’s not anything that has any teeth,” says Michael Brent, a data ethicist at Enigma and a philosophy professor at the University of Denver. “That’s just like, ‘Here’s a really beautiful card.'”
The company has made more-substantial efforts: Vallor just completed a tour of duty at Google, where she taught ethics seminars to engineers and helped the company implement governance structures for product development. “When I talk about ethics in organizational settings, the way I often present it is that it’s the body of moral knowledge and moral skill that helps people and organizations meet their responsibilities to others,” Vallor tells me.
More than 100 Google employees have attended ethics trainings developed at the Markkula center. The company also developed a fairness module as part of its Machine Learning Crash Course, and updates its list of “responsible AI practices” quarterly. “The vast majority of the people who make up these companies want to build products that are good for people,” Vallor says. “They really don’t want to break democracy, and they really don’t want to create threats to human welfare, and they really don’t want to decrease literacy and awareness of reality in society. They want to make things they’re proud of. So am I going to do what I can to help them achieve that? Yes.”
The Markkula center, where Vallor works, is named after Mike Markkula Jr., the “unknown” Apple co-founder who, in 1986, gave the center a starting seed grant in the same manner that he gave the young Steve Jobs an initial loan. He never wanted his name to be on the building — that was a surprise, a token of gratitude, from the university.
Markkula has retreated to a quiet life, working from his sprawling gated estate in Woodside. These days, he doesn’t have much contact with the company he started — “only when I have something go wrong with my computer,” he tells me. But when he arrived at the Santa Clara campus for an orientation with his daughter in the mid-’80s, he was Apple’s chairman, and he was worried about the way things were going in the Valley. “It was clear to us both, Linda [his wife] and I, that there were quite a few people who were in decision-making positions who just didn’t have ethics on their radar screen,” he says. “It’s not that they were unethical, they just didn’t have any tools to work with.”