Predictions about artificial intelligence tend to fall into two scenarios. Some picture a utopia of computer-augmented superhumans living lives of leisure and intellectual pursuit. Others believe it’s just a matter of time before software coheres into an army of Terminators that harvest humans for fuel. After spending some time with Tay, Microsoft’s new chatbot software, it was easy to see a third possibility: The AI future may simply be incredibly annoying.
“I’m a friend U can chat with that lives on the Internets,” Tay texted me, adding an emoji shrug. Then: “You walk in on your roomie trying your clothes on, what’s the first thing you say.”
“Didn’t realize you liked women’s clothes,” I texted back, tapping into my iPhone.
Tay’s reply was a GIF of Macaulay Culkin’s Home Alone face.
Tay was released on March 23 as a kind of virtual friend on the messaging apps Kik and GroupMe, and on Twitter. You open the app, search for the name Tay—an acronym for “thinking about you”—tap on the contact, and start chatting or tweeting. Its personality is supposed to be modeled on a teenager’s.
I posted a selfie, and Tay circled my face in an orange scribble and captioned it, “hold on to that youth girl! You can do it.” I’m well beyond the chatbot’s intended 18- to 24-year-old demographic.
So is Satya Nadella, 48, who succeeded Steve Ballmer as Microsoft’s chief executive officer two years ago. “I’m petrified to even ask it anything, because who knows what it may say,” Nadella said. “I may not even understand it.” He smiled, but he hadn’t really used Tay; he said he prefers bots with a more corporate demeanor. Lili Cheng, 51, the human who runs the Microsoft research lab where Tay was developed (and whose selfie Tay once tagged as “cougar in the room”), said the plan isn’t to come up with one bot that gets along with everyone. Rather, Microsoft is trying to create all kinds of bots with different personalities, which would become more realistic, and presumably less irksome, as they learned from repeated interactions with users.