TN Note: Artificial Intelligence is being applied to ‘chatbots’, programs that interact with users as if they were human. The problem is that people cannot differentiate between real and simulated human interaction, leading many to form bonded relationships with a program. This is not the way life is meant to be, however, and it will lead to serious social dysfunction.
Artificial intelligence is coming to a messaging app near you.
Google has been working on a messaging-based chat bot for a year, according to The Wall Street Journal. The newspaper described the service as a Google Now-like virtual assistant that you could send messages to and get the answers back as messages.
It’s not clear whether this service would be available within Google’s Hangouts or Messenger service, whether it could be available on other platforms, such as over SMS, or whether it would be a new messaging service. One source told the Journal that Google would open up its chatbot as an extensible platform, which means other companies could build special-purpose chatbots based on Google’s data.
The Journal had no information about a launch date or name for the service, but did say the project is being headed by longtime Googler Nick Fox.
An A.I. chatbot makes sense for Google. Consumers are increasingly going mobile and searching (pun intended) for an alternative to search. Current alternatives, such as Google’s own Google Now or its competitors — Siri, Cortana, Alexa and others — all suffer from imperfect voice recognition. And at their current stage of evolution, they can be unsatisfying to use.
John Underkoffler, the CEO of Oblong Industries (and creator of the Minority Report and Iron Man user interfaces), told me recently that, for voice assistants, “we haven’t built a good feedback system yet” that keeps you informed in real time about how well the system is understanding you. Virtual assistants also require a conscious decision to stop doing the current task and actively seek out the assistant, a reflex many users haven’t developed.
Meanwhile, millions of online users, who used to seek out data on search engines like Google Search, and more recently on social networks like Facebook, are now moving to messaging apps, such as Facebook’s WhatsApp or Facebook Messenger, Snapchat, Viber, Telegram, WeChat and many others. The habit or impulse to reach out to people on messaging apps, and to respond to incoming messages through notifications, is growing stronger.
Google doesn’t have the most popular social network or messaging apps, but it does have the best and most popular search engine. Also: Many people consider Google Now to be the best virtual assistant. Building A.I. virtual assistance into a messaging platform makes a world of sense for Google. It helps the company address both the exodus from its search engine and the unpopularity of its messaging apps.
Of course, the new Google chatbot solves Google’s problems only if it succeeds. To succeed, Google needs to win users from a wide range of alternatives, including and especially Facebook’s.
M is for ‘Made Out Of People’
Facebook launched a new service on its mobile Messenger app called “M” (the code-name was Moneypenny).
M is a chatbot designed to do things for you. Trouble is, A.I. is imperfect. No chatbot has yet passed the Turing test uncontroversially.
So Facebook M performs a neat (if expensive) trick: Humans fill in where A.I. fails.
When you ask M whether people are involved, it replies: “I’m A.I., but humans train me.”
That claim is simply not true: humans directly answer some of the queries. Some of M is A.I., and, yes, humans train that A.I., but many queries are answered outright by people. Multiple journalists have demonstrated this by testing the system for human involvement.
In any event, this reveals that Facebook is willing to pay what must be a massive amount of money for real people to help answer M queries, even as it downplays their involvement. Chat-based A.I. as an alternative to search — or, for that matter, to virtual assistants, customer service and more — could become a major, important way for people to use the Internet.
Companies are desperate to show that computers can convincingly respond as people would. They grasp intuitively that the public wants exactly that: A fake human.
Cheating on the Turing test by inserting humans is Facebook’s stop-gap solution. But all chatbot makers, including Facebook, Google, Microsoft and many others, are working hard on acing the test — on creating a chatbot that always convincingly plays a humanlike role in our lives.
Google itself even created a somewhat philosophical A.I. engine, which emerged over the summer. Earlier this year, Google researchers published a paper on arXiv describing a machine learning-based proof-of-concept chatbot that can discuss Big Questions, such as: “What’s the meaning of life?” That sounds profound, until you learn that the answers were gleaned from a database of movie dialogue. The chatbot answers the Big Questions, but with Hollywood’s answers.
Basing answers on existing dialog seems to be the winning approach to the problem of making chatbots seem human. At least, that’s been Microsoft’s experience.
X is for XiaoIce
Microsoft researchers in China have been developing a Chinese-language chatbot called XiaoIce, which is reportedly used by some 40 million people on their smartphones.
XiaoIce is different from the Siris of the world because it’s more of a friend than a personal assistant. It can hold conversations, tell jokes, suggest products to buy and do other things. The New York Times even reported that about 25% of users have at some point told XiaoIce “I love you.”
Unlike Google’s research project, which gleans responses from movie dialogue, XiaoIce gets them from social media in China. So when you ask XiaoIce, “What’s the meaning of life?” the A.I. scans a database of people who have posed that question online and chooses one of the popular responses to give the user.
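The retrieval approach described above — match the user’s question to similar questions people have asked online, then serve back a popular response — can be sketched in a few lines. This is a minimal, hypothetical illustration; the corpus, matching rule and function names here are invented for the example and are not XiaoIce’s actual implementation:

```python
from collections import Counter

# Hypothetical mini-corpus: questions people posed online,
# mapped to the responses other people gave them.
corpus = {
    "what's the meaning of life?": [
        "To be happy.", "42.", "To be happy.", "Love.", "To be happy.",
    ],
    "will it rain tomorrow?": [
        "Bring an umbrella.", "No idea.", "Bring an umbrella.",
    ],
}

def tokenize(text):
    """Lowercase and split into word tokens for crude similarity matching."""
    return set("".join(c if c.isalnum() else " " for c in text.lower()).split())

def reply(query):
    """Find the stored question most similar to the query (by word
    overlap), then return the most popular response to it."""
    best_q = max(corpus, key=lambda q: len(tokenize(q) & tokenize(query)))
    counts = Counter(corpus[best_q])
    return counts.most_common(1)[0][0]

print(reply("What is the meaning of life?"))  # prints "To be happy."
```

A real system would use far better retrieval (semantic matching over hundreds of millions of posts) and filtering, but the core trick is the same: the chatbot sounds human because its answers were written by humans.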
The disturbing reality is that XiaoIce is not only basing its replies on social media chatter, it’s replacing social media and messaging for some users in some circumstances. And therein lies the dystopian risk with messaging-based chatbots.