Will Artificial Intelligence End Up Being Really Stupid?

The hype surrounding AI promises human-level intelligence, but the reality may be far less. AI may find narrow areas of application, but no amount of knowledge can lead to wisdom. ⁃ TN Editor

It’s hard to go anywhere these days without coming across some mention of artificial intelligence (AI). You hear about it, you read about it and it’s hard to find a presentation deck (on any subject) that doesn’t mention it. There is no doubt there is a lot of hype around the subject.

While the hype does increase awareness of AI, it also facilitates some pretty silly activities and can distract people from much of the real progress being made. Disentangling the reality from the more dramatic headlines promises to provide significant advantages for investors, business people and consumers alike.

Artificial intelligence has gained its recent notoriety in large part due to high-profile successes such as IBM’s Watson winning at Jeopardy! and Google’s AlphaGo beating the world champion at the game of Go. Waymo, Tesla and others have also made great strides with self-driving vehicles. The expansiveness of AI applications was captured by Richard Waters in the Financial Times [here]: “If there was a unifying message underpinning the consumer technology on display [at the Consumer Electronics Show] … it was: ‘AI in everything’.”

High-profile AI successes have also captured people’s imaginations to such a degree that they have prompted other far-reaching efforts. One instructive example was documented by Thomas H. Davenport and Rajeev Ronanki in the Harvard Business Review [here]. They describe, “In 2013, the MD Anderson Cancer Center launched a ‘moon shot’ project: diagnose and recommend treatment plans for certain forms of cancer using IBM’s Watson cognitive system.” Unfortunately, the system didn’t work and by 2017, “the project was put on hold after costs topped $62 million—and the system had yet to be used on patients.”

Waters also picked up on a different message – that of tempered expectations. In regard to “voice-powered personal assistants”, he notes, “it isn’t clear the technology is capable yet of becoming truly useful as a replacement for the smart phone in navigating the digital world” other than to “play music or check the news and weather”.

Other examples of tempered expectations abound. Genevera Allen of Baylor College of Medicine and Rice University warned [here], “I would not trust a very large fraction of the discoveries that are currently being made using machine learning techniques applied to large sets of data.” The problem is that many of these techniques are designed to always deliver a definite answer, while research inherently involves uncertainty. She elaborated, “Sometimes it would be far more useful if they said, ‘I think some of these are really grouped together, but I’m uncertain about these others’.”
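
Her point is easy to illustrate: many clustering algorithms will return a definite answer whether or not the data contains any real structure. Here is a minimal Python sketch (an illustration for this point, not drawn from Allen’s own work) showing k-means confidently “discovering” groups in pure random noise:

```python
import numpy as np
from sklearn.cluster import KMeans

# Pure noise: 300 samples, 10 features, no underlying group structure at all.
rng = np.random.default_rng(0)
noise = rng.normal(size=(300, 10))

# k-means is designed to deliver a specific answer: ask for 3 clusters
# and it returns exactly 3 clusters, with no measure of uncertainty attached.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(noise)

# Three confident-looking "groups" are reported, even though none exist.
print(np.bincount(labels))
```

Nothing in the output hints that the groupings are artifacts of the algorithm rather than features of the data, which is exactly the kind of unwarranted “discovery” Allen warns about.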

Worse yet, in extreme cases AI not only underperforms; it hasn’t even been implemented yet. The FT reports [here], “Four in 10 of Europe’s ‘artificial intelligence’ startups use no artificial intelligence programs in their products, according to a report that highlights the hype around the technology.”

Cycles of inflated expectations followed by waves of disappointment come as no surprise to those who have been around artificial intelligence for a while: they know all too well that this is not AI’s first rodeo. Indeed, much of the conceptual work dates to the 1950s. In reviewing some of my notes recently, I came across a representative piece that explored neural networks for stock picking – dating from 1993 [here].

The best way to get perspective on AI is to go straight to the source, and Martin Ford gives us that opportunity through his book, Architects of Intelligence. Organized as a succession of interviews with the industry’s leading researchers, scholars and entrepreneurs, the book provides a useful history of AI and highlights the key strands of thinking.

Two high-level insights emerge from the book. One is that despite the disparate backgrounds and personalities of the interviewees, there is a great deal of consensus on important subjects. The other is that many of the priorities and concerns of the top AI researchers are noticeably different from those expressed in mainstream media.

Take, for example, the concept of artificial general intelligence (AGI). This is closely related to the notion of the “Singularity”, the point at which artificial intelligence matches that of humans – on its path to massively exceeding human intelligence. The idea has fueled popular concerns about AI, including massive job losses, killer drones, and a host of other dramatic manifestations.

AI’s leading researchers have very different views; as a group they are completely unperturbed by AGI. Geoffrey Hinton, Professor of computer science at the University of Toronto and Vice President and Engineering Fellow at Google, said, “If your question is, ‘When are we going to get a Commander Data [from the Star Trek TV series]?’, then I don’t think that’s how things are going to develop. I don’t think we’re going to get single, general-purpose things like that.”

Yoshua Bengio, Professor of computer science and operations research at the University of Montreal, tells us, “There are some really hard problems in front of us and that we are far from human-level AI.” He adds, “we are all excited because we have made a lot of progress on climbing the hill, but as we approach the top of the hill, we can start to see a series of other hills rising in front of us.”

Barbara Grosz, Professor of natural sciences at Harvard University, expressed her opinion: “I don’t think AGI is the right direction to go”. She argues that the pursuit of AGI (and dealing with its consequences) lies so far in the future that it serves as “a distraction”.

Another common thread among the AI researchers is the belief that AI should be used to augment human labor rather than replace it. Cynthia Breazeal, Director of the Personal Robots Group at the MIT Media Lab, frames the issue: “The question is what’s the synergy, what’s the complementarity, what’s the augmentation that allows us to extend our human capabilities in terms of what we do that allows us to really have greater impact in the world.” Fei-Fei Li, Professor of computer science at Stanford and Chief Scientist for Google Cloud, said, “AI as a technology has so much potential to enhance and augment labor, in addition to just replace it.”

James Manyika, Chairman and Director of the McKinsey Global Institute, noted that since 60% of occupations have about a third of their constituent activities automatable, and only about 10% have more than 90% of their activities automatable, “many more occupations will be complemented or augmented by technologies than will be replaced.”

Further, AI can only augment human labor insofar as it can effectively work with human labor. Barbara Grosz pointed out, “I said at one point that ‘AI systems are best if they’re designed with people in mind’.” She continued, “I recommend that we aim to build a system that is a good team partner and works so well with us that we don’t recognize that it isn’t human.”

David Ferrucci, Founder of Elemental Cognition and Director of applied AI at Bridgewater Associates, said, “The future we envision at Elemental Cognition has human and machine intelligence tightly and fluently collaborating.” He elaborated, “We think of it as thought-partnership.” Yoshua Bengio reminds us, however, of the challenges in forming such a partnership: “It’s not just about precision [with AI], it’s about understanding the human context, and computers have absolutely zero clues about that.”

It is interesting that there is a fair amount of consensus on key ideas: that AGI is not an especially useful goal right now, that AI should be applied to augment labor rather than replace it, and that AI should work in partnership with people. It’s also interesting that these same lessons are borne out by corporate experiences.

Richard Waters, writing in the FT [here], describes how AI implementations are still at a fairly rudimentary stage: “Strip away the gee-whizz research that hogs many of the headlines (a computer that can beat humans at Go!) and the technology is at a rudimentary stage.” He also notes, “But beyond this ‘consumerisation’ of IT, which has put easy-to-use tools into more hands, overhauling a company’s internal systems and processes takes a lot of heavy lifting.”

That heavy lifting takes time, and exceptionally few companies are there yet. Ginni Rometty, head of IBM, characterizes her clients’ applications as “Random acts of digital” and describes many of the projects as “hit and miss”. Andrew Moore, the head of AI for Google’s cloud business, describes the current state of corporate adoption as “Artisanal AI”. Rometty elaborates, “They tend to start with an isolated data set or use case – like streamlining interactions with a particular group of customers. They are not tied into a company’s deeper systems, data or workflow, limiting their impact.”

While the HBR case of the MD Anderson Cancer Center provides a good example of a moonshot AI project that probably overreached, it also provides an excellent indication of the types of work that AI can materially improve. At the same time the center was trying to apply AI to cancer treatment, its “IT group was experimenting with using cognitive technologies to do much less ambitious jobs, such as making hotel and restaurant recommendations for patients’ families, determining which patients needed help paying bills, and addressing staff IT problems.”

