Renowned researchers Manuel Blum and Lenore Blum have devoted their entire lives to the study of computer science with a particular focus on consciousness. They’ve authored dozens of papers and taught for decades at prestigious Carnegie Mellon University. And, just recently, they published new research that could serve as a blueprint for developing and demonstrating machine consciousness.
That paper, titled “A Theoretical Computer Science Perspective on Consciousness,” may only be a pre-print, but even if it crashes and burns in peer review (it almost surely won’t), it’ll still hold an incredible distinction in the world of theoretical computer science.
The Blums are joined by a third collaborator, one Avrim Blum, their son. Per the Blums’ paper:
All three Blums received their PhDs at MIT and spent a cumulative 65 wonderful years on the faculty of the Computer Science Department at CMU. Currently the elder two are emeriti and the younger is Chief Academic Officer at TTI Chicago, a PhD-granting computer science research institute focusing on areas of machine learning, algorithms, AI (robotics, natural language, speech, and vision), data science and computational biology, and located on the University of Chicago campus.
This is their first joint paper.
Hats off to the Blums; there can’t be many theoretical computer science families at the cutting edge of machine consciousness research. I’m curious what the family pet is like.
Let’s move on to the paper, shall we? It’s a fascinating and well-explained bit of hardcore research that could very well change some perspectives on machine consciousness.
Per the paper:
Our major contribution lies in the precise formal definition of a Conscious Turing Machine (CTM), also called a Conscious AI. We define the CTM in the spirit of Alan Turing’s simple yet powerful definition of a computer, the Turing Machine (TM). We are not looking for a complex model of the brain nor of cognition but for a simple model of (the admittedly complex concept of) consciousness.
In this context, a CTM would appear to be any machine that can demonstrate consciousness. The big idea here isn’t necessarily the development of a thinking robot, but rather a demonstration of the core concepts of consciousness, in hopes we’ll gain a better understanding of our own.
This requires the reduction of consciousness to something that can be expressed in mathematical terms. But it’s a little more complicated than just measuring brain waves. Here’s how the Blums put it:
An important major goal is to determine if the CTM can experience feelings not just simulate them. We investigate in particular the feelings of pain and pleasure and suggest ways that those feelings might be generated. We argue that even a complete knowledge of the brain’s circuitry – including the neural correlates of consciousness – cannot explain what enables the brain to generate a conscious experience such as pain.
We propose an explanation that works as well for robots having brains of silicon and gold as for animals having brains of flesh and blood. Our thesis is that in CTM, it is the architecture of the system, its basic processors; its expressive inner language that we call Brainish; and its dynamics (prediction, competition, feedback and learning); that make it conscious.
Defining consciousness is only half the battle – and one that likely won’t be won until after we’ve aped it. The other side of the equation is observing and measuring consciousness. We can watch a puppy react to stimuli. Even plant consciousness can be observed. But for a machine to demonstrate consciousness, its observers have to be certain it isn’t merely imitating consciousness through clever mimicry.
Let’s not forget that GPT-3 can blow even the most cynical of minds with its uncanny ability to seem cogent, coherent, and poignant (let us also not forget that you have to hit “generate new text” a bunch of times to get it to do so because most of what it spits out is garbage).
The Blums get around this problem by designing a system that’s only meant to demonstrate consciousness. It won’t try to act human or convince you it’s thinking. This isn’t an art project. Instead, it works a bit like a digital hourglass where each grain of sand is information.
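To make the dynamic concrete, here is a toy sketch of the competition-and-broadcast cycle the Blums describe: many simple processors each submit a weighted chunk of information, one winner surfaces, and it is broadcast back to every processor. This is an illustrative simplification, not the Blums' formal CTM; all names here (`Chunk`, `Processor`, `step`) are invented for this example, and the random weights merely stand in for whatever importance a real processor would assign.

```python
import random
from dataclasses import dataclass, field

@dataclass
class Chunk:
    source: str    # which processor produced this chunk
    content: str   # the information itself (expressed in "Brainish", per the paper)
    weight: float  # how urgent/important the processor deems it

@dataclass
class Processor:
    name: str
    received: list = field(default_factory=list)  # broadcasts seen so far

    def submit(self) -> Chunk:
        # Each processor bids with one chunk; a random weight stands in
        # for urgency in this toy version.
        return Chunk(self.name, f"signal from {self.name}", random.random())

    def receive(self, chunk: Chunk) -> None:
        self.received.append(chunk)

def step(processors: list) -> Chunk:
    """One cycle: all processors compete, the heaviest chunk wins,
    and the winner is broadcast back to every processor (the feedback phase)."""
    bids = [p.submit() for p in processors]
    winner = max(bids, key=lambda c: c.weight)
    for p in processors:
        p.receive(winner)
    return winner

processors = [Processor(f"p{i}") for i in range(5)]
winner = step(processors)
print(winner.source, winner.content)
```

Each call to `step` is one turn of the hourglass: information trickles up, a single grain wins the competition, and everyone learns about it. The actual paper formalizes this with a probabilistic tournament rather than a simple `max`, but the shape of the cycle is the same.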