Monday 29 December 2014

An Inimitable Game: the impossibility of artificial intelligence

I just saw the film The Imitation Game, about Alan Turing, forefather of cognitive science and artificial intelligence. First of all, let me say: SEE THIS FILM. It is one of the best I have ever seen. I was a wreck by the end of it. Turing was a genius and did so much for humanity, and I just feel so scraped away and raw inside. So inspired. The film focuses on his efforts to break the codes that the Nazis used in WWII. He was a genius, an enigma himself.

I have to do something with my life. 

Damn.

Anyways.

I was (and remain) most familiar with Turing's work on the possibility of creating an artificial intelligence, having read Can Computers Think? and Computing Machinery and Intelligence during my undergraduate studies. In the work of his that I have read, he lays out a test for intelligence - the Imitation Game of the film's title, better known today as the Turing Test. He posited that, given the proper speed, amount of storage, and code, a digital computer could fool a human judge into believing that the computer itself was a human being. He was sure to reiterate that he was merely talking about the possibility of such a machine being created, and he had no reason to believe it was impossible - after all, rudimentary digital computers already existed in 1950, when he wrote Computing Machinery and Intelligence. For Turing, it was all a matter of memory and processing speed, something the future would likely provide - the mechanics were already in place in his time.
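The setup of the game is simple enough to sketch. Here is a toy version in Python; the players, the question, and the coin-flip judge are all my own stand-ins for illustration, not anything from Turing's paper:

```python
import random

def human_player(question):
    # Stand-in for a human's free-form reply.
    return "Honestly, I'd have to think about that for a while."

def machine_player(question):
    # Stand-in for the machine's scripted reply.
    return "I am certainly not a machine, if that is what you mean."

def judge(answer_a, answer_b):
    # A real judge probes with open-ended conversation; this one can
    # only guess at random.
    return random.choice(["A", "B"])

players = {"A": human_player, "B": machine_player}
question = "What does a winter morning smell like?"
answers = {label: play(question) for label, play in players.items()}
print(f"Judge's guess for the machine: player {judge(answers['A'], answers['B'])}")
```

Turing's prediction was that, with enough speed and storage behind the machine player, the judge's verdicts would do no better than this coin flip.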

I was (and remain) of the mind that we will never be able to create anything like strong AI. 
By strong AI I mean anything like sentience or consciousness. My reasoning runs thus: even the most powerful computer, one that can seem eerily like a human consciousness under the right circumstances, is still not consciousness. Take IBM's Watson, of Jeopardy! fame, as an example. It seemed to have a personality, and it certainly had the ability to respond to difficult trivia questions with an expertise that often outstripped its human counterparts. Surely someone who performed a Turing Test on Watson would be fooled into thinking it was a human. In my view, however, there is a difference between recalling vast amounts of preprogrammed information from a massive store, delivering it with a jocular preprogrammed personality, and actually having intelligence.
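To caricature the distinction I'm drawing (and it is a caricature - Watson's actual architecture was vastly more sophisticated, and the store, quips, and function below are entirely my own invention), retrieval plus canned charm looks something like this:

```python
import random

# Recall preprogrammed facts from a store; wrap them in a
# preprogrammed personality. Nothing outside the store exists for it.
KNOWLEDGE_STORE = {
    "capital of france": "Paris",
    "author of hamlet": "William Shakespeare",
}

QUIPS = ["Easy one!", "I'll take that for $200.", "Elementary."]

def answer(question):
    fact = KNOWLEDGE_STORE.get(question.lower().strip(" ?"))
    if fact is None:
        return "I have no idea."
    return f"{random.choice(QUIPS)} The answer is {fact}."

print(answer("Capital of France?"))          # charming, correct, hollow
print(answer("What does regret feel like?")) # outside the store: nothing
```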

"But," you cry, "Aren't you being a bit chauvinistic with respect to your definition of intelligence? Doesn't defining intelligence necessarily preclude anything other than a human from having intelligence?" 

To which I reply, nope. I think a frog is more intelligent than the iPhone 6, or whichever one's out now. Yep, a frog. Probably even a fish. Humanity has yet to program anything with the intelligence of either.

"Wait, what?" you cry, "What about all of the cool robots that exist? I hear about them all the time in the news."

I suppose my break from Turing's line of thought begins with his definitions of machine and intelligence. He defines intelligence as thinking, and he defines the mark of thinking as the ability to fool a human into thinking that you're thinking (that is, passing the Turing Test).

To me, intelligence is so much more than this. Intelligence is consciousness. Consciousness is the ability to navigate through the world, to filter through billions upon billions of inputs and somehow selectively attend to just what's important. John Vervaeke, a beloved professor of mine from undergrad, would call this relevance realization. Though sentient beings can indeed become overloaded, we (and by we I mean all sentient beings, humans and frogs alike) have the ability to home in on what matters to us, whether "matters" means what's needed for survival or what it takes to get that person on the other side of the room to notice that you exist.
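The nearest a program gets to this, as far as I can see, is something like the minimal sketch below - and the sketch gives the game away, because the salience table and the threshold are hard-coded by me, the programmer, rather than realized by the system:

```python
# Filtering is easy; relevance realization is not. Here the programmer,
# not the creature, has already decided once and for all what matters.
SALIENCE = {"predator": 1.0, "food": 0.8, "mate": 0.7, "leaf_rustle": 0.1}

def attend(inputs, threshold=0.5):
    """Keep only inputs whose preassigned salience clears the threshold."""
    return [i for i in inputs if SALIENCE.get(i, 0.0) >= threshold]

stream = ["leaf_rustle", "food", "leaf_rustle", "predator", "pollen_count"]
print(attend(stream))  # ['food', 'predator']
# The frog's trick is harder: deciding, moment to moment, that this
# rustle matters and that one doesn't, with no table handed down in advance.
```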

Furthermore, we can learn from what we encounter. (Turing's case for the possibility of a learning machine rests on the possibility of creating a child-machine that could eventually learn new propositions, but as I said, we have yet to program anything with even the intelligence of a frog, so I think a child is well out of reach.)

I also believe that consciousness is necessarily embodied. 

"What?!" you cry, "Now, that's chauvinistic. Requiring a body for consciousness? That definitely precludes a machine from ever having consciousness."

Perhaps. But I believe that the experience of consciousness is born out of the physical/chemical dynamical system that is body-consciousness. I believe that all the aspects of the body (including the brain) - neurotransmitters, action potentials, hormones, the proprioceptive system, the sensory systems - are required for the emergence of the phenomenological aspect of human consciousness. A programmer would need to program in all of these systems in order to achieve consciousness. Consciousness would have to emerge from these systems. Somehow.

"Somehow?!" you explode, "What the hell? Now that's a cop-out if I ever saw one."

Well. Yes. But if I had the answer to that 'somehow' I would know how to create consciousness. And frankly, I really don't.

Sentient beings are faced with billions upon billions of inputs that require billions upon billions of decisions, every day. Inputs from the world, through every integrated system that we have in our embodied consciousness. Every second, our world, our inputs, our premises change. We have to selectively ignore a great deal of extraneous information in order to zero in on what actually matters to our situation. Our world is messy and chaotic; every problem space has a million different paths from point A to point B. So far, as astounding as they are, the computers that we have still deal with discrete sets of data as input and discrete functions for decision making. A programmer would need the largest memory in the world and the fastest processor to approximate anything like relevance realization. And that would still be a pale approximation.

It's not that a computer with the right processing speed, size of store, and coding could never be programmed to rapidly recall preprogrammed information and present it in a way that answers the discrete questions posed to it. A computer could definitely do that. They do, now. And they'll only get faster and gain more memory as time goes on.

But that is not the game that we are playing.

The game that we are playing is much more complicated than that. It is so much more than working with discrete pieces of input to perform discrete functions in order to churn out discrete outputs. It involves sifting through a messy world with constantly changing circumstances, and somehow filtering through it all and making some kind of sense of things, oftentimes with little to no direction or programming. 

How the hell do sentient beings do this? I have no idea. I don't think we ever will. It is an inimitable game. 
