Friday, April 29, 2011

The Hard(wire) Problem: III


Before exploring the theories related to emergent properties of consciousness, however, we must first define artificial intelligence and look at some of the issues commonly associated with it.  Mathematician and philosopher Alan Turing famously posed the question, "Can machines think?", in his 1950 paper "Computing Machinery and Intelligence."

That paper raises and examines two important questions: (1) what does it mean to be a machine, and (2) how can we determine whether something thinks?  To answer the second, Turing devised a behavioral test, now referred to as the Turing Test.  An interrogator poses a series of written questions, which both a machine and a human being answer.  If the machine can fool the interrogator into believing that it is the human, then, on Turing's argument, we must conclude that the machine is intelligent.  "Thinking," in the context of the Turing Test, amounts to nothing more than passing the test.
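The structure of the test can be made concrete with a short sketch.  The Python snippet below is only an illustration, not anything Turing specified: machine_reply, human_reply, and judge are hypothetical callables standing in for the machine, the human contestant, and the interrogator, and the verdict can depend only on the transcript of questions and answers.

import random

def run_imitation_game(questions, machine_reply, human_reply, judge):
    # Hide the contestants behind anonymous labels so the judge
    # sees behavior only, never the identity behind it.
    labels = {"A": machine_reply, "B": human_reply}
    if random.random() < 0.5:
        labels = {"A": human_reply, "B": machine_reply}

    # Collect each contestant's written answers to the same questions.
    transcripts = {name: [(q, reply(q)) for q in questions]
                   for name, reply in labels.items()}

    # The judge guesses which label is the machine, using the
    # transcripts alone -- behavior is the only available evidence.
    guess = judge(transcripts)  # expected to return "A" or "B"
    machine_label = "A" if labels["A"] is machine_reply else "B"

    # On Turing's criterion, the machine "passes" when it is misidentified.
    return guess != machine_label

Nothing in this setup inspects the machine's internals; whatever "thinking" means here, it is read off from performance alone.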

So is behaving like an intelligent human really equivalent to being intelligent?  Turing recognizes that we cannot prove the machine experiences qualitative consciousness as a human does, but he points out that we are in precisely the same situation when we try to prove that another human being experiences qualitative consciousness.  All we have to go by, in either case, is behavior; it is the only information readily available to us as observers.  We can therefore be certain only of our own intelligence.  This lines up well with Dennett's argument in "Quining Qualia." 

John Searle, in his paper "Minds, Brains, and Programs," attacks what he calls strong AI: the thesis that an appropriately programmed computer does not merely simulate a mind but literally has cognitive states.  He argues that genuine intelligence is impossible in computer systems by their very nature, and he rejects the idea that a computer that passes the Turing Test thereby has cognitive states, or that programs can thereby explain human cognition.  He uses the Chinese Room thought experiment to clarify his views. 

The thought experiment has Searle sitting in a room, manipulating sets of Chinese symbols passed in by someone outside.  He follows instructions, written in English, for manipulating the symbols, yet has no real understanding of Chinese.  After a while he becomes so good at manipulating the symbols that the outsider can no longer distinguish his answers from those of a native Chinese speaker.  Still, he cannot "recognize Chinese writing as Chinese distinct from, say, Japanese writing or meaningless squiggles" (Searle, 1980).  He does not understand Chinese.  This, Searle argues, is the difference between a computer that passes the Turing Test and a genuinely cognitive being.  Searle later claims that intentionality arises only in biological systems, an implication that does not bode well for physicalist, strong-AI computer scientists.  
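A toy sketch brings out the purely formal character of the room.  This is not Searle's own formulation, just a minimal illustration in Python: the rule book below is a hypothetical lookup table mapping incoming strings of Chinese characters to outgoing ones, and at no point does any meaning enter the process.

# Hypothetical rule book: formal mappings from input symbols to output
# symbols, analogous to Searle's English instructions for shuffling shapes.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",        # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "天气很好。",    # "How's the weather today?" -> "It's nice."
}

def chinese_room(incoming: str) -> str:
    # Match shapes against shapes; no translation or interpretation occurs.
    # The fallback is just another canned string: "Sorry, I don't understand."
    return RULE_BOOK.get(incoming, "对不起，我不明白。")

From the outside the replies may look competent, but the mapping never involves semantics; that gap between symbol manipulation and understanding is exactly what Searle's argument turns on.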

Margaret Boden, however, argues that Searle's claims are mistaken.  Neither Searle nor anybody else, she says, has any real understanding of how intentionality derives from the brain, as he claims later in his paper.  Boden thinks that Searle's claim that intentionality can only arise in a biological system is incomplete, precisely because we understand intentionality so poorly in the first place.  What exactly do scientists think about intentionality and the emergent properties associated with consciousness?  
