Friday, April 29, 2011

The Hard(wire) Problem: V

As a continuation of my last post on the subject...



Their model of self-collapse focuses on tubulin, the protein subunit of the microtubule. They claim that quantum superpositions develop in the tubulins and that, upon reaching a mass-time-energy threshold, objective reduction promptly occurs (Hameroff & Penrose, 1996). Each instance of self-collapse equates to a discrete conscious event. If this is true, then the view of consciousness held by the vast majority of today’s neuroscientists, that of materialism, remains a valid position. Neuroscientists, when and if they discuss consciousness at all, tend to favor the idea that it stems from a complex cortical pathway. And while this pathway has been neither fully mapped nor understood, the theory implies that consciousness emerges from a physical substrate, a series of biochemical reactions. Hameroff and Penrose are not the only ones to look to neural substrates as a means for understanding the role of quantum physics in the emergence of consciousness. Gustav Bernroider, drawing on his work with the K+ channel, suggests that the behavior of this channel can only be understood at the quantum level. He proposes that a system of computational mapping exists between the quantum-entangled states of the K+ ions and the oxygen atoms of the channel’s binding pockets, such that the ions destined to be expelled from the channel encode information about the state of those oxygen atoms (Bernroider, 2005).
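For what it’s worth, the threshold Penrose proposes can be written down compactly: objective reduction is expected after roughly τ ≈ ħ/E_G, where E_G is the gravitational self-energy of the superposed mass distribution. Here is a minimal back-of-the-envelope sketch in Python; the 25 ms gamma-cycle target is my own assumption for illustration, not a number I am quoting from their paper:

```python
# Back-of-the-envelope sketch of Penrose's objective-reduction timescale,
# tau ~ hbar / E_G, where E_G is the gravitational self-energy of the
# superposed mass distribution. The 25 ms target below is an illustrative
# assumption (one gamma-band cycle), not a figure from Hameroff & Penrose.

HBAR = 1.054571817e-34  # reduced Planck constant, J*s

def collapse_time(e_gravity_joules):
    """Time until objective reduction for a superposition whose
    gravitational self-energy is E_G (in joules)."""
    return HBAR / e_gravity_joules

tau_target = 0.025  # seconds; inverting the formula gives the implied E_G
e_g_required = HBAR / tau_target
print(f"E_G needed for a 25 ms collapse: {e_g_required:.3e} J")
print(f"Collapse time at that energy:   {collapse_time(e_g_required):.3e} s")
```

The striking part of the proposal is how tiny that energy is, which is why Hameroff and Penrose need large numbers of tubulins held in coherent superposition to hit the threshold on neurally relevant timescales.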


As mentioned briefly before, neuroscience is a field dominated by biological naturalists, who believe that consciousness can only arise in fully biological systems. This is similar to Searle’s earlier view, the one attacked by Boden. Neuroscientists look to certain studies for their evidence. Patients in a persistent vegetative state (PVS) are a popular model for studying consciousness. PVS involves the loss of higher cerebral function; however, the individual maintains a sleep-wake cycle and remains autonomically functional. This poses unique ethical challenges, especially when deciding whether or not to withdraw life support (Levy, 2006). It also raises the question of whether the individual is still, in any meaningful sense, themselves. If they do not experience consciousness, yet remain alive, are they little more than a plant? As for neural correlates, studying PVS patients seems to be critical: they have lost consciousness, so what is different? Researchers have determined that PVS patients show impaired connectivity between the brainstem and the cortical areas, and their cortical activity is therefore lower than the average individual’s (Levy, 2006). Neuroscientists are also using sedatives (Reeves et al., 2004), anesthetics (Ghoneim & Block, 1992), and hypnosis (Rainville et al., 2002) to try to tease apart different aspects of consciousness. And drugs, both illicit and prescription, are known to have varying effects on consciousness, pointing toward the idea that consciousness is an emergent property of biochemical reactions. An interesting paper by Rodolfo Llinás (Llinás et al., 1998), entitled “The neuronal basis for consciousness,” claims that consciousness is the result of resonance in the thalamocortical system. And of course, blindsight studies have been very popular in discussions of whether consciousness admits of some natural depth or gradation (Holt, 1999).
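Llinás’s resonance claim is easier to picture with a toy model. The sketch below is entirely my own illustration, not anything from the 1998 paper: it uses a Kuramoto-style model of phase oscillators, with a “thalamic” pacemaker near 40 Hz and a handful of “cortical” oscillators that, given sufficient coupling, lock into a shared gamma-band rhythm of the kind Llinás associates with conscious states.

```python
import math
import random

# Toy Kuramoto model: one "thalamic" pacemaker driving cortical oscillators.
# Purely illustrative; none of these numbers come from Llinas et al. (1998).

random.seed(1)
N = 10                                 # cortical oscillators
K = 25.0                               # coupling strength (arbitrary units)
dt = 1e-4                              # integration step, seconds
pacemaker_freq = 40.0                  # Hz, gamma-band thalamic drive
freqs = [40.0 + random.uniform(-3, 3) for _ in range(N)]  # natural freqs
phases = [random.uniform(0, 2 * math.pi) for _ in range(N)]
pace_phase = 0.0

for step in range(20000):              # simulate 2 seconds
    pace_phase += 2 * math.pi * pacemaker_freq * dt
    for i in range(N):
        # Each cortical unit is pulled toward the thalamic phase.
        dtheta = 2 * math.pi * freqs[i] + K * math.sin(pace_phase - phases[i])
        phases[i] += dtheta * dt

# Order parameter r in [0, 1]: r near 1 means the population resonates
# coherently with the thalamic rhythm.
re = sum(math.cos(p - pace_phase) for p in phases) / N
im = sum(math.sin(p - pace_phase) for p in phases) / N
print(f"coherence r = {math.hypot(re, im):.3f}")
```

Run it and the coherence comes out near 1; drop the coupling K toward zero and the population falls out of resonance, a crude stand-in for the loss of thalamocortical binding seen in unconscious states.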


Computer scientists also tend to hold physicalist views on consciousness. Functionalists define mental states in terms of their causal roles; therefore, any system that instantiates the same pattern of causal roles should be able to give rise to the same mental states, including consciousness. In other words, if you can write a program that reproduces the brain states associated and correlated with consciousness, then in theory you should be able to elicit the emergence of consciousness in a machine. Chalmers endorses this view: in a properly designed computer, consciousness could be realized. Computer scientists have proposed several sets of criteria for identifying consciousness in computer systems; Bernard Baars (Baars, 1988) and Igor Aleksander (Aleksander, 1994) offer two of the most popular and widely used. Baars’ criteria for consciousness are as follows: definition and context setting, adaptation and learning, editing, flagging and debugging, recruiting and control, prioritizing and access-control, decision-making and executive function, analogy forming, metacognitive and self-monitoring function, and autoprogramming and self-maintenance function (Baars, 1988). Aleksander uses 12 principles for what he calls Artificial Consciousness: the brain is a state machine, inner neuron partitioning, conscious and unconscious states, perceptual learning and memory, prediction, awareness of self, representation of meaning, learning utterances, learning language, will, instinct, and emotion display.
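The functionalist premise can be made concrete with a deliberately crude sketch. Below is entirely my own toy example, with an invented state name and transition table: two systems with different “substrates” share one causal profile, which, on the functionalist view, is all that matters for being in the same mental state.

```python
# Toy illustration of functionalism (my sketch, not anyone's published
# model): a mental state is defined by its causal role, i.e. the mapping
# from (current state, input) to (next state, output). Two systems with
# different substrates but the same transition table are, for the
# functionalist, in the same mental states.

PAIN_ROLE = {
    # (state, input)         -> (next state, output)
    ("ok", "tissue_damage"):   ("pain", "withdraw"),
    ("pain", "tissue_damage"): ("pain", "withdraw"),
    ("pain", "analgesic"):     ("ok", "relax"),
    ("ok", "analgesic"):       ("ok", "no_change"),
}

class System:
    """Any substrate realizing PAIN_ROLE -- neurons, silicon, or gears."""
    def __init__(self, substrate):
        self.substrate = substrate
        self.state = "ok"

    def step(self, stimulus):
        self.state, output = PAIN_ROLE[(self.state, stimulus)]
        return output

brain = System("biological neurons")
robot = System("silicon logic gates")
for stimulus in ["tissue_damage", "analgesic"]:
    assert brain.step(stimulus) == robot.step(stimulus)
print("Same causal profile; functionalism says same mental states.")
```

Whether a lookup table this thin could ever amount to consciousness is, of course, exactly what Searle’s Chinese Room argument disputes, which is why Baars and Aleksander pile on the richer criteria listed above.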

All of these criteria and forms of evidence seem like they should be comprehensive enough for us to understand consciousness, yet we still do not know much more about how it emerges than we did decades ago. As artificial intelligence continues to progress along the curve of Moore’s Law, we can expect to see some amazing feats of engineering, and machines that will undoubtedly pass the Turing Test with ease. But I raise the question again: will they ever be fully conscious on the same level as a human? I apologize for ending this series of posts on such indefinite ground, but my opinion on the matter accurately reflects the state of the field of artificial intelligence at this point in time. My answer to the question: I do not know. However, if I were to predict one way that consciousness could emerge from an artificial system, I would say the system would have to be the perfect combination of biochemical reactions and computer science, of neural networks and algorithms, and of quantum mechanics and “spooky forces.”
