The Nature Of Consciousness: A Question Without An Answer?
by MARCELO GLEISER
August 07, 2013

[Image caption: How does our subjective reality emerge from the physical structures of the brain and body? Credit: iStockphoto.com]
Today I'd like to go back to a topic that leaves most people perplexed, me included: the nature of consciousness and how it "emerges" in our brains. I wrote about this a few months ago, promising to get back to it. At this point, no scientist or philosopher in the world knows how to answer it. If you think you know the answer, you probably don't understand the question:
Are you all matter?
Or, let's phrase it in a different way, a little less controversial and more amenable to a scientific discussion: how does the brain, a network of some 90 billion neurons, generate the subjective experience you have of being you?
Australian philosopher David Chalmers, now at New York University, dubbed this question "The Hard Problem of Consciousness." He did this to differentiate it from other problems, which he considers the "easy" ones, that is, those that can be solved through the diligent application of scientific research and methodology, as is already being done in cognitive neuroscience and computational neuroscience. Even if some of these "easy" problems may take a century to solve, their difficulty doesn't even come close to that of the "hard" problem, which, some speculate, may be insoluble.
Note that, even if the hard problem turns out to be insoluble, the majority of scientists and philosophers still stick to the hypothesis that matter is all there is and that "you" exist as a neuronal construction within your brain (and body, since the two are linked in many ways, not all of them yet understood).
Here are some of the problems Chalmers calls easy:
- The ability to discriminate and react to external stimuli
- The integration of sensory information
- The difference between a state of wakefulness and sleep
- The intentional control of behavior
These questions are on the whole localized, amenable to a reductionist description of how specific parts of the brain operate as electrochemical circuitry through myriad neural connections.
Recently, Henry Markram from the Federal Polytechnic School in Lausanne, Switzerland, received a billion-euro grant to lead the Human Brain Project, a consortium of more than a dozen European institutions that intends to create a full-blown simulation of the human brain. For this, they will need a supercomputer capable of more than a billion-billion operations per second (exaflops, where "exa" stands for 10¹⁸), about 50 times faster than today's high-end machines. Optimists believe that such computing power is within reach, possibly before the end of this decade.
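As a rough aside, the arithmetic behind those figures can be checked in a few lines. The sketch below is only a back-of-the-envelope illustration; the 20-petaflop baseline I use for "today's high-end machines" is my own assumption for the 2013 era, not a number from the article.

```python
# Back-of-the-envelope check of the computing figures quoted above.
# Assumption (not from the article): a 2013-era high-end supercomputer
# runs at roughly 20 petaflops; the baseline is illustrative only.

EXAFLOP = 1e18   # "exa" = 10^18: a billion-billion operations per second
PETAFLOP = 1e15  # 10^15 operations per second

target_speed = 1 * EXAFLOP       # the simulation's stated requirement
todays_machine = 20 * PETAFLOP   # assumed high-end machine of the time

print(f"Required speed-up: {target_speed / todays_machine:.0f}x")  # -> 50x
```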
Of course, Markram's project, that is, the attempt to model a human brain in full on a computer, clashes head-on with the notion of the hard problem.
Markram and the "computationalists" believe that if the simulation is sufficiently complete and detailed, including everything from the flow of neurotransmitters across each individual synapse to the amazingly complex network of the trillions of inter-synaptic connections across the brain tissue, it will function just as a human brain does, including a consciousness in every way as amazing as ours. To them, the hard problem doesn't exist: everything can be obtained by piling neuron upon neuron in computer chip models, as bricks compose a house, plus all the other building details, plumbing, wiring, etc.
Although we must agree that Markram's project is of enormous scientific importance, I can't quite see how a computer simulation can create something like a human consciousness. Perhaps some other kind of consciousness, but not ours.
Another philosopher from New York University (that ought to be an amazing department to work in), Thomas Nagel, argued that we are incapable of understanding what it is like to be another animal, with its own subjective experience. He took bats as an example, probably because they construct their sense of reality through echolocation and are so different from us. Using ideas from MIT linguist Noam Chomsky, who has argued that every brain has cognitive limitations stemming from its design and evolutionary functionality (for example, a mouse will never talk), Nagel showed that we will never truly understand what it is like to be a bat.
This is another way of thinking about Chalmers' hard problem, what philosopher Colin McGinn calls "cognitive closure." (McGinn has just left the University of Miami after much controversy. Who knows, maybe he will also join NYU's philosophy department?)
Back to McGinn's ideas: he and other "mysterians" defend the idea that our brains can only do so much, and one of the things they can't do is understand the nature of consciousness. Since this is a philosophical argument, there is of course no scientific proof of this limitation (what physicists fondly call a "no-go theorem"), but McGinn makes a compelling case, arguing that the difficulty comes from consciousness being nowhere and everywhere in the brain, and thus not amenable to the methodical reductionist analysis we tend to apply to scientific issues.
This being the case, it becomes very hard to see how the subjective quality of the experiential mind will emerge from neuronal modeling in silicon chips: to capture thinking is not the same thing as capturing what the thinking is about.
McGinn leaves the door open to more advanced intelligences, with brains designed in more capable ways than ours. Of course, unless you are Ray Kurzweil and are convinced that it is just a matter of time before machines will be able not just to simulate the mind but to leave us all behind, we can't reliably predict whether such technological marvels will ever come to be. But even if a more advanced (machine?) intelligence one day figures out what consciousness is about, it seems that, for now, we will have to keep living with the mystery of not knowing.