
[14.0] Dualism (4): Easy Problems & The Hard Problem

v2.2.0 / chapter 14 of 15 / 01 feb 24 / greg goebel

* One of the most prominent modern dualists is Australian philosopher David Chalmers, who has acquired notoriety for postulating the "hard problem" of consciousness -- which states, in so many words, that there has to be "something else" to consciousness besides plain old neurons. The hard problem is effectively a rephrasing of the old mind-body problem, and fares no better under examination.

[IMAGE: THE TURING TEST]


[14.1] WHAT IS THE HARD PROBLEM?
[14.2] INTEGRATED INFORMATION THEORY
[14.3] THE HARD PROBLEM, VITALISM, & CUTISM

[14.1] WHAT IS THE HARD PROBLEM?

* The last stop on this tour of critics of cognitive science is Australian philosopher David Chalmers (born 1966). Chalmers is best known for his 1996 book THE CONSCIOUS MIND, outlining his theory of consciousness, and introducing the idea for which he is famous: the "hard problem". On the way to articulating the hard problem, Chalmers does ask astute questions about the mind and consciousness, for example:

- How does the brain discriminate, categorize, and react to environmental stimuli?
- How does a cognitive system integrate information?
- How are mental states reported, and how does a system access its own internal states?
- How is attention focused? How is behavior deliberately controlled?
- What is the difference between wakefulness and sleep?

These questions are the bread and butter of cognitive science -- but Chalmers more or less dismisses them as the "easy problems", all subordinate to the hard problem, which he states as:

BEGIN_QUOTE:

What makes the hard problem hard and almost unique is that it goes beyond [the easy problems] about the performance of functions. To see this, note that even when we have explained the performance of all the cognitive and behavioral functions in the vicinity of experience -- perceptual discrimination, categorization, internal access, verbal report -- there may still remain a further unanswered question: Why is the performance of these functions accompanied by experience?

END_QUOTE

In other words, the hard problem is the mind-body problem, asking about the qualia of experiences. As Chalmers put it in a 2006 interview:

BEGIN_QUOTE:

The basic intuition that gets it all going is that there seems to be an explanatory gap between an explanation of, say, the brain processes, and an explanation of consciousness. We like the idea that in science we are going to get a chain of explanations that goes all the way up from physics to chemistry to biology to whatever. But there still is this problem of consciousness. Putting together any amount of information on, say, neurons and the connections within the brain and so on, always leaves this gap.

Why is it that there feels like there is something in the inside? That consciousness is a first-person phenomenon, whereby one may have an experience of the world or oneself from the first person point of view? And no physical explanation anyone has ever given to date tells one why there should be such a thing at all, the first person point of view. What I try to argue in my work is that there cannot be any such explanation.

END_QUOTE

In a 1998 interview, Chalmers stated that "materialism and the existence of consciousness can't be reconciled", and that we "need to add something else, some new fundamental principles, to bridge the gap between neuroscience and subjective experience." In short, Chalmers is a property dualist. In his own taxonomy, he rejects both the "Type A materialism" of cognitive pragmatism -- which denies there is any hard problem over and above the easy problems -- and "Type B materialism", which grants the explanatory gap, but insists the mind is physical all the same.


[14.2] INTEGRATED INFORMATION THEORY

* If there must be "something else" behind consciousness besides PONs, what does Chalmers suggest it is? At one time, Chalmers didn't think much of quantum consciousness, though it seems he's since mellowed on the idea -- but he's been consistently attracted to the idea that the mind is based on information.

The assertion that the mind is all about information might seem trite, since the brain, and its manifestation the mind, are obviously all about information processing -- and from that premise, we simply identify neurons as the information-processing agents, and end up with PONs all over again. That's not exactly what Chalmers means. He takes his cue from Italian neuroscientist Giulio Tononi (born 1960), who has developed a theory of consciousness called "Integrated Information Theory (IIT)".

Christof Koch is a prominent advocate of IIT. In his 2018 SCIENTIFIC AMERICAN essay, he gave a thumbnail description of IIT, excerpted here with minor editing:

BEGIN_QUOTE:

Fierce debates have arisen around the two most popular theories of consciousness. One is the global neuronal workspace (GNW) by psychologist Bernard J. Baars, and neuroscientists Stanislas Dehaene and Jean-Pierre Changeux. The theory begins with the observation that when you are conscious of something, many different parts of your brain have access to that information. If, on the other hand, you act unconsciously, that information is localized to the specific sensory motor system involved. For example, when you type fast, you do so automatically. Asked how you do it, you would not know: you have little conscious access to that information, which also happens to be localized to the brain circuits linking your eyes to rapid finger movements.

GNW argues that consciousness arises from a particular type of information processing -- familiar from the early days of artificial intelligence, when specialized programs would access a small, shared repository of information. Whatever data were written onto this "blackboard" became available to a host of subsidiary processes: working memory, language, the planning module, and so on. According to GNW, consciousness emerges when incoming sensory information, inscribed onto such a blackboard, is broadcast globally to multiple cognitive systems -- which process these data to speak, store or call up a memory or execute an action.

Because the blackboard has limited space, we can only be aware of a little information at any given instant. The network of neurons that broadcast these messages is hypothesized to be located in the frontal and parietal lobes. Once these sparse data are broadcast on this network and are globally available, the information becomes conscious. That is, the subject becomes aware of it. Whereas current machines do not yet rise to this level of cognitive sophistication, this is only a question of time. GNW posits that computers of the future will be conscious.

END_QUOTE

Okay, so far so good: according to GNW -- that is, GWT -- consciousness is, as Dehaene puts it, "brain-wide information sharing". It's about messages broadcast to the brain's society of agents, which react to the messages, and compete with each other to contribute messages themselves. The sharing among the "community of mind" implies awareness.
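To make the blackboard metaphor concrete, here is a minimal Python sketch of a global-workspace-style message loop. It is only an illustration of the scheme Koch describes -- all of the names in it are invented for the example, and GNW's advocates would doubtless want many more moving parts:

    # Toy sketch of the GNW "blackboard" -- all names here are invented for
    # illustration; this is not code from Baars, Dehaene, or Changeux.

    class Workspace:
        """A shared blackboard with limited capacity."""
        CAPACITY = 4  # only a few items can be "conscious" at once

        def __init__(self):
            self.contents = []

        def broadcast(self, item):
            # inscribe an item; older items fade as capacity is exceeded
            self.contents.append(item)
            self.contents = self.contents[-self.CAPACITY:]

    class Agent:
        """A subsidiary process: reacts to broadcasts, may post replies."""
        def __init__(self, name, trigger, response):
            self.name, self.trigger, self.response = name, trigger, response

        def react(self, workspace):
            if self.trigger in workspace.contents:
                workspace.broadcast(self.response)

    ws = Workspace()
    agents = [
        Agent("language", trigger="red light", response="say 'stop!'"),
        Agent("planner", trigger="red light", response="plan: brake"),
    ]
    ws.broadcast("red light")   # incoming sensory information hits the board
    for agent in agents:
        agent.react(ws)         # every module sees the same broadcast
    print(ws.contents)          # the few items currently "in consciousness"

The capacity limit is the point to notice: only a handful of items are globally available at any instant, which is GNW's story for why awareness is so narrow.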

Seems to make sense, right? Certainly, Dehaene sees the global workspace as involving information distribution, and so the concept seems entirely consistent with an information-driven view of the mind. Koch speaks of finding the "neural correlates of consciousness"; Dehaene says he's identified them. However, the global workspace is not enough for advocates of IIT:

BEGIN_QUOTE:

Integrated information theory, developed by Tononi and his collaborators, including me, has a very different starting point: experience itself. Each experience has certain essential properties.

END_QUOTE

Koch goes on to enumerate these properties -- in IIT's terms, every experience exists intrinsically, for its subject alone, and is structured, specific, unified, and definite. Nobody would deny any of these properties; they seem completely obvious, and there's nothing in the global workspace model that does, or could, contradict them. What, then, does IIT have that GWT doesn't? The key is the term "experience itself" -- that is, experience as a "thing in itself", which means "qualia". Alarms now start going off. Working from there, Koch writes:

BEGIN_QUOTE:

Tononi postulates that any complex and interconnected mechanism whose structure encodes a set of cause-and-effect relationships will have these properties -- and so will have some level of consciousness. It will feel like something from the inside. But if ... the mechanism lacks integration and complexity, it will not be aware of anything. As IIT states it, consciousness is intrinsic causal power associated with complex mechanisms such as the human brain.

The theory also derives, from the complexity of the underlying interconnected structure, a single non-negative parameter called "Phi" that quantifies this consciousness. If Phi is zero, it does not feel like anything to be the system. Conversely, the bigger this number, the more intrinsic causal power the system possesses, and the more conscious it is. The brain, which has enormous and highly specific connectivity, possesses very high Phi, which implies a high level of consciousness.

END_QUOTE
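Computing the real Phi is notoriously laborious -- it means perturbing the system and comparing cause-and-effect structure across every way of partitioning it -- but a toy stand-in can suggest the flavor. The Python sketch below is a loose analogy, not Tononi's algorithm: it scores a network by its weakest bipartition, so a system that falls cleanly apart into independent halves scores zero, echoing IIT's claim that such a system doesn't "exist for itself":

    # Toy stand-in for Phi -- NOT Tononi's actual measure, which perturbs
    # the system and compares cause-effect structures. Here a network's
    # score is just the size of its weakest cut: the fewest connections
    # that must be severed to split it into two independent halves.
    from itertools import combinations

    def toy_phi(n_nodes, edges):
        nodes = range(n_nodes)
        best = float("inf")
        # try every bipartition into two non-empty parts
        for k in range(1, n_nodes // 2 + 1):
            for part in combinations(nodes, k):
                part = set(part)
                cut = sum(1 for a, b in edges if (a in part) != (b in part))
                best = min(best, cut)
        return best

    chain = [(0, 1), (1, 2), (2, 3)]                                # feed-forward
    dense = [(a, b) for a in range(4) for b in range(4) if a != b]  # recurrent
    print(toy_phi(4, chain))  # 1: snipping one link splits the system
    print(toy_phi(4, dense))  # 6: any bipartition severs many links

A feed-forward chain scores low, since cutting a single link separates it; a densely recurrent network scores high, since every way of splitting it in two severs many links.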

It's not so hard to follow along with this argument, but it is hard to see where it's going. Now the other shoe drops with a KABOOM:

BEGIN_QUOTE:

IIT also predicts that a sophisticated simulation of a human brain running on a digital computer cannot be conscious -- even if it can speak in a manner indistinguishable from a human being. Just as simulating the massive gravitational attraction of a black hole does not actually deform spacetime around the computer implementing the astrophysical code, programming for consciousness will never create a conscious computer. Consciousness cannot be computed: it must be built into the structure of the system.

END_QUOTE

Wait, what? According to IIT, if we play the Game of Mind, then even if a machine acts like a human being -- displays all the behaviors of one -- that doesn't mean it's actually conscious: it might really be a zombie. Alas, we've been through this, we know the score, and it's "not even wrong"; p-zombies are an incoherent idea. The only things we can observe going on upstairs with humans are the firing of neurons and behaviors; the only markers for consciousness are behavioral. If there's anything else there, we have no way of ever knowing what it is, and it is pointless to concern ourselves with it: "What we see is all we get."

Besides, as also discussed before, from the evolutionary point of view it's only the behaviors that make a difference. If consciousness weren't an inescapable aspect of the behaviors and their underlying brain circuitry, it would have been broken by mutations and disappeared a long time ago. Indeed, if consciousness had no function, no selective advantage, there's no reason it would have arisen in the first place.

Koch's contrast between a simulation of a black hole on a computer and a black hole itself is confused. A computer running cognitive software is not simulating a brain; it is itself a brain, just one of a very different sort. The question here is how the computer brain compares with the human brain, and whether the computer brain can be conscious. The two are very different sorts of brains -- but what specifically does the computer lack that precludes consciousness? If that difference could be nailed down, we could then modify the machine to be conscious.

Once again, both human and machine minds are universal machines, and so what can be done on one can be done -- in principle, if not necessarily very practically -- on the other. IIT founders on the Turing rule: there's no way to identify any cognitive process that can be performed by a human that can't be performed by a machine. Raising the objection that consciousness is independent of cognition is playing the zombie card, and legal card decks don't have zombie cards.

IIT is perfectly correct in identifying consciousness as an aspect of the complexity of the human brain -- but that complexity is associated with behaviors: as a good rule, the more elaborate and capable the behaviors, the more elaborate the system that generates them, the brain, must be. Phi is a measure of information-processing complexity -- in the terminology of its advocates, a measure of "integrated information" -- but skeptics, notably computer scientist Scott Aaronson, have pointed out that it is possible to build information-processing systems with very high values of Phi, even though all they do is complicated busywork.

After all, if Phi is a measure of processing system complexity independent of the cognitive functions -- the observable behaviors -- of the system, there's no requirement in Phi that the system actually do anything in particular. We could have a system with a high Phi that nobody would think of as mindful from its behavior. In reply to such observations, some of the advocates of IIT have shot back: "Well, how do you know it isn't mindful?"

That's another "escape hatch", off-loading the burden of proof. Again, there is no mathematical analysis of anything in the real Universe that is any more valid than the conformance of its results to observation. If there's no way to show by observation that Phi really is a marker for consciousness, then it amounts to nothing. Phi ends up being a mathematical take on Harvey the homunculus, the magical ingredient that separates the living from the p-zombies -- even though, by definition, nobody can actually tell the difference between the two.

In 2023, in response to reports on the "rivalry" between GWT and IIT, a group of over 100 researchers in the consciousness field, including Dennett, issued a public letter stating that some researchers considered IIT "pseudoscience". While not claiming the theory "lacks intellectual merit", the letter indicated that it required "meaningful empirical tests" before it could be called a "leading" or "well-established" theory. Before IIT can get to "meaningful empirical tests", however, it might help for it to obtain a workable definition of "consciousness".


[14.3] THE HARD PROBLEM, VITALISM, & CUTISM

* IIT's belief in the possibility of p-zombies is not a problem for Chalmers, since he's inclined to believe in zombies -- or more properly said, he believes they could exist, in some conceptual alternate Universe, while saying he doesn't believe they do exist in our Universe. In THE CONSCIOUS MIND, he argued that zombies are logically possible -- perfectly conceivable -- while conceding that they are almost certainly not naturally possible, meaning they couldn't arise in a world with our laws of nature.

Eliezer Yudkowsky, after an intense debate with Chalmers about p-zombies, commented:

BEGIN_QUOTE:

Chalmers is one of the most frustrating philosophers I know. [He] does this really sharp analysis, and then turns left at the last minute. He lays out everything that's wrong with the Zombie World scenario -- and then, having reduced the whole argument to smithereens, calmly accepts it.

END_QUOTE

[IMAGE: caught between a rock & the hard problem]

Chalmers' argument on p-zombies is purely philosophical, but he has no doubt of the existence of the hard problem. Dennett, in a 1995 essay critiquing Chalmers, expressed his own frustration with him by imagining a vitalist telling a biologist:

BEGIN_QUOTE:

The easy problems of life include those of explaining the following phenomena: reproduction, development, growth, metabolism, self-repair, immunological self-defense ...

These are not all that easy, of course, and it may take another century or so to work out the fine points, but they are easy compared to the really hard problem: life itself. We can imagine something that was capable of reproduction, development, growth, metabolism, self-repair, and immunological self-defense, but that wasn't, you know, alive.

The residual mystery of life would be untouched by solutions to all the easy problems. In fact, when I read your accounts of life, I am left feeling like the victim of a bait-and-switch.

END_QUOTE

Dennett then envisioned how difficult it would be to convince the vitalist that such a line of reasoning made no sense:

BEGIN_QUOTE:

This imaginary vitalist just doesn't see how the solution to all the easy problems amounts to a solution to the imagined hard problem. Somehow this vitalist has gotten the impression that being alive is something over and above all these subsidiary component phenomena.

I don't know what we can do about such a person beyond just patiently saying: your exercise in imagination has misfired; you can't imagine what you say you can, and just saying you can doesn't cut any ice.

END_QUOTE

Of course, the vitalist would indignantly reply that he had no problem imagining the matter at all -- ignoring the reality that his imagination was incoherent. He could just as well claim to imagine, say, a cubical sphere; but as Dennett put it, that wouldn't cut any ice, since he can't actually imagine two incompatible things as being one and the same -- except "in principle", with no visualization of what that would mean.

Switching back to dualism from the vitalist analogy, Dennett went on to say that he could see nothing in his own mind that suggested any inexplicable magic at work:

BEGIN_QUOTE:

What impresses me about my own consciousness, as I know it so intimately, is my delight in some features and dismay over others; my distraction and concentration; my unnameable sinking feelings of foreboding, and my blithe disregard of some perceptual details; my obsessions and oversights; my ability to conjure up fantasies; my inability to hold more than a few items in consciousness at a time; my ability to be moved to tears by a vivid recollection of the death of a loved one; my inability to catch myself in the act of framing the words I sometimes say to myself; and so forth.

These are all "merely" the "performance of functions", or the manifestation of various complex dispositions to perform functions. In the course of making an introspective catalogue of evidence, I wouldn't know what I was thinking about if I couldn't identify them for myself by these functional differentia. Subtract them away, and nothing is left beyond a weird conviction (in some people) that there is some ineffable residue of "qualitative content" bereft of all powers to move us, delight us, annoy us, remind us of anything.

END_QUOTE

Dennett, in this, echoed what Hume had written in his youth, centuries before, in his comments on the self:

BEGIN_QUOTE:

When my perceptions are remov'd for any time, as by sound sleep; so long am I insensible of myself, and may truly be said not to exist. And were all my perceptions remov'd by death, and cou'd I neither think, nor feel, nor see, nor love, nor hate after the dissolution of my body, I shou'd be entirely annihilated, nor do I conceive what is farther requisite to make me a perfect non-entity.

If any one, upon serious and unprejudic'd reflection thinks he has a different notion of himself, I must confess I can reason no longer with him. All I can allow him is, that he may be in the right as well as I, and that we are essentially different in this particular. He may, perhaps, perceive something simple and continu'd, which he calls himself; tho' I am certain there is no such principle in me.

END_QUOTE

In any case, Dennett concluded that Chalmers was on the road to nowhere:

BEGIN_QUOTE:

Chalmers recommends a parallel with physics, but it backfires. He suggests that a theory of consciousness should "take experience itself as a fundamental feature of the world, alongside mass, charge, and space-time." As he correctly notes: "No attempt is made [by physicists] to explain these features in terms of anything simpler," but they do cite the independent evidence that has driven them to introduce these fundamental categories.

Chalmers needs a similar argument in support of his proposal, but when we ask what data are driving him to introduce this concept, the answer is disappointing: It is a belief in a fundamental phenomenon of "experience". The introduction of the concept does not do any explanatory work. The evidential argument is circular.

We can see this by comparing Chalmers' proposal with yet one more imaginary non-starter: "cutism", the proposal that since some things are just plain cute, and other things aren't cute at all -- you can just see it, however hard it is to describe or explain -- we had better postulate cuteness as a fundamental property of physics alongside mass, charge, and space-time.

... Cutism is in even worse shape than vitalism. Nobody would have taken vitalism seriously for a minute if the vitalists hadn't had a set of independently describable [reliably observable] phenomena -- of reproduction, metabolism, self-repair and the like -- that their postulated fundamental life-element was hoped to account for. Once these phenomena were otherwise accounted for, vitalism fell flat, but at least it had a project.

Until Chalmers gives us an independent ground for contemplating the drastic move of adding "experience" to mass, charge, and space-time, his proposal is one that can be put on the back burner -- way back.

END_QUOTE

On consideration of Dennett's "cutism", it appears Egnor may have missed a bet -- since it would work as well as anything else as one of his arguments for the immaterial mind. Of course, even Egnor would find it too silly.

The hard facts of the matter are that the "easy problems" are not necessarily all that easy, while the "hard problem" is a dead end, merely a restatement of the mind-body "problem" -- a failure to realize that it's light in the room because we turned on the lights, silly. Chalmers, in his pursuit of the "explanatory gap", is in pursuit of a necessary connection for the mind, failing to realize there are no necessary connections, only observed connections, in any context.

Yes, we can imagine a conceptual alternate Universe in which the behavior of billiard balls is different than it is in ours -- but observation shows us that billiard balls behave in certain predictable ways, and we model their interactions accordingly. Maybe they could behave some other way, but they don't. Exactly the same is true of the mind. There is no hard problem, just as there is no software-hardware problem; no Harvey the homunculus; no Schroedinger's rabbit; and Phi amounts to nothing.
