
[12.0] Dualism (2): The Chinese Room & Goedel's Proof

v2.2.3 / chapter 12 of 15 / 01 oct 24 / greg goebel

* In the 1980s, philosopher John Searle devised what might be called the "anti-Turing test" -- a fable of machine translation known as the "Chinese room" that supposedly proved machines will never be able to think. Although the Chinese room did become popular, it falls apart under skeptical examination. The same can be said of various mathematical "proofs" that machines cannot think, going back to the dawn of the computer age, the best-known of them derived from the work of the Austrian-American mathematician Kurt Goedel.

THE TURING TEST


[12.1] THE CHINESE ROOM
[12.2] THE EXPANDED CHINESE ROOM
[12.3] GOEDEL & THE MATHEMATICAL OBJECTION
[12.4] TURING & FAIR PLAY FOR THE MACHINE

[12.1] THE CHINESE ROOM

* In addition to Nagel's bat, another well-known shot at cognitive pragmatism is the "Chinese room", a scenario devised by the American philosopher John Searle (born 1932). Searle introduced the "Chinese room" concept in the 1980 article "Minds, Brains, & Programs" -- envisioning a person, we'll say Bob, sitting in a closed room, with people slipping pieces of paper with Chinese writing on them under the door. Bob looks up the characters in a book, then writes the answer on a piece of paper and slides it back out under the door.

According to Searle, this proves that "strong AI" -- that is, a machine that thinks, as per Turing -- is impossible, since Bob is holding a conversation in Chinese, but doesn't understand a word of it. This is just plain silly. As discussed above, this scheme would not be at all adequate even for translation. Attempting to translate Chinese on a character-by-character basis, leveraging off rules of syntax, does not yield useful results for much more than trivial expressions. Bob could certainly not hold a conversation in Chinese without understanding what was being said; the most he might do is parrot canned replies with no coherent connection to each other, like an ELIZA program -- if even that.

The Chinese room, as defined, cannot pass the Turing test. The irony is that the only way this scenario would work is if Bob had access to a computer with a deep translation system that understood the Chinese characters, and tracked what was being said well enough to hold a convincing conversation. Bob wouldn't understand the conversation; but the computer would.

Searle, on being handed this inconvenient truth, asserted without justification that the computer program would have "all the syntax but no understanding." However, syntax alone only gives an obnoxious ELIZA program that can't hold anything resembling a convincing conversation. If the computer can actually hold a convincing conversation on a particular topic, of course it understands:

"What color are apples?"

"Stereotypically red, but sometimes green or yellow."

"What make of car do you drive?"

"Don't be silly."

"Why is there air?"

"The question is ambiguous. Please clarify."

"What did you just say about apples?"

"I said they were usually red in color." If the machine were built to make personal judgements -- it won't be, it's impersonal, it's not a human -- it would have to conclude it had a better comprehension of things than the person asking the questions. More specifically, suppose we ask a machine what it knows about, say, Abraham Lincoln. In response, it will reply with the data it has available on Lincoln. How is that different from what a human knows about Lincoln?

To be sure, the machine may not be able to do any more than blindly recite text and display images, but some humans would do the same; and if we can build a machine that can hold intelligent conversations, we can similarly build one that can read with comprehension. There's no technical obstacle to building a machine that can go through a text, pick out matters of interest, make connections within and outside the text, or summarize the text. It's another angle on the Turing rule: there's no way to identify any specific aspect of human understanding of a text that a machine demonstrably cannot accomplish.

As Searle phrased the Chinese room, it's completely unrealistic, an exercise in misdirection. Searle confronted the question posed by the Turing test; but instead of thinking out exactly what the point of the test is and what it would mean for a machine to pass it, he simply concocted a strawman, and declared that a machine cannot think. Turing was a genius. Searle -- not so much.


[12.2] THE EXPANDED CHINESE ROOM

* Searle didn't stop there. Confronted with criticisms of his Chinese room scenario, Searle expanded his vision to invoke not just Bob in a room, but an entire network of people, even of a number equivalent to the entire population of India. Each person in the network has a specific and narrow understanding of a bit of Chinese, but the aggregate is able to hold a convincing conversation in Chinese, without any one person in the network understanding Chinese.

In a broad sense, this scenario isn't so far from having a conversation on an internet forum. Bob posts a question; different people then try to answer it. Internet forums are notoriously unreliable sources of information; but we could imagine what might be called a "universal answer forum", in which a huge professional community of experts is standing by to answer questions, any questions, from anybody.

Different members of the community will answer different questions, with those who can't answer a question yielding the floor to those who can, to then stand by, listening in. They're all monitoring the same forum, they all know what's going on. There may be disputes among experts on the answers on occasion, but there are conflict-resolution mechanisms to sort such collisions out. One expert answers one sort of question, another a different sort of question -- but for Bob, as long as the members of the community do nothing to disrupt the illusion of unity, following a common "style guide" to maintain uniformity, it's a coherent conversation with a unitary system. This distributed network architecture parallels that of the human brain, with different agents having different competences staying coordinated through broadcast messages, the agents working out which one of them takes the floor.

Now suppose we substitute a network of AI systems on the internet, accessed through GENIE. Other than the fact that the different AIs are more specialized than the human experts -- as well as both more exacting and less flexible -- the network of AI systems would be functionally equivalent to the network of human experts. If we can hold a thoughtful conversation with the network, it passes the Turing test. With the network of AIs, all of them monitoring the conversation with the user and contributing as each one sees the need, we have a mindful chat with GENIE. She's not lacking in understanding; indeed, she seems to know just about everything about everything. Of course, the understanding is in the entire network, GENIE generally passing on the queries to the internet.
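
For the curious, the floor-taking arrangement described above is simple to sketch in code. The following is a toy sketch in Python -- with hypothetical agent names and a crude keyword-counting stand-in for real competence estimation -- in which specialized agents all see the same query, each scores how well it could answer, and a front end relays the reply of the most confident one, so the user only ever sees a single, seemingly unitary voice.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Agent:
    name: str
    competence: Callable[[str], float]  # 0.0..1.0: how well could I answer this?
    answer: Callable[[str], str]

def keyword_competence(keywords: List[str]) -> Callable[[str], float]:
    """Crude stand-in for real competence estimation: fraction of keywords present."""
    def score(query: str) -> float:
        q = query.lower()
        return sum(1 for k in keywords if k in q) / max(len(keywords), 1)
    return score

class FrontEnd:
    """The single 'voice' the user talks to; the agents stay behind the curtain."""
    def __init__(self, agents: List[Agent]):
        self.agents = agents

    def ask(self, query: str) -> str:
        scored = [(agent.competence(query), agent) for agent in self.agents]
        best_score, best_agent = max(scored, key=lambda pair: pair[0])
        if best_score == 0.0:
            return "None of us can help with that."
        return best_agent.answer(query)

if __name__ == "__main__":
    agents = [
        Agent("chinese-language", keyword_competence(["chinese", "mandarin"]),
              lambda q: "(reply from the hypothetical Chinese-language specialist)"),
        Agent("history", keyword_competence(["lincoln", "history"]),
              lambda q: "(reply from the hypothetical history specialist)"),
    ]
    front_end = FrontEnd(agents)
    print(front_end.ask("What do you know about Lincoln?"))

A real network would route queries on much more than keywords, but the architecture is the same: broadcast the query, let the most competent agent take the floor, keep the seams hidden from the user.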

It must be emphasized that this is not the same scenario as posed by Searle, because each of the AIs accessed through GENIE is a stand-alone, if narrowly specialized, system, while Searle envisions each node in the network only doing one little part of the job of holding a conversation in Chinese, as if pretending to be a single neuron. Okay, but then Searle's story is exactly the same as it was before: there's not enough to the network to pass the Turing test. Searle did nothing more than rephrase his strawman story, to then grab on to the distribution fallacy of Leibniz -- claiming that if we dismantled a brain, electronic or biological, into its independent basic components, it wouldn't be able to produce a mind.

That's true, in the same way that if we dismantle a clock, it won't keep time any more. There's no way to network the scattered components to get the clock to work again; they could be connected by tying long lengths of string between each other, but that would be silly, since the gears would still not be meshing with each other.

As discussed earlier, a competent translational or conversational system requires a deep cognitive network that incorporates not only all the details and nuances of the language, but a world model defining the things being discussed -- and an ability to dynamically create a cognitive model, rooted in the system's world model, of the specific issue under discussion. Breaking that cognitive network down into its petty elements, eliminating the necessary interactions of that network, destroys the network.

Searle shrugged off the criticisms to double down on the distribution fallacy, saying that claiming the mind was a product of the system of the brain was a kind of dualism. That was ridiculous: dualism is the assertion that there's more to the mind than the ordinary workings of PONs, which cognitive functionalists -- meaning advocates of strong AI -- see no reason to believe.

Searle, it seems, does not understand the notion of a system property. It is hardly dualism to claim that a mechanical clock can only keep time if all its components are there and working together, with each component doing its job. The neurons of the brain are massively interconnected, and not in a haphazard fashion: as discussed previously, the brain has a hierarchical, clearly structured architecture, and any major variation from that architecture means a brain that doesn't work right. It may not even be alive for long.

Not really incidentally, in the era of GAI chatbots, a far-reaching network of AIs is not such a remote future. The inherent limitation of a GAI chatbot is that it has to be trained using floods of information, and it is not practical to train a GAI chatbot to be an expert in everything. A generalized GAI chatbot will know a great deal about the Chinese language, for example, but it won't be a substitute for a human expert on the subject. However, a specialized GAI chatbot could be trained up to the expert level on Chinese, and given enough training, it would be as capable as a human expert, or more so. Universities could collaborate to build such specialized GAI chatbots, with different departments focused on different chatbots, with the evolving network of chatbots appearing as a seamless whole.

Indeed, even today it is possible for anyone to build a GAI chatbot to personal specification; there is, for example, a website named "character.ai" that allows a user to build a chatbot for any real or fictional person. The chatbots are convincing to the level to which they have been trained; the David Hume chatbot is good, others not so good. In any case, the proliferation of chatbots is already in energetic progress. With so many of them emerging, sorting out the jewels from the junk is likely to become a problem.

lost in the Chinese room

In any case, it should be noted that Searle rejects the idea of an inexplicable Harvey the homunculus, instead believing that the mind is rooted in properties that we haven't understood yet -- though he denies he is a property dualist. In reality, all forms of cognitive dualism are effectively the same. They all assert that the mind isn't just a product of the workings of PONs, that there's got to be something there that can't be seen, or hasn't been seen yet. The distinctions between the forms of dualism are purely rhetorical, like arguing over whether the fur of Harvey the homunculus is white, black, or pink with purple polka-dots.

Searle is a particular adversary of Dennett, denouncing Dennett's work as "intellectual pathology". Dennett returns the compliment, pointing out that Searle's work has been heavily criticized by the cognitive studies community, with Searle dismissing all criticisms out of hand. Searle takes Dennett to task for not recognizing the central importance of subjective experience -- which is baffling, because Dennett's heterophenomenology is all about subjective experience, and nothing else. Dennett simply says that if we can't establish subjective experience as objective evidence, we can go nowhere. As he neatly put it in his 2003 essay, it covers all the bases:

QUOTE:

The total set of details of heterophenomenology, plus all the data we can gather about concurrent events in the brains of subjects and in the surrounding environment, comprise the total data set for a theory of human consciousness. It leaves out no objective phenomena, and no subjective phenomena of consciousness.

END_QUOTE

In response, Searle and many other dualists say that nobody can truly describe their subjective experiences. Say what? We do it all the time. We read entire books by people reporting their personal experiences, and judge such accounts to be as credible as their authors, with some authors being more skillful in communicating their experiences than others. With modern multi-media and virtual reality technology, narratives of personal experience can be immersive experiences. If the authors are still among the living, then if we have questions about their experiences, we can in good heterophenomenological fashion ask them to clarify. What more could anyone ask for that was of any use?

Consider even the simple example of Alice telling Bob about a sunset: since Bob regards Alice as highly credible, he has no cause to question her. Can Bob actually be Alice? Of course not -- which raises the follow-on question: is that a problem? Not to Bob. He has no concern, and no reason for concern, about qualia.

The notion of qualia amounts to a "Cartesian wedge" to deny the objective reality of personal experience; to reject the validity of heterophenomenology; to split conscious experience from observable and explicable cognition. Instead of "the buck stops here", the buck is simply pocketed, and disappears. Of course we have personal experiences, there's no argument about that. Don't we spend our lives telling others about our personal experiences, and being told about the personal experiences of others? What do qualia bring to the party?

Dennett noted how Searle, in response to one critique of the Chinese room, zeroed in on the critic's use of the term "a few slips of paper" instead of, as Searle had put it, "a few bits of paper" -- and then claimed that rendered the criticism entirely invalid. Dennett, with strained patience, pointed out that if the Chinese room scenario fell apart when afflicted with such a trivial misunderstanding, then it is extremely fragile, in other words an intellectual house of cards. However, that was obvious to begin with.

The question that can be, and has been, posed to Searle is: if we were able to entirely simulate the workings of the human brain on a digital computer, running what could be called "the Game of Mind", would it have a human mind? Would it think? Would it be conscious? Of course, this is almost an RTX, it's far beyond anything we can think of doing, and it's not at all clear it would be worth the immense trouble even if we could -- but there's nothing in Searle's Chinese room fable that would prevent a cognitive pragmatist from replying: "Nobody has ever come up with a single sensible reason to prove it wouldn't be conscious. If it acts like it's conscious, we have no reason to believe it isn't."


[12.3] GOEDEL & THE MATHEMATICAL OBJECTION

* Along with shots from philosophers like Nagel and Searle, mathematicians have also taken on cognitive pragmatism. The root of their objection is what Turing called the "mathematical objection" in his 1950 essay:

QUOTE:

There are a number of results of mathematical logic which can be used to show that there are limitations to the powers of discrete-state machines. The best known of these results is known as Goedel's theorem (1931) and shows that in any sufficiently powerful logical system statements can be formulated which can neither be proved nor disproved within the system, unless possibly the system itself is inconsistent.

There are other, in some respects similar, results due to Church (1936), Kleene (1935), Rosser, and Turing (1937). The latter result is the most convenient to consider, since it refers directly to machines, whereas the others can only be used in a comparatively indirect argument: for instance if Goedel's theorem is to be used, we need in addition to have some means of describing logical systems in terms of machines, and machines in terms of logical systems.

The result in question refers to a type of machine which is essentially a digital computer with an infinite capacity. It states that there are certain things that such a machine cannot do. If it is rigged up to give answers to questions as in the imitation game, there will be some questions to which it will either give a wrong answer, or fail to give an answer at all however much time is allowed for a reply.

END_QUOTE

Goedel's theorem -- his "incompleteness theorem", to be precise -- is easily the most prominent of the "mathematical objections", and is phrased as follows:

QUOTE:

1: Any consistent formal [axiomatic] system "F" within which a certain amount of elementary arithmetic can be carried out is incomplete -- IE, there are statements of the language of "F" which can neither be proved nor disproved in F.

2: For any consistent system "F" within which a certain amount of elementary arithmetic can be carried out, the consistency of "F" cannot be proved in "F" itself.

END_QUOTE

The Goedel incompleteness theorem leads to what could be most conveniently called the "Goedel postulate against strong AI", or "GPAI" for short, which goes roughly as follows: a computer is a formal system subject to Goedel's limits, so there will always be meaningful questions it cannot correctly answer -- while a human mathematician, supposedly not being so limited, can; and so a machine can never think as a human does.

To dig into the GPAI, we need to start with the definition of an algorithm. According to Wikipedia:

QUOTE:

In mathematics and computer science, an algorithm is an unambiguous specification of how to solve a class of problems. Algorithms can perform calculation, data processing, and automated reasoning tasks.

An algorithm is an effective method that can be expressed within a finite amount of space and time and in a well-defined formal language for calculating a function. Starting from an initial state and initial input (perhaps empty), the instructions describe a computation that, when executed, proceeds through a finite number of well-defined successive states, eventually producing "output", and terminating at a final ending state.

END_QUOTE

As per this definition, algorithms are logically proveable, and also fully deterministic: given particular inputs, there's no doubt what the output will be. We know every step they take from input to output, and know for certain what the outputs will be for given inputs.

* The GPAI is based on two assumptions, the first being that computers can only work by logically proveable and fully deterministic algorithms. This assumption is arbitrary and not demonstrably valid.

To show why it isn't valid, it is necessary to define exactly what is meant by "determinism" -- instead of just winging it, as has been done up to this point, and glossing over the common confusions generated by the term. Determinism simply means that the Universe runs by consistent and, in principle, predictable rules.

This is not an observable fact of the Universe; it's something stronger than that, it's an unavoidable assumption. It is validated by all our experience, and it is absurd to assume anything else: we can't prove that the Sun will rise tomorrow morning, but nobody with sense bothers to argue that it won't.

There's actually no way to invalidate determinism: if something happens that's completely outside all our experience, we can only assume it's some aspect of the orderly Universe that has escaped our notice until now. Concluding that it is a violation of the order of the Universe is lazy and useless. Again, the Universe does whatever it does, so we just have to pay attention and try to follow along. The sciences are entirely based on the assumption of determinism. If the Universe did not operate by consistent rules, the sciences would be a waste of time, with no predictive power. A Universe without order would be purely chaotic.

Of course absolute determinism, meaning no uncertainty about the results, doesn't exist in the real Universe, since we can't rule out unpredictable events: as programmers like to say, there's always one more bug, and a machine that executes a perfectly predictable algorithm can still break down, or be misused.

Determinism, in reality, exists in degrees. Yes, there is such a thing as strong or rigid determinism, in that we do have reliable systems that only fail if they have a hardware breakdown, and on which we do place trust. We regard such systems as absolutely deterministic -- though strictly speaking, they're not.

There is also such a thing as weak or loose determinism, for example in the rolling of a die: we know it will come up with a value from 1 to 6, and given a fair die, the odds of coming up with any one of those six values are the same. We just never know what it's going to be, which is the point of dice; we expect them to be unpredictable. We have no expectation that a fair die will always roll a 6, we expect it to follow the odds -- and get suspicious that it isn't a fair die if it rolls a 6 several times in a row, becoming ever more suspicious if it keeps on rolling a 6. Dice are deterministic, but only in a weak sense.
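
To put rough numbers on "becoming ever more suspicious": with a fair die, the odds of rolling a 6 k times in a row are (1/6)^k, which collapses quickly -- a run of three is already under half a percent, a run of six is about 1 in 46,656. A couple of lines of Python confirm the arithmetic:

# Odds of k consecutive sixes from a fair die: (1/6) ** k.
for k in range(1, 7):
    print(k, "sixes in a row:", (1 / 6) ** k)
# prints roughly 0.167, 0.028, 0.0046, 0.00077, 0.00013, 0.000021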

In any case, the real problem with the GPAI is that there's no reason other than prudence that computer programs have to be based on rigidly deterministic algorithms. We can write programs that simply perform randomized guesses to solve a problem, though they work much better if they're randomized educated guesses -- using machine learning to obtain the education, accumulating a pool of knowledge from which to make the guesses, then obtaining the best matches to the stated problem from a set of guesses extracted from the pool. Exploiting the pattern-matching powers of the brain, humans often play hunches, trying different ones, even absurd ones, in hopes of finding one that works, and often do.

More generally, computers can operate by heuristics, rules of thumb, just as Alice does when she's shopping for groceries. Consider again the traveling salesman problem, in which a salesman has to determine the most efficient route to visit sales prospects. The only way to prove which route is the most efficient is to chug through all of the possible routes -- and due to the combinatorial explosion, this becomes ever more difficult as the number of offices increases.

A heuristic approach to the traveling salesman problem uses a "greedy algorithm", simply taking the shortest route from one office to the next, without concern for whether the overall route is shortest or not. The end result will be a workable route, but one which is not guaranteed to be optimum. Playing with a bit of randomization of routes, to then devise and compare alternate routes, will give a more optimized route, with the optimization improving with the number of comparisons -- but there will still be no guarantee that the route will, in the end, be fully optimized.
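
A minimal sketch of this approach in Python is given below: a greedy nearest-neighbor tour over some randomly placed cities, followed by randomized segment reversals that are kept only when they shorten the route. The details -- the coordinates, the number of tries -- are made up for illustration, and nothing guarantees the final route is optimal, which is exactly the point.

import math
import random

def distance(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def tour_length(tour, cities):
    """Total length of the closed route, including the hop back to the start."""
    return sum(distance(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def greedy_tour(cities):
    """Greedy heuristic: always go to the nearest unvisited city next."""
    unvisited = set(range(1, len(cities)))
    tour = [0]
    while unvisited:
        last = tour[-1]
        nearest = min(unvisited, key=lambda c: distance(cities[last], cities[c]))
        tour.append(nearest)
        unvisited.remove(nearest)
    return tour

def randomized_improvement(tour, cities, tries=2000):
    """Randomly reverse segments of the route, keeping any change that shortens it."""
    best, best_len = tour[:], tour_length(tour, cities)
    for _ in range(tries):
        i, j = sorted(random.sample(range(len(tour)), 2))
        candidate = best[:i] + best[i:j + 1][::-1] + best[j + 1:]
        cand_len = tour_length(candidate, cities)
        if cand_len < best_len:
            best, best_len = candidate, cand_len
    return best, best_len

if __name__ == "__main__":
    random.seed(1)
    cities = [(random.random(), random.random()) for _ in range(20)]
    tour = greedy_tour(cities)
    print("greedy route length:", round(tour_length(tour, cities), 3))
    improved, length = randomized_improvement(tour, cities)
    print("after randomized swaps:", round(length, 3))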

At this point, the sleight of hand in the GPAI starts to reveal itself. Are heuristics algorithms? They can be and are called such, but the GPAI rests on algorithms defined as logically proveable, rigidly deterministic -- and heuristics don't work that way. The assumption that computers can only work on the basis of rigidly deterministic algorithms is bogus on the face of it.

Advocates of the GPAI reply that the basic operations of a computer are logically proveable and rigidly deterministic, and so the GPAI still holds. This is, once again, a fallacy of distribution -- assuming that what is true for the parts is true for the whole, when it isn't. Any sort of real-world procedure, every procedure, that we could define will have well-defined basic mechanisms; if it didn't, we wouldn't be able to define the procedure. That says nothing about the effectiveness of the complete procedure: even if all the basic mechanisms work perfectly, the complete procedure might be indeterminate, not necessarily accomplishing anything, or leading to a wrong answer.

* More to the point, this leads straight to the second assumption of the GPAI: that the human mind, being defined as non-algorithmic, operates by principles that can never be explained. After all, if it is alleged that the fact that computers operate on the basis of well-defined fundamental mechanisms -- even if their overall operation isn't necessarily strongly deterministic -- means they can never think like humans, then human thinking can't be based on well-defined fundamental mechanisms either.

This assumption is incoherent. It implies the brain is indeterminate, not following any observable rhyme or reason, when it clearly does. We expect people's thinking and behavior to make some sort of sense -- and when it doesn't, we perceive something's not working right. Indeed, even the mentally incoherent have highly predictable behaviors, it being a good bet that they'll do the wrong thing, and that they shouldn't be trusted. Certainly, it's not like cognitive researchers have thrown up their hands in hopeless desperation, and proclaimed: "Oh dear, we can't make heads or tails of the brain and its behaviors! It's hopeless, we'll just have to commit ritual suicide!"

What makes this assumption preposterous is that both computers and the brain are universal machines: either can perform any task that can be defined as a bounded procedure. Both computers and humans can perform logically-proveable, rigidly-deterministic algorithms -- after all, humans devised them. Both computers and humans can also perform educated guesswork and heuristics. AI systems are generally based on artificial neural networks, which are more or less modeled on the brain, and it is then difficult to see any fundamental difference in their operation and potential.

Of course, in evaluating the validity of theorems, mathematicians are operating according to a kit of rules, acquired over centuries of study and discourse, that are not always logically proveable; with mathematicians quick to light into real or perceived violations of the rules by colleagues. Experts like to argue -- and without common rules, they couldn't have constructive arguments. Even if they can't prove a postulate is correct, it will remain standing if nobody can show where it falls down. There's obviously a distributed cognitive network, shared among the interested mathematicians, embodying this process. If the workings of that cognitive network can be articulated by the mathematicians, it can be implemented on a machine. If it can be done with neurons, it can be done with computer hardware.


[12.4] TURING & FAIR PLAY FOR THE MACHINE

* In a 1947 lecture, Alan Turing neatly summed up the GPAI and its kin, showing how contrived such lines of reasoning were:

QUOTE:

It might be argued that there is a fundamental contradiction in the idea of a machine with intelligence. ... It has for instance been shown that with certain logical systems, there can be no machine which will distinguish provable formulae of the system from unprovable ... Thus if a machine is made for this purpose, it must in some cases fail to give an answer.

On the other hand, if a mathematician is confronted with such a problem he would search around and find new methods of proof, so that he ought eventually to be able to reach a decision about any given formula. This would be the argument.

Against it, I would say that fair play must be given to the machine. Instead of it sometimes giving no answer we could arrange that it gives occasional wrong answers. But the human mathematician would likewise make blunders when trying out new techniques. It is easy for us to regard these blunders as not counting and give him another chance, but the machine would probably be allowed no mercy.

In other words, if a machine is expected to be infallible, it then cannot also be intelligent. There are several mathematical theorems which say almost exactly that. But these theorems say nothing about how much intelligence may be displayed if a machine makes no pretence at infallibility [using, say, heuristic algorithms].

To continue my plea for "fair play for the machines" when testing their IQ: A human mathematician has always undergone an extensive training. This training may be regarded as not unlike putting instruction tables into a machine. One must therefore not expect a machine to do a very great deal of building up of instruction tables on its own. No man adds very much to the body of knowledge -- why should we expect more of a machine?

END_QUOTE

Turing's argument in favor of "fair play for the machines" is compelling, and can be readily extended. For example, one way of making the argument that there's an insurmountable barrier between human and machine cognition would be to point out that humans, while capable of being extremely intelligent, can at least as often be capable of astonishing stupidity -- we have the one, we also have the other. Who, however, would claim such stupidity as a barrier between the minds of humans and machines? If humans can be bewilderingly stupid, why can't machines? Isn't it unfair to the machine to say it can't?

Of course, through failures or limitations of design, machines can be pretty stupid; and it would not be so difficult to make them imaginatively stupid, if we wanted to. We simply do not dare give machines the independence of thought to do dangerously stupid things; any manufacturer who did build such machines would be put out of business in a hurry, with the bosses thrown into lockup for criminal negligence. Yes, the "argument of stupidity" is silly; but how can it be shown to be any more or less silly than the other arguments offered for the GPAI?

The human brain is, once again, a noisy neural net. No cognitive process of the brain that has ever been described is or could be inconsistent with the operation of a noisy neural net; cognitive processes that haven't been described give us nothing to discuss. Computers can be used to implement noisy neural nets. If we were to once more play the Game of Mind, implementing a computer system that fully emulated the brain, what could the human brain do that the computer could not? The computer is a different sort of brain, made of silicon instead of cells; but both, by definition, have the same mind. Again, the mind is a system of behaviors; if the machine has the same behaviors, then what exactly is it missing?
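
What a "noisy neural net" amounts to in computational terms is easy to illustrate. The sketch below -- a toy in Python with made-up weights, not a brain model -- is an ordinary feed-forward layer with random noise added to each unit's activation; run it twice on the same input and the outputs differ slightly, which is to say it is only weakly deterministic.

import math
import random

def noisy_layer(inputs, weights, biases, noise_sd=0.05):
    """One dense layer: weighted sum + bias + Gaussian noise, through a sigmoid."""
    outputs = []
    for w_row, b in zip(weights, biases):
        pre = sum(w * x for w, x in zip(w_row, inputs)) + b
        pre += random.gauss(0.0, noise_sd)            # the "noise" in noisy
        outputs.append(1.0 / (1.0 + math.exp(-pre)))  # sigmoid activation
    return outputs

if __name__ == "__main__":
    x = [0.2, 0.8]                      # made-up input
    W = [[0.5, -0.3], [0.1, 0.9]]       # made-up weights, one row per output unit
    b = [0.0, -0.2]                     # made-up biases
    print(noisy_layer(x, W, b))         # same input, run twice...
    print(noisy_layer(x, W, b))         # ...slightly different outputs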

In his 1950 essay, Turing characterized the GPAI as effectively saying there is at least one specific meaningful question that a machine can't give a good answer to, but a human can:

QUOTE:

The short answer to this argument is that, although it is established that there are limitations to the powers of any particular machine, it has only been stated, without any sort of proof, that no such limitations apply to the human intellect.

But I do not think this view can be dismissed quite so lightly. [The more adequate answer is that whenever] one of these machines is asked the appropriate critical question, and gives a definite answer, we know that this answer must be wrong, and this gives us a certain feeling of superiority. Is this feeling illusory? It is no doubt quite genuine, but I do not think too much importance should be attached to it.

We too often give wrong answers to questions ourselves to be justified in being very pleased at such evidence of fallibility on the part of the machines. Further, our superiority can only be felt on such an occasion in relation to the one machine over which we have scored our petty triumph. There would be no question of triumphing simultaneously over all machines. In short, then, there might be men cleverer than any given machine, but then again there might be other machines cleverer again, and so on.

END_QUOTE

The question is then: "So what, exactly, is this critical question?" Even if somebody came up with a candidate question and found a computer couldn't answer it, that would give no proof it couldn't be done eventually. We could equivalently ask for a bounded procedure that a human could perform, but not a machine -- or, for that matter, the reverse; but since they're both universal machines, that's not a real question, it can't be done. The GPAI is defeated by the Turing rule: there's no way to know all the possible programs that can be written, and there's no way to prove that a machine can't do anything the human brain can do.

There is and, by its own definition, can be no evidence to support the GPAI, since all it says is: "Humans can get from HERE to THERE, but machines can't." As with Thomas Nagel, this assertion is predicated on incoherent notions of HERE and THERE. It doesn't even invoke a skyhook.

Of course, since the GPAI can't be proven by evidence, it also can't be disproven by evidence -- but that's what is known as an "escape hatch", hiding behind unproveables to stymie further discussion. One might say that advocates of the GPAI have the burden of proof to show they have anything to offer; but since they clearly don't, it would be a waste of time to ask them for it.

An advocate for the GPAI might reply that its validity can only be evaluated by those with an adequate mathematical background. Even if that's true, however, that suggests that it's irrelevant to everyone else. Only mathematicians could take the GPAI seriously; to the extent it has been taken seriously, it suggests over-reach, a form of what Dennett calls "greedy reductionism": mathematicians generating a skyhook in an attempt to show their work has a broader significance than it actually has. Turing, one of the most famous mathematicians in history, wasn't impressed. Non-mathematicians have no reason to pay it any mind.
