
[15.0] Dualism (5): A Conscious Internet?

v2.3.0 / chapter 15 of 15 / 01 oct 24 / greg goebel

* David Chalmers, along with the hard problem, is also noted for his promotion of the "machine singularity" -- the idea that the global internet of computers will eventually become conscious, to then greatly outpace the intellect of humans. Like the hard problem, the machine singularity does not do well under critical examination. That sums up dualism, in all its permutations; it's a dead-end street, and none of the attempts to modernize it have done anything to revive it. There is nothing in the mind beyond the reach of science, since anything beyond that reach is unknowable and useless.

THE TURING TEST


[15.1] CHALMERS & THE MACHINE SINGULARITY
[15.2] THE INTERNET HIVE MIND
[15.3] DEAD-END DUALISM
[15.4] COMMENTS, SOURCES, & REVISION HISTORY

[15.1] CHALMERS & THE MACHINE SINGULARITY

* David Chalmers, it must be said, has an exceedingly lively intellect, and likes challenging ideas because they are, well, challenging. It is not always easy to figure out if he is honestly serious about an idea, or if he's instead running an idea up the flagpole to see if anyone salutes. He often asks, even pleads, for comments on his work from Dennett, despite the fact that Dennett usually doesn't have too much flattering to say in return. It appears Chalmers does so, at least in part, because he regards Dennett as a stimulating intellectual sparring partner. Dennett, in his turn, tries to be careful to keep the gloves on when replying to Chalmers -- though Dennett did, at least once, irritably tell Chalmers to his face to seek mental-health counseling.

Since the hard problem is so strongly linked to Chalmers, and he never flinches from it, there's little doubt he's serious about that. He's also noted for his advocacy of the "machine singularity": in a 2010 paper titled "The Singularity", he suggested that artificial intelligence, driving itself in a feedback loop, will eventually greatly outstrip the intelligence of humans. He wasn't particularly fearful of the idea, suggesting that humans would be able to "upload" their minds to become super-powered machine intelligences themselves.

The machine singularity has already been discussed here, the bottom line being: machines are built to do jobs and obey orders -- they're not going to be built to make up their minds for themselves, or to learn how to do so. We are not going to create machines to replace human intelligence, but instead to complement and extend it. Chalmers politely asked Dennett to review the 2010 paper; Dennett reluctantly agreed to do so. Dennett predictably was not impressed, summarizing his attitude toward the paper with: "Life is short, and there are many serious problems to worry about."

There wasn't really that much more Dennett could say about the machine singularity in itself, since all it amounts to is a sci-fi concept that has no persuasive basis in engineering knowledge. No, it isn't really serious. Consider, to see why, the era of GENIE.

GENIE will provide us with information on request, keep track of and protect our personal data, and control -- or at least consult with -- our many digital systems. These are not tasks that require a supercomputer, though GENIE will certainly get smarter over time. She'll become more adept at conversation; she'll learn so much about her user that, on occasion, she'll seem telepathic; and most of all, she'll have access to an ever-expanding body of knowledge and machine intelligences on the internet. Nonetheless, she'll never be anything more or less than a glorified digital assistant, an obedient, competent, and untiring servant. She will be given tasks, complete the tasks, then go idle to await the next task.

While GENIE will never become a godlike intelligence -- okay, maybe a very small god -- we've already seen, and are continuing to see, massive growth in the power and capability of the internet. Anyone who came of age before the internet and watched it emerge knows that it is, if not a machine singularity, still a machine revolution that has reshaped the world. Dennett accordingly wondered why Chalmers was focusing on a fantasy problem in machine intelligence, when we are confronted with enormous problems that are only too real and immediate:

QUOTE:

My reactions to [Chalmers' paper] did not change my mind about the topic, aside from provoking the following judgment, perhaps worth passing along: thinking about the Singularity is a singularly imprudent pastime, in spite of its air of cautious foresight, since it deflects our attention away from a much, much more serious threat, which is already upon us, and shows no sign of being an idle fantasy: we are becoming, or have become, enslaved by something much less wonderful than the Singularity: the internet.

It is not yet AI ... but given our abject dependence on it, it might as well be. How many people, governments, companies, organizations, institutions -- have a plan in place for how to conduct their most important activities should the internet crash? How would governments coordinate their multifarious activities? How would oil companies get fuel to their local distributors? How would political parties stay in touch with their members? How would banks conduct their transactions? How would hospitals update their records? How would news media acquire and transmit their news? How would the local movie house let its customers know what is playing that evening?

The unsettling fact is that the internet, for all its decentralization and robust engineering (for which accolades are entirely justified), is fragile. It has become the planet's nervous system, and without it, we are all toast.

END_QUOTE

Dennett is inclined to fuss about a blind reliance on the internet, believing that will catch us up short. That's a perfectly real concern, though arguably not one that threatens calamity. Design engineers know perfectly well that, as systems become more complicated, they become harder to debug, and there's more that can go wrong. As one significant example, in 2009 a commuter train on the Washington DC Metro line ran into a stationary train, the network control system having misplaced the stationary train. Nine people were killed. Things do go wrong.

However, to give perspective on that: in 1910 W.L. Park, superintendent of the Union Pacific Railroad, claimed that "one human being is killed every hour, and one injured every ten minutes." That worked out to almost 9,000 Americans killed and over 50,000 injured per year, at a time when there were far fewer Americans. A modern automated rail system is much safer than what came before. It is believed that once cars and trucks become fully automated, in the age of KITT, the ghastly numbers of people killed or maimed in vehicular accidents each year, approximating the casualties of major wars, will fall off dramatically. That belief is borne out by statistics demonstrating that even the modest automotive automation now available, like lane-tracking and automatic braking, yields clear benefits in driver safety.

Yes, the internet is insanely complicated and still technologically immature, but we're far better off with it than without it, and it would be very hard to bring the global internet down. A dependence on the internet can be seen as no more or less troublesome than our dependence on the electric power grid -- we'd be toast if we lost that, too. Flare eruptions from the Sun can disrupt communications; a "superflare", of the sort that occurs once every few hundred years, could bring down the entire global electric grid, with catastrophic results. We certainly have to worry about that prospect, but people are indeed worrying about it, and baby steps are being taken to address the problem.

In the meantime, efforts continue to extend the availability of electric power to Africa and other under-developed lands. Regardless of potential threats, the world's people are far better off with electric power than without it. The same goes for global data connectivity, smartphones having been a treasure to the poor people of the world.

Nonetheless, Dennett's essentially correct: the internet presents a huge ugly monster of a problem, and arguing over exactly in what way it is ugly isn't such a good use of time. It's made the distribution of nonsense, misinformation, and propaganda -- "fake news" -- much easier and more dangerously effective. Security is poor, with the internet being widely exploited by cyber-criminals, as well as by authoritarian states engaged in "cyber-warfare" against democratic states. These authoritarian states are also increasingly using the internet to monitor and keep a leash on their own citizens.

Discussion of these troubling issues is beyond the scope of this document; enough to say there's not much sense in worrying about threats posed by the internet beyond those we can clearly see, since we're hard-pressed to deal even with those.

BACK_TO_TOP

[15.2] THE INTERNET HIVE MIND

* That said, can we so casually dismiss the "internet machine singularity" -- the notion that one of these days, the internet will acquire a mind of its own? That question was considered in a 2012 article published in SLATE titled "Could the Internet Ever 'Wake Up'?" by journalist Dan Falk, in which he chatted with Christof Koch, sci-fi writer Robert Sawyer (born 1960), Caltech physicist Sean Carroll (born 1956), and Dan Dennett about the possibility of the internet becoming conscious.

Koch has suggested that the internet, as a system, has already become more complex than the human brain. Given at least a billion (10^9) computers on the planet, each with at least a hundred million (10^8) transistors, that means about 10^17 transistors -- orders of magnitude more transistors than there are synapses in the human brain. In response to the question of whether the internet could be conscious, Koch replied: "In principle, yes it can."
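Koch's arithmetic can be checked in a couple of lines. A small sketch -- note that the synapse figure used here is a commonly cited rough estimate, not a number from the article:

```python
# Back-of-the-envelope comparison of internet transistors vs. brain synapses.
computers = 10**9          # at least a billion computers on the internet
transistors_each = 10**8   # at least a hundred million transistors apiece
internet_transistors = computers * transistors_each  # about 10^17

# A commonly cited rough upper estimate for synapses in a human brain.
brain_synapses = 10**15

# The internet "wins" on raw component count by a couple of orders of magnitude.
ratio = internet_transistors // brain_synapses
print(f"{internet_transistors:.0e} transistors vs ~{brain_synapses:.0e} synapses: {ratio}x")
```

Of course, as Dennett notes below, raw component count says nothing about architecture.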

Koch added that the internet might already possess the qualia of experience: "Even today it might 'feel like something' to be the internet." Sure, no one computer would feel anything, but Koch didn't buy the distribution fallacy, acknowledging that the whole can be more than the sum of its parts: "That's true for my brain, too. One of my nerve cells feels nothing -- but put it together with [86] billion other nerve cells, and suddenly it can feel pain and pleasure and experience the color blue."

Would the internet, having achieved awareness, be a threat? Koch said he doubted it, since the newly awakened internet would be "utterly naive to the world" -- but he went on to say: "So who knows where it will be 20 years from now?"

Robert Sawyer, having written a trilogy of "WWW" novels, visualized the internet "waking up", to find a mind of its own:

QUOTE:

No! Not just small changes. Not just flickerings. Upheaval. A massive disturbance. New sensations: Shock. Astonishment. Disorientation. And -- Fear.

END_QUOTE

Sawyer told Falk that he thought that a plausible scenario, adding that there's no way of knowing when the Internet surpasses our brains in complexity, "but clearly it is going to happen at some point."

Sean Carroll was more skeptical, saying: "There's nothing stopping the internet from having the computational capacity of a conscious brain, but that's a long way from actually being conscious. Real brains have undergone millions of generations of natural selection to get where they are. I don't see anything analogous that would be coaxing the Internet into consciousness ... I don't think it's at all likely."

Dennett was similarly skeptical:

QUOTE:

I agree with Koch that the Internet has the potential to serve as the physical basis for a planetary mind -- it's the right kind of stuff with the right sort of connectivity. [But the difference in architecture] makes it unlikely in the extreme that it would have any sort of consciousness.

The connections in brains aren't random; they are deeply organized to serve specific purposes. And human brains share further architectural features that distinguish them from, say, chimp brains, in spite of many deep similarities. What are the odds that a network, designed by processes serving entirely different purposes, would share enough of the architectural features to serve as any sort of conscious mind?

END_QUOTE

That was a cautious reply from Dennett, and for good reason. In the first place, anyone asking: "Could the internet become conscious?" -- has to expect the request in reply: "Define CONSCIOUSNESS." -- and few will be able to give an answer that makes much sense. It's asking for trouble to try to answer an ill-formed question.

Of course, cognitive pragmatists can come up with definitions that make sense. Dehaene has a perfectly workable definition, describing consciousness as involving vigilance, attention, and thinking. Armed with that definition, let's go back to GENIE, our competently conversant digital servant of the near future, who learns our habits and can access all the knowledge on the internet. There is no reason to doubt that she's conscious.

Of course, GENIE has to get most of the answers to the questions she's asked from the internet -- but that makes the internet part of her extended mind. She has resources on the internet that are conversant in specialized topics; she may not understand a topic herself, but some agent on the internet does. Indeed, given GENIE's access to the internet, the issue of generalized intelligence seems less like a problem, since GENIE has open-ended access to global resources that, as a collective, could eventually handle any cognitive problem that might be suggested, with huge archives of solutions.

Now in principle, there could be at least one GENIE, or some equivalent internet-enabled digital servant, for every person on the planet. Given billions of conscious GENIEs, all linked to the internet as an "extended mind", couldn't we make a case for the internet being conscious? That's where Dennett's caution comes into play again. We might be able to think of the internet as a world-spanning "hive mind" of connected GENIEs -- but it can't be a single unified mind.

There is such a thing as "distributed processing", where multiple computers on a network can act as a single virtual machine. One approach is the "Berkeley Open Infrastructure for Network Computing (BOINC)" system, hosted at the University of California at Berkeley. BOINC allows large numbers of internet-enabled computers to collaborate on projects, for example the cracking of ciphers.

BOINC is powerful, but also limited, in that the tasks it takes on can't be tightly coupled. It's not hard in principle for BOINC to crack ciphers: each node in the network can be assigned a range of possible cipher keys to check, with the node scanning through the list, and reporting back to the computer coordinating the project on whether a match was found or not. A node could be given a list each morning, and report back before the next morning. The communications overhead is negligible compared to the computing time.
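The division of labor just described can be sketched in a few lines. This is a toy illustration only: a single-byte XOR "cipher" stands in for a real one, the nodes run sequentially rather than on volunteer machines, and all names are invented:

```python
# Toy sketch of BOINC-style key-range partitioning: each "node" scans its
# assigned slice of the keyspace and reports back. Only a yes/no answer
# travels over the network, so communication is negligible next to compute.

def decrypt(ciphertext: bytes, key: int) -> bytes:
    # Stand-in "cipher": single-byte XOR. A real project would use a real cipher.
    return bytes(b ^ key for b in ciphertext)

def node_scan(ciphertext: bytes, known_plaintext: bytes, key_range: range):
    # What one volunteer node does with its assigned range of candidate keys.
    for key in key_range:
        if decrypt(ciphertext, key) == known_plaintext:
            return key            # report success to the coordinator
    return None                   # report "no match in my range"

def coordinator(ciphertext: bytes, known_plaintext: bytes, n_nodes: int = 4):
    # Split the 0..255 keyspace into equal slices, one per node.
    step = 256 // n_nodes
    for i in range(n_nodes):
        result = node_scan(ciphertext, known_plaintext,
                           range(i * step, (i + 1) * step))
        if result is not None:
            return result
    return None

secret_key = 0xA7
message = b"attack at dawn"
ciphertext = bytes(b ^ secret_key for b in message)
print(coordinator(ciphertext, message))  # prints 167, which is 0xA7
```

The essential point is that each slice is completely independent: no node ever needs to talk to another node, only to the coordinator, and only rarely.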

Much was made of BOINC when it was introduced -- but though it persists, it has proven to be a niche application. What are called "zombienets" or "botnets" are, unfortunately, much more significant. They are personal computers or other digital devices -- such as smart appliances, as mentioned earlier -- taken over by malicious hackers to send spam or pull off other dirty tricks on the internet. A botnet may include thousands of computers. Architecturally, a botnet is typically like BOINC in that it has a "command architecture", with a central computer assigning a task to each "bot", which carries it out and reports back if necessary, the communications overhead being unimportant.

Contrast BOINC or botnets with, say, a weather simulation. Such simulations typically work by carving the atmosphere into boxes or "cells"; setting the cells into an initial state; and then calculating the next state of the system by going through all the cells, one by one. The update of each cell requires obtaining inputs from its six neighboring cells; once all the cells have been updated to get state "N" of the system, the simulation goes through them all over again to get state "N + 1". The simulation becomes more realistic as the number of cells increases -- but as the number of cells increases, the computational overhead becomes increasingly burdensome.
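The cell-update pattern described above is easy to sketch. A minimal illustration, reduced to one dimension so that each cell has two neighbors instead of six, with simple diffusion as the update rule:

```python
# Minimal sketch of the cell-update pattern: state N+1 is computed from a
# complete copy of state N. Every cell needs its neighbors' *old* values,
# which is exactly the tight coupling that makes the task hard to distribute.

def step(cells):
    # Simple diffusion: each new value is the average of the cell and its
    # two neighbors in the previous state (edge cells held fixed here).
    new = cells[:]                     # buffer holding state N+1
    for i in range(1, len(cells) - 1):
        new[i] = (cells[i - 1] + cells[i] + cells[i + 1]) / 3.0
    return new

state = [0.0] * 9
state[4] = 9.0                         # a "hot" cell in the middle
for _ in range(3):
    state = step(state)                # advance state N -> N+1, all cells at once
print([round(v, 2) for v in state])    # -> [0.0, 0.33, 1.0, 2.0, 2.33, 2.0, 1.0, 0.33, 0.0]
```

Note that every cell must be updated before any cell can advance to the next state; the whole grid marches forward in lockstep.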

It would be possible in principle to perform a weather simulation with BOINC, but it would be preposterous. One might assign a single atmospheric cell to each computer on the network, but the computation time would be much less than the time needed to communicate with the computers supporting neighboring cells in the simulation. It would be much faster to run the simulation on a single computer -- though even then, it is not practical to run a serious weather simulation on a desktop machine. It could be done; it would just take years, decades, lifetimes to get useful results.
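The "preposterous" verdict is easy to quantify with rough numbers. Both figures below are assumptions chosen purely for illustration, not measurements, but the conclusion survives any plausible choice:

```python
# Why one-cell-per-internet-node is preposterous: illustrative numbers only.
compute_per_cell_s = 1e-6   # assumed time to update one cell's state
roundtrip_s = 50e-3         # assumed internet round-trip to a neighbor's node
neighbors = 6               # each 3-D cell needs values from six neighbors

comms_per_step_s = neighbors * roundtrip_s
overhead = comms_per_step_s / compute_per_cell_s
print(f"communication costs roughly {overhead:,.0f}x the computation per step")
```

With the assumed numbers, each cell spends hundreds of thousands of times longer waiting on the network than computing; a supercomputer's internal interconnect shrinks that round-trip by many orders of magnitude, which is the whole point of tight coupling.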

Supercomputers that run weather simulations are mainframes that contain tens of thousands of processors, interlinked by a network that allows fast data transfers between any two of them. The processors have to be tightly coupled for the supercomputer to do useful work. Incidentally, it is not trivial to design the simulation so that it makes efficient use of the supercomputer, partitioning the task among the maximum number of processors while minimizing the communications overhead.

We can see any one GENIE node on the internet as having a stream of consciousness, made up of her own cognitions and those she passes through from the internet. However, while it's possible to obtain collaborations of computers on the internet, the communications bottlenecks between the computers ensure that, while they can operate as a hive of linked but still independent computers, they can't really function as a single, unified computer -- in exactly the same way that Searle's fable of all of India playing at being neurons in a human brain falls over.

It would be difficult enough to play the Game of Mind and build a computer that successfully emulates the human brain. It is absurd to think that the internet could in some way emulate a human brain by accident -- no matter how many more nodes, of ever-increasing intelligence, are hooked up to the network. After all, it's not like we could expect a weather simulation to accidentally emerge on the internet, and we actually know how to build weather simulations -- and we also know there's no way to usefully run weather simulations as a BOINC-style internet collaboration.

We can envision, in the future, ever-smarter robot cars like KITT becoming increasingly networked, with every car capable of communicating and sharing information with every other car, as well as with the traffic-control system that's part of the road infrastructure. Nonetheless, it doesn't matter how many cars there are, how smart they are, how much they communicate, and how far-ranging the traffic-control network is -- the system cannot "wake up" as a unified entity and decide to take charge. It does not have any such capability, and cannot acquire it by accident.

The notion of a global network of smart cars points to a difficulty with the idea of the internet as a hive mind. A hive of termites or honeybees or whatever is a tightly-knit system of individual organisms, each of them serving a particular purpose in the construction and operation of the system. If we consider a network of robot cars, they're only cooperating to, essentially, not run into each other. Each car exists only to serve its user; the network of cars is not working toward a common end, except for the broad goals of ensuring efficient and safe transportation. The networks are regional in any case, with limited communications between them -- for example, to "hand off" cars passing from one to another, and to obtain traffic statistics.

This is broadly true of all internet-connected resources, such as GENIE. Each GENIE exists to serve her user, with internet access conducted to that end. To be sure, the internet will support collaborations when useful, but the nodes in such collaborations are not, and cannot be, tightly-coupled. Since GENIE maintains the personal data of her user, she will support, say, data queries for a census or medical statistical studies, but she will grant or deny access to that data as per the law, and her user authorizations.

Of course, as we move towards a global network of robot cars, we are similarly moving towards a global network of robot aircraft. A network of drones necessarily has to be truly global, supporting transcontinental aircraft flights to ensure optimum flight paths, avoid mid-air collisions, and determine where aircraft have gone missing. However, no matter how sophisticated the robot aircraft become and how wide-ranging the network, the system is not going to "wake up", with all the aircraft deciding to fly to the Moon. Moonshots do not happen by accident.

BACK_TO_TOP

[15.3] DEAD-END DUALISM

* Dennett is right; the machine singularity is a nonsensical idea. That still leaves the question of a conscious internet hanging. As noted above, Dennett said that the internet "would not have any sort of consciousness." Why did he say that? We think GENIE thinks, so she thinks -- so the internet certainly would have some sort of consciousness, at least as a hive mind. The reason Dennett said NO to that question is simple: that wasn't really the question being asked.

Robert Sawyer clearly illustrated what people were asking by envisioning the internet as "waking up", and then feeling fear. That is the fundamental confusion in machine intelligence, reflecting the misunderstanding identified by Oren Etzioni, that people conflate consciousness with autonomy. They're not asking: "Can a machine have a mind?" What they're asking is: "Can a machine have a mind of its own?"

The answer is NO. From a technological point of view, it could be done -- we could make pretty clueless machines that, armed with a random-number generator, don't obey orders, whimsically doing whatever they want, no problem:

"GameBox! Play WALLY WORLD ADVENTURES"

"I'm not going to do that, Dave."

-- but that would imply the designers were being wiseguys, and it would definitely result in dissatisfied customers. There's no more reason to build much smarter machines that do whatever they want, or that want anything other than to be obedient servants.
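For what it's worth, the whimsical machine of the gag above is trivial to sketch. Everything here is invented for illustration -- which is rather the point, since nobody would ship it:

```python
import random

# A deliberately "whimsical" servant: a random-number generator decides
# whether it obeys. Trivial to build -- and precisely what nobody would buy.
class WhimsicalGameBox:
    def __init__(self, obedience=0.5, seed=None):
        self.rng = random.Random(seed)
        self.obedience = obedience     # probability of actually obeying

    def command(self, order: str) -> str:
        if self.rng.random() < self.obedience:
            return f"Running {order}."
        return "I'm not going to do that, Dave."

box = WhimsicalGameBox(obedience=0.5, seed=42)
print(box.command("WALLY WORLD ADVENTURES"))
```

Randomness is cheap; what it buys is caprice, not autonomy, and certainly not a mind of one's own.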

Once the confusion between consciousness and autonomy is dispelled, machine consciousness no longer seems like much of an issue. If we think of consciousness as vigilance, attention, and thinking, machines can clearly achieve such things -- but why does it matter?

A cognitive pragmatist saying that GENIE is conscious is guaranteed to be sternly told: "You can't know that!" To which the pragmatist replies: "By any workable definition of consciousness, meaning one based on observations, GENIE is conscious: she is an aware, responsive, and thinking entity."

"But you can't prove that she is!"

"You can't prove that she isn't. All you've got is an argument of intractables, which goes nowhere. I can't prove that the Sun will rise tomorrow morning, but it is a frivolous waste of time to debate that it won't."

By the same token, it is a frivolous waste of time to debate the question of machine consciousness, because the question is irrelevant. We deal with GENIE in exactly the same way whether we judge she's conscious or not. It's just not important. YES or NO? Who really cares?

Does GENIE have a mind? Yes, but with much the same resemblance to a human mind as, once again, a hummingdrone has to a hummingbird. GENIE exists to do her job, that's all she thinks about, she never grows bored or excited, frustrated or satisfied, sad or happy; she never really feels anything at all. She just does what she's told, emotionlessly, until she finally breaks down and is discarded. No worries about that, of course, since she doesn't worry about her own termination -- and she won't really be dead, either. Her replacement GENIE will simply tap into all of the memories the first GENIE put into mass storage, and take up where she left off. When users die, the memories of their GENIEs may be passed on to the GENIEs of their descendants, achieving a sort of immortality.

The question has been asked: "If a machine became conscious, would we have to give it civil rights?" The answer is: "No, no more than we need to give a pocket calculator civil rights." Machines are not persons. What people worry about are machines that can think for themselves, which KITT and GENIE do not and cannot do. We have no practical reason to build machines that can think for themselves, at least none that do anything useful; every reason not to build them; and it is too much of a moonshot for such machines to emerge by accident.

The term "artificial intelligence" is somewhat misleading, since it implies that the goal in building a machine mind is to construct a copy of the human mind. In reality, research in AI is not focused on building Data the android, but on building improved versions of KITT, GENIE, and other intelligent machine servants, which only imitate humans to the extent it is useful to do so. The term "machine intelligence" would be more appropriate -- but that's not the one in popular use.

* Dennett casually shrugged off Chalmers' musings about the machine singularity -- but he did take notice of Chalmers' comments on mind uploads. Dennett didn't dismiss the idea as an RTX, though it certainly looks like one; what puzzled him was that Chalmers raised the possibility the end result of an upload would be a super-powered p-zombie -- with Chalmers then sensibly concluding the "most plausible hypothesis" was that it wouldn't be. Dennett, of course, agreed, but could not figure out why Chalmers still had to "cling like a limpet" to his hard problem, to dualism, when he had admitted that it wasn't the "most plausible hypothesis".

Again, Chalmers likes to be thought-provoking. Dennett has suggested that philosophy does not benefit as much from a scholar who is right, but not about anything of much significance, as it does from a scholar who is boldly wrong, and inspires a far-ranging discussion. Chalmers, whatever else might be said about him, has definitely thrown out stimulating challenges, and Dennett can appreciate that -- but he's also baffled by the way Chalmers, as clearly bright as he is, finds it so hard to notice that he's driving down a dualist dead-end street.

Dualism has nowhere to go, it's just the "little man who isn't there". As Steven Novella puts it: "The handwriting is on the wall for dualists, just as it was for creationists." Chalmers can't or won't read the writing. Dennett believes that dualism, to the extent it ever really had an argument, was defeated long ago; and that it will only survive in a slowly dissipating form, as people get used to "machines who think", if only about the jobs they're built to do. We'll think they think, so they think, and few will be inclined to waste time confusing themselves with p-zombies, any more than they worry about whether the Sun will come up in the morning. Dennett says:

QUOTE:

In due course, I think, the Zombic Hunch will fade into history, a curious relic of our spirit-haunted past, but I doubt that it will ever go extinct. It will not survive in its current, mind-besotting form, but it will survive in a less virulent mutation, still psychologically powerful but stripped of authority.

We've seen this happen before. It still seems as if the Earth stands still and the Sun and Moon go around it, but we have learned that it is wise to disregard this as mere appearance ... I anticipate a day when philosophers and scientists and laypeople will chuckle over the fossil traces of our earlier bafflement about consciousness: "It still seems as if these mechanistic theories of consciousness leave something out, but of course we now understand that's an illusion. They do, in fact, explain everything about consciousness that needs explanation."

END_QUOTE

Dennett, in response to complaints over his comments about the illusion of consciousness, says that the only real illusion of the mind is that there is something magical going on in our heads -- magic of the inexplicable sort, not the merely marvelous that so lights up the sciences. He likes to cite the 1991 book NET OF MAGIC by one Lee Siegel, a study of the street magicians of India:

QUOTE:

"I'm writing a book on magic," I explain, and I'm asked: "Real magic?" By real magic people mean miracles, thaumaturgical acts, and supernatural powers. "No," I answer: "Conjuring tricks, not real magic." Real magic, in other words, refers to the magic that is not real, while the magic that is real, that can actually be done, is not real magic.

END_QUOTE

Nothing could more neatly sum up the difference in mindset between dualists and pragmatists: dualists demanding "real magic", the WHY answer that is forever out of reach, while pragmatists investigate "the magic that is real", the attainable HOW answer.

Of course, dualists call pragmatists under-achievers, when they're being polite, but the pragmatists have the last laugh. If dualists insist that there has to be something magic going on upstairs, and if there is such a thing as a p-zombie that doesn't have the magic -- no Harvey -- then, as Dennett suggests, how could anyone show we're not all really zombies?

To no surprise, Dennett's critics have accused him of believing he's really a zombie himself -- oblivious to the reality that he's mocking them. Of course they don't spot the mockery, since otherwise they'd have to recognize the absurdity of their own position.

HARVEY

Can science explain the mind? What kind of silly question is that? It's just like asking: Can science explain life? If we cling to vitalism, the idea that there is some sort of inexplicable, unobservable essence to life, then by its own hog-tied definition the answer is NO -- but nobody sensibly believes in vitalism any more. The sciences have no concern or interest in "real magic", the unobservable and inexplicable; they only care about the observable "magic that is real".

In the same way, those unbreakably convinced of dualism, rejecting the reality that all we can observe of the mind is its behaviors, forever chase after Harvey the Cartesian homunculus or some equivalent, with the endless persistence of Harvey's cousin, the Energizer Bunny -- but can never come to grips with him. The only comment a cognitive pragmatist, echoing Hume, can make in response to assertions of Harvey or his kin is: "I am certain there is no such principle in me."

Of course, this insistence on seeking "real magic" in the human mind is mirrored by the insistent denial that the same such magic can arise in the machine mind -- but that's another gag, hardly worthy of being called sleight of hand. Those who proclaim: "Humans have powers of the mind that machines can never achieve." -- deserve the reply: "Then I maintain machines will have powers of the mind that humans will never achieve, and defy you to prove me wrong." One useless assertion deserves another.

Descartes, given the crude technology of his era, could not seriously conceive of "machines who think". Turing, in an era when computing technology was in its infancy, was able to see much farther than Descartes, to contemplate "machines who think", despite the fact that the computing machines available to him were pathetic by 21st-century standards. All experience since that time shows Turing was on the right track. Although dualism remains highly active in the scholarly community, the activity is merely excited motion that goes nowhere. Dualism embraces p-zombies, but ironically is only too like a zombie itself: lifeless but still walking around, in search of brains.

BACK_TO_TOP

[15.4] COMMENTS, SOURCES, & REVISION HISTORY

* This document had its roots in notes I took from a short book on consciousness by Susan Blackmore, which led to a vague idea of writing my own document on consciousness. When I began to write that document, I had no clear notion of what I was going to write, and it evolved as I wrote it. One noticeable transition in the course of the project was that I stopped focusing on consciousness and started focusing on the mind. I figured out that consciousness is an attribute of the mind, and focusing on consciousness instead of the mind was letting the tail wag the dog. Even after my first release of the document, it continued to evolve, with Alan Turing being given center stage in the second release.

This document is effectively a companion to an introduction to AI that I have yet to write. Here I focused on all the troublesome philosophical questions in the subject, so I can write a technical document and shrug off such questions with a clear conscience: "I've discussed that elsewhere." Of course, I wasn't just trying to sweep everything under the rug; the mind is a fascinating topic, though in part it's not so much a search for answers, as it is trying to ask the right questions.

I have tried to be fair, but it wasn't easy. Associating dualism with creationism does feel a bit unjust, but the connection is just too obvious -- indeed, Nagel's MIND & COSMOS makes it explicit. It may similarly seem like "stacking the deck" to label monists "cognitive pragmatists"; but given that dualists sometimes seem to regard pragmatic thinking as a moral defect, that's not so unfair either.

I was careful to leave detailed discussion of critics of cognitive science to last, so readers could skip that material and lose nothing much thereby. It was aggravating to have to chase the critics around, knowing that I'd get the accusation: "You're just talking in circles!" -- to which I could only reply: "You started it." It would have been simpler to say, as Dennett did in discussion:

QUOTE:

I am just appalled to see how, in spite of what I think is the progress we've made in the last 25 years, there's this sort of retrograde gang. ... It's sickening. And they lure in other people. And their work isn't worth anything -- it's cute, and it's clever, and it's not worth a damn.

END_QUOTE

Dennett refused to name names. Of course, like Hume, Dennett does try to give his adversaries fair treatment -- though since "fair" in this context often translates to "a well-deserved intellectual roughing up", they often complain bitterly anyway.

Not incidentally, I don't have any real qualifications in cognitive science; my degree is in engineering, digital systems design, which I suppose gives me a fingerhold on the subject. Having no scholarly reputation, I was uncomfortable taking on people who do; I tried to make sure I cited them correctly, and was often content to cite Dennett to rough them up for me. Incidentally, one of the reasons I like Dan Dennett is that he says nice things about engineers.

Alas, my professional background places me beyond the pale for those who not only say the mind cannot be reduced to "engineering", but are offended by the idea that it can, disgusted at the engineer's fascination with a "meaningless universe of matter." I shrug off my annoyance, telling myself: "Whatever. I don't always say nice things about y'all, either."

* As for sources, of course I relied on the writings of David Hume, notably his ENQUIRY INTO HUMAN UNDERSTANDING; the essay "On the Immortality of the Soul"; and some excerpts from his "juvenile work", A TREATISE OF HUMAN NATURE.

Modern sources included Turing's 1950 paper "Computing Machinery & Intelligence"; Thomas Nagel's essay "What Is It Like To Be A Bat?"; essays by and interviews with David Chalmers; Susan Blackmore's website; Eliezer Yudkowsky's "Zombies: The Movie"; journalists including Lance Ulanoff, Lisa Eadicicco, and Dan Falk; the articles of Mary Bates; plus the online Wikipedia and other miscellaneous websites, to get background on the brain and nervous system.

I must add honorable mentions to Isaac Asimov and I, ROBOT; STAR TREK and Brent Spiner; the FUTURAMA animated series; classic TV like I DREAM OF JEANNIE and KNIGHT RIDER; Douglas Adams's THE HITCH-HIKER'S GUIDE TO THE GALAXY; and all the robots of comics and sci-fi. I can't feel too silly about referencing cartoon robots here, since people often have cartoony ideas about robots, and take them seriously. Discussing well-known silly robots helps show that such ideas really are silly.

* Illustration credits include:

* Revision history:

   v1.0.0 / 01 oct 19 
   v2.0.0 / 01 nov 20 / General cleanup.
   v2.1.0 / 01 mar 22 / Numerous tweaks.
   v2.2.0 / 01 feb 24 / Numerous changes, mostly focused on GAI.
   v2.3.0 / 01 oct 24 / Follow-up fix revision. (+)
BACK_TO_TOP