* Yet another challenge to the belief that machines can think has come from quantum physics, with physicists -- and excitable non-scientists -- claiming that consciousness must arise from quantum principles. "Quantum consciousness", however, has gone nowhere but in circles, with no prospect of going anywhere else.
* Goedel's proof against strong AI is useless in cognitive research -- but it has achieved prominence through the work of Roger Penrose (born 1931), a British mathematical physicist, notably in his 1989 book THE EMPEROR'S NEW MIND. Starting from the GPAI, Penrose came to grand conclusions about the mind, stating in a 2009 interview:
QUOTE:
In my view the conscious brain does not act according to classical physics. It doesn't even act according to conventional quantum mechanics. It acts according to a theory we don't yet have. This is being a bit big-headed, but I think it's a little bit like William Harvey's discovery of the circulation of blood. He worked out that it had to circulate, but the veins and arteries just peter out, so how could the blood get through from one to the other? And he said: "Well, it must be tiny little tubes there, and we can't see them, but they must be there." Nobody believed it for some time. So I'm still hoping to find something like that -- some structure that preserves coherence, because I believe it ought to be there.
END_QUOTE
In quantum physics, elementary particles can exist in a number of different states. For example, an electron has a property called "spin" -- only loosely analogous to the popular meaning of the word "spin", but never mind that -- with its spin being either UP or DOWN, and nothing in between. Before the electron is observed, it potentially exists in both the UP and DOWN states, a phenomenon known as "quantum superposition of states". This property can be exploited in "quantum computers", with an electron -- or some other quantum entity -- representing a bit, a binary digit, a "1" or "0", in a calculation.
In contrast to ordinary computer bits, such a "quantum bit" AKA "qubit" exists in potential as both a "1" and a "0" until a calculation is performed on it, with the qubit then "decohering" from a superposition of states into an ordinary bit in the result. If we have, say, 8 qubits, in effect we perform a calculation on 2^8 == 256 values at once. For certain classes of calculations, this results in a great increase in calculational efficiency -- or at least it does in principle, nobody having built a fully-functional quantum computer just yet, though workable experimental lab systems and limited commercial systems have been developed.
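As a loose illustration of the bookkeeping involved, consider the following toy sketch in Python -- a classical simulation, of course, not a quantum computer; the point is only that a register of 8 qubits is described by 2^8 == 256 amplitudes, and that "measurement" collapses the register to a single ordinary 8-bit value:

    # toy sketch: an 8-qubit register in uniform superposition is described
    # by 256 amplitudes; "measuring" it collapses the register to a single
    # 8-bit value, with probability given by the squared amplitude
    import random

    N_QUBITS = 8
    n_states = 2 ** N_QUBITS                      # 256 basis states
    amplitude = (1.0 / n_states) ** 0.5           # uniform superposition
    probabilities = [amplitude ** 2] * n_states   # each state equally likely

    outcome = random.choices(range(n_states), weights=probabilities)[0]
    print(f"register decohered to {outcome:08b}")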
Penrose came to believe that, since the mind supposedly can't work by an algorithmic process, it could not be the result of any known mechanistic process either. He decided the only alternative was decoherence of a quantum system -- that the brain operated as a sort of quantum computer. However, it could only be a "sorta" quantum computer, because quantum computers being designed today use algorithms in much the same way as a conventional computer. It's just that a quantum computer performs some of those algorithms far more efficiently than a conventional computer.
Penrose understood that, and so he came up with a revised scheme of quantum system decoherence, which he called "objective reduction". There was nothing much more to the model until 1992, when Penrose hooked up with Stuart Hameroff (born 1947) -- a professor of anesthesiology at the University of Arizona in Tucson. Hameroff suggested that "microtubules" -- tubular protein structures found generally in cells, including neurons -- could provide a basis for quantum computer-like operations in the brain. They called their combined theory "orchestrated objective reduction (Orch-OR)".
Not everyone was impressed by Orch-OR, one of the most devastating criticisms being from Swedish-American cosmologist Max Tegmark (born 1967). It is well-known that it is hard to maintain the superposition of states of qubits needed to perform calculations in a quantum computer, such machines often requiring cryogenic cooling. The brain is nothing like that, being what Tegmark called a "wet, warm, and noisy" environment in which quantum superpositions could not play a direct role. Besides, the action of neurons is very slow by computer standards, clocked in terms of milliseconds at the minimum, while quantum decoherence is vastly quicker. It's difficult to see how there could be any traceable interaction between the two.
* At this point, the skeptically-minded might well begin to wonder if Penrose is lost, and trying to get others lost as well. Operating from the aimless GPAI, he determined that the brain had to be a quantum computer, but not a quantum computer as such things are understood at present -- when there's nothing in the known operations of the brain that suggests it is any sort of quantum computer.
On reading THE EMPEROR'S NEW MIND, it is apparent that Penrose is strongly focused on theoretical physics, but has a naive comprehension of cognitive science and AI. For example, he discussed the Chinese room in detail, but never acknowledged, or seemed to be aware of, the many criticisms of Searle's strawman, concluding: "I think that Searle's argument has considerable force to it."
Wot? For an even more startling example, Penrose claimed: "I do not see how natural selection, in itself, can evolve algorithms that could have the kind of conscious judgements of the validity of other algorithms that we seem to have." In other words, according to Penrose, the brain has capabilities it shouldn't have acquired. Where did he get the idea it shouldn't? He explained, sort of:
QUOTE:
Imagine an ordinary computer program. How would it have come into being? Clearly not (directly) by natural selection! Some human computer programmer would have conceived of it and would have ascertained that it correctly carries out the actions that it is supposed to.
END_QUOTE
Although Penrose is not a creationist, this is in form a straight creationist "Intelligent Design" argument, revealing that Penrose is naive about evolutionary science as well. One might as well have pointed out the spontaneous evolution of a hummingdrone is impossible -- no argument there -- and then used that to cast doubt on the spontaneous evolution of hummingbirds. Could a digital computer have spontaneously evolved? NO. Could a biological neural net, a brain? Well, it did; we still have a wide range of brains, from flatworms to humans, that suggest an evolutionary sequence for the brain. Human behaviors can be seen, if sometimes in much more primitive forms, in other animals.
If the brain hadn't become more effective in the course of its evolution, able to generate ever more sophisticated behaviors with observable practical effects, it wouldn't have evolved, there being no selective advantage in doing so. Of course, imitating nature, a computer can implement a neural net, no problem. If anyone were to point out any specific way in which the simulation didn't work in the same effective way as a biological neural net, then we would just update the simulation so that it did work. If nobody could specify what the simulation was doing wrong, then there would be nothing to fix.
Whatever the case, all roads with Penrose lead back to his insistence that the mind is derived from quantum-mechanical phenomena. It appears, on sorting through his ideas, that he believes machines, being necessarily algorithmic, can't have imagination, creativity, insight, or intuition. These things, then, have to be derived from what might be called "quantum magic"; or possibly a "quantum Harvey"; or most aptly, on the basis of the well-known quantum-physics tale of "Schroedinger's cat" -- no worries about the details here -- a "Schroedinger's rabbit", which Penrose is trying to pull out of a hat.
BACK_TO_TOP
* What, precisely, does Penrose believe quantum physics buys him when it comes to the mind that classical physics does not? The reality is that classical and quantum physics are interdependent, quantum physics picking up where classical physics leaves off. Classical physics deals with the Universe more or less as we see it at the macroscale; quantum physics arose when observations began to reach into the microscale, revealing that it operated by "sorta" different rules.
The reason for adding the "sorta" is that the rules of the macroscale Universe and the microscale Universe are closely connected, since microscale events add up in the aggregate to the macroscale behavior of the Universe. This is known as the "correspondence principle", and it's flatly obvious: microscale events are the building blocks of macroscale events. If we see a macroscale event, then all the microscale events must, one way or another, contribute to it.
Consider, as a loose analogy, a computer display. It looks like we have a nice crisp image on the display -- but on very close examination, we can see the image is composed of sets of red-green-blue dots or "pixels". We can't look at one pixel and learn anything about the image; the image is due to the collective of pixels. Of course, we are usually only concerned with the macroscale image, which we generally won't have any trouble understanding, and have little reason to be concerned about the microscale structure of the display, which we may not understand at all.
Another way of phrasing the correspondence principle is that we normally deal with the macroscale world, and only resort to quantum physics when we can't avoid doing so. If we don't have to use quantum physics, we'd rather not: it just makes things unnecessarily complicated. If anyone insists that the complications of quantum physics really are necessary to understand the operation of the brain, the reply is that a case can be made that quantum physics is at least as relevant to the operation of a computer -- since a computer runs on transistors and other solid-state electronic devices, whose workings can't be understood without recourse to quantum physics.
Computer processor chips are made from crystals of silicon; in such crystals, the wave nature of electrons ensures, by a quantum-mechanical rule known as the "Pauli exclusion principle", that electrons can only exist in certain ranges or "bands" of energies. By itself, silicon doesn't conduct current very well, which is why it is classified as a "semiconductor" -- but its conductivity can be enhanced, in subtly different ways, by adding "dopants" to the crystal, most commonly phosphorus or boron. Using fabrication schemes resembling silk-screen printing greatly scaled down, the electronic devices are fabricated through selective doping of regions on the chip, along with laying down layers of silicon dioxide (glass, essentially) as electrical insulation, and an overlay of electrical connections.
This all sounds very exotic, but there's no incomprehensible magic in it; yes, the idea that electrons act as either particles or waves, depending on how they are dealt with, doesn't make sense in the macroscale world, but we have no reason to expect that the rules of the macroscale world should apply to the microscale world. The only expectation we could have is that the microscale rules must underlie the rules of the macroscale world, as per the correspondence principle.
The design of such electronic devices is perfectly explicable engineering. All the devices have well-defined operational specifications -- they have to; if they didn't, there's no way the chip would work. To be sure, as electronic devices on a chip get ever smaller, the quantum effects become increasingly significant -- but the goal of the designers is to deal with the quantum effects, in order to come up with devices that still have predictable and useful properties.
More to the point, the people who actually design computer processing chips, as opposed to the people who design the underlying electronic device technologies, have relatively little concern for the details of the operation of the electronic devices on the chip, instead being focused on the manipulation of "1s" and "0s" by logic gates -- and in many cases simply plugging in building blocks made up of logic gates and memory cells, obtained from a library of such building blocks. Once the chip is produced and put in a computer, those programming it have no need to know anything about the devices that make it up: they're just manipulating "1s" and "0s". Nobody needs to know a thing about quantum physics to understand the architecture of a computer, and write programs to make the computer stand up and dance.
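To emphasize the point, here is a small Python sketch -- purely illustrative -- of the sort of thing chip designers and programmers actually traffic in: a half adder composed of NAND gates, with no quantum physics anywhere in sight:

    # illustrative sketch: a half adder built from NAND gates alone, the way
    # designers compose devices into logic blocks; nothing below requires any
    # knowledge of the quantum physics of the underlying transistors
    def NAND(a: int, b: int) -> int:
        return 0 if (a and b) else 1

    def half_adder(a: int, b: int):
        """Add two bits, returning (sum, carry)."""
        n1 = NAND(a, b)
        s = NAND(NAND(a, n1), NAND(b, n1))   # XOR built from four NANDs
        c = NAND(n1, n1)                     # AND = NOT(NAND)
        return s, c

    for a in (0, 1):
        for b in (0, 1):
            print(a, "+", b, "->", half_adder(a, b))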
Of course, Penrose and Hameroff acknowledge that there's nothing magical about electronic devices, but imply that neurons are different. Well OK, they're certainly not the same sorts of things -- but we understand the operation of neurons as well as we do that of electronic devices. In response to weighted inputs on the neuron's dendrites exceeding a threshold, an electrical signal generated by ion transfers travels along the axon to its terminals, providing inputs to other neurons in turn. The operation of neurons is as explicable as that of logic gates; indeed, we can build electronic neurons that work much like biological neurons.
It is true that a neuron operates by the rules of biochemistry, and chemistry is ultimately quantum-mechanical in nature. However, there's no need to dig into quantum mechanics to figure out how a neuron works; the operation of a neuron can be modeled perfectly well on the basis of macroscale chemistry. There's no more or less need to use quantum mechanics to model the operation of a neuron than there is to use it to model the operation of, say, a car battery. We can also build analogues of neurons with electronic devices, or for that matter emulate them in software. If anyone says such artificial implementations of neurons are missing something, once again the reply is: "Tell us what's missing, and we'll add it in."
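For the flavor of it, a minimal sketch of the neuron model just described -- weighted inputs summed against a firing threshold, with made-up weights purely for illustration:

    # minimal sketch of the neuron described above: weighted inputs are
    # summed, and the neuron fires if the sum crosses a threshold -- all
    # macroscale arithmetic, no quantum mechanics anywhere in the model
    def neuron(inputs, weights, threshold):
        activation = sum(x * w for x, w in zip(inputs, weights))
        return 1 if activation >= threshold else 0   # fire, or don't

    # hypothetical inputs and weights, purely for illustration
    print(neuron([1, 0, 1], [0.4, 0.9, 0.3], threshold=0.5))   # -> 1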
BACK_TO_TOP
* It is also true that neurons are noisy in their operation, and the noise can be traced down to quantum effects. Still, in terms of modeling, all that has to be done is to add noise factors to the neuron model -- for example, specify the variation in the firing threshold of a neuron -- without concern for how the noise comes about. Indeed, in developing digital simulations of elements of the brain, Dehaene found that he didn't have to specify noisiness: the noisy behavior was inherent in the combinatorial explosion of having too many neurons to allow the behavior of the system to be predicted.
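In the toy neuron sketch above, such a noise factor might amount to nothing more than a jitter on the firing threshold -- here an arbitrary Gaussian jitter, standing in for whatever the physical source of the noise happens to be:

    # the noise factor mentioned above, added without any reference to where
    # the noise physically comes from: just jitter the firing threshold
    import random

    def noisy_neuron(inputs, weights, threshold, sigma=0.1):
        activation = sum(x * w for x, w in zip(inputs, weights))
        jittered = threshold + random.gauss(0.0, sigma)   # noisy threshold
        return 1 if activation >= jittered else 0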
However, there has been some fuss by Penrose, Hameroff, and others about the quantum basis of the noisy operation of neurons, and by extension the brain -- the most common thrust of this argument being that, without the brain's basis in quantum noise, we would not have free will. This is leaping into the abyss of the definitional argument over free will, the short reply being: "Nonsense."
That's absolutely not a good enough answer by itself so, unfortunately, there's no alternative to spelling it out in excruciating detail. Quantum mechanics implies a certain absolute indeterminism. For example, consider the decay of a radioactive isotope. While we can determine, often to many significant figures, the half-life of a radioactive isotope, that only applies to the isotope in bulk. If it came down to a single atom of that radioactive isotope, there is no way to predict when it will decay. No matter what the half-life is, that single atom might decay instantly, or we might spend the rest of our lives waiting for it to happen.
All the half-life tells us is the odds of how long we'll have to wait. This indeterminism is not merely an issue of lacking the tools to probe the atom; it's more the case that any probe that we made into the atom would significantly disrupt its state, very possibly forcing it to decay in response to the probe. It would be like picking a lock with a sledgehammer: sure, that will let us in the door, but then we have to replace the door.
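To spell out "odds" concretely, here is a quick Python sketch using a hypothetical isotope: each atom's decay time is a random draw from an exponential distribution, so the half-life emerges precisely from the bulk, while any single atom remains unpredictable:

    # sketch of half-life as odds: each atom decays at a random time drawn
    # from an exponential distribution; in bulk the half-life emerges
    # precisely, while a single atom's decay time is anybody's guess
    import math, random

    HALF_LIFE = 10.0                      # hypothetical isotope, in years
    def decay_time():
        return random.expovariate(math.log(2) / HALF_LIFE)

    single = decay_time()                 # one atom: could be anything
    bulk = sorted(decay_time() for _ in range(100000))
    median = bulk[len(bulk) // 2]         # time for half the sample to decay
    print(f"one atom: {single:.2f} y; bulk half-life ~{median:.2f} y")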
More generally, as we go deeper into the microscale, it gets harder and harder to make observations -- until we reach the "quantum limits", where it is impossible, even in principle, to make observations at all. At the quantum limits, causal chains come to an end: things just happen, we don't know why, and we can never know why, even in principle. If we have a quantum superposition of states in a quantum system, we have absolutely no way of knowing in advance what the state of that system will be after decoherence.
Some find the "uncertainty principle", as it is known, distressing -- but a skeptic can rightly ask again: "So what?" Once we started probing more deeply into the microscale, we had no more reason to think we could do so to an indefinite level of detail than we did to think we would run into a limit. Indeed, the idea that we would never run into a limit can be judged harder to swallow.
In addition, although quantum physics shows there is an endpoint to observability, it's not like we notice so much. In the macroscale world, it's not so troublesome to measure the emissions of a block of radioactive material, and there's nothing unpredictable about the process. To observe a single, isolated atom of that material, to find that its behavior is nowhere near as predictable, takes a lot of very troublesome work, and it's effectively only done as a lab experiment.
Besides, the predictability of the radioactive decay of the block of material is somewhat misleading. While we can precisely measure the emissions, that requires carefully isolating the material, fabricating it into a block, and then observing it under controlled conditions. If we're talking about events that are not, or cannot be, so precisely controlled, then predictability is not so assured. We are dealing with events involving vast numbers of particles, and the combinatorial explosion ensures we can't possibly know what all of them are doing.
Weather is the classic example. We can predict yearly climate fairly well -- in northerly regions, stereotypically hot in summers, cold in winters, with variations towards one extreme or the other depending on locale -- but it's hard to predict the daily weather a week in advance. If we redouble our efforts, we can only push the prediction out a day or two. Weather is subject to what is known as the "butterfly effect", in which the flapping of a butterfly's wings in China may ultimately lead to a hurricane in Florida months later. The idea that quantum physics overthrew the notion of absolute determinism is bogus: we never had, and can never have, absolute determinism in any case.
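The butterfly effect is easy to demonstrate with a toy model -- here the logistic map, a stock classroom example of chaos, not an actual weather model: two starting values differing by one part in a billion end up nowhere near each other after a few dozen steps:

    # toy illustration of the butterfly effect via the logistic map: two
    # starting conditions differing by one part in a billion diverge
    # completely within a few dozen iterations
    x, y = 0.400000000, 0.400000001
    for step in range(60):
        x = 3.9 * x * (1.0 - x)
        y = 3.9 * y * (1.0 - y)
    print(f"after 60 steps: {x:.6f} vs {y:.6f}")   # no longer remotely close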
In any case a skeptic, having asked: "So what?" -- and not getting a good answer, could then ask: "What does quantum indeterminism have to do with free will?" Given the legal definition of free will as "volition", related to establishing culpability -- and it's hard to find any other definition of free will that's coherent, or amounts to anything different -- then free will actually implies determinism. A court will assess a defendant as having exercised free will by finding the defendant had a properly functioning (deterministic) brain, and was thinking things out in an orderly (deterministic) fashion.
The rejoinder to those who object to this definition of free will is: "It may well be a problem for you to have a properly-functioning brain, and be thinking clearly -- but it's a problem for me if I don't." Do dysfunctional people, for example advocates of a flat Earth, demonstrate a capability for free will that anyone with sense would care to emulate? No.
* That being said, it does have to be emphasized that the noisy operation of the brain is essential to human cognition; if the mind worked in a rigidly deterministic fashion, never making mistakes, we'd be hard-pressed to come up with anything new. In calling for "fair play for the machine", Turing clearly saw that fair play worked both ways: if humans were infallible, their thinking would be limited to provable algorithms, with such a mind being too narrow, unimaginative, and inflexible to be truly intelligent. Alice's trips to the supermarket are conducted using a large set of informal rules; trying to rigorously nail down the process would be neither useful nor practical.
Doesn't that still leave standing the issue of noise and the indeterminacy of quantum physics? To the extent it does, it is at least as relevant to computing machines as it is to brains. It is possible to build a "hardware" random-number generator for computers that obtains its values of random numbers from the quantum noise of a solid-state device, with the random numbers then driving randomized algorithms.
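In practice, most operating systems already maintain an entropy pool that programs can draw on -- one that may mix in hardware noise sources, depending on the machine. A Python sketch of values from that pool driving a randomized algorithm, here a simple shuffle:

    # sketch: pulling random values from the operating system's entropy pool
    # (which may mix in hardware noise, depending on the machine), then using
    # them to drive a randomized algorithm -- a Fisher-Yates shuffle
    import secrets

    def entropy_shuffle(items):
        """Shuffle driven by OS entropy instead of a PRN generator."""
        items = list(items)
        for i in range(len(items) - 1, 0, -1):
            j = secrets.randbelow(i + 1)     # OS-entropy random index
            items[i], items[j] = items[j], items[i]
        return items

    print(entropy_shuffle(range(8)))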
To complicate the issue further, computers don't particularly need hardware random-number generators. They more generally use "pseudo-random number (PRN)" generators, which are algorithms that generate a long sequence of unpredictably-varying numbers. They can initially be given a "seed" value, typically the count from the computer's system clock. It is not easy to make a robust PRN generator, but the ideal is a PRN generator whose sequence is, for all practical purposes, statistically indistinguishable from a truly random one.
Yes, a PRN generator is strongly deterministic, in that if a generator is given a specific seed, it will always generate the exact same PRN sequence. The trick is that it is difficult, if not impossible, in practice to tell the difference between software obtaining random factors from a good PRN generator, and software obtaining random factors from a hardware random-number generator.
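For a concrete picture, here is a minimal PRN generator sketch -- a "linear congruential generator", about the simplest workable scheme, far weaker than what production libraries actually use, but enough to show the strongly deterministic mechanism:

    # minimal PRN sketch: a linear congruential generator, seeded from the
    # system clock by default (production libraries use stronger generators;
    # this only illustrates the deterministic mechanism)
    import time

    class LCG:
        def __init__(self, seed=None):
            raw = seed if seed is not None else time.time_ns()
            self.state = raw % 2147483646 + 1    # keep state in 1..2^31-2

        def next(self):
            self.state = (16807 * self.state) % 2147483647
            return self.state

    gen = LCG(seed=42)
    print([gen.next() for _ in range(5)])

Given the seed 42, this generator produces the exact same five numbers on any machine, every time it is run; seeded from the clock, no two runs are likely to ever match.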
Under the Microsoft Windows operating system, the system clock returns a number giving the count of 100-nanosecond intervals from 1601 CE, with the count not overflowing until 30,828 CE. In other words, the number of possible seeds for a PRN generator is over 9E18. Even if two different computers end up generating the same PRN sequence, they would only do exactly the same thing if they were executing the exact same program under the exact same conditions -- and, since nobody is going to be recording and comparing every execution of the exact same program all over the world, nobody would ever know it happened. It takes a certain amount of careful testing to spot a PRN generator that isn't quite right, because it's so hard to notice anything's wrong in practice.
In short, we have a scheme based on elements that are strongly deterministic -- a PRN generator, seeded by a system clock -- that generates an effectively indeterminate sequence of numbers. We have no practical way of knowing what a system driven by a good PRN generator will do next, any more than we do if the same system is driven by a hardware random-number generator.
There's also nothing particularly special about obtaining random values from a quantum device. We could just as well flip a coin to get a random string of digits, assigning HEADS and TAILS a value of 1 or 0 respectively:
HHTHTHHTTHTTTHT 110101100100010
This would work fine for a small binary value, though since coins aren't perfectly symmetrical from face to face -- the center of gravity is closer to one face than the other -- there's a bias towards one face that would skew the binary sequence slightly. If we wanted a better binary sequence, we could put 100 black balls and 100 white balls in a bin, with the balls all being identical except for color; we could spin the bin, pull a ball, assign a 1 if white or a 0 if black, then toss the ball back in and spin again:
WWBBBWBWWWBBBWW 110001011100011
The trick here is that, if we were given a set of random numbers generated in this way and a set of random numbers generated by a quantum device, it would be impossible to tell the difference. There effectively isn't one: random is random.
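Anyone skeptical can try a crude check in Python -- comparing the coin-flip bits above against bits pulled from the operating system's entropy pool. No simple statistic tells them apart; for a good source, no practical statistic would:

    # crude frequency check: coin-derived bits versus OS-entropy bits; with
    # so few bits this proves nothing rigorous, but it makes the point
    import secrets

    coin_bits = [1 if c == "H" else 0 for c in "HHTHTHHTTHTTTHT"]
    os_bits = [secrets.randbits(1) for _ in range(len(coin_bits))]

    for name, bits in (("coin", coin_bits), ("OS entropy", os_bits)):
        print(f"{name}: fraction of 1s = {sum(bits) / len(bits):.2f}")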
In short, if anyone wishes to argue that human free will (however that is defined) derived from quantum indeterminacy creates a barrier between human and machine minds, the reply, as before, is: "Nonsense." The operation of machines can be made dependent on quantum indeterminacy if we like; but if it's indeterminacy we want, there's no particular reason other than convenience that we need to resort to quantum technologies.
* Although Penrose and others who believe in the quantum mind do often belabor the quantum free will argument, they actually tend to place more weight, in demonstrating the quantum basis of the mind, on the similarity between the way a quantum system decoheres from a superposition of states into specific properties, and the way competing agents of the mind end up producing a single conscious thought.
The difficulty is in showing that the similarity amounts to anything. There's nothing inexplicable about the brain's subsystems being in competition, with one subsystem overriding, at least for a moment, the others to broadcast a conscious thought. There's no need to invoke quantum physics to model the process, and adding quantum physics to the model doesn't improve on it in any significant way. Dehaene suggests this "seductive analogy" may be "superficial" -- and, with French directness, goes farther to declare that the "baroque proposals" of Penrose and others pursuing a quantum explanation of the mind "rest on no solid neurobiology or cognitive science."
Penrose is, with good reason, a highly respected scholar, but he has no substantial qualifications in cognitive science or AI, and demonstrates no strong grasp of either, his refutation of strong AI being based on not understanding what it is. Few in the cognitive science / AI community take his work seriously -- mostly because they can't understand what they could do with his ideas if they did.
They also are not happy to be challenged by a physicist who implicitly dismisses them as ignorant, and then loftily suggests that it's physics that will straighten them out. Penrose, it could be said, has a hammer, and so all he sees is nails. Penrose is treated with a certain respect for his substantial accomplishments as a physicist -- but the cognitive science community, as a rule, would have preferred that he stick to physics.
BACK_TO_TOP
* Penrose has also suffered in his reputation for being a contributor, if not an enthusiastic one, to a cult of what is called "quantum woo" -- in which the nomenclature, if not the substance, of quantum physics has been trotted out to justify life after death, telepathy, precognition, and other toys of the fringe.
Quantum woo inevitably bedevils theories of the quantum mind. There's an assortment of such theories, sometimes generated by obvious quacks who have no substantial background in the science they're mangling; sometimes by physicists moonlighting outside their field -- much the same as Penrose, hammers in search of nails. The rationale behind the exercise has been mocked as: "We don't understand consciousness, and we don't understand quantum physics -- so consciousness must be explained by quantum physics."
We don't understand consciousness? Dennett and Dehaene would disagree: GWT does an excellent job of explaining it. We don't understand quantum physics? Of course we do: we have models that allow us to predict the results of quantum events and interactions to a high degree of detail. True, all that tells us is HOW, and doesn't explain the WHY of the crazy things that happen in the microscale; but we don't really get the WHY of collisions of billiard balls either. Sure, quantum physics isn't intuitive, but so what? Quantum physics is a model, no more and no less. Even layfolk can understand quantum physics, given a model at the level of detail useful to them. Physicists can and do argue over the structure of the model, but in the end they've still got a model that gives the right results.
Stuart Hameroff was inspired to see quantum physics as the key to understanding the mind by the fact that anesthetics, so it seems, do influence neurons in ways that need to be modeled at a quantum-physics level. Working from there, he has expended a lot of effort to establish that quantum effects can be observed in neurons, and says the idea that neurons are "on-off" systems -- that is, that they either fire or they don't, with the firing doing the work, as cognitive pragmatists believe -- is simplistic.
The reply is that Hameroff appears to be hauling excess baggage. We could observe quantum effects in the operation of a car battery if we liked, but thanks to the correspondence principle, we would learn nothing much useful about a car battery by doing so. Again, solid-state electronic devices work on quantum-mechanical principles, but computer organization is described on the basis of logic gates, and no knowledge of quantum physics is required to know how a computer works.
More to the point, suppose we are missing something by assuming that neurons operate on "on-off" principles. If we play the Game of Mind again, building a machine that functionally imitates the human brain on that assumption, what could it not do that a human brain could? If the machine didn't behave any differently than a human, then what would it be missing? If somebody could nail down what was missing, then couldn't we update the machine to put it in?
The most glaring difficulty with quantum consciousness theories is that it is difficult to find any workable definition of "consciousness" in them. To a cognitive pragmatist, that's not so tough a question: consciousness characterizes certain behaviors of the brain that can be investigated via heterophenomenology, but it is not "a thing in itself". Hameroff's quantum states of microtubules instead end up being like what Dennett calls a "smidgen of consciousness in a dish", some kind of Schroedinger's rabbit.
Microtubules, incidentally, are common among many different types of cells, and so if they are the "atoms" of consciousness, then consciousness would seem to be distributed among the cells. At a conference, Dennett threw a challenge at Hameroff: "Stuart, you're an anesthesiologist. Have you ever assisted in one of those dramatic surgeries that replaces a severed hand or arm?"
Hameroff replied that he had not, but he knew about them. Dennett continued: "Tell me if I'm missing something, Stuart -- but given your theory, if you were the anesthesiologist in such an operation, you would feel morally obliged to anesthetize the severed hand as it lay on its bed of ice, right? After all, the microtubules in the nerves of the hand would be doing their thing, just like the microtubules in the rest of the nervous system, and the hand would be in great pain, would it not?"
Dennett reported that Hameroff was somewhat taken aback by this comment. Dennett, incidentally, is a big, burly guy with a beard, often compared to Santa Claus -- but when annoyed, more reminiscent of a retired Navy master chief petty officer, with a strong scholarly bent. Dennett has been described as seeing philosophy as a contact sport; he plays a bit rough when he sees it as justified.
However, advocates of quantum consciousness don't always stop at thinking there is a "smidgen" of consciousness just in cells. If consciousness is derived from quantum phenomena like decoherence, then it must also be found in such quantum phenomena even when they have nothing to do with a biosystem. That leads to the notion of "panpsychism" -- that consciousness is distributed through the entire Universe. Some of the advocates like to suggest that at the moment of the Big Bang, the creation of the Universe, the entire Universe achieved consciousness, what they call the "Big Wow". To cognitive pragmatists, this is ridiculous. If consciousness can only be sensibly defined as a behavior pattern determined by observation, then what behaviors of the Universe could be observed that demonstrate it? There aren't any.
That's the fundamental problem with quantum consciousness, one it shares with all other forms of dualism: none of the advocates can come up with any persuasive way it can be tested. Revealingly, while Hameroff talks a great deal about microtubules and quantum states, he doesn't say much about the practical cognitive experiments, or computer simulations of neural systems performed by cognitive researchers like Dehaene. Cognitive experiments really wouldn't do much for Hameroff; after all, an fMRI scan doesn't give different results whether Schroedinger's rabbit is assumed or not.
Hameroff, in short, is big on quantum-flavored rhetoric, short on practical experimentation -- and, though he has professional qualifications, is uncomfortably close to the dubious "quantum woo" crowd, who often don't. For example, Hameroff has flirted with the idea that near-death experiences imply the possibility of consciousness surviving death, saying in an interview:
QUOTE:
I've been asked basically if it's possible that consciousness can exist outside the brain in the case when the brain has stopped being perfused and the heart has stopped and so forth. I think we can't rule it out. I think it's possible, because in the model that Penrose and I developed -- and I should say this is my speculation, and Roger wouldn't go there -- but I would say that since consciousness is happening it seems to us at the level of spacetime geometry, the most fundamental level of the Universe, or at least down to that level in the brain in and around the microtubules, right now while we're conscious, while we're talking.
If that's the case, then when the brain stops functioning, some of this quantum information might not be lost or dissipated or destroyed -- but could persist in some way in this fundamental level of spacetime geometry which it seems is not local, and something like holographic repeating in scale and distances and persists perhaps even indefinitely at a finer scale, which would be a higher frequency, smaller scale but also lower energy. And it could exist somewhat indefinitely.
END_QUOTE
The reaction to this thinking is either: "Wow, cosmic!" -- or deciding it is better for everyone to just leave Hameroff alone.
BACK_TO_TOP