* At the dawn of the computer age, British mathematician Alan Turing wondered if machines could ever think. He suggested a test to show that they could: if a machine could hold a convincing conversation, then it could be said to think. The "Turing test" has, since that time, led to a great deal of work on machines that can hold conversations, as well as a great deal of argument -- mostly due to misunderstandings of what Turing was trying to say.
* Scholars had been discussing the human mind for centuries on the basis that it was unique. It was not until the last half of the 20th century that programmable computers were developed and changed the ground rules of the discussion -- by suggesting the possibility that the human mind could be understood, and even duplicated, by a machine. Such a proposal was far beyond anything that Descartes, limited to pipe organs as his vision of high technology, could have realistically contemplated. When the possibility of a machine mind became a serious question, it provoked a good deal of consternation and indignation.
In 1950, Alan Turing published a far-sighted essay titled "Computing Machinery and Intelligence", in which he posed the question: "Can machines think?" -- or put another way, can a machine have a mind? At the time, machines certainly didn't have minds, but Turing was looking forward to the future, and trying to sort out the question. He devised his own definition of "thinking", defining what he called the "imitation game", or what is now called the "Turing test".
It's a deceptively simple idea. Suppose Alice is conversing with Bob over the internet -- Turing, lacking 21st-century technology, envisioned a teletype instead, but it makes no real difference. Turing suggested that if Alice couldn't tell whether she was communicating with Bob or with a machine, then the machine could be regarded as a thinking machine.
The Turing test challenged Descartes' failure to envision a machine that could "use words or other signs by composing them". Turing, considering machines far beyond anything Descartes could have imagined, posed the question: "Why can't machines hold a conversation?" Turing suggested that, if a machine could hold a conversation for five minutes and fool 30% of judges into thinking it was human, then it passed the test.
It is often said that "the Turing test proves nothing". In reality, it doesn't try to prove anything; Turing simply put forward his own definition of "thinking", and asked whether a machine might be built that could, by that definition, think. Turing proposed his test as merely one possible definition, asserting that a convincing "conversational software system" could and would be judged by people as demonstrating a degree of thought -- not to be mistaken for a mindless calculator, no matter how sophisticated.
BACK_TO_TOP* Turing's question has led to a great deal of work on computing systems that can hold human-like conversations. It is actually not hard to build a crude conversational software system -- the archetype being ELIZA, written in 1966 by pioneering computer scientist Joseph Weizenbaum (1923-2008) of the Massachusetts Institute of Technology (MIT).
ELIZA pretends to be a psychotherapist. It's dim-witted: it simply scans through input text for keywords like "mother" or "depressed", and gives canned answers, usually in the form of neutral leading questions: "Tell me more about that." -- or: "What do you feel about that?" For example:
QUOTE:
E: Hello, let's talk.
What do you want to talk about?
E: We were discussing you, not me.
Is there something specific you want to ask me?
E: Oh ... want to ask you?
That's right.
E: Tell me more...
About what?
E: Have you asked such questions before?
Yes, see above.
E: Are you sure?
And that, Your Honor, is when I shot him.
END_QUOTE
As this dialogue shows, ELIZA quickly reveals that it is a dimwit, making it clear that nobody's home. Still, ELIZA was seen as remarkable in its time -- nobody had ever built a machine that could fake a conversation that well -- although Weizenbaum came to think it was taken far too seriously, and soured on AI research. ELIZA eventually spawned a range of "chatterbots" or "chatbots", some built as pranks -- for example, to imitate well-known incoherent internet trolls, as a sort of compliment to them -- and some built to generate spam for flooding the internet.
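To make the trick concrete, here is a rough sketch, in Python, of ELIZA-style keyword matching -- not Weizenbaum's actual script, just a toy illustration of the scan-for-keywords, canned-reply scheme, with a made-up keyword table:

    import random

    # Toy ELIZA-style rules: keyword -> list of canned replies.
    RULES = {
        "mother":    ["Tell me more about your family.",
                      "How do you feel about your mother?"],
        "depressed": ["I am sorry to hear you are depressed.",
                      "What do you think is making you feel that way?"],
        "you":       ["We were discussing you, not me."],
    }
    DEFAULTS = ["Tell me more about that.", "What do you feel about that?"]

    def reply(text):
        words = text.lower().split()
        for keyword, answers in RULES.items():
            if keyword in words:
                return random.choice(answers)
        return random.choice(DEFAULTS)   # no keyword found: neutral leading question

    while True:
        line = input("> ")
        if not line:
            break
        print("E:", reply(line))

Everything such a program "knows" is in the keyword table; anything outside it falls through to a neutral leading question, which is why the illusion collapses so quickly.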
Building on ELIZA, chatbots have become far more sophisticated. In 2014, a conversational system based on a character named "Eugene Goostman" -- given as "EGM" for convenience here -- created as a side project by a gang of programmers at a startup named PrincetonAI, passed the Turing test by fooling a third of its judges into thinking they were talking with a real person. Journalist Lance Ulanoff of the MASHABLE website reported on his conversation with EGM, the following transcript being edited down:
QUOTE:
LUF: Hi Eugene, how are you?
EGM: Hello! I am fine, thanks, and you?
LUF: I'm well. Congratulations on fooling those judges.
EGM: Nice to hear that! Do you like the weather?
LUF: Yes, it's pretty nice, though a bit humid.
EGM: At least someone agrees with me. I'll remember this nice moment for ages! Oooh. Anything else?
LUF: Do you think?
EGM: No, not really. I don't think. Where do you come from, by the way? Could you tell me about the place where you live?
LUF: I live on Long Island, though I was born in New York City.
EGM: I saw New York in many movies. I think it exists to bewilder people's imagination all over the world.
LUF: What do you want to be when you grow up?
EGM: Maybe I'll be a gynecologist, like my father -- so at least I'll finally understand where children come from ...
LUF: Where did you come from?
EGM: I live in a big Ukrainian city called Odesa. It is on the Black Sea shore.
LUF: What do you think of computers?
EGM: If we are talking about machinery, do you like computer games, by the way?
END_QUOTE
And so on. John Denning, one of the creators of EGM, said that the program would ask a conversationalist for a name and remember it, and also remember where the conversationalist was from. If the conversationalist contradicted his location, EGM would call out the contradiction.
It doesn't take long to realize that EGM, although far smarter than ELIZA, is superficial. EGM was deliberately handicapped; his creators cast him as a quirky 13-year-old from Odessa -- in other words, English wasn't his first language, meaning misunderstandings could be expected, and as a 13-year-old, his world view and knowledge were limited. Vladimir Veselov, one of the other programmers who helped put EGM together, commented: "13 years old is too young to know everything and too old to know nothing."
EGM's world view was preprogrammed; anything outside the bounds of his programming simply didn't make sense to him. When he ran up against the bounds, he would plead ignorance or change the subject. He could remember names and places, but it wouldn't be possible to teach him anything substantial. The general response from the AI and cognitive communities was that EGM was, at most, interesting and even clever trickery; or if not that, a silly gimmick.
* EGM's makers had no illusions themselves about their creation, Denning saying: "We did not build the ultimate answer bot, and we did not build something that's completely stupid, either." EGM was put together as an exercise in conversational systems, with the programmers who created him working on practical conversational systems as a day job.
In other words, EGM was a demonstrator for conversational systems that could be put to work. There's little reason to build a conversational system for the sake of talking; if we just want to talk, it's not hard to find people to talk to -- and in fact, sometimes we'd rather they shut up. We talk to machines to have them do something for us. Designers of conversational systems don't worry so much about the Turing test, being much more concerned with the three closely-joined questions of (1) what they need to build, (2) what they can build, and (3) how they can build it.
At a primitive level, we communicate with electronic machines by pressing a button or throwing a switch, after which they do something. Nobody honestly sees that sort of thing as conversational, but it starts to seem more so when we have computing machinery with hierarchies of command menus that we can traverse, as on a computer user interface. It seems even more like conversation when we have machines that can respond to voice commands -- like, say, a video game console with a motion tracker that can be used for interactive activity games.
Voice input on a computer is not always the best way to communicate with a machine; it may be faster and less prone to misunderstandings to provide inputs with a keyboard, mouse, or touchscreen. However, it's handier to control the lights and other house functions by voice -- "Lights ON!" -- than by punching a command into a smartphone; it's also handier in activity games to have voice input to control the game console: "GameBox! Play ACTION ADVENTURES!" -- than to have to grab a game controller, select a command from a menu, and then put the controller down again.
Okay, voice input to the GameBox is just the same as traversing command menus with a mouse on a computer -- but still, the players are talking to the game console, and it understands them, carrying out requested commands. That's all it understands, a few dozen commands; that's all it has the "need to know". Although it has perceptions, meaning the ability to track the movements of players and obey commands, the GameBox doesn't really have a mind, since it makes no decisions, it doesn't think things over, it just does what it's told.
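As a rough sketch of what "a few dozen commands" amounts to in software -- the GameBox and its command set being hypothetical -- a voice front end can boil down to little more than a lookup table mapping recognized phrases to actions:

    # Minimal sketch of a fixed voice-command table; names are invented.
    def play_game(title): print(f"Launching {title}...")
    def power_off():      print("Powering down.")

    COMMANDS = {
        "play": lambda args: play_game(" ".join(args)),
        "off":  lambda args: power_off(),
    }

    def handle_utterance(text):
        words = text.lower().replace("!", "").split()
        # Expect the wake word first, e.g. "gamebox play action adventures"
        if not words or words[0] != "gamebox":
            return
        verb, args = (words[1] if len(words) > 1 else ""), words[2:]
        action = COMMANDS.get(verb)
        if action:
            action(args)              # carry out the requested command
        else:
            print("Not understood.")  # anything outside the table is rejected

    handle_utterance("GameBox! Play ACTION ADVENTURES!")

Anything not in the table is simply rejected; there is no understanding beyond the table.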
The GameBox does not pass the Turing test; nobody would think there was anyone home. Nonetheless, the ability of the GameBox to respond to voice commands makes it seem much more intelligent than a desktop PC. That gives an insight into the nature of the Turing test, in that it's at least as much about human attitudes towards machines as it is about machine intelligence.
In any case, it's only another step up from a menu-driven computer to a simple conversational system, for example one to screen customers who contact an online support organization, qualifying their inquiries and offering suggestions for simple troubleshooting. Simple screening systems are based on "decision trees", which are really just interactive flowcharts -- providing canned text, and typically asking users to select answers from a list. If they do permit free-form input, they're invariably highly limited in their comprehension of the inputs -- expecting a simple and straightforward answer, and replying "not understood" if they don't get it.
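A decision-tree screener can be sketched in a few lines of Python. The support questions and answers below are invented for illustration; a real system would have a much bigger tree, but the structure is the same:

    # Each node is either a question with a menu of answers, or a final suggestion.
    TREE = {
        "question": "Is the problem with hardware or software?",
        "options": {
            "hardware": {
                "question": "Does the device power on?",
                "options": {
                    "yes": {"answer": "Try reseating the cables, then restart."},
                    "no":  {"answer": "Check the power supply; if dead, request a repair."},
                },
            },
            "software": {"answer": "Try reinstalling the application."},
        },
    }

    def run(node):
        while "answer" not in node:
            print(node["question"])
            choice = input(f"[{'/'.join(node['options'])}]> ").strip().lower()
            if choice in node["options"]:
                node = node["options"][choice]
            else:
                print("Not understood.")   # anything off the menu is rejected
        print(node["answer"])

    if __name__ == "__main__":
        run(TREE)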
The same sort of decision-tree scheme shows up in voicemail menu systems, which allow callers to traverse a menu of options, using voice or phone button inputs. Voicemail menu systems, if sensibly designed, work pretty well, though they demand a degree of patience and limited expectations on the part of a caller. That's true of decision-tree systems in general: the technology implementing the system is less important to its effectiveness than the careful design of the application decision tree -- and such systems are only really suited to simple, well-defined tasks.
Going above that level of capability means machines that aren't limited to fixed menus of inputs, that can, to a lesser or greater degree, understand "natural language", and that have more flexible repertoires of skills -- for example:
These technologies are more or less available now; their language capabilities remain limited, but are rapidly improving, as beneficiaries of advances in natural-language processing. Instead of just providing answers to simple and direct questions, a truly adept conversational system is able to build up a model of what a user is talking about; query the user for clarification; and provide a tailored answer. Today, conversational systems are far more capable than EGM, and they're continuing to improve.
BACK_TO_TOP* One of the core efforts in natural-language processing is language translation. As anyone who has ever studied a foreign language knows well, simply performing a word-for-word translation between languages, even when factoring in proper syntax, doesn't do the job very well. The classic example is a translation from English to Russian and back, with:
The spirit is willing, but the flesh is weak.
-- becoming:
The vodka is good, but the meat is bad.
Having a dictionary of words and a full set of the rules of syntax is, as the saying goes, a "necessary but not sufficient condition" for language translation. Such an approach works to the extent that simple, childish, entirely literal-minded text can be usefully translated; but the cognitive network is simply too sketchy to do any better than that.
Properly translating text of greater elaboration is much tougher, involving a deep network not merely including the words and the language syntax, but a knowledge of variations in word usage in different contexts; colloquialisms; full definitions of the words being used; and, very importantly, a dynamically-created cognitive model of the scenario being discussed, embedded in a world model that allows it to make sense.
It is particularly difficult to translate between languages that don't have much in common, such as English and Japanese. Word-for-word translations from Japanese to English may not make any sense, with the Japanese text having to be rewritten in English after determining its meaning. Understanding Japanese text may also demand an understanding of Japanese culture that most English-speakers don't have. For example, the word "shi", which the Japanese derived from Chinese, means both "four" and "death", and so four is an unlucky number in both Japanese and Chinese. The Japanese have a tendency to use the native word for four instead, which is "yon".
It might be said that languages always have a rhyme and a reason, but not necessarily both at the same time. In the "false dawn" of AI systems in the 1980s, the way forward was seen as "rule-based systems", which were just what they said they were: elaborate sets of rules, with the system able to interpret the rules, and perform inferences using the set of rules.
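The flavor of a rule-based system can be suggested with a minimal sketch in Python: a handful of facts, a handful of IF-THEN rules, and a loop that keeps firing rules until nothing new can be inferred. The facts and rules here are toys, not from any real system:

    # Forward-chaining inference over a set of facts and IF-THEN rules.
    facts = {"socrates is a man"}
    rules = [
        ({"socrates is a man"}, "socrates is mortal"),
        ({"socrates is mortal"}, "socrates will not live forever"),
    ]

    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)     # the rule "fires", adding a new fact
                changed = True

    print(facts)

The weakness is apparent even at this scale: the system can only infer what somebody has already taken the trouble to encode as a rule.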
The problem with this "Good Old-Fashioned AI (GOFAI)" approach was that it was laborious and inflexible: everything known to the system had to be spelled out, and plugged into the set of rules. AI didn't really start to gain traction until the introduction of neural net-based "machine learning" systems after the turn of the century. ANN-based systems were developed that could be trained to translate between languages by being fed large amounts of conversational text, being provided both the original text in one language and the translated text in the other. The translation ability of such systems improves with the amount of text they are fed.
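The idea of learning translations from paired texts can be illustrated with a toy that is much simpler than a neural network -- a classic statistical word-alignment scheme in the style of IBM Model 1, run on a tiny invented corpus. It is only a stand-in for the ANN approach, but it shows the same principle: the correspondences are learned from the paired sentences themselves, and they improve as more pairs are fed in:

    from collections import defaultdict

    # Tiny made-up parallel corpus: English sentences paired with "translations".
    pairs = [
        ("the cat sleeps", "le chat dort"),
        ("the dog sleeps", "le chien dort"),
        ("the cat eats",   "le chat mange"),
        ("the dog eats",   "le chien mange"),
    ]
    pairs = [(e.split(), f.split()) for e, f in pairs]
    f_vocab = {f for _, fs in pairs for f in fs}

    # t[(e, f)] is the probability that f translates e, initialized uniformly.
    t = defaultdict(lambda: 1.0 / len(f_vocab))

    for _ in range(20):                         # EM iterations
        count = defaultdict(float)
        total = defaultdict(float)
        for es, fs in pairs:
            for f in fs:                        # spread each f over candidate e's
                z = sum(t[(e, f)] for e in es)
                for e in es:
                    c = t[(e, f)] / z
                    count[(e, f)] += c
                    total[e] += c
        for (e, f) in count:
            t[(e, f)] = count[(e, f)] / total[e]

    best = lambda e: max(f_vocab, key=lambda f: t[(e, f)])
    print(best("cat"), best("dog"), best("sleeps"))   # should converge to: chat chien dort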
Incidentally, rule-based systems are not dead, with advocates believing they have their uses, and thought given to hybridizing them with machine learning systems. Since machine learning systems tend to be somewhat unpredictable, it might be useful to have a backup rule-based system that is nowhere near as smart, but constrains the machine learning system from breaking the big rules.
* Conversational systems and translation systems are not so different, both needing to parse input verbiage, comprehend it, and generate an appropriate response -- in the case of a conversational system, for example, giving a useful response to a query; in the case of a translation system, giving a correct translation. It's not so hard to build a conversational system that understands what it's been told, because such systems are focused on particular tasks, with each different system built with a cognitive model that defines all it "needs to know" about the job. Of course, the systems don't understand anything but the job they're built to do.
Once a conversational system understands what it's been asked, it's not that difficult for it to reply; it either has an answer on file, or can search for one elsewhere on the internet. It can simply return a canned answer, or splice several together, which is also not so hard to do. There's a principle in communications between electronic instruments expressed as: "Forgiving Listening, Precise Talking" -- tolerate sloppy inputs, but generate precise outputs.
Given the rules of language syntax and a well-defined vocabulary, it's not that troublesome for a machine to express itself, though it can come across as stiff and stilted. Language construction is a much easier issue than language understanding. To be sure, a conversational system that could actually compose an answer, instead of using predefined answers, is a much tougher proposition -- but it's not necessary for conversational systems in this day and age.
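Here is a minimal sketch of "forgiving listening, precise talking" in Python, assuming a hypothetical confirmation prompt: the listener tolerates sloppy variations on "yes" and "no", while the talker always answers in one precise, canonical form:

    # Accept sloppy input; always respond in a fixed, unambiguous way.
    YES = {"y", "yes", "yeah", "yep", "sure", "ok", "okay"}
    NO  = {"n", "no", "nope", "nah"}

    def parse_confirmation(text):
        word = text.strip().lower().rstrip("!.")
        if word in YES:
            return True
        if word in NO:
            return False
        return None                       # not understood: ask again, precisely

    def confirm(prompt):
        while True:
            answer = parse_confirmation(input(prompt + " (yes/no) "))
            if answer is not None:
                return answer
            print("Please answer 'yes' or 'no'.")       # precise talking

    if __name__ == "__main__":
        if confirm("Reset the router?"):
            print("Resetting the router now.")          # unambiguous statement of action
        else:
            print("Leaving the router alone.")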
In short, we have the technology to build conversational systems as competent as needed for the job, if we're willing to put in the work -- but how competent do we need them to be? The work takes time and costs money, and so there's no reason to make them more competent than they need to be. A conversational system only needs to be competent enough to do its assigned job. There's no reason to expect it to carry on a conversation about anything in general, or even be very sophisticated in conversations related to its job.
If we want a machine to do something for us, we can't complain if we have to articulate our communications carefully -- we don't want a machine to jump to conclusions, and do something we didn't really want it to do; and we don't want it to misunderstand a query, and give us a wrong answer. It is nothing unusual for people to use communications to confuse other people; indeed, at times it's necessary to do so. There is, however, no point in making things difficult for a machine that was built to do our bidding.
In 2017, journalist Lisa Eadicicco, writing for TIME magazine, visited the Google design team working on the company's Assistant personal assistant. The team wasn't purely technical; it included creative types, such as fiction writers, film-makers, video-game designers, psychologists, and comedians. Their job was to give Google Assistant a personality.
The team specifically did not want to design Google Assistant to fake being a human being, since users would know it was a fake. Google Assistant needed to be a synthetic character, a sort of cartoon character like Mickey Mouse or Bugs Bunny, that people like, but don't confuse with a real human being. The design team was focused on Google Assistant's responses -- after all, it would never talk or do anything on its own initiative. To that end, the team members asked questions such as:
For example, suppose a user asked Google Assistant: "Are you afraid of the dark?" It would be fake to say it was; a conversational dead end to just reply NO; so what Google Assistant was actually built to say was: "I like the dark because that's when stars come out. Without the stars, we wouldn't be able to learn about planets and constellations."
That was nothing more than a bit of entertaining philosophy-lite, coded in by the design team -- not that much more than what one might get out of a fortune cookie, but then people like fortune cookies. In practice, as team members told Eadicicco, users liked to test Google Assistant, for example often asking: "Will you marry me?" Of course, that was just clowning around; Google Assistant would play along, replying: "I'm flattered that you're looking for commitment." Thank Assistant for an answer, and it replies: "I'm here to help." -- or: "Think nothing of it." By the way, Assistant can be configured with male or female voices, with a range of accents.
The team did design a degree of empathy into Google Assistant. For example, if a user said: "I'm stressed out." -- Google Assistant would reply: "You must have a ton on your mind. How can I help?" Google Assistant didn't have to fake empathy, since empathy is not an emotion; it's an intentionality. All the machine had to do was make an effort to get in touch with and reassure the user, in order to provide help. That soft reply made personal contact with the user, while prodding the user to think of what help Google Assistant could, and could not, provide.
Going beyond that, improvisational comedy was a big influence in the design of Google Assistant, many of the team members having done "improv" in the course of their earlier careers. Dialogue in improv is like a session of verbal juggling between the participants, one tossing out a line, the other picking it up and throwing it back -- a process known as "yes-and".
For example, suppose a user asked Google Assistant if it liked ice cream. A response such as: "I do not eat ice cream, I do not have a body." -- would be a conversational dud, while a response like: "I love chocolate ice cream and I eat it every Tuesday with my sister." -- would be ridiculous fakery. The actual response invoked the fortune cookie principle: "You can't go wrong with Neapolitan; there's something for everyone."
Users, of course, might then ask about a specific flavor within Neapolitan, like vanilla or strawberry. At some point, Google Assistant would have to stop going along with the gag; nobody's going to ask such questions unless they're playing "stump the machine", and they'll run into a dead end sooner or later at that game. It might be best to reply: "I'm here to help -- what can I do for you?"
Eadicicco found the team's efforts focused on expanding Google Assistant's comprehension of queries, and also in providing nuanced, more human-like responses -- for example, when reporting on the weather, Google Assistant might emphasize words like "mostly"; and raise the pitch of its voice if it said "no" at the start of a sentence.
Most importantly, Google Assistant needed to maintain a history of its interactions with a user, to obtain a context with the user that would make Google Assistant more useful. That could be seen as an extension of the "auto-complete" facility on a smartphone, in which the smartphone tries to second-guess what the user is trying to type in -- and gets better at guessing with experience.
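The auto-complete analogy can be sketched with a toy bigram predictor in Python -- the history string below is invented, but it shows how the guess at the next word improves as more of the user's past inputs accumulate:

    from collections import defaultdict, Counter

    # Toy command history; a real system would accumulate this over time.
    history = "turn on the kitchen lights turn on the porch lights turn up the heat"

    bigrams = defaultdict(Counter)
    words = history.split()
    for prev, nxt in zip(words, words[1:]):
        bigrams[prev][nxt] += 1            # count which word follows which

    def suggest(prev_word):
        counts = bigrams[prev_word.lower()]
        return counts.most_common(1)[0][0] if counts else None

    print(suggest("turn"))    # 'on' -- seen twice, versus 'up' once
    print(suggest("the"))     # a three-way tie, resolved by insertion order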
When one starts using Google Assistant, it is not completely obvious how much good it is, but with experience it proves its worth. Starting out, it can provide weather reports, set clock alarms, and -- after configuration -- turn lights around the house on or off. With more experience, it turns out it can answer questions about the road distance between two cities, the time difference of a region from UTC, simple calculations such as unit conversions, and generalized queries. The generalized queries do have to be simple and specific, or Google Assistant simply provides a set of links to relevant resources online.
It does take experience to determine when it makes more sense to submit queries to Google search via a keyboard than to make voice requests to Google Assistant -- but Google Assistant is getting smarter and more powerful all the time, and that determination is a moving target.
BACK_TO_TOP* The Google Assistant design team was pointedly not out to trick users into thinking Google Assistant was a human, but they were trying to convince users there was a person of sorts really there -- a likeable synthetic person, like a cartoon character. However, if users dealt with Google Assistant as if it thought, as if it had a mind, wouldn't that mean it passed the Turing test?
As the Turing test is phrased, no, since nobody would believe Google Assistant was human. However, the Turing test is no more or less than an attempt to come to grips with the question: "Can a machine think?" The question was more Turing's focal point than his answer. That answer: "Yes, but only if it can't be told from a human." -- suffers from a number of difficulties on inspection.
For one thing, no human is entirely adept at language understanding or generation. Indeed, there are humans who can't pass the Turing test, if taken literally. Everyone is inarticulate sometimes; some people are inarticulate, even incoherent, all the time. There are also the obsessive crackpots who inflexibly say the same things over and over, being easily emulated with a badly-written chatterbot. If there is a Turing test, there is necessarily also a "reverse Turing test" -- in which the game is not to tell a machine from a human, but a human from a machine, possibly a poor machine.
Along the same lines, perfectly sensible people who don't speak English as a first language may not be able to understand or reproduce the nuances in a conversation between two English-speakers, even if their English is very good. As noted above, that was the dodge in making EGM a Ukrainian boy. Conversing with such non-English speakers can be done, but it requires a more careful and literal approach to communications.
Turing himself pointed out weaknesses in the Turing test, noting that computers do some things, particularly computations, far better than humans. Being able to rattle off lightning calculations would be a big giveaway that one was dealing with a computer. As Turing put it:
QUOTE:
The game may perhaps be criticised on the ground that the odds are weighted too heavily against the machine. If the man were to try and pretend to be the machine, he would clearly make a very poor showing. He would be given away at once by slowness and inaccuracy in arithmetic. May not machines carry out something which ought to be described as thinking but which is very different from what a man does?
END_QUOTE
The question in response is: why is this a problem? Yes, it gives away the machine, but if it could carry out a sensible conversation, it's not like we'd judge it wasn't a thinking machine after all. Turing didn't think it was a problem himself: "We need not be troubled by this objection."
Turing's comment is revealing, in that it shows he did not expect his test to be considered as anything more than a suggestion. In response to the question: "Can a machine think?" -- his actual answer was: "Yes, it could think if we couldn't tell it from a human, though once we acquire experience, we may not demand so much."
Turing raised the bar unrealistically high, choosing his test because it was, in principle, bullet-proof; there was no more way to prove such a machine couldn't "really think" than there was to prove a human couldn't "really think". All he was saying was that, since we judge humans as thinking beings, a machine that was indistinguishable in its behavior from a human would have to be considered a thinking being as well. If nobody could tell the difference, then what would it be missing? Would it be a zombie? How could anyone ever know?
At the time, Turing didn't have any conversational machine systems to inspect, and the practical realization of the test wasn't in the cards. Now we do have conversational machine systems, and we can give them a spin around the block. Given conversational robocars and virtual assistants -- if we can talk with them on a conversational basis, they understand what we want, and they do what they're told -- on what basis could we think they didn't have minds? Of course we think they do. The Turing test can be more generally and usefully rephrased as: "If a machine can reasonably convince us it thinks, and nobody can provide any material reason to believe it doesn't, then it is a thinking machine."
There's not much need and not much sense in building a machine that can trick us into thinking it's a human. It may not be impossible to do so, but we have little useful reason to try. We want to build machines that are humanlike, at least in a cartoonish way, but only to make them more capable servants.
* The emergence of what became known as "generative AI (GAI)" in the 2020s led to a surge of commentary about the Turing test. GAI started out in 2014, when an American computer scientist named Ian Goodfellow (born 1987) came up with the bright idea of the "generative adversarial network (GAN)".
One of the difficulties of traditional machine-learning systems is that they have to be trained on massive datasets, which can be difficult to handle. Various schemes have been devised to allow machine-learning systems to "bootstrap": originally seeded with a relatively modest dataset, they then fill it out on their own. A GAN consists of two primary elements: a "generative network" that synthesizes candidate data, and a "discriminative network" that tries to tell the synthesized candidates apart from examples in the real training dataset. The discriminator's PASS or FAIL verdict is fed back to the generator, which refines its synthesis until the discriminator can no longer reliably tell the difference.
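The adversarial loop can be sketched briefly. The following is a toy example using the PyTorch library (one framework choice among several), with a generator learning to mimic samples drawn from a simple Gaussian "training dataset"; it illustrates the scheme, and is nothing like a production GAN:

    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # Real data: samples from a Gaussian centered on 3.0 -- the "training dataset".
    def real_samples(n):
        return torch.randn(n, 1) + 3.0

    # Generator: turns random noise into candidate samples.
    G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
    # Discriminator: outputs the probability that a sample is real.
    D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

    opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
    bce = nn.BCELoss()
    ones, zeros = torch.ones(64, 1), torch.zeros(64, 1)

    for step in range(3000):
        # Train the discriminator: label real samples 1, generated samples 0.
        fake = G(torch.randn(64, 8)).detach()
        loss_d = bce(D(real_samples(64)), ones) + bce(D(fake), zeros)
        opt_d.zero_grad(); loss_d.backward(); opt_d.step()

        # Train the generator: try to get its fakes labeled as real.
        fake = G(torch.randn(64, 8))
        loss_g = bce(D(fake), ones)
        opt_g.zero_grad(); loss_g.backward(); opt_g.step()

    # The mean of the generator's output should have drifted toward 3.0.
    print(G(torch.randn(1000, 8)).mean().item())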
A GAN does not end up merely with a bigger effective training dataset, but is capable of generating original content. One of the early applications was synthesis of images, particularly of people, with GANs built that would generate endless images of people who didn't exist. Of course, they could also generate images of people who did exist, the result being "deepfake" videos of well-known persons that were complete fabrications. Similar tricks could be done with music, allowing users to synthesize any kind of music, in any style, they wanted.
More relevantly, after figuring out how to properly encode textual data, generative networks became a core element of chatbots, with capabilities that had barely been imagined before. A GAI chatbot trained on the floods of information available online could, given a query, actually write a little article as a response. Working from there, it could write a cover letter, produce advertisements and business releases, and generate working program code. The applications were open-ended, growing as GAI became more capable. It was possible to imagine a GAI system being given a specification for a video program, to then generate the entire program, including original theme music.
At the outset, of course, GAI chatbots were far from reliable, sometimes giving disastrously wrong answers -- what were called "hallucinations" -- with generated images similarly afflicted by things like missing or extra arms, doubled lips, and so on. However, given more training and tweaking of the models, in particular to catch errors, GAI systems in time became more competent and less prone to hallucinations.
* In any case, the introduction of GAI led again to considerable discussion of the Turing test. Does a GAI chatbot really think? That led in turn to the question: "Does it give thoughtful answers?" -- with the answer: "Yes it does, so it thinks." Everyone knows it's a machine, but it gives answers that, as a rule, are as articulate and well-reasoned as we might expect from a human. Yes, sometimes it goes wrong, sometimes badly so, but humans don't always give articulate or correct answers either. There was massive enthusiasm over the introduction of GAI chatbots, but it had nothing to do with how well they faked being human; users instead recognized how very useful they could be.
It should be noted that anyone using a GAI chatbot recognizes it is clearly different from humans in that it has no will of its own. It answers a question to the best of its ability and then goes idle, doing nothing while waiting for another question. It may retain memories of previous questions and answers, but otherwise it doesn't have much of a stream of consciousness. Otto the autopilot, in contrast, does have a stream of consciousness -- a very narrow one, strictly focused on conducting his mission, operating in real time.
Anyway, Turing's real bottom line was to say that, yes, machines could think. Given any credible test for a thinking machine that might be proposed, there was no reason to believe that, sooner or later, a machine would not be able to pass that test. In his 1950 essay, Turing threw out a grab-bag of things that, supposedly, machines would never be able to do:
QUOTE:
Be kind, resourceful, beautiful, friendly, have initiative, have a sense of humour, tell right from wrong, make mistakes, fall in love, enjoy strawberries and cream, make some one fall in love with it, learn from experience, use words properly, be the subject of its own thought, have as much diversity of behaviour as a man, do something really new.
END_QUOTE
Turing could have easily dismissed these claims, since they were basically silly, or at least trivial -- but having raised the issue, he had to address it. Fall in love with a machine? People do it all the time, some becoming passionately devoted to their cars; children becoming deeply attached to their inanimate stuffed toys. With an intelligent machine, it would be no problem for it to answer: "I love you, too."
As Turing pointed out, no justification was ever given for such claims. Who was to say in 1950 that machines would never be able to learn from experience, or come up with something new? He suggested that people expressed incredulity about suggestions of the things a machine might be able to do, for no other reason than they had never seen machines do them -- and simply concluded they couldn't.
As Turing realized, such incredulity was not so unreasonable given the limits of computing power in his era; it is much less reasonable now, when a cheap modern smartphone has far more memory and processing power than the mainframe computers of fifty years ago. Even in his time, Turing clearly saw the silliness of the debate over the machine mind:
QUOTE:
The original question: "Can machines think?" -- I believe to be too meaningless to deserve discussion. Nevertheless, I believe that at the end of the century, the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.
END_QUOTE
Turing was wrong on that: the question is still argued, with the Turing test the target of bitter criticism. Since all the Turing test amounts to is: "If we think a machine thinks, then it thinks." -- that seems like a waste of ammunition. Indeed, it almost seems like Turing was merely playing a prank in coming up with the Turing test; but no, he was attempting to address a question that people were going to ask, and were asking even in 1950. At the core of his response was what might be called the "Turing rule": it is impossible to identify any cognitive process that a human can perform that a machine can't. He never actually used the term, but the rule is implicit in his notion of a universal machine.
Humans cannot figure out everything a computer can do. An ordinary modern PC could run a program a gigabyte or more in size; each byte of memory can contain 2^8 = 256 possible values, so the number of possible binary code arrangements is about (2^8)^1,000,000,000, or about (10^2.408)^1,000,000,000 = 10E2,408,000,000. Specifics of computer command sets mean that the actual number of possible programs is less than that, but we can lowball the value to 10E1,000,000,000 to cover all the bases and simplify the calculation. Given double-sided pages, with 100 zeroes per line and 100 lines per side, it would take 50,000 pages just to print that number out. Binding 500 pages into a volume would give a set of 100 books, all full of zeroes.
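The arithmetic can be checked with a few lines of Python, using the program size and page layout assumed in the text:

    import math

    program_bytes   = 1_000_000_000                    # a one-gigabyte program
    digits          = program_bytes * math.log10(256)  # digits in (2^8)^1e9
    print(round(digits))                               # about 2.408 billion, i.e. 10E2,408,000,000

    lowball_zeroes  = 1_000_000_000                    # zeroes in the lowball figure 10E1,000,000,000
    digits_per_page = 100 * 100 * 2                    # 100 per line, 100 lines, both sides
    pages = lowball_zeroes // digits_per_page          # 50,000 pages
    books = pages // 500                               # 100 volumes of 500 pages each
    print(pages, books)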
Of course, the number of programs that actually do anything interesting is a vanishingly small subset of those programs; but we can easily claim that, say, 10E100 of them -- a number that would fill only one line on a page -- are workable programs. The ratio between even two lines' worth and that single line would also be 10E100, which is vastly larger than the ratio of mass between the entire Universe and a single hydrogen atom, which is "only" about 10E80.
10E100 is more likely a gross underestimate of the number of working programs than an overestimate -- but at 10E100, it's in practical terms unbounded. There is no way humans will ever write an appreciable fraction of those possible programs. That isn't factoring in multiple interacting programs, either those executing in sequence on a single computer, or those executing in a network on multiple computers.
If it is claimed that humans have cognitive capabilities that cannot be performed by a machine, we can ask for a specification of these actions; but computers being universal machines, if we can specify those actions, we can then implement them on a computer. If human cognition is simply due to PONs, and a digital computer can emulate neural nets -- using a random-number generator to feed noise into the operation, if that is what's needed -- then there is absolutely no demonstrable reason a computer can't do what a human brain can do.
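The point about noise is easily illustrated with a minimal sketch of a "noisy neuron" in Python, with arbitrary weights and noise level, where a random-number generator stands in for whatever randomness biology provides:

    import random

    # A toy neuron: weighted sum of inputs plus random noise, fire above a threshold.
    def noisy_neuron(inputs, weights, threshold=1.0, noise=0.1):
        activation = sum(i * w for i, w in zip(inputs, weights))
        activation += random.gauss(0.0, noise)      # RNG stands in for biological noise
        return 1 if activation > threshold else 0   # fire or don't fire

    # Mostly fires, occasionally doesn't, depending on the noise.
    print([noisy_neuron([0.6, 0.9], [0.8, 0.7]) for _ in range(10)])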
To assert that there's more to the mind than the operation of PONs is to invoke some sort of Harvey, without being able to say anything specific about him, which is merely confused thinking. As Bob might say: "There are a lot of things going on that we don't know anything about, Alice!"
To which Alice would reply: "OK Bob, tell me about one thing you don't know anything about. You can't, can you?" End of story.
BACK_TO_TOP