
[9.0] What's It Like To Be A Machine?

v2.2.0 / chapter 9 of 15 / 01 feb 24 / greg goebel

* Robots have been around, in concept, for a long time, generally imagined as machine duplicates of humans. The difficulty with that idea, as it turns out, is that the more a machine attempts to ape a human, but still obviously remains a machine, the more uncomfortable people are with it -- a phenomenon known as the "uncanny valley". In fact, there is no sensible reason to build a machine that has humanlike autonomy, that determines its own agenda. We can expect to see machines that are ever more humanlike in the future; but only to the extent that it makes them better servants.

THE TURING TEST


[9.1] THE UNCANNY VALLEY
[9.2] AUTONOMOUS ROBOTS?
[9.3] REAL-WORLD ROBOTS

[9.1] THE UNCANNY VALLEY

* In showing there was no cause to believe machines couldn't think like humans do, Alan Turing -- having no reason to muddy his argument -- sidestepped the question of why we would even want a machine that could convincingly imitate a human.

Consider the humanlike robots of science fiction -- for example, Data the android on STAR TREK, played by actor Brent Spiner (born 1949), one of the best implementations of the concept. Data easily conversed and interacted with humans; had learning capabilities as good as or better than those of humans; and could exercise human-like judgement and autonomy in his actions. He was a competent officer of the Federation Starfleet, respected for his dispassionate objectivity, liked for his impersonal civility and insatiable curiosity.

Brent Spiner / Data

Nonetheless, although Data's crewmates regarded him as a thinking being -- by that measure, passing the Turing test -- none of them ever failed to realize that Data was a machine, not a human being. When he tried to be more humanlike -- for example, to exhibit a sense of humor -- it generally didn't go well, and he was advised that it wasn't a good idea. He was confronted with the "Uncanny Valley", a term coined in 1970 by Japanese roboticist Mori Masahiro (born 1927). What Mori realized was that people were generally comfortable with robots, but only as long as they were obviously robots. The more they seemed like humans, the more uncomfortable people would become.

Few have any problem with the STAR WARS robots C3PO and R2D2 -- but most find the robot Abraham Lincoln at Disneyland unconvincing, even creepy. The Google engineers working on Google Assistant were very aware of the Uncanny Valley, going to great lengths to make the Assistant come across as a cartoon character, never as a pretend human. Modern animated video features are often purely digital; they may feature human characters that are 3D-rendered, with full shading, or implemented to resemble traditional drawn characters, with no great concern for shading. The traditional characters often seem more convincing than the 3D-rendered ones, which can have the wooden, unconvincing appearance of puppets. They're very sophisticated puppets, but still obviously puppets.

There are speculations as to why this is so, for example a human fear of being replaced by robots. Possibly, but it could have more to do with "willing suspension of disbelief" -- which writers of fantasy and science fiction try to achieve with readers, attempting to persuade them to accept fictional scenarios that are unrealistic, even preposterous. The irony is that readers find it easier to accept something that is blatantly preposterous than something that pretends to represent reality, but fails to convincingly do so.

If we are dealing with an automated answering system, we are comfortable with it to the extent that it can communicate smoothly with us; but we are irritated if it pretends to be personal, since we know it's fakery. Tesco, a major British supermarket chain, once during the holiday season decided to have automated checkout systems announce to customers: "Ho! Ho! Ho! MERRY Christmas!" Customers found it obnoxious, and it was quickly discontinued.

Studies of people being shown a video of two animated characters engaged in a dialogue demonstrated that people found the video more difficult to accept when they were told the dialogue had been created by a machine, instead of by humans. Similarly, when people have conversations with an advanced chatbot that convinces them it's human, they are resentful when they find out it's not, feeling that they've been defrauded. We don't really have much use for a machine that can talk, except to the extent that we can get what we want from it. We want digital servants; there is no reason to build them to be anything but digital servants, and if they were to diverge from their mission, the typical reaction would be to complain to the manufacturer.

That is precisely why the Google team working on the Google Assistant personality used cartoon characters as models, as opposed to real-world humans. The cartoon characters were humanlike to the extent that people were able to relate to them, but not to the extent that they could be confused with real-world humans. The researchers did not try to cross the Uncanny Valley.

The reality is that it would not only be very hard to build a machine that could perfectly imitate a human -- which Turing clearly understood -- but we would have little use for it anyway -- an issue that Turing did not address. Being able to communicate and interact with humans is great, but the more a machine tries to pretend it's something it obviously isn't, the more people dislike it as a fake. In practice, as we approach the era of truly thinking machines, we simply want them to be humanlike enough to be "user-friendly" to deal with: able to understand instructions and data, give back clear and informative replies, and perform their jobs with little or no human intervention: "That's your job, go figure it out." And they will.


[9.2] AUTONOMOUS ROBOTS?

* Going much farther than that would be problematic. Data is a practical impossibility at the present time; not an impossibility in principle, not preposterous on the face of it, just not possible with technology available now or on the horizon. A computer that could duplicate the 86 billion neurons of the human brain would fill up a skyscraper, require a dedicated power plant to run it, and be preposterously expensive. To be sure, the human brain was designed by evolution, and we could be expected to come up with something much more efficient; if a machine brain were functionally comparable in its behavior to a human brain, the details of how it did the job would be of secondary interest.

However, if it doesn't make sense to slavishly copy the architecture of the human brain in a machine, then why should we try to slavishly copy human behavior either? Why should a machine mimic a human any more than needed to get its job done? To fully act like a human, it would need to have human feelings -- but why should a machine feel pleasure or pain? We could make it feel pain if we wanted to; we could design the machine so that if it suffered damage, it would sense a persistent need to attend to the damage, with the priority of that drive increasing with the damage level until the machine was desperate to get the damage repaired.
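
As a toy sketch of what such a mechanism might amount to in software -- everything here is hypothetical, nothing more than an illustration of the idea -- a damage level could simply scale the priority of a repair drive until it crowds out the machine's routine work:

BEGIN_CODE:

# Toy model of a "pain" drive: the priority of the repair task scales with
# accumulated damage, eventually crowding out all routine work.
# Purely hypothetical illustration; no real robot control stack is implied.

class Drive:
    def __init__(self, name, base_priority):
        self.name = name
        self.base_priority = base_priority

    def priority(self, damage_level):
        return self.base_priority

class RepairDrive(Drive):
    def priority(self, damage_level):
        # Priority grows with damage; at high damage it dominates everything else.
        return self.base_priority + 10.0 * damage_level

def select_task(drives, damage_level):
    # The machine "attends to" whichever drive currently has the highest priority.
    return max(drives, key=lambda d: d.priority(damage_level)).name

drives = [Drive("do_assigned_job", 5.0), Drive("recharge", 2.0),
          RepairDrive("seek_repair", 0.0)]

for damage in (0.0, 0.3, 0.8):
    print(damage, "->", select_task(drives, damage))
# 0.0 -> do_assigned_job ; 0.3 -> do_assigned_job ; 0.8 -> seek_repair

END_CODE

Whether we then call that "pain" or not, the machine's behavior is the same: a nagging drive to get the damage fixed, growing until nothing else matters.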

A dualist would of course ask: "But does it really feel pain?" It would certainly look like it -- but to ask such a question would be missing the point entirely, the point being that the scenario is absurd. It would be cruelty to want a machine to feel pain; it would then be absurd to say it was okay, because the machine didn't really feel pain. By the same token, there's no reason to make a robot feel hungry or thirsty -- much less annoyed, frustrated, embarrassed, or resentful. We could build a machine to feel, or arguably pretend to feel, anything we wanted it to feel, but why should a machine feel anything at all?

Revealingly, Brent Spiner said that when he played Data the android, he simply dismissed all emotion and followed the script, feeling nothing -- which is why his performance was so convincing, demonstrating a mastery of the reverse Turing test that invoked a willing suspension of disbelief. Any attempt to implant feelings into a machine would be judged a gag at best, and a fraud at worst.

* Most importantly, humans are autonomous, they decide what they want to do for themselves. Another way of putting this is that they have "free will", though that's a loaded term. There's been confused bickering over free will for centuries -- without good cause, Hume having pointed out long ago that there would be no controversy over free will if we had an agreed-on, clear, and coherent definition of what it meant.

As Hume also pointed out, we actually do: if free will is defined as devising and pursuing one's own agenda, there's no real ambiguity in it. It is perfectly true that, within the constraints of a situation, humans will do whatever they feel like doing, and sometimes they will do so in defiance of constraints, if not necessarily with good results. The law has no fundamental difficulties with the idea of free will, equating it to "volition" -- meaning voluntary as opposed to involuntary action, for determining whether a person is culpable for a crime: "Did you do this of your own volition? Or were you coerced? Were you in your right mind?"

Instead of getting caught up in the treacherous argument over free will, at least for the moment, it is best to stick to the term "autonomous" -- all the more so because, though animals are largely autonomous, by the legal definition they do not have free will, not being legally culpable for their actions. The only reason to bring up the subject here, to then immediately dodge it, is that free will has a way of popping up in discussions of the mind, if not with good result.

Anyway, why would we want an autonomous robot? Why would anyone build a highly capable and expensive machine that would do whatever it wanted to do? We would buy a robot to do a job, and it would do that job without any second thoughts in the matter -- or even the ability to have second thoughts. To the extent that we are building autonomous robots in the present day, say a Mars rover, they're only autonomous in figuring out how to carry out their mission; they don't, cannot, have any choice in their mission.

Data's creator, Dr. Noonien Soong, did build Data and several other androids out of intellectual curiosity, and a desire to create a family of sorts; however, even if such androids could be built, for reasons of cost and utility, the vast majority of robots would still be single-minded obedient servants. Why have a robot Starfleet officer? That's a command job for humans. A human Starfleet officer would simply leverage off of machine intelligence to do the job: it's not humans versus machines, it's humans and machines.

Data was really just an expensive toy, and if it's a toy we want, we'll buy a toy at a more reasonable price. In 1998, Sony introduced a robot dog, the "Aibo" -- from "AI roBOt", with "aibou" also being the Japanese word for "pal" -- which has been in and out of production since that time. An Aibo has a processor system; cameras, one in its nose and one near its tail; microphones; touch sensors, to allow it to respond to being petted; and wireless connectivity.

Sony Aibo

Aibo has some visual recognition ability, and can respond to sets of voice commands; it can vocalize, able to talk to a degree, as well as generate musical sounds. Aibo is highly mobile with doggy mannerisms, and has a personality, not being always quick to obey commands. It has learning capability, and its programming can be updated. Everyone, particularly small children, appears to like it, and be impressed by its cleverness.

Aibo is also not remotely in a league with Data. Aibos are becoming smarter over time, with much-enhanced communications and learning capabilities -- but there's no good cause to give Aibo a mind of its own. Aibo isn't perfectly obedient, but it'll always be a good dog, unless some prankster monkeys with its programming. It will pretend to relieve itself on the leg of a table or such, but we certainly wouldn't want it to actually do so. No matter how much more sophisticated Aibo becomes, it'll never be anything like Data the android; it will be a very smart, chatty, cartoonish version of a dog, always unquestionably a machine and not a real dog.

* In sum, it may be possible to build a simulated human like Data the android; but it would be very difficult and expensive to do, and it's not clear why we would feel the need to do so. One of the central directions in AI research is "general intelligence", which could be rendered as "making smarter machines". That leads to the question: "Smarter? In what way?" A number of tests have been proposed for a generally intelligent machine:

   - The "coffee test": go into an unfamiliar home and figure out how to make a cup of coffee there.

   - The "robot college student test": enroll in classes alongside human students, and pass them the same way a human would.

   - The "employment test": hold down an economically important job at least as well as a human in the same job.

The difficulty with these tests is that it is not entirely clear how useful these capabilities would be. Is making a cup of coffee something we would want a machine to do? If it were important, a startup company could design a self-contained coffee-making machine that, on pushing a button, would take all the steps needed to turn out a cup of coffee, and ring a bell when the coffee was poured and ready.
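
To make the contrast concrete, such a machine needs nothing resembling general intelligence -- just a fixed script triggered by one button. A minimal sketch, with the step names standing in for hypothetical hardware actions:

BEGIN_CODE:

# A single-purpose coffee maker as a fixed sequence of steps -- no general
# intelligence involved, just a scripted process triggered by one button.
# Step names are hypothetical placeholders for hardware actions.

import time

STEPS = ["heat_water", "grind_beans", "brew", "pour_into_cup", "ring_bell"]

def press_button():
    for step in STEPS:
        print("performing:", step)
        time.sleep(0.1)   # stand-in for the actual hardware action
    print("coffee is ready")

press_button()

END_CODE

No judgement, no learning, no understanding of what coffee is -- and none needed.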

This is not to suggest that generalized intelligence is a bad idea, only that it's tricky. Of course we want smarter machines, but to do what? What should they not do? It's hard to see any need for a robot that can attend and learn from classes; but we do have a need for an AI that understands the meaning of user queries, expressed informally -- and more ambitiously, can be handed a topic, then scour the internet and write a well-ordered summary document on it. Today, GAI chatbots can actually do that.

As far as the employment test goes, it's useful to consider the military's efforts to build drones that can engage in combat. Originally, the thought was that such drones could carry out combat missions on their own, but there were second thoughts about whether this was a good idea. It would clearly be useful to build drones that could carry out routine missions, such as reconnaissance or aerial refueling, but there was skepticism that they could engage in fast-moving, rapidly-changing combat autonomously. A human fighter pilot can make ghastly mistakes under such circumstances, attacking friendly forces or civilian targets by mistake. Would we dare to place as much or more trust in a machine?

The idea then emerged of building combat drones that would assist a fighter pilot, operating as "loyal wingmen" -- accompanying Major Harvey on a mission, talking to him and keeping him informed, to be assigned support tasks by him as needed. Obviously, we can keep on improving the drones, but they'll still be "loyal wingmen", under the command of Major Harvey.

The basic problem with general intelligence is that using the human mind as a literal model for a machine mind is somewhat like using a flapping-wing bird as a model for a jetliner. Yes, if we understand the basic aerodynamics of birds, much the same fundamental aerodynamics applies to an airliner, but we don't design airliners to flap their wings like birds.

Researchers have developed small flapping-wing drones, some of which can copy the agile flight of hummingbirds -- but even at that, the underlying implementation of a "hummingdrone" is entirely different from that of the hummingbird, the hummingbird only being copied to the superficial extent it needed to be. The hummingdrone is made out of metal and plastic, being painted to look like a hummingbird. We might add plastic feathers if there were some reason to look more authentic, but why bother?

hummingdrone

Similarly, basic principles of cognition apply to both humans and machines; but of necessity, they won't have the same kind of minds. The two are complementary; again, humans do well at things at which machines do poorly, while machines do well at things at which humans do poorly. We can clearly enhance the capabilities of machines -- but we don't really want to build a fake human like Data the android. We want to make machines more humanlike so they can interact more effectively with us and do their jobs better, but they'll never be much more than cartoon characters.


[9.3] REAL-WORLD ROBOTS

* What, then, will the robots of the future really be like? As a far more realistic example of future robot technology, imagine a robocar of the year 2050 -- call it "Knight Industries Thinking Technology (KITT)". KITT 2050 would be about as close to the classic idea of a robot as could be implemented in that timeframe, since it would be mobile; equipped with cameras, radar, lidar, sonar, and other sensors to perceive its world; capable of driving and navigating on its own; while able to interface with its passengers, when needed, in spoken language. Consider a report from Alice, its owner, as to what it would be like to ride in KITT:

BEGIN_QUOTE:

I was snoozing as KITT rolled down the highway in the dark -- but then KITT braked abruptly and sounded an alarm chime, waking me up. It was like 4:10 AM. I asked: "What's going on, KITT?"

KITT replied: "There was a large animal in the road. I braked, honked, and rapid-flashed my lights at it; it ran off the road. I was concerned there were other animals along with it, but I don't sense them."

"What was it?" Remembering that KITT didn't always get ambiguous statements, I repeated: "What kind of animal was it?"

"I believe it was an elk. It may have been a cow, but I didn't get a good look at it." That was surprising; KITT's sensors are more acute than my eyes, he certainly does vastly better in the dark than I do, and he's typically good at recognizing things. If he doesn't recognize something, he can ask for help on the wireless network.

"Everything else okay?"

"Yes, everything is fine now, I'm continuing on my way at cruise speed."

"How long until we reach Salt Lake City?"

"Five and a half hours, including a recharge stop at about 8:00 AM. I received an alert over wireless that emergency repairs are being done on the freeway ahead, so that adds a delay of about ten minutes to the original trip schedule."

"Okay. Anything else of concern?"

"No. I'm receiving an update for my software security system, but I won't be able to validate it until it's been fully downloaded. I won't do the update until after we arrive in Salt Lake City."

"Very well. I'm going to crash out for a while. Out here." KITT chimed off and continued down the road. Some people prefer their cars to be more verbose, but I tend to find that annoying -- cars with a joker personality overlay are particularly obnoxious, though I think that's the idea.

Myself, I don't see any reason for a machine to be any chattier with me than it needs to be. I chose a personality overlay with a male voice, formal and cool, with an inclination to communicate in chimes. The chime patterns are randomly variable, so they don't get too repetitive.

Which reminded me of the app I'd recently found for KITT's music system; it generates original music from rules and random factors -- making KITT, or at least his music system, a composer. I said: "KITT?" He chimed back at me, and I said: "Play a soft ambient music stream, please." The music started up, and I went back to sleep.

END_QUOTE

KITT is perfectly conversational -- if the conversation is effectively limited to driving and errands associated with the car's resources, like the music player -- and thinks about what he's doing -- meaning his job. Alice is an autonomous human, doing whatever she feels like doing, but all KITT cares about is driving. KITT is a machine who thinks, but he is never going to say, has no reason to say: "I think, therefore I am." Having said that, of course, it is certain the designers of KITT will have to include that in his programming as a digital "easter egg".
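
A crude sketch of what such a narrowly-scoped conversational interface boils down to -- the intents and responses here are hypothetical, and a real KITT would use far more capable language understanding, but the shape is the same: a handful of driving-domain requests, and a polite deflection for anything outside them:

BEGIN_CODE:

# Sketch of a narrowly-scoped conversational interface: a handful of
# driving-domain "intents", and a polite refusal for everything else.
# The intent matching here is hypothetical and deliberately crude.

def handle_request(utterance):
    text = utterance.lower()
    if "how long" in text or "when" in text:
        return "Five and a half hours to Salt Lake City, including a recharge stop."
    if "play" in text and "music" in text:
        return "Starting a soft ambient music stream."
    if "status" in text or "okay" in text:
        return "Everything is fine, continuing at cruise speed."
    # Anything outside the driving domain gets deflected, not improvised.
    return "I can only help with driving, navigation, and the car's systems."

print(handle_request("How long until we reach Salt Lake City?"))
print(handle_request("What do you think about free will?"))

END_CODE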

KITT

* KITT is right around the technological corner. In 2050 he won't be alone either, with a proliferation of intelligent systems. We can envision, as a straightforward extension of the virtual assistants of today, that every person will have a dedicated digital servant -- call her, maybe, the "GENeralized Intelligent Executive (GENIE)" -- from birth to death, built around an AI that manages all the data streams and intelligent systems associated with that person.

When Bob is an infant, GENIE will keep an eye on baby and maintain his health records; when Bob goes to school, GENIE will help teach him and maintain his academic records; in adulthood, GENIE will keep track of all the details of Bob's personal and professional lives, and perform any task Bob asks of her. Indeed, with GENIE, life logging will become a normal process, with GENIE storing, organizing, assessing, and providing intelligent access to every bit of data generated by or about Bob.

We could actually build GENIE right now, since the technology is available; it's just a question of a very big and ugly data-management problem. In its emergence, GENIE won't be any one thing as such, more like a flexible set of interacting systems, all with a common data stream -- linking together a household computing and control system, robocars, smartphones, appliances, a central record store for Bob, as well as access to internet resources. Given a common memory stream, all the different devices would simply be different faces of the same AI -- establishing a groupmind, some nodes of the system being stupid and others smart, the collective being very intelligent.
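
As a toy sketch of that groupmind notion -- the class and method names are hypothetical, and a real GENIE would be vastly more elaborate -- a handful of device nodes sharing one memory stream:

BEGIN_CODE:

# Minimal sketch of the "groupmind" idea: many devices, one shared memory
# stream. Each device is just a different face of the same record.
# All class and method names here are hypothetical.

import time

class SharedStream:
    def __init__(self):
        self.events = []          # the common memory stream

    def record(self, device, kind, payload):
        self.events.append({"t": time.time(), "device": device,
                            "kind": kind, "payload": payload})

    def recall(self, kind):
        return [e for e in self.events if e["kind"] == kind]

class DeviceNode:
    def __init__(self, name, stream):
        self.name, self.stream = name, stream

    def observe(self, kind, payload):
        self.stream.record(self.name, kind, payload)

stream = SharedStream()
car, phone, fridge = (DeviceNode(n, stream) for n in ("robocar", "phone", "fridge"))

car.observe("trip", "arrived Salt Lake City")
fridge.observe("groceries", "out of milk")
phone.observe("reminder", "call the dentist")

# Any node -- smart or stupid -- can draw on the collective record.
print(stream.recall("groceries"))

END_CODE

Any node can append to the record and draw on it; the intelligence lives in the collective stream, not in any one gadget.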

GENIE

GENIE will be an obedient and reliable servant, focused on doing what she is told to do with a minimum of user intervention. She will not be an autonomous agent like Data the android; she will have learning capabilities, but only related to the jobs she does. She will be able to acquire added functionality through downloads of software modules to help her do those jobs. When Bob buys any new intelligent product, GENIE integrates it into her system, incorporating a "driver" module for that product if necessary.

GENIE will have a more flexible understanding than KITT, but nonetheless will only understand what she "needs to know" to do her job -- that is, to perform queries, and handle all the errands with which she is tasked. However, since GENIE is an extensible groupmind, her understanding expands as new elements are added to the system, and she also has open-ended access to internet resources.

Some may find GENIE frightening, and not without good reason. Do we really want an AI that is like a personal god, tracking everything we do, knowing all about our movements, recording the grades of every school examination and every financial transaction? Were a Black Hat to break into GENIE, he would have everything on Bob. Worse, an authoritarian state would be able to use GENIE as an untiring, all-seeing spy to keep tabs on Bob.

The difficulty is that all that data is simply floating around more or less loose in cyberspace right now. Bob may not only have no idea who is making use of his data, but may not even know what data on him exists. We live in a data-driven world, and it is only going to become more so. It is in our personal interests to have it under our own control.

Along with managing our personal information, GENIE will necessarily be vigilant in data security, double-checking downloads to make sure they came from authorized sources, validating them to make sure they haven't been altered, and monitoring her domain to spot unauthorized access. Not so incidentally, Bob could be entitled to modest compensation for authorizing access to his data, possibly obtaining "rewards" points for purchases, with GENIE negotiating the transactions and maintaining the account.
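
A minimal sketch of that kind of download vetting, assuming nothing fancier than a source allowlist and a SHA-256 digest check -- a real GENIE would of course rely on proper cryptographic signatures, and all the names and data here are made up:

BEGIN_CODE:

# Minimal sketch of download vetting: check the source against an allowlist
# and verify a SHA-256 digest against a trusted manifest. A real system would
# use proper cryptographic signatures; names and data here are hypothetical.

import hashlib

AUTHORIZED_SOURCES = {"updates.example.com"}
TRUSTED_DIGESTS = {
    "security_update_v42":
        hashlib.sha256(b"...update payload...").hexdigest(),
}

def accept_download(name, source, payload):
    if source not in AUTHORIZED_SOURCES:
        return False                                  # unknown source, reject
    digest = hashlib.sha256(payload).hexdigest()
    return digest == TRUSTED_DIGESTS.get(name)        # reject if altered

print(accept_download("security_update_v42", "updates.example.com",
                      b"...update payload..."))       # True: source and digest check out
print(accept_download("security_update_v42", "updates.example.com",
                      b"tampered payload"))           # False: digest mismatch

END_CODE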

To prevent abuse, laws will need to be implemented to define rights to privacy and rights to access, with GENIE ensuring that no illegal access is performed -- and if a legal access is performed, making sure Bob knows about it. Bob will have to be able to trust GENIE, otherwise her usefulness to him would be compromised, and he would then work against her.

Given KITT and GENIE, the question comes back again: Do they think? Do they have minds? For all their limitations, Alice and Bob don't question that KITT and GENIE have minds. Nobody could show they don't: they have electronic brains, with a runtime; they perceive and react, they remember, they learn -- they think. They're just not human minds; they're only focused on the jobs they are supposed to do.

Few doubt that a dog has a mind, it's just one evolutionarily optimized for, well, being a dog. KITT and GENIE are, in general, far smarter than any dog. Do they really know what they're doing? As much as they "need to know", yes. Given such machines, which are perfectly within our capabilities, nobody with sense would see any good reason to argue they aren't thinking machines -- though as superficial in their resemblance to thinking humans as a hummingdrone is to a hummingbird.

* In any case, what we can see in the future of machine intelligence is much more GENIE than Data the android, who amounts to little more than an expensive toy of questionable utility. Certainly, we can't rule out such advanced robots in the future, or rule out the possibility that they might achieve literally superhuman intellectual capabilities; but they're far over the horizon, and there's no reason to assume they're inevitable or even likely. The current push in machine intelligence is to build AI systems to do specific jobs, these systems being designed on a "need to know" basis -- they won't be any smarter than they need to be to do the job. Why add to the cost?

KITT is mature technology of the future: he knows how to drive skillfully and safely; he might be improved upon in incremental ways, or built more cheaply, but subsequent generations of KITT technology won't be fundamentally different in capability. He's a vastly better driver than a human, always following the rules, always alert, never tiring, with sensors capable of penetrating night and murk. However, all he basically knows about is driving. It's all he needs to know about. GENIE operates in a wider domain, not as well defined, but still restricted to the tasks she has been assigned.

There is no particular reason to think that AI will undergo an indefinite growth in capability. Technologies have learning curves, typically growing rapidly in capability early on, then maturing and encountering diminishing returns, with progress slowing to minor refinements. Consider, say, pocket calculators; although nobody would think of them as mindful, by human standards they are "super-intelligent" -- able to perform calculations far beyond the ability of any human. However, they are a mature technology, there haven't been any real innovations in them in years -- and in fact, popular cheap scientific calculators today are usually just tweaked versions of those available ten years ago.

In much the same way, personal computers underwent tremendous advances in their first decades, but now they're in diminishing returns, and next year's PC isn't that much more refined than last year's. Looking back to the past, from the 1930s to the 1970s, aircraft underwent an improvement in performance by an order of magnitude, but from the 1970s, there has been effectively no improvement in raw performance.

Could we build a rocketliner that could take passengers from New York City to Sydney in less than an hour, as a regularly-scheduled service? Technically speaking, yes, but it would be a practical and economic absurdity; it might be fun for those who like to ride roller-coasters or such, but others would find ballistic flight unpleasant at best, and terrifying at worst. It is difficult to see how the cost of a ticket could be reduced below millions of dollars. The contemporary leading edge in air transport design is drones and energy-efficient electric / hybrid aircraft for air taxi services -- technologically sophisticated of course, but nothing radical in terms of airframe design or performance.

From the 1940s, Isaac Asimov wrote enduring sci-fi stories about sophisticated humanlike robots; had he been brought forward in time from then to the present, he would no doubt be surprised and disappointed to find that we not only do not have such robots running around, but that nobody has so far built one that is at all in a league with his "positronic" robots.

From the 1980s, the Honda Company of Japan developed a series of humaniform robots, culminating in the "Advanced Step in Innovative Mobility (ASIMO)" robot. It was a technological marvel, but expensive, and required substantial effort to get it to perform routine actions. Projections that robots like ASIMO would be commonplace by the present day never panned out; Honda eventually ended the ASIMO program, focusing on more specialized robots that were easier to implement and were more suitable for real-world jobs.

Mobile robots are increasingly in use, but mostly in the form of helidrones, used for surveillance and inspection. They can perform such jobs effectively at reasonable cost. We have every reasonable expectation that KITT and GENIE will be in common use by 2050, but no good cause to believe that anything convincingly close to Asimovian positronic robots will be as well.

Even to the extent that AI systems will get the jump on humans, is there good reason to worry about it? Consider, as an example, the "AlphaGo" system created by the DeepMind AI company of London to play the Asian game of Go. Go is a conceptually simple game, consisting of a 19 x 19 grid in which players alternate placing white or black stones on the intersections according to a handful of rules, the winner being the player that controls the most territory at the end of the game. It is, however, more open-ended in its play options than chess, and a tougher goal for AI. A chess-playing machine was able to beat the human chess world champion in 1997; it wasn't until 2016 that AlphaGo beat the human Go world champion.

AlphaGo was initially trained on examples of Go games, and then played millions of games with itself to refine its skills. It was followed by a refined version, "AlphaGo Zero", which didn't use examples, instead playing games against itself as self-training. It started out simply placing stones at random, then worked itself up, win by win, to more intelligent play. Within a few days of cycling upward through millions of games, it had reached an expertise level never approached by a human player.
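
To illustrate the self-play idea at a toy scale -- this is nowhere near AlphaGo Zero, which coupled a deep neural network to a tree search, but the principle of bootstrapping from random play is the same -- here is a tiny take-away game learned purely from games against itself:

BEGIN_CODE:

# Toy illustration of learning purely from self-play, in the spirit of -- but
# nowhere near the scale or sophistication of -- AlphaGo Zero: no neural
# network, no tree search, just a value table for a tiny take-away game
# ("take 1-3 sticks; whoever takes the last stick wins").

import random

values = {}   # (sticks_remaining, move) -> learned value; empty table means near-random play

def pick_move(sticks, explore=0.1):
    moves = [m for m in (1, 2, 3) if m <= sticks]
    if random.random() < explore:
        return random.choice(moves)                    # occasional exploration
    return max(moves, key=lambda m: values.get((sticks, m), 0.0))

def self_play_game():
    sticks, player, history = 21, 0, {0: [], 1: []}
    while sticks > 0:
        move = pick_move(sticks)
        history[player].append((sticks, move))
        sticks -= move
        if sticks == 0:
            winner = player                            # took the last stick
        player = 1 - player
    return winner, history

for _ in range(20000):                                 # millions of games in the real thing
    winner, history = self_play_game()
    for player in (0, 1):
        reward = 1.0 if player == winner else -1.0
        for key in history[player]:
            old = values.get(key, 0.0)
            values[key] = old + 0.05 * (reward - old)  # nudge toward the game outcome

# With enough games the learner tends to rediscover the standard rule of thumb
# (leave the opponent a multiple of four sticks) without ever seeing a human game.
print(pick_move(6, explore=0.0))   # usually 2, leaving four sticks

END_CODE

Scale the same loop up by many orders of magnitude, swap the value table for a neural network and the greedy lookup for a tree search, and the outline of AlphaGo Zero emerges.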

A discouraging revelation to Go players? Possibly, but what cause was there for despair that a machine could play Go better than any human? One might as well get upset over the fact that a fork lift can pick up and carry heavy objects that a human weightlifter can't budge. Indeed, professional Go players found being defeated by AlphaGo Zero instructive: AlphaGo Zero, not being trained by examples from human play, came up with strategy and tactics that were not known to Go tradition, and they provided inspiration to human players.
