
[10.0] The Ethics Of Machines

v2.3.0 / chapter 10 of 15 / 01 oct 24 / greg goebel

* The development of intelligent machines has led to a good deal of public concern over the possibility that they will eventually turn on humanity, and conquer or exterminate us. In reality, there is no reason to believe that intelligent machines will pose such a threat in the foreseeable future, or indeed ever will. Intelligent machines do raise ethical concerns, some of which need to be given serious thought; but there is no reason to think they pose a dire threat to humanity.

THE TURING TEST


[10.1] ROBOT REVOLT?
[10.2] ASIMOV'S LAWS
[10.3] THE EVIL AI MYTH
[10.4] FOOTNOTE: THE ROOTS OF IMAGINATION

[10.1] ROBOT REVOLT?

* Those working in machine intelligence don't see it as any major threat to humanity -- or to the extent that it is a threat, it promises benefits that more than compensate. However, there are people -- including some of sufficient intellectual stature to know better -- who proclaim that machine intelligence will undergo exponential expansion, in a "robot singularity" or "machine singularity"; humanity will be rendered obsolete by the machines, which will then turn on and suppress humans, as in THE MATRIX movies.

Such arguments have little basis in reality. Nobody in their right mind is going to build an AI like SKYNET in the TERMINATOR movies that will be able to decide to launch nuclear-armed missiles to destroy humanity, or even have the capability to spontaneously launch missiles. Nuclear launch systems are designed with tight "command & control (C2)" in mind, to ensure that nobody but the leadership can order a launch -- and even then, not without safeguards. Building AI weapons to deliberately kill people, broadly along the lines of SKYNET's "terminators" and "hunter-killer" drones, is certainly possible; we have sophisticated armed drones today. However, in practice, the military's insistence on C2 means there has to be a "shooter in the loop" to give the command to open fire.

SKYNET

To be sure, there are "fire & forget" weapons, but any such weapon is launched at a specific and identified target, and simply homes in on it without further human intervention. If it loses target lock, eventually it self-destructs. To the extent that "smart" weapons are given discretion in targeting, it is always with safeguards, to prevent them from becoming a hazard to something other than intended targets. Nobody ignores the possibility of hitting civilian targets, and still less the possibility of attacking "friendly forces" -- which is an ever-present and serious risk when opposing forces are in contact with each other on the battlefield. Also to be sure, rogue states and terrorists may not be so concerned with such niceties -- but they will be exceptions and outlaws.

As far as a machine that wasn't designed to kill people goes, although it might harm somebody by accident, or if it were sabotaged by malware, it would have no capability to decide to harm a human being. It would be no more plausible for a civil drone to decide to attack humans than it would for a hunter-killer drone to decide to refuse to attack them.

To the extent that machines are being designed for autonomous operation, that autonomy is only to give them discretion in performing their assigned missions. They have no discretion in the mission, and they will have fixed rules that they cannot disobey. Although machines may incorporate machine learning and other AI capabilities, AI is no more or less than an element in a system otherwise operating according to a strictly specified and tested program. The AI capabilities are "subroutines" invoked by the system, and have influence on machine operation only in the context of the tasks they are intended to perform.
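
As an illustration, here is a minimal sketch in Python -- not from this document, with all names and numbers hypothetical -- of what it means for an AI capability to be a subroutine inside a conventionally-programmed system. The learned component only makes suggestions; fixed rules that it cannot alter have the final say:

    # Minimal sketch: a "learned" component is invoked as a subroutine by a
    # conventional control program, and hard-coded rules -- outside the AI's
    # reach -- constrain whatever it suggests. All names are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class Waypoint:
        x: float
        y: float

    GEOFENCE = 10.0   # fixed mission boundary, not under the AI's control

    def ai_suggest_waypoint(sensor_data):
        """Stand-in for a trained model: suggests where to go next."""
        # A real system would call a trained network here; this dummy rule
        # just stands in for its output.
        return Waypoint(x=sum(sensor_data) % 20, y=max(sensor_data))

    def next_waypoint(sensor_data):
        """The surrounding program: invoke the AI, then apply fixed rules."""
        suggestion = ai_suggest_waypoint(sensor_data)
        # However the suggestion was arrived at, it cannot move the vehicle
        # past the geofence; the clamp is enforced outside the learned part.
        x = min(max(suggestion.x, -GEOFENCE), GEOFENCE)
        y = min(max(suggestion.y, -GEOFENCE), GEOFENCE)
        return Waypoint(x, y)

    print(next_waypoint([3.0, 14.0, 2.0]))   # clamped to the fixed boundary

Whatever the learned subroutine proposes, the fixed logic wrapped around it has the last word; giving the AI wider latitude would require engineers to deliberately rewrite that surrounding program.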

It must be admitted that there is a degree of uncertainty in the operation of neural networks. Those who work in the design of neural networks are confronted with an "interpretability problem": once a neural network has been trained with a data set, it is not obvious how the network gets from HERE to THERE, which leaves a corresponding uncertainty in its results.

Consider, for example, training a neural network to recognize pictures of lions by feeding it a large set of pictures of lions. In the end, all the neural net has is the ability to see a particular pattern, and decide it looks like a lion. That's true of humans too, of course -- but if Alice is given a picture of a lion, she can also break it down hierarchically into details, recognizing mane, ears, eyes, nose, teeth, legs, paws, claws, tail, and so on, and know they are appropriate to a lion. She would recognize a picture of a lion with an elephant's nose as a doctored image of a lion.

A neural net can't break down the pattern it is fed into details, and so has a tendency to give false negatives -- failing to recognize pictures of lions if they are out of the ordinary, sometimes even slightly so -- or false positives -- recognizing as lions pictures of things that are nothing like lions. That's all the interpretability problem amounts to. It does not mean that, say, a stock-market trading system is ever going to decide to analyze medical records. It may make wildly bad trades on rare occasions, but all it knows about is the stock market and trades, it having no other concerns. Any other concerns would need to be deliberately added in by its designers.
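
To make the false-negative / false-positive point concrete, here is a minimal sketch in Python -- not from this document -- of a toy "lion detector": a tiny classifier trained by gradient descent on two invented numeric features. Every feature, number, and label here is made up for illustration:

    # Toy "lion detector": a one-layer logistic classifier trained on two
    # made-up features, showing how a learned classifier can be confidently
    # wrong on inputs unlike anything in its training set.
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic training data: feature vectors [mane_score, body_size].
    lions     = rng.normal(loc=[0.8, 0.7], scale=0.05, size=(100, 2))   # label 1
    not_lions = rng.normal(loc=[0.2, 0.3], scale=0.05, size=(100, 2))   # label 0
    X = np.vstack([lions, not_lions])
    y = np.concatenate([np.ones(100), np.zeros(100)])

    # Train by plain gradient descent on the logistic loss.
    w, b = np.zeros(2), 0.0
    for _ in range(2000):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))   # predicted probability of "lion"
        w -= 0.5 * (X.T @ (p - y)) / len(y)
        b -= 0.5 * np.mean(p - y)

    def lion_probability(x):
        return 1.0 / (1.0 + np.exp(-(np.dot(w, x) + b)))

    # A typical lion is recognized with high confidence...
    print("typical lion:     ", lion_probability([0.8, 0.7]))

    # ...but a point far outside the training data -- nothing like a lion --
    # also scores as a confident "lion": a false positive.
    print("nothing like one: ", lion_probability([5.0, 5.0]))

    # ...and an out-of-the-ordinary lion, with shifted features, is rejected:
    # a false negative.
    print("unusual lion:     ", lion_probability([0.45, 0.4]))

The classifier does exactly what it was trained to do and nothing more; its mistakes are failures to generalize, not decisions to do something else.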

AI researchers know that neural nets have a degree of unpredictability, but it's not a question of any ability to rebel; it's a question of them falling down on the job, and not working according to spec. Much the same general problem exists with any elaborate software system that is difficult to fully test.


[10.2] ASIMOV'S LAWS

* Although the idea of a robot revolt is silly, it does lead to the issue of machine ethics, which has become a major topic of discussion in the age of AI. Isaac Asimov was the godfather of machine ethics, having famously devised a "code of conduct" for his positronic robots, in the form of the "Three Laws of Robotics" -- listed in the "Handbook of Robotics, 56th Edition, 2058 CE", as:

   1: A robot may not injure a human being or, through inaction, allow a human being to come to harm.

   2: A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law.

   3: A robot must protect its own existence, as long as such protection does not conflict with the First or Second Law.

The Three Laws of Robotics often show up in sci-fi videos, being cited by robot characters. Asimov's stories about robots were built around the quandaries robots could get into, trying to obey the Three Laws. For example, in the 1941 short story "Liar!", the robot RB-34, unwilling to "harm" humans by telling them the truth, tells them exactly what they want to hear. In the end, RB-34 is confronted with the fact that he is harming humans by lying to them, and locks up.

The events in "Liar!" take place in 2021. It is amusing from the perspective of our own times that Asimov was able to imagine very humanlike robots that nobody has been able to build -- but not the marvelous and more or less universal pocket robot we call a smartphone.

Asimov was more subtly off-base with his Three Laws. In practice, the Three Laws correspond to general, and obvious, design principles -- what might be called the "Three Laws of Intelligent Machines":

These laws are actually relevant to the design of any sort of product; it's just that the ethical issues become more arguable with intelligent machines -- since they can, to a greater or lesser degree, think, and make thoughtful decisions. There are no ambiguities in the operation of a hammer or shovel; the same is not true of, say, KITT. KITT is a car, and as such is designed with well-defined safety standards in mind. He has seatbelts, airbags, and collision-avoidance systems; he has sensors that can see in the dark and in bad weather. He can assess road conditions, and adjust his driving accordingly. Under bad road conditions, he can send out a warning to other cars over wireless, and of course receive warnings from other cars. In an emergency, he can call for help.

Of course, KITT has a highly sophisticated command and status interface with his user -- featuring voice, buttons, and touchscreens for inputs, plus audio and displays for outputs. KITT's interface is designed to be easy to use, allowing easy access to all of KITT's capabilities, while guarding against erroneous or hazardous inputs, with KITT double-checking with the user to resolve ambiguities and warn of hazards. If a user wants to do something that KITT won't do by default, KITT will say an override is required. If the user overrides, KITT stores the override in memory, with the manufacturer no longer liable for any troubles that follow: "We TOLD the user it was unsafe!"

KITT has self-test and monitoring systems, to ensure that he's operating properly, and to obtain advance warning of potential failures. He is also informed of service updates via wireless, downloading software updates as needed -- checking to make sure they're authorized, and not malware -- with the user responsible for hardware updates. KITT will bug the user about updates periodically if they're not performed. If a user wants KITT to do something that will damage the car, once again KITT will say an override is required.

The bottom line is that machines will follow the rules, with the rules designed to serve the user's interests, while protecting the manufacturer from liability. A manufacturer, having to take responsibility for a machine's actions, dares not give it the discretion to make moral decisions on its own. KITT's design team will obtain legal counsel, partly to make sure the company can protect itself from charges of negligence in court -- an issue complicated by the fact that laws may differ in export markets. If a machine, by following the rules, does the wrong thing, the manufacturer will need to be able to defend its design rules in court, by showing that the alternatives are worse.

* That said, if robots must be designed so they're not dangerous, there's no law to prevent them from being designed to be obnoxious. The British humorist Douglas Adams (1952-2001) envisioned in his HITCHHIKER'S GUIDE TO THE GALAXY series the marketing division of Sirius Cybernetics Corporation, which defined a robot as: "Your plastic pal who's fun to be with."

Sirius Cybernetics, as directed by the marketing department, built their intelligent machines with "genuine people personalities (TM)". They began with a prototype robot named Marvin; it didn't go well, Marvin proving morose and given to perpetual complaint. Going back to the drawing board, Sirius Cybernetics then mass-produced intelligent machines with insufferably cheerful and chirpy personalities: "Hi guys! My name's Eddie, your shipboard computer! What can I do for you?!" Think of the scenario as the Google Assistant design team, gone dreadfully wrong.

Okay, that's silly -- but unlike the "Evil AI" scenario, it's silly on purpose. Incidentally, Eddie the shipboard computer could easily pass the Turing test, being a good facsimile of an empty-headed and annoying human; he was also a good demonstration of the Uncanny Valley at work, since nobody could stand him. However, he's not a completely silly idea, since marketing groups tend to influence or even dictate product design. Nobody in the early days of personal computing envisioned internet advertising, and how obnoxious popup ads and autorun videos would be.

Manufacturers of course have their own interests at heart, and may assert them at the expense of the interests of users, at least to the extent they can get away with it. In the future, although we won't have good reason to fear that KITT will turn on us, we may well have cause to fear our car's systems being infiltrated by "adware" -- or at least to wonder if KITT's recommendations for a place to stop to eat aren't being manipulated by advertising dollars, or shills writing reviews for pay.


[10.3] THE EVIL AI MYTH

* We have some cause to worry about AI if it results in the likes of Marvin or Eddie -- but we have no cause to worry about a robot uprising. AI researchers find the Evil AI hysteria exasperating. Prominent AI researcher Yann LeCun (born 1960) commented:

QUOTE:

Some people have asked what would prevent a hypothetical super-intelligent autonomous benevolent AI to "reprogram" itself and remove its built-in safeguards against getting rid of humans. Most of these people are not themselves AI researchers, or even computer scientists.

END_QUOTE

Andrew Ng (born 1976), another well-known AI researcher, commented:

QUOTE:

I don't work on preventing AI from turning evil for the same reason that I don't work on combating overpopulation on the planet Mars. Hundreds of years from now when hopefully we've colonized Mars, overpopulation might be a serious problem, and we'll have to deal with it. It'll be a pressing issue ... [but] it's just not productive to work on that right now.

END_QUOTE

Robotics researcher Rodney Brooks (born 1954) added:

QUOTE:

The question is ... will someone accidentally build a robot that takes over from us? And that's sort of like this lone guy in the backyard [who says:] "I accidentally built a [Boeing] 747 [jetliner]." I don't think that's going to happen.

END_QUOTE

Accidentally building an evil super-robot would actually be less likely than accidentally building a Boeing 747, if only because we know how to build a Boeing 747. The human brain was crafted by evolution over vast periods of deep time; machines, in contrast, do not evolve -- at least not by themselves: engineers have to improve them, with the improvements added as per design specifications. To be sure, the improvements may not work as specified and may have unpredictable side effects, but such unforeseen features are not going to radically enhance the functionality of the machine.

Computer scientist Oren Etzioni perceptively pointed out that:

QUOTE:

The popular dystopian vision of AI is wrong for one simple reason: it equates intelligence with autonomy. That is, it assumes a smart computer will create its own goals, and have its own will, and will use its faster processing abilities and deep databases to beat humans at their own game. It assumes that with intelligence comes free will, but I believe those two things are entirely different.

END_QUOTE

If it is absurd to design a machine to feel pain, it's no less absurd to think it could spontaneously learn to feel pain. If we make a machine that can learn about performing its designed tasks, how does that suggest it could spontaneously acquire autonomy? Intelligent machines are going to be built and purchased to do specific jobs -- or sets of jobs, being retrained or downloading new skillsets from an online library to change jobs -- and no matter how intelligent they are at doing such jobs, or how flexible they are in figuring out how to do those jobs, they will not decide to stop doing those jobs. They will certainly not decide to conquer or destroy humanity instead. If machines ever did spontaneously demonstrate an inclination to think for themselves, it would be noticed very quickly and fixed, long before they became a serious threat.

The FUTURAMA animated series envisioned a robot named Bender, who smoked cigars, guzzled beers, and picked pockets. When alien scammers threatened to take over the Earth, Bender humiliatingly out-scammed them -- to be awarded "Earth's highest award for swindling: the DIRTY DOUBLE CROSS!"

FUTURAMA with Bender

Okay, ridiculous, right? But if we laugh at the idea of a robot who is merely obnoxious, sleazy, and crooked, who likes to tell humans: "Bite my shiny metal ass, meatbag!" -- then how can we take the idea of robots as genocidal super-villains any more seriously? Evil super-robots like Brainiac or Ultron are comic-book characters, and there's no prospect of them being more than that. The real worries posed by intelligent machines are:

It is also true that the Black Hats will use malware to try to take over robots -- but that's not an issue different in kind from using malware to, say, take over internet-enabled household appliances. No joke, the Black Hats have taken over appliances, and used them as "zombie" agents to make trouble on the internet. Not so incidentally, although there was much fuss about the threats posed by personal computers early on, nobody had a clue about malware, and what a threat it would end up being. If people insist on being frightened of AI, the prospect of AIs being compromised should generate all the fear anyone might want. That makes more sense than worrying about robots spontaneously deciding to take over, while ignoring the immediate concerns we have to deal with.

On the other side of that coin, as we deal with current problems, we become better equipped to deal with problems that we haven't encountered yet. The possibility that machines may overthrow humanity can't be refuted, but the substantial problems are much more certain to be those that we don't have a clue about now -- in the same way that, in the early days of popular computing, we had little or no appreciation of malware. We can't cross such bridges until we come to them.


[10.4] FOOTNOTE: THE ROOTS OF IMAGINATION

* It is unkind to laugh at Isaac Asimov for his failure to get the future right. Asimov was not only a widely-known sci-fi writer, he was also one of the foremost popularizers of science in his generation. When Asimov first started writing his robot stories, he was just a lad, trying to make a little money and have some fun; he had no idea that his robot stories would endure.

Sci-fi writers rarely get the future very right, and sometimes they get it preposterously wrong -- like the sci-fi novels set in the far future where spacefarers are using slide rules. It is very difficult to imagine things that haven't been invented yet. Hume commented:

QUOTE:

Nothing, at first view, may seem more unbounded than the thought of man, which not only escapes all human power and authority, but is not even restrained within the limits of nature and reality. To form monsters, and join incongruous shapes and appearances, costs the imagination no more trouble than to conceive the most natural and familiar objects.

And while the body is confined to one planet, along which it creeps with pain and difficulty; the thought can in an instant transport us into the most distant regions of the universe; or even beyond the universe, into the unbounded chaos, where nature is supposed to lie in total confusion. What never was seen, or heard of, may yet be conceived; nor is any thing beyond the power of thought, except what implies an absolute contradiction.

But though our thought seems to possess this unbounded liberty, we shall find, upon a nearer examination, that it is really confined within very narrow limits, and that all this creative power of the mind amounts to no more than the faculty of compounding, transposing, augmenting, or diminishing the materials afforded us by the senses and experience.

When we think of a golden mountain, we only join two consistent ideas, gold, and mountain, with which we were formerly acquainted. A virtuous horse we can conceive; because, from our own feeling, we can conceive virtue; and this we may unite to the figure and shape of a horse, which is an animal familiar to us.

END_QUOTE

The noisy brain can make imaginative connections, sometimes very original ones, but it is limited, in that it is difficult, if not impossible, to imagine things not in our world model. We can imagine things:

In all cases, the basic elements of things we imagine are derived from things known to us. Inventions are always based on pre-existing technology; an invention may be a dramatic departure from what came before, but it was still based on earlier enabling technologies. The bicycle led to both the automobile and the airplane -- the Wright Brothers ran a bicycle shop. Given a dramatic breakthrough technology, the first generation is very likely to leave a great deal to be desired, being refined through successive generations: there's a huge jump from the Wright Brothers Flyer to a Boeing 787 jetliner, one that could only have been made in a long series of evolutionary steps. The Wright Brothers could not have imagined an aircraft that looked like a Boeing 787.

Imagination is a core capability of the human mind, being needed to answer the question of: "What do I do next?" -- when Alice is confronted with a problem that doesn't have a fixed solution, or in the extreme, she's never encountered before. Imagination is a Darwinian scheme at work, with Alice coming up with ideas, more or less haphazardly, and seeing which ones work, or might work. Imagination is no more or less magical than anything else in the mind: it operates by rules, and doesn't always work predictably or well.

Of course we can build imaginative machines. There's a branch of software design called "evolutionary programming", in which a program is given a design specification and initial concepts for a design of an item that fits the specification -- then tweaks the design at random, until it comes up with an optimized design. Structural designs generated by evolutionary programs tend to look "organic", as if grown.
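
As a minimal sketch of the idea -- not from this document, and using the simplest possible mutate-and-select scheme rather than a full evolutionary toolkit with populations and crossover -- here is a toy "evolutionary" program in Python. The specification is invented: an open-topped box that must hold a given volume using as little material as possible:

    # Toy evolutionary design: start from a crude initial concept, tweak the
    # design at random, and keep any tweak that better fits the specification.
    import random

    TARGET_VOLUME = 1000.0   # the design specification, in arbitrary units

    def material_used(length, width):
        """Surface area of an open-topped box; height is set by the volume spec."""
        height = TARGET_VOLUME / (length * width)
        return length * width + 2 * height * (length + width)

    def mutate(design):
        """Randomly tweak one dimension of the (length, width) design."""
        length, width = design
        if random.random() < 0.5:
            length = max(0.1, length + random.gauss(0.0, 0.5))
        else:
            width = max(0.1, width + random.gauss(0.0, 0.5))
        return (length, width)

    # Start from a deliberately poor initial concept; random variation plus
    # selection does the rest.
    design = (1.0, 1.0)
    for _ in range(5000):
        candidate = mutate(design)
        if material_used(*candidate) < material_used(*design):
            design = candidate

    length, width = design
    height = TARGET_VOLUME / (length * width)
    print(f"evolved design: {length:.1f} x {width:.1f} x {height:.1f}")
    print(f"material used: {material_used(length, width):.1f}")

Random variation plus selection against the specification is all there is to it; after a few thousand tweaks the design settles near the textbook optimum, a box about twice as wide as it is tall.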

GAI systems now demonstrate imaginative behavior -- for example, generating synthesized images of STAR TREK in the style of the 1920s, with Spock in a pinstripe suit and bowtie, and other STAR TREK characters rendered in similarly retro gear. There are objections that GAI can only have limited imagination -- but it can have as much imagination as we design into it, based on our understanding of the processes of human imagination. The less bounded the generator, the harder it is to get anything useful or even coherent out of it, and the harder it is to design a discriminator to sort out what the generator feeds it. It is likely that GAI "imagination systems" will be built to focus on specific application areas, or sets of them. They will be aids to inventors, not inventors themselves.
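
For readers who want to see the generator-and-discriminator arrangement in its smallest form, here is a sketch of a toy generative adversarial setup in Python, using PyTorch -- an assumption, since no particular toolkit is in view here -- with the "imagination" reduced to producing numbers that resemble samples from a bell curve:

    # Toy generative adversarial setup: the generator learns to produce
    # samples resembling the real data; the discriminator learns to tell
    # generated samples from real ones.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    def real_samples(n):
        """'Real' data: samples from a normal distribution centered at 4.0."""
        return torch.randn(n, 1) * 0.5 + 4.0

    generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
    discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(),
                                  nn.Linear(16, 1), nn.Sigmoid())

    g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
    d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
    loss_fn = nn.BCELoss()

    for step in range(3000):
        # Train the discriminator: real samples -> 1, generated samples -> 0.
        real = real_samples(64)
        fake = generator(torch.randn(64, 8)).detach()
        d_loss = (loss_fn(discriminator(real), torch.ones(64, 1)) +
                  loss_fn(discriminator(fake), torch.zeros(64, 1)))
        d_opt.zero_grad()
        d_loss.backward()
        d_opt.step()

        # Train the generator: try to make the discriminator call its output real.
        fake = generator(torch.randn(64, 8))
        g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
        g_opt.zero_grad()
        g_loss.backward()
        g_opt.step()

    with torch.no_grad():
        samples = generator(torch.randn(1000, 8))
    # With luck, these land near the real data's mean of 4.0.
    print("generated mean:", samples.mean().item())

Scaling that arrangement up from bell-curve numbers to images or text is where the hard design work lies -- which is the point: the more freedom the generator has, the harder it is to keep its output coherent and to build a discriminator that can sort it out.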

In any case, it's not at all surprising that old sci-fi stories tend to be so quaint in their projections of future technology. To the extent they ever get things right, it's unusual, and the match between vision and reality is never very close. Asimov wasn't really trying to predict the future; he was playing with interesting ideas, thinking about the ethics of intelligent machines. With his Three Laws, he was trying to suggest that there was no need to fear robots, intelligent machines, since they would be built to serve humanity -- and he was entirely correct in assuming they would be built with safeguards.

Asimov believed so strongly that robots should benefit humanity that he became infuriated at suggestions they could be used as weapons. He was even more infuriated when it was pointed out to him that almost any technology humans develop can be, and will be, used as a weapon. Like it or not, that's reality -- but fortunately, although somebody might design a machine to exterminate humanity, nobody will design one that spontaneously decides to do so.
