With the death of Isaac Asimov on April 6, 1992, the world lost a prodigious imagination. Unlike fiction writers before him, who regarded robotics as something to be feared, Asimov saw a promising technological innovation to be exploited and managed. Indeed, Asimov's stories are experiments with the enormous potential of information technology.

This article examines Asimov's stories not as literature but as a gedankenexperiment - an exercise in thinking through the ramifications of a design. Asimov's intent was to devise a set of rules that would provide reliable control over semi-autonomous machines. My goal is to determine whether such an achievement is likely or even possible in the real world. In the process, I focus on practical, legal, and ethical matters that may have short- or medium-term implications for practicing information technologists.

Part 1, in this issue, reviews the origins of the robot notion and explains the laws for controlling robotic behaviour, as espoused by Asimov in 1940 and presented and refined in his writings over the following 45 years. Next month, Part 2 examines the implications of Asimov's fiction not only for real roboticists but also for information technologists in general.

Origins of robotics

Robotics, a branch of engineering, is also a popular source of inspiration in science fiction literature; indeed, the term originated in that field. Many authors have written about robot behaviour and their interaction with humans, but in this company Isaac Asimov stands supreme. He entered the field early, and from 1940 to 1990 he dominated it. Most subsequent science fiction literature expressly or implicitly recognizes his Laws of Robotics.

Asimov described how, at the age of 20, he came to write robot stories: "In the 1920's science fiction was becoming a popular art form for the first time ... and one of the stock plots ... was that of the invention of a robot ... Under the influence of the well-known deeds and ultimate fate of Frankenstein and Rossum, there seemed only one change to be rung on this plot - robots were created and destroyed their creator ... I quickly grew tired of this dull hundred-times-told tale ... Knowledge has its dangers, yes, but is the response to be a retreat from knowledge? ... I began in 1940, to write robot stories of my own - but robot stories of a new variety ... My robots were machines designed by engineers, not pseudo-men created by blasphemers."1,2

Asimov was not the first to conceive of well-engineered, non-threatening robots, but he pursued the theme with such enormous imagination and persistence that most of the ideas that have emerged in this branch of science fiction are identifiable with his stories.

To cope with the potential for robots to harm people, Asimov, in 1940, in conjunction with science fiction author and editor John W. Campbell, formulated the Laws of Robotics.3,4 He subjected all of his fictional robots to these laws by having them incorporated within the architecture of their (fictional) "platinum-iridium positronic brains". The laws (see below) first appeared publicly in his fourth robot short story, "Runaround".5

The 1940 Laws of Robotics

First Law: A robot may not injure a human being, or, through inaction, allow a human being to come to harm.

Second Law: A robot must obey orders given it by human beings, except where such orders would conflict with the First Law.

Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
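To make the strict ordering of the laws concrete, here is a minimal illustrative sketch in Python. It is entirely hypothetical - Asimov specifies no mechanism beyond the fictional positronic brain, and every name below is invented for illustration - but it shows the laws applied as filters in priority order to a candidate action:

from dataclasses import dataclass

@dataclass
class Action:
    """A hypothetical candidate action a robot might take."""
    description: str
    injures_human: bool = False       # would carrying it out harm a human?
    allows_human_harm: bool = False   # would it let a human come to harm through inaction?
    ordered_by_human: bool = False    # was it commanded by a human?
    endangers_self: bool = False      # would it damage or destroy the robot?

def permitted(action: Action) -> bool:
    """Apply the three laws as a strict priority hierarchy.

    The First Law dominates everything; the Second Law yields only to the
    First; the Third Law yields to both. The hard part, as discussed below,
    is deciding what counts as 'injury' or 'harm' in the first place.
    """
    # First Law: no injury to humans, by action or by inaction.
    if action.injures_human or action.allows_human_harm:
        return False
    # Second Law: obey human orders (already known not to violate the First Law).
    if action.ordered_by_human:
        return True
    # Third Law: otherwise, protect own existence.
    return not action.endangers_self

# Example: an order to self-destruct is obeyed, because the Second Law
# outranks the Third - unless obeying would somehow harm a human.
print(permitted(Action("dismantle yourself", ordered_by_human=True, endangers_self=True)))  # True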
The laws quickly attracted - and have since retained - the attention of readers and other science fiction writers. Only two years later, another established writer, Lester Del Rey, referred to "the mandatory form that would force built-in unquestioning obedience from the robot".6 As Asimov later wrote (with his characteristic clarity and lack of modesty), "Many writers of robot stories, without actually quoting the three laws, take them for granted, and expect the readers to do the same".

Asimov's fiction even influenced the origins of robotic engineering. "Engelberger, who built the first industrial robot, called Unimate, in 1958, attributes his long-standing fascination with robots to his reading of [Asimov's] 'I, Robot' when he was a teenager", and Engelberger later invited Asimov to write the foreword to his robotics manual.

The laws are simple and straightforward, and they embrace "the essential guiding principles of a good many of the world's ethical systems".7 They also appear to ensure the continued dominion of humans over robots, and to preclude the use of robots for evil purposes. In practice, however - meaning in Asimov's numerous and highly imaginative stories - a variety of difficulties arise.

My purpose here is to determine whether or not Asimov's fiction vindicates the laws he expounded. Does he successfully demonstrate that robotic technology can be applied in a responsible manner to potentially powerful, semi-autonomous and, in some sense, intelligent machines? To reach a conclusion, we must examine many issues emerging from Asimov's fiction.

History

The robot notion derives from two strands of thought: humanoids and automata. The notion of a humanoid (or human-like nonhuman) dates back to Pandora in The Iliad, 2,500 years ago and even further. Egyptian, Babylonian, and ultimately Sumerian legends fully 5,000 years old reflect the widespread image of the creation, with god-men breathing life into clay models. One variation on the theme is the idea of the golem, associated with the Prague ghetto of the sixteenth century. This clay model, when breathed into life, became a useful but destructive ally.

The golem was an important precursor to Mary Shelley's Frankenstein: The Modern Prometheus (1818). This story combined the notion of the humanoid with the dangers of science (as suggested by the myth of Prometheus, who stole fire from the gods to give it to mortals). In addition to establishing a literary tradition and the genre of horror stories, Frankenstein also imbued humanoids with an aura of ill fate.

Automata, the second strand of thought, are literally "self-moving things" and have long interested mankind. Early models depended on levers and wheels, or on hydraulics. Clockwork technology enabled significant advances after the thirteenth century, and later steam and electro-mechanics were also applied. The primary purpose of automata was entertainment rather than employment as useful artifacts. Although many patterns were used, the human form always excited the greatest fascination. During the twentieth century, several new technologies moved automata into the utilitarian realm. Geduld and Gottesman8 and Frude2 review the chronology of clay model, water clock, golem, homunculus, android, and cyborg that culminated in the contemporary concept of the robot.

The term robot derives from the Czech word robota, meaning forced work or compulsory service, or robotnik, meaning serf. It was first used by the Czech playwright Karel Čapek in 1918 in a short story and again in his 1921 play R.U.R., which stood for Rossum's Universal Robots. Rossum, a fictional Englishman, used biological methods to invent and mass-produce "men" to serve humans. Eventually they rebelled, became the dominant race, and wiped out humanity. The play was soon well known in English-speaking countries.

Definition

Undeterred by its somewhat chilling origins (or perhaps ignorant of them), technologists of the 1950s appropriated the term robot to refer to machines controlled by programs. A robot is "a reprogrammable multifunctional device designed to manipulate and/or transport material through variable programmed motions for the performance of a variety of tasks".9 The term robotics, which Asimov claims he coined in 1942,10 refers to "a science or art involving both artificial intelligence (to reason) and mechanical engineering (to perform physical acts suggested by reason)".11

As currently defined, robots exhibit three key elements:

  • programmability, implying computational or symbol-manipulative capabilities that a designer can combine as desired (a robot is a computer);
  • mechanical capability, enabling it to act on its environment rather than merely function as a data processing or computational device (a robot is a machine); and
  • flexibility, in that it can operate using a range of programs and manipulate and transport materials in a variety of ways.
We can conceive of a robot, therefore, as either a computer-enhanced machine or as a computer with sophisticated input/output devices. Its computing capabilities enable it to use its motor devices to respond to external stimuli, which it detects with its sensory devices. The responses are more complex than would be possible using mechanical, electromechanical, and/or electronic components alone.

With the merging of computers, telecommunications networks, robotics, and distributed systems software, and the multiorganizational application of the hybrid technology, the distinction between computers and robots may become increasingly arbitrary. In some cases it would be more convenient to conceive of a principal intelligence with dispersed sensors and effectors, each with subsidiary intelligence (a robotics-enhanced computer system). In others, it would be more realistic to think in terms of multiple devices, each with appropriate sensory, processing, and motor capabilities, all subjected to some form of coordination (an integrated multi-robot system). The key difference robotics brings is the complexity and persistence that artifact behaviour achieves, independent of human involvement.

Many industrial robots resemble humans in some ways. In science fiction, the tendency has been even more pronounced, and readers encounter humanoid robots, humaniform robots, and androids. In fiction, as in life, it appears that a robot needs to exhibit only a few human-like characteristics to be treated as if it were human. For example, the relationships between humans and robots in many of Asimov's stories seem almost intimate, and audiences worldwide reacted warmly to the "personality" of the computer HAL in 2001: A Space Odyssey, and to the gibbering rubbish-bin R2-D2 in the Star Wars series.

The tendency to conceive of robots in humankind's own image may gradually yield to utilitarian considerations, since artifacts can be readily designed to transcend humans' puny sensory and motor capabilities. Frequently the disadvantages and risks involved in incorporating sensory, processing, and motor apparatus within a single housing clearly outweigh the advantages. Many robots will therefore be anything but humanoid in form. They may increasingly comprise powerful processing capabilities and associated memories in a safe and stable location, communicating with one or more sensory and motor devices (supported by limited computing capabilities and memory) at or near the location(s) where the robot performs its functions. Science fiction literature describes such architectures.12,13

Impact

Robotics offers benefits such as high reliability, accuracy, and speed of operation. Low long-term costs of computerized machines may result in significantly higher productivity, particularly in work involving variability within a general pattern. Humans can be relieved of mundane work and exposure to dangerous workplaces. Their capabilities can be extended into hostile environments involving high pressure (deep water), low pressure (space), high temperatures (furnaces), low temperatures (ice caps and cryogenics), and high-radiation areas (near nuclear materials or occurring naturally in space).

On the other hand, deleterious consequences are possible. Robots might directly or indirectly harm humans or their property; or the damage may be economic or incorporeal (for example, to a person's reputation). The harm could be accidental or result from human instructions.
Indirect harm may occur to workers, since the application of robots generally results in job redefinition and sometimes in outright job displacement. Moreover, the replacement of humans by machines may undermine the self-respect of those affected, and perhaps of people generally.

During the 1980s, the scope of information technology applications and their impact on people increased dramatically. Control systems for chemical processes and air conditioning are examples of systems that already act directly and powerfully on their environments. And consider computer-integrated manufacturing, just-in-time logistics, and automated warehousing systems. Even data processing systems have become integrated into organizations' operations and constrain the ability of operations-level staff to query a machine's decisions and conclusions. In short, many modern computer systems are arguably robotic in nature already; their impact must be managed - now.

Asimov's original laws (see above) provide that robots are to be slaves to humans (the second law). However, this role is overridden by the higher-order first law, which precludes robots from injuring a human, either by their own autonomous action or by following a human's instructions. This precludes their continuing with a programmed activity when doing so would result in human injury. It also prevents their being used as a tool or accomplice in battery, murder, self-mutilation, or suicide.

The third and lowest-level law creates a robotic survival instinct. This ensures that, in the absence of conflict with a higher-order law, a robot will
  • seek to avoid its own destruction through natural causes or accident;
  • defend itself against attack by another robot or robots; and
  • defend itself against attack by any human or humans.
Being neither omniscient nor omnipotent, it may of course fail in its endeavors. Moreover, the first law ensures that the robotic survival instinct fails if self-defense would necessarily involve injury to any human. For robots to successfully defend themselves against humans, they would have to be provided with sufficient speed and dexterity so as not to impose injurious force on a human.

Under the second law, a robot appears to be required to comply with a human order to (1) not resist being destroyed or dismantled, (2) cause itself to be destroyed, or (3) (within the limits of paradox) dismantle itself.1,2 In various stories, Asimov notes that the order to self-destruct does not have to be obeyed if obedience would result in harm to a human. In addition, a robot would generally not be precluded from seeking clarification of the order. In his last full-length novel, Asimov appears to go further by envisaging that court procedures would be generally necessary before a robot could be destroyed: "I believe you should be dismantled without delay. The case is too dangerous to await the slow majesty of the law. ... If there are legal repercussions hereafter, I shall deal with them."14

Such apparent inconsistencies attest to the laws' primary role as a literary device intended to support a series of stories about robot behavior. In this, they were very successful: "There was just enough ambiguity in the Three Laws to provide the conflicts and uncertainties required for new stories, and, to my great relief, it seemed always to be possible to think up a new angle out of the 61 words of the Three Laws."1

As Frude says, "The Laws have an interesting status. They ... may easily be broken, just as the laws of a country may be transgressed. But Asimov's provision for building a representation of the Laws into the positronic-brain circuitry ensures that robots are physically prevented from contravening them."2 Because the laws are intrinsic to the machine's design, it should "never even enter into a robot's mind" to break them.

Subjecting the laws to analysis may seem unfair to Asimov. However, they have attained such currency not only among sci-fi fans but also among practicing roboticists and software developers that they influence, if only subconsciously, the course of robotics.

Asimov's experiments with the 1940 laws

Asimov's early stories are examined here not in chronological sequence or on the basis of literary devices, but by looking at clusters of related ideas.

*The ambiguity and cultural dependence of terms

Any set of "machine values" provides enormous scope for linguistic ambiguity. A robot must be able to distinguish robots from humans. It must be able to recognize an order and distinguish it from a casual request. It must "understand" the concept of its own existence, a capability that arguably has eluded mankind, although it may be simpler for robots. In one short story, for example, the vagueness of the word firmly in the order "Pull [the bar] towards you firmly" jeopardizes a vital hyperspace experiment. Because robot strength is much greater than that of humans, it pulls the bar more powerfully than the human had intended, bends it, and thereby ruins the control mechanism.15

Defining injury and harm is particularly problematic, as are the distinctions between death, mortal danger, and injury or harm that is not life-threatening. Beyond this there is psychological harm.
Any robot given, or developing, an awareness of human feelings would have to evaluate injury and harm in psychological as well as physical terms: "The insurmountable First Law of Robotics states: 'A robot may not injure a human being...' and to repel a friendly gesture would do injury"16 (emphasis added). Asimov investigated this in an early short story and later in a novel: a mind-reading robot interprets the first law as requiring him to give people not the correct answers to their questions but the answers that he knows they want to hear.14,16,17

Another critical question is how a robot is to interpret the term human. A robot could be given any number of subtly different descriptions of a human being, based for example on skin color, height range, and/or voice characteristics such as accent. It is therefore possible for robot behaviour to be manipulated: "the Laws, even the First Law, might not be absolute then, but might be whatever those who design robots define them to be".14 Faced with this difficulty, the robots in this story conclude that "... if different robots are subject to narrow definitions of one sort or another, there can only be measureless destruction. We define human beings as all members of the species, Homo sapiens."14

In an early story, Asimov has a humanoid robot represent itself as a human and stand for public office. It must prevent the public from realizing that it is a robot, since public reaction would not only result in its losing the election but also in tighter constraints on other robots. A political opponent, seeking to expose the robot, discovers that it is impossible to prove it is a robot solely on the basis of its behavior, because the Laws of Robotics force any robot to perform in essentially the same manner as a good human being.7

In a later novel, a roboticist says, "If a robot is human enough, he would be accepted as a human. Do you demand proof that I am a robot? The fact that I seem human is enough".16 In another scene, a humaniform robot is sufficiently similar to a human to confuse a normal robot and slow down its reaction time.14 Ultimately, two advanced robots recognize each other as "human", at least for the purposes of the laws.14,18

Defining human beings becomes more difficult with the emergence of cyborgs, which may be seen as either machine-enhanced humans or biologically enhanced machines. When a human is augmented by prostheses (artificial limbs, heart pacemakers, renal dialysis machines, artificial lungs, and someday perhaps many other devices), does the notion of a human gradually blur with that of a robot? And does a robot that attains increasingly human characteristics (for example, a knowledge-based system provided with the "know-that" and "know-how" of a human expert and the ability to learn more about a domain) gradually become confused with a human? How would a robot interpret the first and second laws once the Turing test criteria can be routinely satisfied? The key outcome of the most important of Asimov's robot novellas12 is the tenability of the argument that the prosthetization of humans leads inevitably to the humanization of robots.

The cultural dependence of meaning reflects human differences in such matters as religion, nationality, and social status. As robots become more capable, however, cultural differences between humans and robots might also be a factor. For example, in one story19 a human suggests that some laws may be bad and their enforcement unjust, but the robot replies that an unjust law is a contradiction in terms.
When the human refers to something higher than justice, for example, mercy and forgiveness, the robot merely responds, "I am not acquainted with those words".

*The role of judgment in decision making

The assumption that there is a literal meaning for any given series of signals is currently considered naive. Typically, the meaning of a term is seen to depend not only on the context in which it was originally expressed but also on the context in which it is read (see, for example, Winograd and Flores20). If this is so, then robots must exercise judgment to interpret the meanings of words and hence of orders and of new data.

A robot must even determine whether and to what extent the laws apply to a particular situation. Often in the robot stories a robot action of any kind is impossible without some degree of risk to a human. To be at all useful to its human masters, a robot must therefore be able to judge how much the laws can be breached to maintain a tolerable level of risk. For example, in Asimov's very first robot short story, "Robbie [the robot] snatched up Gloria [his young human owner], slackening his speed not one iota, and, consequently, knocking every breath of air out of her."21 Robbie judged that it was less harmful for Gloria to be momentarily breathless than to be mown down by a tractor.

Similarly, conflicting orders may have to be prioritized, for example, when two humans give inconsistent instructions. Whether the conflict is overt, unintentional, or even unwitting, it nonetheless requires a resolution. Even in the absence of conflicting orders, a robot may need to recognize foolish or illegal orders and decline to implement them, or at least question them. One story asks, "Must a robot follow the orders of a child; or of an idiot; or of a criminal; or of a perfectly decent intelligent man who happens to be inexpert and therefore ignorant of the undesirable consequences of his order?"18

Numerous problems surround the valuation of individual humans. First, do all humans have equal standing in a robot's evaluation? On the one hand they do: "A robot may not judge whether a human being deserves death. It is not for him to decide. He may not harm a human - variety skunk or variety angel."7 On the other hand they might not, as when a robot tells a human, "In conflict between your safety and that of another, I must guard yours."22 In another short story, robots agree that they "must obey a human being who is fit by mind, character, and knowledge to give me that order." Ultimately, this leads the robot to "disregard shape and form in judging between human beings" and to recognize his companion robot not merely as human but as a human "more fit than the others."18

Many subtle problems can be constructed. For example, a person might try forcing a robot to comply with an instruction to harm a human (and thereby violate the first law) by threatening to kill himself unless the robot obeys. How is a robot to judge the trade-off between a high probability of lesser harm to one person versus a low probability of more serious harm to another? Asimov's stories refer to this issue but are somewhat inconsistent with each other and with the strict wording of the first law.

More serious difficulties arise in relation to the valuation of multiple humans. The first law does not even contemplate the simple case of a single terrorist threatening many lives.
In a variety of stories, however, Asimov interprets the law to recognize circumstances in which a robot may have to injure or even kill one or more humans to protect one or more others: "The Machine cannot harm a human being more than minimally, and that only to save a greater number"23 (emphasis added). And again: "The First Law is not absolute. What if harming a human being saves the lives of two others, or three others, or even three billion others? The robot may have thought that saving the Federation took precedence over the saving of one life."24

These passages value humans exclusively on the basis of numbers. A later story includes this justification: "To expect robots to make judgments of fine points such as talent, intelligence, the general usefulness to society, has always seemed impractical. That would delay decision to the point where the robot is effectively immobilized. So we go by numbers."18

A robot's cognitive powers might be sufficient for distinguishing between attacker and attackee, but the first law alone does not provide a robot with the means to distinguish between a "good" person and a "bad" one. Hence, a robot may have to constrain a "good" attackee's self-defense to protect the "bad" attacker from harm. Similarly, disciplining children and prisoners may be difficult under the laws, which would limit robots' usefulness for supervision within nurseries and penal institutions.22 Only after many generations of self-development does a humanoid robot learn to reason that "what seemed like cruelty [to a human] might, in the long run, be kindness."12

The more subtle life-and-death cases, such as assistance in the voluntary euthanasia of a fatally ill or injured person to gain immediate access to organs that would save several other lives, might fall well outside a robot's appreciation. Thus, the first law would require a robot to protect the threatened human, unless it was able to judge the steps taken to be the least harmful strategy. The practical solution to such difficult moral questions would be to keep robots out of the operating theater.22

The problem underlying all of these issues is that most probabilities used as input to normative decision models are not objective; rather, they are estimates of probability based on human (or robot) judgment. The extent to which judgment is central to robotic behavior is summed up in the cynical rephrasing of the first law by the major (human) character in the four novels: "A robot must not hurt a human being, unless he can think of a way to prove it is for the human being's ultimate good after all."19

*The sheer complexity

To cope with the judgmental element in robot decision making, Asimov's later novels introduced a further complication: "On ... [worlds other than Earth], ... the Third Law is distinctly stronger in comparison to the Second Law. ... An order for self-destruction would be questioned and there would have to be a truly legitimate reason for it to be carried through - a clear and present danger."16 And again, "Harm through an active deed outweighs, in general, harm through passivity - all things being reasonably equal. ... [A robot is] always to choose truth over nontruth, if the harm is roughly equal in both directions. In general, that is."16

The laws are not absolutes, and their force varies with the individual machine's programming, the circumstances, the robot's previous instructions, and its experience.
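The "go by numbers" rule discussed above, and the judgment-laden probability estimates it rests on, can be illustrated with a small hypothetical sketch. Nothing like this formula appears in Asimov's fiction or in robotics practice; it simply scores each candidate action by expected harm (probability times severity, summed over the humans affected) and prefers the lowest score:

def expected_harm(outcomes):
    """Sum of probability * severity over every human affected.
    'outcomes' is a list of (probability_of_harm, severity) pairs, one per human."""
    return sum(p * s for p, s in outcomes)

def least_harmful(actions):
    """Choose the candidate action with the lowest expected harm ('we go by numbers')."""
    return min(actions, key=lambda a: expected_harm(a[1]))

# High probability of minor harm to one person versus a low probability of
# grave harm to another - the trade-off discussed above.
actions = [("restrain the attacker", [(0.9, 1.0)]),    # near-certain bruising
           ("do nothing",            [(0.2, 10.0)])]   # small chance of a fatal injury
print(least_harmful(actions)[0])  # "restrain the attacker": 0.9 < 2.0

The arithmetic is trivial; the difficulty the article points to is that every probability and severity fed into it is itself an estimate, made by human or robot judgment.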
To cope with the inevitable logical complexities, a human would require not only a predisposition to rigorous reasoning and a considerable education, but also a great deal of concentration and composure. (Alternatively, of course, the human may find it easier to defer to a robot suitably equipped for fuzzy-reasoning-based judgment.)

The strategies as well as the environmental variables involve complexity. "You must not think ... that robotic response is a simple yes or no, up or down, in or out. ... There is the matter of speed of response."16 In some cases (for example, when a human must be physically restrained), the degree of strength to be applied must also be chosen.

*The scope for dilemma and deadlock

A deadlock problem was the key feature of the short story in which Asimov first introduced the laws. He constructed the type of stand-off commonly referred to as the "Buridan's ass" problem. It involved a balance between a strong third-law self-protection tendency, causing the robot to try to avoid a source of danger, and a weak second-law order to approach that danger. "The conflict between the various rules is [meant to be] ironed out by the different positronic potentials in the brain," but in this case the robot "follows a circle around [the source of danger], staying on the locus of all points of ... equilibrium."5

Deadlock is also possible within a single law. An example under the first law would be two humans threatened with equal danger and the robot unable to contrive a strategy to protect one without sacrificing the other. Under the second law, two humans might give contradictory orders of equivalent force. The later novels address this question with greater sophistication:

What was troubling the robot was what roboticists called an equipotential of contradiction on the second level. Obedience was the Second Law and [the robot] was suffering from two roughly equal and contradictory orders. Robot-block was what the general population called it or, more frequently, roblock for short ... [or] "mental freeze-out." No matter how subtle and intricate a brain might be, there is always some way of setting up a contradiction. This is a fundamental truth of mathematics.16

Clearly, robots subject to such laws need to be programmed to recognize and deal with such contradictions.
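A toy model makes the freeze-out concrete. It is purely illustrative - Asimov describes only "roughly equal and contradictory" positronic potentials, not an algorithm - but it shows how a selection rule with no basis for preferring either of two tied candidates simply stalls:

def select_action(candidates, tolerance=1e-9):
    """Pick the candidate with the highest potential; return None (freeze-out)
    when the two strongest candidates are effectively tied."""
    ranked = sorted(candidates, key=lambda c: c[1], reverse=True)
    best, runner_up = ranked[0], ranked[1]
    if abs(best[1] - runner_up[1]) <= tolerance:
        return None  # equipotential contradiction: no basis for choice
    return best[0]

# "Runaround": a weak Second Law order balanced against a strengthened Third Law.
potentials = [("approach the selenium pool (obey the order)", 0.5),
              ("retreat from the danger (protect itself)", 0.5)]
print(select_action(potentials))  # None - the robot circles the locus of equilibrium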
