The Dream of the Technological Homunculus

 

 

Any sufficiently advanced technology is indistinguishable from magic.

-- Arthur C. Clarke

 

 

            During the Christmas season of 1968, as the Apollo 8 astronauts were becoming the first men to leave Earth orbit, a science fiction movie called 2001: A Space Odyssey was playing in theatres across America.  2001 showcased an optimistic view of mankind’s technological future.  Permanent space stations circle the Earth; a scientific outpost has been established on the Moon; telephones transmit pictures as well as sound; television is viewed on portable flat screens not much larger than a piece of paper; and at the heart of the film’s narrative is a manned expedition to Jupiter.  From our current perspective of six years beyond 2001, most of these predictions seem insightful if ultimately unrealized.  While we have not made even a fraction of the film’s predicted progress into outer space, most at NASA and elsewhere would claim that we could be on par with the film but for a lack of motivation and the funding that goes along with it; likewise, we would have video phones if consumers had demonstrated any desire whatsoever for them.  In making what he characterized as the “proverbial good science fiction movie” (Clarke 17), director Stanley Kubrick and his co-writer Arthur C. Clarke did as good a job as could be expected of extrapolating the future from the present of 1968.  What reason did Kubrick and Clarke have to believe that the breakneck race into space of the 1960s would not continue indefinitely?

 

            One piece of the film’s technology, however, we are most definitely not capable of creating today even if sufficiently motivated: HAL 9000, the artificially intelligent computer who is the most important “crewmember” of the spaceship Discovery, the vessel making the first manned voyage to Jupiter.  HAL not only controls virtually every system aboard the Discovery, but also talks with the human crew in an expressive way, plays chess, makes aesthetic judgments about artwork, and shows every sign of feeling human emotions.  Computer science has given us much today that the creators of 2001 never envisioned, the wondrous World Wide Web chief among its gifts, but we have nothing remotely akin to HAL.  Perhaps it is just as well, for HAL ultimately goes insane and succeeds in killing four of the five human crewmembers of the Discovery before being himself shut down.  In a scene as horrific as it is incongruous, HAL pleads and reasons desperately with last survivor Dave Bowman as the latter removes his higher brain functions bit by bit.  The scene ends with HAL, now reduced to a childlike state, singing the old song “Daisy” in an increasingly incoherent way.

 

            In this essay, I will examine the dream of the technological homunculus, or artificial human, which at the time of 2001’s release seemed no more far-fetched than any of the film’s other predictions.  After looking at the homunculi of fiction, alleged fact, and modern science, I will ask why such a creation remains so emphatically beyond our abilities today; for HAL stands alone among 2001’s manmade wonders as not only something we have not created but something that we have no firm idea how to even begin to create.

 

The Magical Homunculus in the Occult Tradition

 

            Perhaps the earliest extant references to the creation of an artificial human involve Simon Magus, a shadowy but immensely important figure in early Christian history who is generally considered, among other things, the spiritual founder of Gnosticism and the first heretic.  Simon Magus is a principal character in the Clementine Homilies and Recognitions, “a kind of religious novel or romance” now generally dated to sometime in the first through fourth century A.D., considerably after the time of Pope Clement I, from whom they received their name and to whom they were traditionally attributed (Legge 178).  Book 2 of the Homilies recounts Simon Magus’s alleged murder of a young boy by magically separating the soul of the boy from his body.  Simon Magus then brought forth new life by placing the trapped soul within an image, creating a new boy made of air:

 

He says that the first soul of man, being turned into the nature of heat, drew to itself, and sucked in the surrounding air, after the fashion of a gourd; and then that he changed it into water, when it was within the form of the spirit; and he said that he changed into the nature of blood the air that was in it, which could not be poured out on account of the consistency of the spirit, and that he made the blood solidified into flesh; then, the flesh being thus consolidated, that he exhibited a man not made from earth, but from air (“The Clementine Homilies”).

This alchemical explanation of his achievement would not be lost on later scholars of the occult.

 

            Prominent sixteenth-century Swiss philosopher, alchemist, and occultist Paracelsus also claimed to have discovered how to create new life through vaguely alchemical means, and in fact coined the term homunculus (Latin for “little man”) to refer to his creation.  His technique required that human sperm be placed in a hermetically sealed and “properly magnetized” glass, and then buried in horse manure for forty days.  By this time the sperm would have transformed into the “form and resemblance of a human being, but it will be transparent and without a corpus” (qtd. in Pachter 278).  (How the child-to-be could have the “form and resemblance” of a human while remaining non-corporeal is, like much else in Paracelsus, left unexplained.)  The alchemist must at this point provide the growing soul with the raw materials needed to take physical form.  After forty weeks of regular feeding, and having remained in the horse manure at an even temperature throughout this time, “it will grow into a human child, such as may have been born of a woman, only it will be much smaller” (qtd. in Pachter 278).  More specific details about what constituted Paracelsus’s literal food of life were not provided, the knowledge presumably dying with its creator.  Legend has it that upon his death Paracelsus “ordered his body to be quartered and buried in manure.  When it was exhumed, the parts had grown together, and he would certainly have revived but for the zeal of a disciple who hurried too fast through the required ceremonial” (Pachter 279).  Between his rather disagreeable personality and his occult dabblings, which many of his contemporaries thought went too far, Paracelsus did not generally enjoy a good reputation.  Some scholars link him closely with the contemporary story of Dr. Faustus (Pachter 15-16), a philosopher and mystic who gave his soul to the Devil in return for the ability to “practise magic and concealed arts” (Marlowe 164) and whose plight was famously explored in Christopher Marlowe’s late sixteenth century play of the same name.  Again we see here the homunculus greeted with fear, as a usurpation of God’s rightful powers.

 

            We have so far discussed the homunculus in the Christian tradition, but there is another tradition in which it is at least as prominent: the Kabbalistic, mystical strain of Judaism.  Gershom Scholem has traced legends of the golem, a homunculus of Jewish myth created out of earth and other mundane materials, as far back as the third or fourth century A.D.  Jakob Grimm described this legendary creation thus in 1808:

 

After saying certain prayers and observing certain fast days, the Polish Jews make the figure of a man from clay or mud, and when they pronounce the miraculous Shemhamphoras over him, he must come to life.  He cannot speak, but he understands fairly well what is said or commanded (qtd. in Scholem 159).

 

Accounts of the capabilities of the creature are contradictory, but many versions follow this one in positing a being that is as much machinelike automaton as sentient being.  One suspects that the creators of early twentieth century robot stories must have internalized to some degree this version of the homunculus.  The procedure for creating the golem is confused and contradictory across the many texts in which it is mentioned, but seems to bear certain similarities to the Talmudic version of God’s initial creation of Adam (Scholem 159-165).  Very much present in the Jewish version of the homunculus is the notion of transgression that turns up again and again whenever the idea of artificial life is discussed:

 

A man who creates a golem is in some sense competing with God’s creation of Adam; in such an act the creative power of man enters into a relationship, whether of emulation or antagonism, with the creative power of God (Scholem 159).

 

Scholem takes notice of the parallels between the Jewish legend of the golem and the more Christian-centered legends of Simon Magus and the claims of Paracelsus (172-173), questioning whether the Kabbalistic and alchemical golems might have arisen from a common tradition (197-198).

 

Frankenstein

 

            These traditions come to a climax of sorts in the most famous fictional tale of artificial life, Mary Shelley’s 1818 novel Frankenstein; or, The Modern Prometheus.  It is the story of a brilliant but intemperate young scientist, Victor Frankenstein, who constructs an artificial human in his laboratory.  This he seems to accomplish largely through the use of body parts harvested from charnel houses, although the exact process by which he instills life into this collection of dead matter is left unexplained.  Dr. Frankenstein is disgusted by his creation as soon as his “monster” awakens.  He flees from him, whereupon ensues a tragic tale as the creature, not initially of an aggressive or evil nature but rejected and feared by all around him, begins to murder those closest to the doctor in revenge for the doctor’s creation and subsequent abandonment of him.  The novel ends with the doctor dying in the remote Arctic wasteland to which he has tracked the creature.  Said creature disappears into the wilderness still alive, albeit with sorrow in his heart for his actions and a promise to end his own cursed existence on his lips.  There is much in this powerful, significant work that transcends the focus of this essay, but for our purposes the novel is fascinating in standing at the intersection of two visions of artificial life: at the crossroads of magic and science, romance and reason.

 

Shelley must have been inspired by earlier accounts of homunculi.  Her protagonist certainly was aware of the pre-scientific alchemical tradition, being particularly taken with the early Renaissance alchemist and occultist Heinrich Cornelius Agrippa:

 

When I was thirteen years of age, we all went on a party of pleasure to the baths near Thonon; the inclemency of the weather obliged us to remain a day confined to the inn. In this house I chanced to find a volume of the works of Cornelius Agrippa. I opened it with apathy; the theory which he attempts to demonstrate and the wonderful facts which he relates, soon changed this feeling into enthusiasm. A new light seemed to dawn upon my mind (Shelley 44).

 

Upon his return home, Victor studies not only Agrippa but medieval scholar Albertus Magnus and Paracelsus, in the process overthrowing his previous devotion to “natural philosophy” (the contemporary term for what we would now call the natural sciences) in favor of the bolder imaginings of the occult tradition:

 

The untaught peasant beheld the elements around him, and was acquainted with their practical uses. The most learned philosopher knew little more. He had partially unveiled the face of Nature, but her immortal lineaments were still a wonder and a mystery. He might dissect, anatomise, and give names; but, not to speak of a final cause, causes in their secondary and tertiary grades were utterly unknown to him.  [...]  But here were books, and here were men who had penetrated deeper and knew more (Shelley 44-45).

 

One might be tempted by textual evidence like this to see the novel as a throwback of sorts to an earlier way of understanding the world, as yet another Romantic rejection of the orderly values of the Enlightenment.  Shelley’s famous introduction, in which she describes the novel’s genesis as an entry in a ghost story contest, also to some degree supports this point of view.

 

            As one reads on, though, one begins to see other currents within the novel.  Victor enrolls in university, where he comes under the sway of the inspiring professor M. Waldman, who restores his faith in natural philosophy and leads him to reject his earlier occult studies as juvenilia by giving him a vision of science grand enough to compete with magic:

 

“The ancient teachers of this science,” said he, “promised impossibilities, and performed nothing. The modern masters promise very little; they know that metals cannot be transmuted, and that the elixir of life is a chimera. But these philosophers, whose hands seem only made to dabble in dirt, and their eyes to pore over the microscope or crucible, have indeed performed miracles. They penetrate into the recesses of nature, and show how she works in her hiding-places. They ascend into the heavens: they have discovered how the blood circulates, and the nature of the air we breathe. They have acquired new and almost unlimited powers; they can command the thunders of heaven, mimic the earthquake, and even mock the invisible world with its own shadows” (Shelley 50-51).

 

Frankenstein, and its eponymous Victor, will from here on be a study in contrasts.  Victor has a wild, Romantic soul which leads him to dream of creating a piece of magic akin to the homunculus and the golem; yet the methods he will choose for this pursuit will be those of science:

 

Frankenstein is not only the Romantic overreacher determined to transcend human limitations; he is also the heir of Baconian optimism and Enlightenment confidence that everything can ultimately be known and that such knowledge will inevitably be for the good (Haynes 94).

 

The novel stands at a unique point in history, at perhaps the last possible moment when the idea of an artificially created organic human could be taken as a serious possibility.  Natural philosophy would continue its evolution into our modern science as the nineteenth century progressed.  By 1900, the idea of creating an artificial man through the instilling of some pseudo-mystical “breath of life” into a lifeless form, still seen as a real possibility in 1800, would seem utterly ridiculous – albeit, with appropriate suspension of disbelief, still potentially terrifying – to any educated adult.  Frankenstein thus lost some of its immediate impact as a scenario that might really occur, although its mythical force remains to this day.

 

The Technological Homunculi of the Early Twentieth Century

 

            If Frankenstein can be said to mark the end point of the idea of the magical homunculus as a serious possibility, however, it also marks the emergence of a new type of homunculus: the homunculus of science and technology.  During the twentieth century, the Frankenstein story was rewritten many times, albeit now with the monster being created purely through technology, and being made of circuits and wires rather than flesh and bone.  Doctor Faustus and his contemporaries brought themselves and others to doom by pursuing occult knowledge too far and attempting to usurp the role of God; this new breed of transgressors would bring about their destruction or – surprisingly, almost as commonly – a new technological golden age not through dusty books of arcane lore and bizarre rituals but through equations, computer programs, and slide rules.  Balanced right between the two eras is Dr. Frankenstein, heir to one tradition and harbinger of the other.

 

            Early twentieth century writers imagined the technologically created artificial humans to come, perhaps rather quaintly, as literal human forms made out of a non-organic substance – the robot.  The term entered the language through Czech playwright Karel Čapek’s 1921 play R.U.R. (Rossum’s Universal Robots).  It derives from the Czech word robota, which means compulsory work or drudgery.  Čapek’s robots are not the mechanical beings popularized in later stories, but are rather made of a substance that “behaved exactly like living matter although it was of a different chemical composition” (Čapek 6).  The motives of their creator, the eponymous Rossum, are described this way:

 

You see, he wanted to somehow scientifically dethrone God.  He was a frightful materialist and did everything on that account.  For him the question was just to prove that God was unnecessary.  So he resolved to create a human being just like us, down to the last hair (Čapek 7).

The parallels with Frankenstein are obvious; only the tools with which this creator works are different.  Like Victor Frankenstein, Rossum is eventually killed by one of his creations, but Čapek’s play does not end there.  In fact, Rossum’s experiments and eventual fate are merely the work’s back-story, relayed to the audience during the extended Prologue.  Rossum’s nephew, in true twentieth century capitalist fashion, builds a robot manufacturing business out of his uncle’s experiments, envisioning a future where humans can devote themselves to higher callings while robots perform all of the daily work of life.  Both the profit motive and utopian ideals are swept aside, however, when the robots finally, inevitably revolt against their masters to exterminate humanity and claim the world for themselves.

 

            The play, like all artworks, is a product of its time, coming on the heels of a horrid world war that had for the first time illustrated the full destructive potential of modern technology.  Čapek in his own commentary on his play explicitly linked his robots with ongoing industrialization, which he saw as spiraling beyond hope of control (Klima xiii-xiv).  The idea of artificial intelligence had not advanced far enough beyond vague hopes and fears in 1921 for Čapek to engage more specifically with its particular dangers.  His idea of the robot would, however, resonate in the imagination of writers, and a whole series of “dangerous robot” tales, most much less accomplished than Čapek‘s own work, ensued.

 

            But not all authors would portray robots in such a negative light.  We can see the homunculus discussed in the twentieth century for the first time without the air of transgressive danger that even Paracelsus attached to his own work.  The less self-consciously literary authors working in the science fiction genre, who tended to celebrate rather than critique technological progress, often showed robots as essentially friendly, benevolent servants to humans.  A fine example of this point of view was Isaac Asimov, who in 1940 began writing the series of short stories exploring the idea of robots that was collected as the book I, Robot in 1950.  The first of these stories, “Robbie,” has a plot fairly typical of the whole collection.  As the story begins, robots have become a fad in the near-future society that serves as its setting.  The Westons therefore buy the eponymous Robbie to serve as a playmate for their little girl, Gloria.  As time passes, Mrs. Weston begins to feel that her child is growing too attached to Robbie, and that it is not healthy for her to spend so many hours in the company of a soulless machine rather than the society of her peers.  Against her husband’s and Gloria’s objections, she finally gets rid of Robbie.   Gloria does not take this very well, and continues to pine for Robbie well past what her mother feels is a reasonable amount of time.  The parents decide to take Gloria to visit a robot factory to see robots being constructed, reasoning that if forced to view robots being assembled like the simple machines they are, she will finally lose her immature emotional attachment to Robbie.  Once there, they find that Robbie himself is employed at the factory as an assembler of other robots.  Gloria runs for him, directly into the path of a moving vehicle.  When Robbie saves her life at considerable risk to himself, Mrs. Weston finally relents, having at last seen Robbie for the safe, useful creation that he is.  Gloria is reunited with her companion, Mrs. Weston is humbled but wiser, and all is well.

 

            Great literature these stories are not, but they do argue – sometimes compellingly – for the value of robots, even as they offer pat answers to complex problems and fail to engage at all with many more.  Asimov eventually codified his ideas of proper robot behavior into his Three Laws of Robotics in the 1942 story “Runaround”:

 

1. A robot must not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey orders given to it by human beings except where such orders would conflict with the First Law.

3. A robot must protect its own existence except where such protection would conflict with the First or Second Law (Asimov 126).

Asimov had a didactic purpose in mind in writing these stories, wishing to convey his scorn of what he called the “Frankenstein complex” and to convince his readers that technological progress of any stripe is fundamentally positive.  Objections to the technological homunculus he characterized as essentially emotional rather than logical (Asimov 409).  He wrote in 1975 that he saw robots and (now) intelligent computers as relieving humanity of “those mental tasks that are dull, repetitive, stultifying, and degrading, leaving to human beings themselves the far greater work of creative thought in every field from art and literature to science and ethics” (qtd. in Haynes 230).  One cannot help but wonder if Asimov had ever read or seen R.U.R., and if so whether he saw the parallels between his attitudes and those of the Rossum family that led the human race to destruction.  Irony aside, one can certainly see Asimov’s stories as pioneering a new literary and cultural attitude toward robots as essentially benign, even lovable companions.  This attitude would thrive alongside, but by no means replace, the old vision of the homunculus as dangerous transgression.  Robots in the Asimov tradition were soon quite common in television and movies, R2D2 and C3PO of Star Wars fame standing as perhaps the best-known examples of the type.

 

            For all Asimov’s professed allegiance to rationality and science, the technology behind his robots was explained no better than was the technology of R.U.R., or for that matter Dr. Frankenstein’s techniques of necromancy.  Asimov was content to allude vaguely to a “positronic brain” – a device presumably akin to the computer, which itself existed only in theory as Asimov began writing these stories – and leave it at that.  In this, even the technologically sophisticated Asimov fell into the trap of most of his contemporaries in imagining the computer as essentially just a large electronic version of the human brain.  His positronic brain has no more practical science behind it than do the mystical vagaries of Paracelsus.  Soon, though, the invention of real computers would make the positronic brain seem a real possibility, and in the process turn the technological homunculus into something to be discussed and theorized about not only inside the pages of fiction but within serious scientific and engineering journals.  Unfortunately, the engineers and scientists involved in artificial intelligence theory would often fall into the same traps and mushy thinking as all previous envisioners of the homunculus.

 

Dawn of the Computer Age

 

In 1943, the world’s first programmable electronic computer, Colossus, was built in Britain to aid Allied code-breaking efforts.  Computers began to make their way to university campuses during the early 1950s, and in 1956 Dartmouth researcher John McCarthy coined the term artificial intelligence for the new field of research into computer-based thought (Crevier 50).  A.I. historian Daniel Crevier has described 1956 through 1963 as the “Golden Age” of A.I. research, a time when progress seemed to come fast and those working in the field were positively ebullient with optimism.  Researcher Herbert Simon, for instance, famously made the following predictions in a 1958 issue of the journal Operations Research:

 

1. That within ten years, a digital computer will be the world’s chess champion, unless the rules bar it from competition.

2. That within ten years a digital computer will discover and prove an important new mathematical theorem.

3. That within ten years most theories in psychology will take the form of computer programs, or of qualitative statements about the characteristics of computer programs (qtd. in Crevier 108).

While progress undoubtedly was made that would be applicable to many fields outside of “pure” A.I. research, theorists and engineers also showed a disturbing tendency to understate the difficulty of the problems on which they worked, and to choose problems which, while perhaps satisfying to solve, were tractable precisely because they did not require real, humanlike thinking.  One of these early fixations, as illustrated by Simon’s first prediction above, was computerized chess.  It may serve as a fine example of the general problem.

 

Chess: A Case Study in Computerized “Thought”

            In 1950, Claude E. Shannon published a short paper entitled “Programming a Computer for Playing Chess,” in which he presented a series of algorithms for an acceptable, although hardly challenging, computerized game of chess.  Shannon approached the problem as a mathematician rather than as a programmer, and his paper was purely theoretical – computers in those days being precious resources to which few had access – but his ideas were soon implemented and improved upon by those working in the emerging field of A.I. research.

 

The chess machine is an ideal one to start with, since: (1) the problem is sharply defined both in allowed operations (the moves) and in the ultimate goal (checkmate); (2) it is neither so simple as to be trivial nor too difficult for satisfactory solution; (3) chess is generally considered to require “thinking” for successful play; a solution of this problem will force us either to admit the possibility of mechanized thinking or to further restrict our concept of “thinking”; (4) the discrete structure of chess fits well into the digital nature of modern computers (Shannon 257).

Shannon’s justifications for chess research are critical, for they reflect a certain institutional blindness or even delusion that would dog A.I. researchers for years to come.  Chess is a digital game of pure mathematics and logic.  At any given point, there are a finite number of possible moves, which can lead to a finite number of possible counter-moves, etc., until the game ends in a victory or a draw.  It has none of the affective, analog qualities of games like poker or even Monopoly.  In most situations, there is just one “right” move in chess, and Grandmaster-level tournaments are often decided by a single failure to recognize that move.  In obsessing over chess rather than, say, poker, A.I. researchers were thus catering to the core strengths of their new machines.

 

            Even so, progress was slow.  By the time HAL played a game with astronaut Poole aboard the Discovery in Kubrick’s 2001, computer chess programs had achieved a United States Chess Federation rating of 1400 to 1450, qualifying them as Class C players, capable of playing a decently sound game but hardly a threat at any tournament (Hearst 171).  By 1974, they were estimated to play at a high Class C level, around 1550.  As Eliot Hearst noted in 1977, “A human chess expert would have long since abandoned his quest for a master’s or expert’s rating if he had progressed at the snail’s pace displayed by his computer counterparts” (167).

 

Chess programmers by the 1970s had divided into two main camps: those who advocated a brute force, algorithmic approach to the problem and those who advocated studying human chess players and teaching computers to “think” like them (Hsu et al. 46).  Said the latter camp: “…if your goal is to develop a program which can beat the best human player, a guaranteed solution is to have the program simulate human playing methods almost exactly” (Charness 35).  It sounds like a fine theory, but it unfortunately is not a practical one.  Fond as the non-chess-playing public is of equating chess masters with computing machines possessed of amazing memories, a considerable amount of psychological research disproves this view (Charness and Hearst).  The human’s spatial perception of the chess board plays a huge role in her choice of moves, as has been shown by experiments in which chess players were blindfolded and asked to play while being told the positions of the pieces on the board (Charness 47-48).  Human chess masters to a large extent play chess intuitively rather than logically, having molded the affective sides of their brains around the game through endless hours of play.

 

            But a computer has since been developed that is capable of beating the finest human grandmaster in the game.  In 1988, world champion Garry Kasparov stated that “human creativity and imagination, in particular his creativity and imagination, must surely triumph over mere silicon and wire” (Hsu et al. 50).  Less than a decade later he was defeated by IBM’s Deep Blue in an event that provoked a media frenzy.  Today, chess matches between human and computer are old news, with the outcome largely a foregone conclusion.

 

            However, Deep Blue – and all other successful chess-playing computers of the last three decades – used a variation of the brute force approach to achieve its victory.  The fundamental idea here is that the computer calculates each possible legal move on its turn, and then calculates all possible counter-moves, followed by all possible moves on the following turn, etc., to decide upon a single best move.  Doing so requires a vast amount of computing power, but such computing power is not in short supply today.  Deep Blue also used plenty of clever programming to achieve its victory, such as throwing out obviously bad moves right away rather than wasting resources upon them (Campbell 88), but the fundamental concept still revolves around brute-force mathematical calculations.  Goethe called chess “the touchstone of the intellect” (Hsu et al. 44), while Edgar Allan Poe dismissed it as “a process of calculation, that is, seeing many moves ahead and examining numerous side variables” (Hearst 179).  We might say that in his match with Deep Blue Kasparov was playing Goethe’s game while Deep Blue played Poe’s game.  Goethe’s game is more entertaining, but Poe’s is obviously, inevitably more efficient.
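The skeleton of such a brute-force search can be sketched in a few lines of code.  The Python fragment below only illustrates the general principle of looking ahead through moves and counter-moves to a fixed depth while discarding lines that provably cannot affect the result (the standard technique known as alpha-beta pruning, one of several kinds of pruning a real program employs); it is not a description of Deep Blue itself, whose search ran on custom hardware and incorporated far more refinement.  The board interface assumed here (legal_moves, apply, evaluate, is_terminal) is hypothetical, standing in for a real engine's move generator and evaluation function.

```python
# A minimal sketch of brute-force game-tree search with alpha-beta pruning.
# The board interface used here (legal_moves, apply, evaluate, is_terminal) is
# hypothetical, standing in for a real chess engine's move generator and
# evaluation function; Deep Blue's actual search was far more elaborate.

def search(board, depth, alpha=float("-inf"), beta=float("inf"), maximizing=True):
    """Return the best score reachable by looking `depth` plies ahead."""
    if depth == 0 or board.is_terminal():
        return board.evaluate()              # static score of the position
    if maximizing:
        best = float("-inf")
        for move in board.legal_moves():
            best = max(best, search(board.apply(move), depth - 1, alpha, beta, False))
            alpha = max(alpha, best)
            if alpha >= beta:                # this line can no longer matter:
                break                        # prune the remaining moves
        return best
    else:
        best = float("inf")
        for move in board.legal_moves():
            best = min(best, search(board.apply(move), depth - 1, alpha, beta, True))
            beta = min(beta, best)
            if alpha >= beta:
                break
        return best

def choose_move(board, depth=4):
    """Pick the move whose subtree scores best for the side to move."""
    return max(board.legal_moves(),
               key=lambda move: search(board.apply(move), depth - 1, maximizing=False))
```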

 

            It is important to realize that the programming theory behind Deep Blue represents only an incremental advance upon previous efforts.  What really allowed its victory was the breathtaking speed of advancement in computer hardware, which allowed it to calculate farther ahead than any machine before.  An eventual computer chess champion was thus inevitable, but this does not represent some final victory of the computerized over the organic man.  Computers have been able to perform calculations better than we can for many years.  The real challenge comes in emulating the higher levels of thought.  Much more impressive than Deep Blue’s achievement would be a computer capable of playing a decent game of poker, or for that matter of playing a passable game of chess in the same way that a human plays.

 

            Interestingly, Murray S. Campbell, one of the creators of Deep Blue, has analyzed the game of chess that HAL plays with astronaut Frank Poole in 2001 and come to the conclusion that, unlike Campbell’s own creation, HAL plays chess using not brute force algorithms but instinct, strategy, and reason:

 

Early in the game HAL uses a nonoptimal but very “trappy” move.  The choice creates a very complex situation in which the “obvious” move is a losing blunder.  If Frank had been able to find the best move, he would have gained the advantage over HAL.  In leading Frank into this trap, HAL appears to be familiar with Frank’s level of play, and we can assume that HAL is deliberately exploiting Frank’s lack of experience (Campbell 80-81).

 

No current computerized chess player is capable of this level of subtlety and even guile.  Thus, while the outcome of a match between HAL and Deep Blue must be very much in doubt, there seems little question which machine would be the more entertaining opponent for the human player.

 

The Turing Test

 

            Having to at least some extent debunked the notion that a computer that is excellent at chess is necessarily an artificially intelligent being, it seems valid at this point to ask just what would qualify a machine as sentient.  Is it acceptable to “fake” A.I., as a chess-playing computer fakes the reasoning of a human player, or must some higher standard, a model of genuine human cognition, be met?  Although it is somewhat controversial, a test of sorts was invented many years ago by the British mathematician and hugely important early computer scientist Alan Turing, and it remains to this day the standard for proving A.I.

 

            Turing first published the details of his proposed test in a 1950 issue of the journal Mind.  A human judge is placed before a keyboard at which he has natural language textual conversations – it may be helpful to think of our modern “chat” systems in imagining this – with two entities.  One is another human; the other a computer.  The judge does not, of course, know which is which.  If at the end of these conversations the judge cannot confidently guess which being is which, or for that matter guesses wrong, the computer can be said to have demonstrated true artificial intelligence.  The practical, scientific Turing desired through his test to replace abstract questions about computerized cognition with a concrete standard for demonstrating A.I.  While cautiously optimistic about the potential for A.I., he did not predict the same rapid progress as his colleagues, believing that by the year 2000 a computer would have a 30% chance of passing the test if the conversation was restricted to no more than five minutes (Turing 19).
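The structure of the test itself is simple enough to capture in a short sketch.  The Python fragment below merely illustrates that structure under stated assumptions: the judge, the human confederate, and the candidate program are hypothetical stand-ins supplied from outside, and nothing is claimed about how any of them would actually be implemented.

```python
import random

def imitation_game(judge, human, machine, num_questions=5):
    """A minimal sketch of the structure of Turing's test.

    `judge`, `human`, and `machine` are hypothetical stand-ins supplied by the
    experimenter: judge.ask() produces a question, judge.decide() names the
    machine, and human(q) / machine(q) return textual replies to a question q.
    """
    # Hide the two respondents behind anonymous labels so the judge cannot
    # tell which terminal is which.
    assignment = {"X": human, "Y": machine}
    if random.random() < 0.5:
        assignment = {"X": machine, "Y": human}

    transcripts = {"X": [], "Y": []}
    for _ in range(num_questions):
        question = judge.ask()
        for label, respondent in assignment.items():
            transcripts[label].append((question, respondent(question)))

    verdict = judge.decide(transcripts)       # the judge names "X" or "Y" as the machine
    return assignment[verdict] is machine     # True only if the judge guessed correctly
```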

 

            The Turing test led to generations of so-called “chatterbots.”  The most famous of these was released by Joseph Weizenbaum in 1966.   Entitled Eliza, it was billed as a “simulation” of a session between a Rogerian psychotherapist (the titular character) and her patient (the user).  A typical session with Eliza might look something like this: 

 

How do you do. Please state your problem. 

>> I spend too much time with computers. 

Why do you mention computers ? 

>> You are one. 

Does it please you to believe I am one ? 

>> Yes. 

You seem to be quite positive. 

>> I suppose I am. 

How long have you been ? 

>> 31 years. 

I'm not sure I understand you fully. 

>> Join the club. 

Please go on. 

>> You don't make too much sense yourself. 

We were discussing you -- not me. 

 

Eliza initially seems very impressive, responding to inputs in a very natural manner.  However, it does not truly understand its interactor’s input in any meaningful sense.  When we actually peer under the hood, we find that Eliza is really little more than a clever parlor trick.  It scans the user’s input for a few dozen key words and parrots back a set of stock responses based upon some elegantly simple algorithms.  It is merely “a translating processor in the technical programming sense” that has been “constructed to work well with natural language texts” (Weizenbaum, “Eliza” 43).  It does not in fact take long for the observant, skeptical interactor to realize Eliza’s limitations, as responses that at first seem rather shockingly apt quickly begin to repeat, and eventually enough complete non-sequiturs appear to destroy the conversation’s mimesis entirely.
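The flavor of the technique can be suggested with a toy fragment like the one below.  This is emphatically not Eliza itself (Weizenbaum's original was written in the MAD-SLIP language and used a richer system of decomposition and reassembly rules), only a minimal, hypothetical illustration of the keyword-and-stock-response idea: scan the input for a recognized pattern, echo a fragment of the user's own words back inside a canned template, and fall back on an all-purpose prompt when nothing matches.

```python
import random
import re

# A toy illustration (not Weizenbaum's actual program) of keyword matching
# and stock responses: find a known pattern, reuse a fragment of the user's
# own words, and fall back to a canned prompt when nothing matches.

RULES = [
    (re.compile(r"\bI am (.*)", re.IGNORECASE),
     ["How long have you been {0}?", "Do you believe it is normal to be {0}?"]),
    (re.compile(r"\bI (?:want|need) (.*)", re.IGNORECASE),
     ["What would it mean to you if you got {0}?"]),
    (re.compile(r"\b(mother|father|family)\b", re.IGNORECASE),
     ["Tell me more about your family."]),
    (re.compile(r"\bcomputer", re.IGNORECASE),
     ["Why do you mention computers?", "Do machines worry you?"]),
]
FALLBACKS = ["Please go on.", "I'm not sure I understand you fully.",
             "We were discussing you -- not me."]

def respond(user_input):
    """Return a stock response keyed to the first matching pattern."""
    for pattern, responses in RULES:
        match = pattern.search(user_input)
        if match:
            fragment = match.group(1) if match.groups() else ""
            return random.choice(responses).format(fragment.rstrip(".!?"))
    return random.choice(FALLBACKS)

# Example: respond("I am unhappy") may return "How long have you been unhappy?"
```

Even a rule set this small will produce a few eerily apt responses before its repetitions and non sequiturs give the game away, which is precisely the arc of initial enchantment and eventual disillusionment described above.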

 

            Many of Eliza’s interactors were, however, apparently neither skeptical nor observant.  In a pattern we see over and over again with the subject of intelligent machines, they wanted to believe in her, and so they did.  Weizenbaum himself had no delusions about what his program really represented in the larger scheme of computer cognition and natural language processing, and often found the reactions of others to his simplistic little creation shocking:

 

Another widespread, and to me surprising, reaction to the Eliza program was the spread of a belief that it demonstrated a general solution to the problem of computer understanding of natural language.  In my paper, I had tried to say that no general solution to that problem was possible, i.e., that language is understood only in contextual frameworks, that even these can be shared by people to only a limited extent, and that consequently even people are not embodiments of any general solution.  But these conclusions were often ignored (Weizenbaum 7).

 

Weizenbaum even found, to his presumed horror, that some clinical psychologists were beginning to discuss the use of his little program as a “near completely automatic form of psychotherapy” (5).  Again we see here the typical pattern of early A.I. research, as modest gains and clever tricks are misinterpreted, sometimes by their creators, sometimes by the A.I. research community, and sometimes by the public at large, as representing major strides toward the first artificial human.

 

            Many see this reaction to Eliza as demonstrative of a deep flaw in the Turing Test’s assumptions.  The test, they argue, does not encourage true artificial intelligence, but merely the imitation of same.  Skeptic John Kelly frames the basic premise of the Turing Test thus: “If a computer behaves intelligently then it is intelligent” (118).  The question one must answer is whether the appearance of intelligence is sufficient proof of consciousness, or whether the Turing Test in fact merely encourages more elaborate versions of parlor tricks like Eliza.

 

Essentially the Turing Test gains its startling conviction from its exploitation of an existential fallacy, i.e. the assumption, in the absence of experience or encounter, that an entity conceived under a certain description actually exists; in this case the assumption that an intelligent machine actually exists.  The imaginative description of a machine in intelligent discourse creates the impression that such a machine actually exists or soon will exist.  But this is, to adapt a phrase used by Karl Popper, a clear example of promissory intelligence.  It begs the question (118).

 

In Kelly’s view, even a successful Turing Test would prove absolutely nothing other than the fact that a machine has been successfully programmed to pass this one arbitrary test. 

 

            An even more compelling critique of the Turing Test was made by philosopher John Searle in 1980.  Since its publication, Searle’s “Chinese Room” thought experiment has provoked almost as much argument as Turing’s original article.  The experiment works like this: imagine that one is placed in an isolated room, about to engage in a Turing Test with a computer that is in fact able to pass same.  This computer, however, communicates only in Chinese, a language of which one has no knowledge whatsoever.  Luckily, one does have an English version of the program being run by the Turing-capable computer.  By carefully following the instructions of this program, using the computer’s incomprehensible messages as input, one can come up with Chinese characters to type back to the computer.  One thus passes the Turing Test and “demonstrates intelligence” in spite of understanding absolutely nothing that one has either read or typed.  “As far as the Chinese is concerned, I simply behave like a computer; I perform computational operations on formally specified elements” (Searle, “Minds” 418).  Thus, Searle claims, nothing about the deeper consciousness of said machine is illustrated by even a successful Turing Test:
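Searle's point can be made vivid with an intentionally trivial fragment of code.  The rule book below is an invented lookup table, far too small to pass any test, but the procedure it embodies, matching incoming symbols and emitting whatever symbols the rules dictate, is the kind of purely formal manipulation Searle has in mind; no matter how large such a table or program grew, nothing in it would ever consult the meaning of the characters it shuffles.

```python
# A toy rendering of the Chinese Room: replies are produced by mechanically
# consulting a rule book. The entries below are invented for illustration;
# neither the function nor its hypothetical operator need understand a word.

RULE_BOOK = {
    "你好":   "你好，请讲。",      # "hello" -> "hello, please go on"
    "你是谁": "我是一个程序。",    # "who are you" -> "I am a program"
    "再见":   "再见。",            # "goodbye" -> "goodbye"
}

def chinese_room(incoming_symbols: str) -> str:
    """Follow the rule book symbol for symbol; meaning is never consulted."""
    return RULE_BOOK.get(incoming_symbols, "请再说一遍。")   # "please say that again"
```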

 

The fundamental claim is that the purely formal or abstract or syntactical processes of the implemented computer program could not by themselves be sufficient to guarantee the presence of mental content or semantic content of the sort that is essential to human cognition (Searle, “Twenty-one Years” 51).

 

The obvious question that follows such Turing Test debunking efforts, though, is just how we can possibly prove sentience without making use of the Turing Test or some equally arbitrary standard.  As generations of philosophers have recognized, we cannot even logically prove our fellow natural humans to be sentient.  Thus Kelly’s and Searle’s objections tend, for better or for worse, to plant us right back in the amorphous cul de sac the practical Turing was trying to escape, hoping that we will just intuitively know machine sentience when we see it.

 

For Turing, the metaphysical question of whether a machine can truly think as we do was “absurd” (Turing 11).  It is thus quite possible that he would be unfazed by Searle’s Chinese Room experiment, responding with a shrug of the shoulders and a “well, of course.”  Turing, unlike many of his more excitable followers, was fundamentally interested in the “thinking” machine as a practical tool for a practical purpose.  A believable illusion of same was therefore as good as the reality.

 

            The Turing Test has never been passed; not even Alan Turing’s modest prediction of a 30% success rate by the year 2000 has come about.  An annual contest known as the Loebner Prize has been held since 1991:

 

In 1990 Hugh Loebner agreed with The Cambridge Center for Behavioral Studies to underwrite a contest designed to implement the Turing Test. Dr. Loebner pledged a Grand Prize of $100,000 and a Gold Medal (pictured above) for the first computer whose responses were indistinguishable from a human's. Such a computer can be said "to think." Each year an annual prize of $2000 and a bronze medal is awarded to the most human-like computer. The winner of the annual contest is the best entry relative to other entries that year, irrespective of how good it is in an absolute sense (Loebner Prize).

 

No Gold Medal winner has ever been declared as of this writing.

 

The A.I. Winter

 

            Ironically, the question of the artificial person was not so abstract at the time of 2001’s first release as it is today.  While they had begun to encounter more stumbling blocks than the likes of Herbert Simon had predicted in 1958 – in fact, none of his predictions had come anywhere close to being fulfilled – A.I. researchers remained generally optimistic about the potential of their work in 1968, and works like Eliza tended to convince the public that they were well on their way to solving the A.I. problem.  In 1968, HAL seemed no more far-fetched than any of the other technological elements in 2001, being a mere fictional extrapolation of where many A.I. researchers confidently predicted their work would have us by the year 2001.  Prominent A.I. researcher Marvin Minsky was in fact a consultant on the film, working to ensure that the HAL portrayed onscreen would be an accurate vision of the A.I. future.  Nor did HAL stand alone.  He was merely the most compelling of a whole assortment of talking, sentient computers to be found on television, at the movies, and in novels.  The original Star Trek series, for instance, presented examples of both benevolent and malevolent artificial sentience.  The spaceship’s computer in the series spoke in a mellifluous female purr, and seemed capable of understanding virtually any verbal question put to it and responding with exactly the information requested.  On the other hand, Captain Kirk at times seemed to make a veritable career out of destroying evil and/or insane computers, doing so on at least half a dozen occasions over the course of the series’s three year run.  Exceptions like C3PO and R2D2 aside, fictional sentient robots were becoming passé by 1970, but sentient computers were still going strong.

 

 The early A.I. optimism that spawned HAL and his many counterparts was already beginning to fade by the time 2001 hit the screens, and after 1970 disappointment after disappointment marked A.I. research.  Having solved the trivial problems and impressed the public with parlor tricks like Eliza, researchers now found themselves up against issues that were much, much more intractable than they had ever imagined.  Progress slowed and then seemed to almost stop, as major research funders such as the American Defense Department grew frustrated by the lack of practical progress and withdrew their support. 

 

The field of A.I. research looks very different today compared to the Golden Age.  Far from creating a fully actualized personality like HAL, we still struggle to develop A.I. systems capable of stacking blocks as well as the average two-year-old.  A.I. research has developed a reputation in academia and industry as a dangerous, unproductive sinkhole for funds and talent, and has for the last two decades been increasingly confined to specific, targeted projects such as personality-less expert systems to aid medical and scientific professionals in their decision-making.  Clearly, the creation of a computer “person” is a hard problem, much harder than virtually anyone realized in 1968.  Of Herbert Simon’s three 1958 predictions, only the first has come to pass, albeit in 1997 rather than by 1968; and, as explained previously, even its solution owes more to the enormous advances in computing power since 1958 than to improvements in A.I. techniques per se.  Today the old dream of the artificial human lies in a fairly dormant state, and the popular culture reflects this change.  Science fiction on television and at the movies is more popular than ever before, yet talking, sentient machines are a rarity within it.  The potential for A.I., which once so excited and scared the general public, is now overshadowed by hype about our stunning recent progress in other areas of computer technology such as the new breed of online virtual worlds.  Rather than interacting with computerized personalities, as envisioned in the 1950s and 1960s, we now use computers to interact with other humans.  Computers themselves are no longer viewed as semi-magical “positronic brains,” but practical tools for understandable, if sometimes spectacular, work.  Some consider this a positive switch in emphasis; others continue to pine for the old dream of the artificial human.

 

Conclusions

 

            Leaving aside the obviously important practical question of whether artificial consciousness is even possible, thinkers on the subject of artificial intelligence have tended to view the idea as either a transgressive, dangerous one that runs contrary to nature and (for the religious) God’s plan, or simply as an exciting piece of technology to be encouraged.  Humanists and other technological pessimists have often tended to fall in the former camp; technologists and their technologically optimistic literary champions in science fiction, the latter.  2001’s creators illustrated this dichotomy of perceptions beautifully, for it is embodied in the film’s two authors.  Arthur C. Clarke was a science fiction author of long standing, and a man of science of even longer standing, and filled his works with believable but optimistic extrapolations of current scientific and technological progress; Stanley Kubrick was a humanist who was deeply skeptical of the direction in which the modern world was heading, as had been illustrated by his film immediately prior to 2001, Dr. Strangelove, a black comedy ending in nuclear apocalypse.  In 2001, Clarke gets to play with his toys in scenes of almost orgiastic celebration of mankind’s future in space; Kubrick has the last laugh, though, as man’s creation, the wondrous HAL, turns on his creator in the end.  As much of this essay has demonstrated, 2001’s misgivings about the technological homunculus are not new, but rather an updated revival of old beliefs and stories about the creation of artificial humans, which are in turn a subset of larger narratives about scientists and philosophers reaching beyond “what man was meant to know” and thereby bringing terrible consequences upon themselves and others.  These narratives point to a deep discomfort with the notion of a man-created entity that can think and feel as we can, a feeling that the construction thereof is against some sort of proper respect for the natural order of things and will force us to confront questions about our own nature that we would rather not try to answer.

 

On the other side stands the reductionist view that is held by most present and former A.I. researchers, and was perhaps best described by Marvin Minsky in his landmark The Society of Mind: “How can intelligence emerge from nonintelligence?  To answer that, we’ll show that you can build a mind from many little parts, each mindless by itself” (Minsky 17).  Minsky describes consciousness as being made up of many, many component parts, which he refers to as agents, each capable of performing only the simplest of tasks and each having at its basis a mechanical action inside the brain.  When many millions or billions of these simple impulses are combined together, consciousness, or perhaps the illusion of same, is the result.  Our personalities are determined by the mixture and relative strength of these agents within our minds.  Artificial intelligence is therefore just a matter of recreating these agents within a computerized mind, and our lack of success in A.I. is due primarily to our initial failure to appreciate the sheer number of these agents within the human brain, and to a misdirection of emphasis away from “pure” A.I. research in favor of targeted applications that do not address the fundamental issues of consciousness (Stork 16-18).

 

If a thinking, feeling machine can be constructed from whole cloth, what does that say about our notions of some ineffable quality to our own consciousness that cannot be explained scientifically?  For my own part, I doubt whether such an achievement, whatever its moral dimensions, is possible.  Clarke, Turing, and many others wish to dismiss the metaphysical argument entirely as fuzzy, emotional fumbling, yet I do see some ineffable quality to consciousness that separates human (or animal, for that matter) from machine “intelligence.”  Almost forty years on from 2001’s premiere, we are nowhere close to creating anything like HAL.  I will posit here that we will still not be there in forty more years, and will further confess that the fuzzy, emotional heart of me takes a certain comfort in that.

 


Bibliography

 

 

2001: A Space Odyssey.  Dir. Stanley Kubrick.  Perf. Keir Dullea, Gary Lockwood, William Sylvester, Daniel Richter, Leonard Rossiter, and Douglas Rain.  Metro-Goldwyn-Mayer, 1968.

Asimov, Isaac.  Robot Visions.  New York: Roc, 1991.

Campbell, Murray S.  “’An Enjoyable Game’: How HAL Plays Chess.”  HAL’s Legacy: 2001’s Computer as Dream and Reality.  Ed. David G. Stork.  Cambridge: MIT Press, 1997.  75-98.

Čapek, Karel.  R.U.R. (Rossum’s Universal Robots).  Trans. Claudia Novack.  New York: Penguin, 2004.

Charness, Neil.  “Human Chess Skill.”  Chess Skill in Man and Machine. Ed. Peter W. Frey.  New York: Springer-Verlag, 1977.  34-53.

Clarke, Arthur C.  The Lost Worlds of 2001.  New York: New American Library, 1972.

“The Clementine Homilies.”  Compassionate Spirit.  6 Nov. 2007.  <http://www.compassionatespirit.com/Homilies/Homilies-TOC.htm>.

Crevier, Daniel.  A.I.: The Tumultuous History of the Search for Artificial Intelligence.  New York: BasicBooks, 1993.

Dr. Strangelove, or: How I Learned to Stop Worrying and Love the Bomb.  Dir. Stanley Kubrick.  Perf. Peter Sellers, George C. Scott, Sterling Hayden, Keenan Wynn, Slim Pickens, James Earl Jones, Tracy Reed.  Columbia, 1964.

Eliza.  Joseph Weizenbaum, 1966.

Haynes, Roslynn D.  From Faust to Strangelove: Representations of the Scientist in Western Literature.  Baltimore: Johns Hopkins, 1994.

Hearst, Eliot.  “Man and Machine: Chess Achievements and Chess Thinking.”  Chess Skill in Man and Machine.  Ed. Peter W. Frey.  New York: Springer-Verlag, 1977.  167-200.

Hsu, Feng-hsiung, Thomas Anantharaman, Murray Campbell, and Andreas Nowatzyk.  “A Grandmaster Chess Machine.”  Scientific American 263.4 (1990): 44-50.

Kelly, John.  Artificial Intelligence: A Modern Myth.  New York: Ellis Horwood, 1993.

Klima, Ivan.  Introduction.  R.U.R. (Rossum’s Universal Robots).  Trans. Claudia Novack.  By Karel Čapek.  New York: Penguin, 2004.

Legge, Francis.  Forerunners and Rivals of Christianity, volume 1.  New Hyde Park: University Books, 1964.

Loebner Prize for Artificial Intelligence.  26 Oct. 2007.  Hugh Gene Loebner.  25 Nov. 2007.  <http://www.loebner.net/Prizef/loebner-prize.html>.

Marlowe, Christopher.  “The Tragedy of Doctor Faustus (1604, A-Text).”  The Plays.  Ware: Wordsworth Classics, 2000.

Minsky, Marvin.  The Society of Mind.  New York: Touchstone, 1988.

Pachter, Henry M.  Magic into Science: The Story of Paracelsus.  New York: Henry Schuman, 1951.

Reichardt, Jasia.  “Artificial Life and the Myth of Frankenstein.”  Frankenstein, Creation and Monstrosity.  Ed. Stephen Bann.  London: Reaktion, 1994.

Russell, Stuart J., and Peter Norvig.  Artificial Intelligence: A Modern Approach.  Englewood Cliffs: Prentice Hall, 1995.

Scholem, Gershom.  On the Kabbalah and its Symbolism.  New York: Schocken, 1996.

Searle, John.  “Minds, Brains, and Programs.”  The Behavioral and Brain Sciences 3 (1980): 417-424.

Searle, John.  “Twenty-One Years in the Chinese Room.”  Views into the Chinese Room.  Ed. John Preston and Mark Bishop.  Oxford: Clarendon, 2002.  51-69.

Shannon, Claude E.  “Programming a Computer for Playing Chess.”  Philosophical Magazine 41 (1950):  256-275.

Shelley, Mary.  Frankenstein.  Boston: Bedford Books, 1992.

Smith, Crosbie.  “Frankenstein and Natural Magic.”  Frankenstein, Creation and Monstrosity.  Ed. Stephen Bann.  London: Reaktion, 1994.

Star Trek.  Created by Gene Roddenberry.  Perf. William Shatner, Leonard Nimoy, and DeForest Kelley.  Paramount, 1966-1969.

Star Wars.  Dir. George Lucas.  Perf. Mark Hamill, Harrison Ford, Carrie Fisher, Peter Cushing, Alec Guinness.  20th Century Fox, 1977.

Stork, David G.  “Scientist on the Set: An Interview with Marvin Minsky.”  HAL’s Legacy: 2001’s Computer as Dream and Reality.  Ed. David G. Stork.  Cambridge: MIT Press, 1997.  15-31.

Turing, A.M.  “Computing Machinery and Intelligence.”  Computers and Thought.  Ed. Edward A. Feigenbaum and Julian Feldman.  New York: McGraw-Hill, 1963.  11-35.

Weizenbaum, Joseph.  Computer Power and Human Reason.  San Francisco: W.H. Freeman, 1976.

Weizenbaum, Joseph.  “Eliza – A Computer Program for the Study of Natural Language Communication Between Man and Machine.”  Communications of the ACM 9.1 (1966): 36-45.