Chapter 1. The State of the Art.
Copyright 1984 and 2008 by George Johnson. All rights reserved.
In the English that computer scientists speak, ART is known as a "knowledge-engineering
environment" or "software-development toolkit" - an aid to designing a complex new breed of
programs called expert systems, which automatically make decisions in such fields as financial
planning, medical diagnosis, geological exploration, and microelectronic circuit design.
Programs, of course, are the lists of instructions, often thousands of lines long, that guide
computers as they solve problems; they are the software that tells the hardware - the banks of
microscopic memory cells and switches - what to do. In drawing the distinction between hardware
and software, computer scientists often use metaphor. A stereo system is hardware; its software
is the music it plays, whether it is encoded magnetically on a stream of tape or as microscopic
squiggles in a plastic record groove. Hardware, in other words, is the part of a machine that is
concrete; it can be touched and felt. Software is intangible.
ART is best understood as a program that helps programmers write programs, a welcome relief
to what can be a numbingly tedious task. As anyone who has tried to write even the simplest
program will attest, the world inside a computer can be an inhospitable place, unforgiving of
the most innocent human foible. Drop a comma or misnumber a line and a program "crashes,"
leaving behind a string of incomprehensible "error messages" to explain what went wrong. Once
loaded into a computer, a program like ART organizes the tabula rasa of silicon chips into more
habitable surroundings. By automatically taking care of some of the messy details of
programming, ART frees users to concentrate on the big picture. The result, according to
Inference Corporation's colorful brochure, is a more efficient
means of writing the powerful, intricate programs that capture the "concepts, facts, and
beliefs" of human expertise-software that can perform automatically some of the tasks human
experts do.
"The computer is no longer just a fast number cruncher," the company's enthusiastic
copywriter had written. ". . . [I]t is now possible to program human knowledge and experience
into a computer .... Artificial intelligence has finally come of age."
This was a common refrain during the 1984 convention of AAAI, an organization of computer
scientists, cognitive psychologists, and a few linguists and philosophers who believe that
intelligence can be uprooted from its biological substrate and planted carefully in machines.
Scientists have always found it natural to use cognitive terminology to describe computer
functions. Thus the chips that temporarily store data are referred to as memory, and the
systems of symbols used to write programs are called languages. To John McCarthy and his
colleagues, the comparison between computers and brains is more than a metaphor. They
believe that it is possible to make computers that think, and, perhaps, know that they are
thinking. So sure are they that thought is computational that Marvin Minsky, of the
Massachusetts Institute of Technology, once referred to the brain as a "meat machine." Like his
contemporary John McCarthy, Minsky is one of the most prominent theoreticians of the field. In
recent years, some enthusiastic proponents of artificial intelligence have taken to calling the
neuronal circuitry that makes up our brains "wetware."
Once an arcane discipline in the upper stratosphere of computer science, AI (as initiates call
the field) had recently captured the imagination of industrialists and venture capitalists, who
hoped to profit from the attempt to simulate the workings of the mind. But in the long journey
from the hieroglyphic formulas of technical papers to the hyperbolic prose of publicists, much
had been lost, or rather added, in translation. Most of the scientists who study AI had come to the
convention, which was held at the Austin campus of the University of Texas, to meet old friends
and discuss recent advances. But some of the companies that had staked out space in the
exhibition hall acted as though the age-old dream of creating intelligent companions had already
been realized.
"We've built a better brain," exclaimed a brochure for one of ART's competitors, TIMM, The
Intelligent Machine Model. "Expert systems reduce waiting time, staffing requirements and
bottlenecks caused by the limited availability of experts. Also, expert systems don't get sick,
resign, or take early retirement." Other companies, such as IBM, Xerox, and Digital Equipment
Corporation, were more conservative in their pronouncements. But the amplified voices of
their salesmen, demonstrating various wares, sounded at times like carnival barkers, or
prophets proclaiming the dawning of a new age.
As he sat amidst the din, McCarthy seemed unimpressed. He closed his
eyes, leaned forward, and rested his head on his hands, which were clasped
in front of him like those of a man in prayer. Then he withdrew from the noise around him into
the more familiar environment of his own mind.
As much as any person alive, McCarthy appreciated how far the field had come. At Stanford, a
young colleague named Douglas B. Lenat had written a program, called the Automated
Mathematician, which seemed to have the ability to make discoveries on its own. After running
unattended for several nights on a computer in Lenat's office, the program had, in a sense,
"reinvented" arithmetic, discovering for itself such concepts as addition, multiplication, and
prime numbers (those, like 3, 17, and 113, which are divisible only by themselves and 1). In
the course of its explorations, the program stumbled upon what human mathematicians
reverently call the Fundamental Theorem of Arithmetic: any number can be factored into a
unique set of primes.
The Stanford AI lab was not the only place where such impressive programs were being written.
At Carnegie-Mellon University in Pittsburgh, Herbert Simon, a Nobel Prize-winning
economist and respected AI researcher, was working with colleagues to develop a program that
reenacted scientific discoveries. Given a ream of experimental data, the program could
rediscover such famous scientific theories as Kepler's third law of planetary motion. Eventually
Simon hoped the program would make an original discovery of its own. In honor of its ability to
perceive the patterns woven through nature, the program was named Bacon. Roger Bacon, a
thirteenth-century philosopher and early proponent of the experimental method, is often
revered as one of the founders of modern science. Efforts at machine intelligence were not
focused entirely on such orderly domains as mathematics and science. At MIT, a student of
Minsky's was working on a program that, given a melody line, could improvise Scott Joplin
piano rags.
Because of these and dozens of other successes in laboratories around the country, McCarthy was
as certain as ever that a time would come - perhaps decades or a century hence - when we would be faced
with another intelligence on the planet, an artificial mind of our own making. If that day
arrives, the psychological, philosophical, and moral implications will be staggering, but
perhaps not any more so than the enormous number of technical problems that scientists like
McCarthy first must overcome.
"Things have been slower than I'd hoped," he admitted, "but perhaps not slower than I expected.
I expected that these problems were difficult. I didn't think they were almost solved." It is
possible, he said, that any day now "somebody may get a really brilliant idea," a breakthrough
that will accelerate the process of developing artificial minds. "It may be that AI is a problem
that will fall to brilliance so that all we lack is one Einstein." But he accepts that, most likely,
he and his science are at the beginning of a long, slow progression. "I think this is one of the
difficult sciences like genetics, and it's conceivable that it could take just as long to get to the
bottom of it. But of course the goal is that one should be able to make computer programs that
can do anything intellectual that human beings can."
From the time Gregor Mendel performed the first genetic experiments on pea plants in a
monastery garden to the discovery by James Watson and Francis Crick of the double helical
structure of DNA, approximately a century passed. To McCarthy it wasn't surprising that it
might take as long to understand the nature of the intelligence that had deciphered the chemical
basis of life. Like his colleagues who had come to Austin, he believed that the key to the mystery
would be the digital computer.
The first anyone remembers hearing the words "artificial intelligence" was in the mid-1950s
when McCarthy drafted a request to the Rockefeller Foundation to fund a conference to explore
the proposition that "intelligence can in principle be so precisely described that a machine can
be made to simulate it." Rockefeller granted the money and the Dartmouth Summer Research
Project on Artificial Intelligence took place in 1956 at Dartmouth College, in Hanover, New
Hampshire, where McCarthy was a mathematics professor.
In the history of computer science, the meeting was a landmark. Although little in the way of
tangible results came out of the conference - which, after all, lasted only two months - it was the
first gathering of a small group of researchers who would become some of the most influential
leaders of the field. Included among the organizers were not only McCarthy but Marvin Minsky,
who was studying as a junior Fellow in mathematics and neurology at Harvard University, and
Herbert Simon and Allen Newell, who were working for the Rand Corporation and the Carnegie
Institute of Technology, now called Carnegie-Mellon University. Minsky and McCarthy went on
to found the AI laboratory at the Massachusetts Institute of Technology, McCarthy leaving in
1963 to begin his own group at Stanford. Simon and Newell started the AI program at Carnegie-Mellon. Today, these centers remain the preeminent AI laboratories in the world. Many of the
principal AI researchers in the United States were trained by at least one of these four men.
Having provided the field with a name, McCarthy went on to give it a language. During the late
1950s at MIT, he invented Lisp, in which almost all AI programs are now written.
A computer language is like a human language in that each consists of a system of rules and a
vocabulary that can be used to communicate with those (human or machine) who share the same
set of conventions. Another way to think of it is this: a computer language is the codebook that
programmers use to translate their ideas into forms the computer can understand. A program is
said to be written in Lisp in the same sense that an essay is written in English. No computer
language has a fraction of the richness and expressiveness of a human language, and ideally we
would be able to make computers sophisticated enough to understand English, French, Spanish,
or Japanese. But so far that has proved impossible. Languages like Lisp (short for List
Processing) are a rough but powerful compromise.
It is hard to imagine how, without Lisp, AI could have flourished as
quickly as it did. Most other computer languages, such as Fortran and Cobol, were tailored for
writing programs that performed high-speed mathematical calculations or sorted and collated
lists of employees, customers, items in inventory - the stuff of data processing. Lisp, however,
was designed for a loftier purpose - as a tool for giving computers the kinds of rules and concepts
that AI researchers believe are used by the human mind.
As it spread throughout the AI community in the late 1950s and early 1960s, Lisp quickly
gained a reputation as a powerful invention, admired for the elegance of its design. The
aesthetics of computer languages is an esoteric subject, difficult for outsiders to grasp. Again,
metaphor helps bridge the gap. Computer scientists talk about a favorite language the way a
craftsman talks about a good tool: it is durable, nicely shaped, efficient - something that feels as
pleasing to the mind of a programmer as a well-made wrench feels to a mechanic's hand. Some
languages are clumsy (they require too much fiddling), others are weak (they can't be used to
write interesting programs) or too fragile (it's extremely easy to mess things up). Imagine
multiplying two four-digit numbers using Roman numerals; not even the Romans could do it
well. Their system of Is, Vs, Xs, Ls, Cs, Ds, and Ms was ugly - a bad language for doing
arithmetic. To connoisseurs of such matters, Lisp was immediately recognized as a beautiful
computer language; its combination of power and efficiency provided just the leverage a
programmer needed to write the sprawling, complex, multitiered programs that are the
foundation of artificial-intelligence research.
For example, when working with more pedestrian languages, programmers often had to decide
ahead of time how much of the computer's memory a program would need in order to do its job.
Then they had to instruct the machine to cordon off the area in advance, before the program was
run. That was all the memory it would be allowed to use. For those writing the relatively
mindless software of data processing, this wasn't too great a burden. For AI researchers such
restrictions were absolutely stultifying. They hoped to devise programs that had, in a sense,
minds of their own, whose behavior was unpredictable. How could programmers know in
advance how much memory space their creations would require?
With Lisp, such strict preplanning was unnecessary. Memory was automatically captured and
surrendered as needed, while the program ran. With the flexibility afforded by Lisp, software
didn't have to be an unquestionable set of marching orders. Programs could be written that were
more spontaneous and fluid. Some even sprang occasional surprises, solving problems in ways
that had never occurred to their human inventors.
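For readers who think in modern terms, the contrast is roughly the one between static and dynamic memory allocation. The sketch below is an analogy in Python rather than Lisp, and its numbers are invented for illustration:

    # Fortran/Cobol style: decide the size in advance and cordon off that much memory.
    MAX_ITEMS = 100
    fixed_buffer = [None] * MAX_ITEMS      # the program may use no more than this block

    # Lisp style: storage is claimed while the program runs and reclaimed automatically
    # (by a garbage collector) once it is no longer referenced.
    dynamic_list = []
    for n in range(1, 1000):
        dynamic_list.append(n * n)         # grows on demand; no size declared beforehand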
Even more uncanny was the way Lisp allowed the creation of programs that could change
themselves as they ran. In conventional computing, information was fed into the machine and
processed according to the instructions in the program. Then it emerged, transformed, like wool
coming out of a spinning wheel as yarn. The data might be a list of words, the instructions a
program for sorting them into alphabetical order. In any case, there was no mistaking data and
instructions - they were two very different things. In
Lisp, there was no such distinction. The program and the information it was to process were
written in exactly the same way (as words surrounded by parentheses). To the computer, they
looked identical. Because of this ambiguity between subject (the program) and object (the
information it was to process), one Lisp program could provide the grist for another's mill.
Programs could read programs and write programs. They could even inspect and modify
themselves, suggesting the possibility of software that not only learned but was aware of its own
existence.
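A rough sketch of the idea, using Python lists to stand in for Lisp's parenthesized expressions (the evaluator is a toy written for this illustration, not Lisp itself):

    # The "program" below is ordinary data: a nested list standing for (+ 1 (* 2 3)).
    expr = ["+", 1, ["*", 2, 3]]

    def evaluate(e):
        # A toy evaluator: one program reading another program as data.
        if not isinstance(e, list):
            return e
        op, args = e[0], [evaluate(a) for a in e[1:]]
        return sum(args) if op == "+" else args[0] * args[1]

    print(evaluate(expr))                  # 7

    # Because the program is just data, a second program can rewrite it and run it again.
    expr[0] = "*"                          # the expression is now (* 1 (* 2 3))
    print(evaluate(expr))                  # 6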
For these and other reasons, Lisp quickly became regarded as a most receptive vehicle for
studying the nature of intelligence. How, AI researchers wondered, do people take the sounds,
words, and images that flow through their senses and arrange them into the patterns of thought - the complex networks of concepts and memories that form the fabric of the human mind? Unlike
the psychologists and philosophers who preceded them in this age-old pastime of thinking about
thinking, the AI researchers tested their theories of the mind by describing them as Lisp
programs and seeing what happened when they ran them on a machine. They assumed that
intelligence was a kind of mechanism - a subtle and complex processing of information. And the
computer, like the brain, is an information processor. Under the influence of Lisp, they
believed, computers would not only model human thought - as they already modeled tropical
storms, new aircraft designs, or thermonuclear wars - but they would actually engage in the act
of thinking.
Over the years, other equally powerful languages were developed, but Lisp has remained the
favorite among AI researchers. And, like human languages, it has evolved to meet the increasing
demands of its users. As researchers began to better understand the needs of AI programmers,
new, more powerful versions were written. Lisp went on to beget MacLisp, InterLisp, ZetaLisp,
Franz Lisp (in honor of the composer who is similarly named) - a dozen or so dialects that by the
time of the Austin conference were being merged into a babelesque effort called Common Lisp,
which, it was hoped, would become the lingua franca of the field.
In fact, as the 1980s approached, the influence of artificial-intelligence programming was
beginning to spread beyond the university laboratories and into the business world, bringing
with it the promise of changing forever the way computing is done. This development was due in
part to a group of computer enthusiasts, or "hackers," at MIT, who in 1973 began to design a
high-powered personal computer called a Lisp machine, which was especially made for
artificial-intelligence programming.
It is not absolutely necessary to have a Lisp machine to use Lisp - any digital computer can
accommodate software written in almost any language. Ultimately, a built-in program called an
interpreter or a compiler automatically converts everything a programmer writes into
"machine language," the long strings of 1s and 0s that are all the hardware can really
understand. But the price of this universality is compromise. While programmers enjoy the
luxury of having many different languages for communicating with the machine, a great deal of computing power is needed to translate them each into machine language.
The more sophisticated a language, the more time-consuming the translation. The easier and
more natural a language is for a human, the more it will have to be processed in order to be
understood by a machine.
Such was the case with Lisp. In designing the language McCarthy had bought some of its elegance
and power by sacrificing efficiency and speed. When AI programmers sat down at their
terminals and logged onto the large university computer - the "mainframe" - they were sharing it
with many other programmers. Their Lisp creations - which, because of their sheer size and
complexity, would have been taxing in any language - caused an unusually large drain on
computing power, the processing time and memory space that are as precious to a computing
center as kilowatt-hours are to a generating plant. Because they commanded vast computational
resources, the more sophisticated AI programs would run annoyingly slowly.
When released from the laboratories in 1978, the Lisp machines promised to free AI people
from the tyranny of the mainframes. Even though these new devices were smaller and contained
less raw computing power, they made Al programs easier and faster to write and run. For the
Lisp machine was designed not as a lowest common denominator for many languages - most of them
written to solve mathematical equations or send out credit card bills. The hardware was shaped
to the contours of McCarthy's language.
In 1980, the inventors of the Lisp machine formed two rival firms, Symbolics and Lisp
Machine Inc., to develop and sell their new product. While most of the important AI work
continued to be done on mainframe computers, by the mid-1980s Lisp machines were showing
up in laboratories across the country. Students and professors compared the virtues of the two
brands as others might discuss the "specs" of a new stereo receiver or foreign car. But the
purveyors of Lisp machines were hoping for a bigger and more lucrative market than academia:
the corporations whose leaders were beginning to get wind of a second information revolution,
in which the power of the computer would be used not just for bookkeeping and calculating but
for more complex and demanding tasks. Industries of all kinds were becoming intrigued by the
idea of automating many of their decision-making tasks with expert systems - a technology that,
according to some entrepreneurial professors, was ripe for export to the marketplace.
"I would get several phone calls a week consisting of three types of questions," recalled
Stanford's Edward Feigenbaum, a jovial, pipe-smoking professor who more than any other
researcher has worked to commercialize the field. "'Can I send a guy out to your place for a
year?' 'Would you do a project for us?' And, 'Do you guys have any software we can have?' The
answer to the first question was 'No way.' The answer to the second question was that we only do
contracts that involve basic research - we're not a job shop. The answer to the third question was
'Yes, we do. Some is available in the public domain, some from the Stanford Office of Technology
Licensing.'" As the calls kept coming, Feigenbaum realized, "This sounds like a company." In
1980 and 1981, he and a number of colleagues decided to start
their own businesses: IntelliGenetics (later renamed IntelliCorp) and then Teknowledge to
provide training, consulting, and software to those interested in developing expert systems.
They had little trouble attracting investors.
"The venture capital community is always looking for the next way to stimulate technology so
that big rewards will be reaped," Feigenbaum explained. "In 1980 and '81 it was biotechnology.
Now it is Al."
With an ear for the kind of slogans marketing directors love, Feigenbaum coined the term
"knowledge engineering" to describe the work of a new corps of professionals. The knowledge
engineers would interview experts - doctors, lawyers, geologists, financial planners - and
translate some of what they knew into Lisp programs. Expertise would be broken into a complex
web of rules. If A, B, and C, but not D, then take path E . . . . To solve a problem, a computer
would search the labyrinth, emerging with decisions that, it was hoped, would rival or exceed
those of its human colleagues.
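In outline, the machinery is simple forward chaining over if-then rules. The fragment below is a minimal sketch; the letters and conclusions are invented, and real systems added certainty weights and far larger rule bases:

    # Each rule lists conditions that must hold, conditions that must not, and a conclusion.
    rules = [
        ({"A", "B", "C"}, {"D"}, "take path E"),       # "If A, B, and C, but not D, then take path E"
        ({"take path E"}, set(), "final recommendation"),
    ]
    facts = {"A", "B", "C"}

    changed = True
    while changed:                                     # keep firing rules until nothing new is added
        changed = False
        for required, forbidden, conclusion in rules:
            if required <= facts and not (forbidden & facts) and conclusion not in facts:
                facts.add(conclusion)
                changed = True

    print(facts)   # the program has "searched the labyrinth" and reached a recommendation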
Throughout the early 1980s, articles in magazines such as Newsweek,
Fortune, Business Week, National Geographic, and even Playboy described programs that, on the
surface, seemed nothing short of spectacular. The University of Pittsburgh's Caduceus was said
to contain 80 percent of the knowledge in the field of internal medicine and to be capable of
solving most of the diagnostic problems presented each month in the New England Journal of
Medicine, a feat that many human internists would find difficult to match. Prospector, a
program developed at SRI International, a nonprofit research corporation in Menlo Park,
California, was reported to be so adept at the art of geological exploration that it had discovered
a major copper deposit in British Columbia and a molybdenum deposit in the state of
Washington. These mineral fields, which were valued at several million dollars, had been
overlooked by human specialists.
As impressive as these accomplishments were, the journalists and publicists who reported
them rarely emphasized how superficial was the intelligence involved - if indeed it could be
called intelligence. The knowledge programmed into most expert systems consisted largely of
long lists of prepackaged "if-then" rules, gleaned from interviews with human experts. For
example, one of the rules used by Mycin, a medical diagnosis program, read like this: "If (1) the
infection is primary bacteremia, and (2) the site of the culture is one of the sterile sites, and
(3) the suspected portal of entry of the organism is the gastrointestinal tract, then there is
suggestive evidence that the identity of the organism is bacteroides."
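Encoded for a machine, such a rule might be represented roughly as the structure below. This is a sketch, not Mycin's actual notation, and the certainty value is a placeholder for the weight a real system would attach to "suggestive evidence":

    # A hypothetical encoding of the rule quoted above.
    rule = {
        "if": [
            ("infection", "primary bacteremia"),
            ("culture site", "sterile site"),
            ("portal of entry", "gastrointestinal tract"),
        ],
        "then": ("organism identity", "bacteroides"),
        "certainty": 0.7,   # "suggestive evidence" - a rule of thumb, not a guarantee
    }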
The words "suggestive evidence" are characteristic of the kind of knowledge programmed into
these systems. Like much of human expertise, it consists not of hard-and-fast rules but
heuristics - rules of thumb that, while not infallible, guide us toward judgments that experience
tells us are likely to be true. Provided with a few hundred rules, a computer could rapidly chain
them together into medical or geological diagnoses. This heuristic programming clearly was a
powerful new software technology, but some critics within the AI field doubted that it should be
called artificial intelligence. Unlike a human expert, Mycin didn't contain an internal model of
the human body, a mental map of how all the various organs interacted. In other systems, like
Caduceus, the knowledge was more sophisticated, but still the program didn't know that the
kneebone is connected to the thighbone, or the bladder to the kidneys, or even that blood flowed
through arteries and veins. All the programs had were some rules of thumb and information
about the characteristics of various diseases. The same criticism could be applied to Prospector.
It knew rules for how to identify various kinds of geological formations, but it had no idea what
sedimentation, metamorphism, or vulcanism were.
And, as was clear from McCarthy's 1984 presidential address, none of these programs contained
anything that could be called common sense. It's one thing to give a medical diagnostic system a
few hundred rules about diseases, but quite another to program a computer to understand the
seemingly infinite number of facts that comprise everyday existence: if a thing is put in water
it will get wet; a glass will shatter if it falls; if you squeeze a Styrofoam cup it will bend, then
crack, and the coffee might burn your hand; it is impossible to walk through walls. Until
computers know these things, they can hardly be considered our intellectual equals.
But how do you give a computer common sense? It would be impossible to come up with
information about every conceivable situation that an intelligent being might encounter. Say you
had a rule that said "All birds can fly," and another rule that said "A sparrow is a bird." Using
conventional logic a program might easily deduce that sparrows can fly. But what if the
computer was then told that the bird has a broken wing? To deduce that the bird cannot fly after
all, the computer would need another rule that said "All birds can fly unless they have broken
wings." And how would you deal with all the other exceptions: birds that are newborn, birds that
are trapped in tiny boxes, birds that are in vacuums, birds that are dead? The possibilities are
endless. It is inconceivable that a programmer, or even a team of a thousand programmers,
could anticipate the billions of special cases, even if they spent years on a marathon, federally
funded commonsense programming project.
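The combinatorial trouble is easy to see if you try to write the rule down. A toy sketch, with invented predicates:

    # "All birds can fly" quickly sprouts an endless tail of exceptions.
    def can_fly(bird):
        if bird.get("broken_wing"):  return False
        if bird.get("newborn"):      return False
        if bird.get("in_tiny_box"):  return False
        if bird.get("in_vacuum"):    return False
        if bird.get("dead"):         return False
        # ...and so on, with no principled place to stop
        return True

    print(can_fly({"species": "sparrow"}))                       # True
    print(can_fly({"species": "sparrow", "broken_wing": True}))  # False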
McCarthy hoped to devise a system that would neatly and compactly capture commonsense
knowledge without becoming overwhelmingly huge and unwieldy. To do so, he believed, would
require a new kind of logic, which, after three decades, he was still working to perfect. Using
the logic McCarthy envisioned, a computer could deduce the plethora of commonsense facts from
a finite number of rules or axioms - he hoped ten thousand or so would suffice - about the way the
world works. It wouldn't be necessary to tell the computer about each of the billions of possible
situations it might need to understand - they would be implicit in McCarthy's system, just as all
of literature exists, in a dormant sense, in the rules of grammar and spelling and the twenty-six letters of the alphabet. Since 1956, McCarthy had been working quietly on his own,
communicating mostly with a handful of colleagues at various universities who had taken upon
themselves this task of formalizing reality. As with Lisp, he hoped the system he would
eventually create would have an elegance and an internal consistency - a mathematical beauty - of its own.
The result, he hoped, would be programs that were more than idiot savants, experts in one
narrow discipline. What McCarthy had in mind were machines that could hold their own in a
conversation, that would make good company.
McCarthy admired Feigenbaum's expert systems, but he found the ad hoc manner in which they
were constructed unaesthetic and their possibilities limited. On the other hand, the knowledge
engineers weren't especially interested in the rigors of logic or in constructing universally
knowledgeable systems. They would interview experts, convert some of their knowledge into
heuristics, and put them into a program. Then they would run it on a Lisp machine, adding
rules, modifying them - tinkering until they had something that worked.
By the time Feigenbaum had established Teknowledge and IntelliCorp, knowledge engineering had
become such a major part of Stanford's computer-science curriculum that the AI workers had
divided into two groups: Feigenbaum's very pragmatic Heuristic Programming Project and
McCarthy's more theoretically oriented SAIL, short for Stanford Artificial Intelligence
Laboratory. The two scientists' different approaches to AI are reflected in their personalities.
McCarthy is quiet and reserved - some would say aloof - with a reputation as something of a
recluse. His thick beard, thick hair, and thick glasses give him a stern and austere look. When
he stands in a dark auditorium addressing a crowd, the spotlight ignites his ring of hair to a hot-white glow so that he looks almost biblical, like a prophet in a business suit. AI folklore abounds
with John McCarthy stories, in which he walks away in the middle of conversations, following
trains of thought he apparently finds more enticing than whatever is going on in the world
outside his head. But in the privacy of his office, he politely insists that the stories are
exaggerated, that he is not all that eccentric. He seems more shy than arrogant. After patiently
describing some of the details of his research, he relaxes and tells funny stories - science-fiction tales about computers and people. Eventually, he steers the conversation to one of his
favorite subjects, politics, his own having migrated from liberal (his parents, in fact, were
Marxists) to conservative. Recently he has publicly opposed some of his colleagues who
criticize Defense Department funding of AI research.
In contrast to McCarthy, Feigenbaum is clean-shaven, his hair cut short and neatly combed in
the conservative manner of a businessman or engineer. A decade younger than McCarthy,
Feigenbaum is outgoing and enthusiastic, always ready to promote the virtues of expert systems.
In fact, some of his detractors call him a cheerleader, claiming that he has become less
interested in the long, slow burn of science than in the explosive pace of the marketplace. While
McCarthy is willing to spend his career seeking the kind of theoretical underpinnings that he
hopes will make AI as solid a science as physics, Feigenbaum and his followers are more
interested in what the field can accomplish now, and how they might subsequently prosper. It is
hard to imagine two more different people at work under one institutional ceiling. Actually SAIL, until
recent years, was isolated from the main campus in a remote building in the Stanford hills. As
the years have passed, this physical separation has become symbolic of a division that is
developing in the field. This disagreement over the way AI should be done is sometimes described
as the battle between the Scientists and the Engineers. While the Engineers take a "quick and
dirty" pragmatic approach to AI, the Scientists are obsessed with developing rigorous theories.
In the eyes of scientists like McCarthy, expert systems fell far short of the goal of creating a
disembodied intelligence. But to a business community whose interest had been piqued by the
fortunes made in data processing, the idea of knowledge processing, as its marketing-minded
proponents were billing it, was irresistible. One of the earliest signs that artificial intelligence
was moving beyond the quiet domain of philosopher-mathematicians like McCarthy and into the
corporate realm came during the mid-1970s when Schlumberger, a French multinational
corporation, began examining the possibility of using expert systems for oil and gas
exploration. In 1979 the company bought Fairchild Camera and Instrument Corporation of Mountain
View, California, and established a major AI lab there. In fact, the oil industry was one of the
first to invest heavily in AI. One of Teknowledge's first customers was the French oil company
Elf Aquitaine. Feigenbaum and his associates helped the firm design a system that gave advice on
how to retrieve broken drill bits, which sounds like a mundane task but is actually a rare and
expensive expertise. One of the experts whose brain was picked was so impressed with the
results that he provided this testimonial for a Teknowledge promotional film: "[The program]
thinks like I do on my best days every day. It is my personal intelligent assistant."
"When Schiumberger does something," Feigenbaum said later, "the world pays attention." To
illustrate the revolutionary possibilities of expert systems, he liked to draw a diagram on the
blackboard, which he labeled "the exploding universe of applications." In this big-bang theory
of artificial intelligence, the primordial mass was a circle labeled 1950 when Univac offered
its first commercial digital computer, whose data-processing capabilities were of little use to
anyone but physicists, mathematicians, statisticians, and engineers. But, accelerated by the
development of personal computers, the perimeter radiated outward, expanding each year to
overtake more and more areas of everyday life. At the far reaches of his blackboard universe,
Feigenbaum drew a circle labeled ICOT, the Institute for New Generation Computer Technology, a
half-billion-dollar, decade-long effort by the Japanese to develop artificial intelligence. ICOT,
he explained, is like a fortress at the edge of the wilderness, ready to send out parties of
explorers into the terra incognita of artificial intelligence. Also poised at the border, like two
small encampments, were circles labeled Teknowledge and IntelliCorp.
"These things are going to grow into big companies," he predicted, citing a report by a Denver
research firm forecasting that AI software will grow into a $1.5 billion-a-year business.
"If there are ten companies, that's still $150 million a year a company. To someone
who started one or two of these companies that's very heartening," he said.
Feigenbaum wasn't alone in his dream of being on the ground floor of a towering new industry.
Throughout the late 1970s and early 1980s, the number of outposts on the AI frontier grew so
rapidly that the hinterlands were becoming positively civilized. While some new companies sold
ready-made programs, others concentrated on knowledge-engineering environments - the do-it-yourself-kit expert systems. These were designed to run not only on Lisp machines but on
hulking mainframes and (in stripped-down versions) on desktop personal computers. In 1984,
Business Week estimated that there were forty AI companies, many started by some of the most
prominent researchers in the field. Earl Sacerdoti, for example, was highly regarded for his
work at SRI International, where he studied, among other things, natural-language
understanding ("natural" meaning English or French, as opposed to Lisp or Fortran). In 1980,
he left SRI to join a company called Machine Intelligence Corporation, which worked on
commercial robotic and vision systems. He was later hired away by Teknowledge.
Roger Schank of Yale University, who has been working for fifteen years on ways to get
machines to understand what they are told, started Cognitive Systems. Among the company's
first products were programs called "natural-language front ends," which allow people to use
English to request information from their company computers-as long as the queries are
limited to restricted vocabularies and follow a certain form. A number of AI researchers from
Carnegie-Mellon University started the Carnegie Group, which attracted such major investors
as Digital Equipment Corporation and General Electric, immediately becoming a multi-million-dollar operation. Even Marvin Minsky got into the act, convincing former CBS Chairman
William Paley to help finance a company called Thinking Machines Corporation (chidingly
referred to by colleagues as "Marvco") to explore the possibility of building a completely new
kind of computer, which would handle data in a manner similar to the brain. Great profits
seemed to be on the horizon, and the academics whose ideas were behind the technology were
determined to get their share. By 1984 large numbers of faculty members in university AI
programs were either starting companies or supplementing their salaries with lucrative
consulting deals.
It soon became apparent, however, that the giants of the computer industry were not going to let
a few upstart companies crowd them from the field. By the early 1980s, IBM, DEC, Texas
Instruments, Hewlett Packard, Xerox, and a number of other computer and electronics
companies all had active AI research divisions. Some of these efforts - notably the one at Xerox - predated those of the smaller companies like Teknowledge. But the sudden wave of interest in
intelligent machinery was causing the corporations to pursue Al research more vigorously.
Xerox and Texas Instruments began marketing Lisp machines. Outside the computer industry,
other corporations - IT&T, Martin Marietta, Lockheed, Litton - followed the lead of
Schlumberger, starting their own AI groups. General Motors paid $3 million to buy more than
one tenth of Teknowledge.
By the time of the 1984 Austin convention, this "greening of AI," as Daniel Bobrow, the editor
of the journal Artificial Intelligence (and a researcher at Xerox's Palo Alto Research Center),
bemusedly called it, was all but complete. In 1982, Business Week published an article
proclaiming that the world was at "the threshold of a second computer age." In 1984, a follow-up
story was headlined ARTIFICIAL INTELLIGENCE: IT'S HERE! Even AAAI had been swept up in the
optimism, distributing bumper stickers proclaiming, "AI: It's for Real." But for many of the
scientists who attended the Austin meeting, such sloganeering was an embarrassment. They were
worried that the allure of commercialization was distracting those in the field from more
serious concerns.
According to an oft-quoted heuristic, it takes about ten years for an idea to move from the
laboratory to the marketplace. Many of the theories behind the expert systems that were
unveiled in the AAAI exhibition hall were well over a decade old. But what was history to the
scientists was news to the business world. While the attendance at AAAI's first conference in
1980 was only about eight hundred (most of them serious researchers and students), AAAI-84
attracted more than three thousand conventioneers, two thirds of them from outside the
academic community.
As they moved from one technical session to the next, gathering in groups to talk with
colleagues, the scientists had to dodge the corporate recruiters - "the people in suits and ties
chasing the people with the bushy hair," as Bobrow described them. AI had always attracted a
good share of eccentrics, but by the time of the Austin conference the small faction of people
with backpacks, beards, and ponytails was overwhelmed by hordes of outsiders with nametags
indicating how commercially respectable the field had become: Rockwell International, Mobil,
Amoco, the U.S. Army, Air Force, Navy, and Coast Guard, the U.S. Missile Command.
While some of the corporate representatives were there as headhunters, most had paid a
per-session fee for the elementary tutorials, which AAAI offered as a way to help pay its operating
expenses. To ensure a good draw, the sessions were taught by such AI luminaries as Lenat,
inventor of the Automated Mathematician, and Minsky, who was finishing a book on his Society
of Mind theory, an attempt to demystify the notion of human intelligence by explaining it as the
interaction of a number of fairly simple processes. Minsky, a short bald man with a cherubic
smile, liked to illustrate his theory with the example of a child building a tower of blocks. An
agent called BUILDER directs the process by calling on various other agents such as SEE, which
recognizes the blocks and helps manipulate them by calling on GET and PUT, which must call on
such agents as GRASP and MOVE. Agents can be thought of as simple little computer programs.
While each one is in itself a fairly unintelligent being, millions of them work together
to form a society from which intelligence emerges synergistically. Minsky, who takes issue
with McCarthy's reliance on logic in his theories, describes the mind as a "kludge." The word is
hacker jargon for a system that is a mishmash of parts, thrown together - in this case by
evolution - according to what works, not by any well-wrought plan.
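The block-building example can be caricatured in a few lines of code. This is only a toy rendering of the idea, not Minsky's formalism; the point is that each agent, taken alone, is trivial:

    # Each "agent" is a dumb little procedure; what competence there is lies in how they
    # call on one another.
    def MOVE(block, place):  print("moving", block, "to", place)
    def GRASP(block):        print("grasping", block)
    def GET(block):          GRASP(block)
    def PUT(block, place):   MOVE(block, place)
    def SEE(scene):          return sorted(scene)        # "recognizes" the blocks in view

    def BUILDER(scene):
        tower = []
        for block in SEE(scene):
            GET(block)
            PUT(block, "top of tower")
            tower.append(block)
        return tower

    BUILDER({"red block", "blue block", "green block"})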
Intelligence, Minsky told his audience, is not so mysterious. It is simply "the set of things we
don't understand yet about what the mind does. Whenever we understand something, we become contemptuous
of it. I think the same thing will
happen to intelligence in the next ten or a thousand years." In ten thousand years, he said, we'll
look back at our romantic notion of the elusiveness of the mind and laugh. Can a machine be
conscious? Can it know and feel the meaning of the things it is dealing with? Minsky clearly
thought that it could. Consciousness is not any more mysterious than intelligence, he said. "I
don't think it's subtle at all." It's just that in thinking about thinking, consciousness "steps on
itself and squashes itself." If we can learn to avoid such self-referential vertigo, he believed,
eventually consciousness will be mechanized.
And, if he has his way, the breakthrough will be made at the MIT Artificial Intelligence
Laboratory. Since he helped start the lab shortly after the Dartmouth Conference in 1956,
Minsky has become something of a father figure to aspiring young AI scientists, many of whom, like
Bobrow, have gone on to become leaders of the field. Kind and enthusiastic, Minsky often seems
as excited about his students' ideas as about his own. Over the years he has fostered an open
atmosphere and a sense of camaraderie that have made MIT one of the most desirable places in
the world to study AI. For more than twenty years, Minsky has watched his students produce
programs, the best of which represent small but important steps in what he believes might be
"a hundred years of hard work" toward the goal of making intelligent, humanlike machines.
"My experience in artificial intelligence and cognitive science is that it is often a decade
between each step and the next," he said one afternoon in his office in Cambridge. "But there are
other things going on. What you don't want to do is sit in front of a pot watching one plant grow,
because nothing seems to happen. But if you have a whole garden, then it's not so bad."
The image of the philosophical Minsky addressing an auditorium of corporate acolytes captured
the incongruous aura that surrounded the convention and has come to characterize the field. The
scientists at the conference spent a large amount of their time attending presentations of
esoteric technical papers on such topics as automatic speech recognition and natural-language
understanding (hearing Graeme Hirst of the University of Toronto on "A Semantic Process for
Syntactic Disambiguation," or David L. Waltz and Jordan B. Pollack of the University of Illinois
on "Phenomenologically Plausible Parsing"); or on computer vision (Demetri Terzopoulos of
MIT on "Efficient Multiresolution Algorithms for Computing Lightness, Shape-from
Shading, and Optical Flow"). They participated in panel discussions on "The Management of
Uncertainty in Intelligent Systems" (in dealing with the fact that real-world information is
incomplete, inexact, and often incorrect, should one use "fuzzy logic," "standard probability
theory," or "evidential reasoning"?) or on "Paradigms of Machine Learning." But while the
scientists discussed the subtleties and difficulties that naturally surround an enterprise as
ambitious as artificial intelligence, the emissaries from the corporate and military worlds
were drawn to sessions on "AI in the Marketplace," or the "Strategic Computing Project," a
Pentagon effort to use vision and language-understanding systems to develop self-guided tanks,
automatic intelligent copilots for fighter jets, and expert systems for naval warfare. This latter
effort was euphemistically referred to as "real-time battle management."
But the division between the commercial and the theoretical was anything but clear-cut. In the
industrial exhibition hall, one recently formed company boasted that Patrick Winston, director
of MIT's AI lab, had helped develop the tutorial for a new version of Lisp they were marketing
for personal computers. Winston, however, is primarily a theoretician and a staunch supporter
of pure research. At the conference he served as leader of the panel on Machine Learning because
of his work in getting computers to learn by analogy (generalizing, for example, from a
simplified version of Shakespeare's Macbeth that a man with a greedy wife might want his
boss's job). More recently he had been designing a program that could learn the concept "cup" - a
liftable object with an upward-pointing concavity - and use it to recognize the myriad variations
that exist in the physical world.
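Stated as a program, the learned definition is little more than a conjunction of functional properties. The sketch below is an invented illustration, not Winston's system:

    # "Cup": a liftable object with an upward-pointing concavity.
    def is_cup(obj):
        return bool(obj.get("liftable") and obj.get("upward_concavity"))

    print(is_cup({"liftable": True, "upward_concavity": True}))   # a mug, a paper cup...
    print(is_cup({"liftable": False, "upward_concavity": True}))  # a bathtub: concave but not liftable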
While Winston was largely concerned with theory for its own sake, he hoped his work would
someday lead to expert systems that won't have to be spoon-fed knowledge but can discover it on
their own, much as human students do. But some scientists were afraid that with the growing
demands of the marketplace, basic research like Winston's was losing its appeal. Just as AI was
beginning to make some important breakthroughs, it was being robbed of some of its best
theorists. At the conference Douglas Lenat announced that he was leaving Stanford to become a
scientist for the Microelectronics and Computer Technology Corporation, a cooperative research
effort funded by such computer and semiconductor manufacturers as Control Data Corporation,
Digital Equipment Corporation, RCA, Motorola, National Semiconductor, and Honeywell. Each
year only about twenty-five to thirty students receive Ph.D.s in artificial intelligence, Bobrow
said. "Bell Labs would hire them all if they could. So would IBM. So who will go to the
universities? Some people claim that we're eating our own seed corn."
"Firms are using money simply to buy people," Feigenbaum admitted. "It's like movie stars and
baseball players. People who are called 'senior knowledge engineers' - that means they've done a
couple of systems (in the land of the blind, the one-eyed man is king) - senior knowledge
engineers are offered salaries of seventy, eighty, ninety thousand dollars a year. If they'll be
heading a group, it can go as high as a hundred thousand." But he believed that the shortage is
temporary. Enrollment in AI curriculums is increasing. And, as more AI software tools become available (products like ART and TIMM,
which make it easier to write AI programs), companies will be able to build expert systems
without hiring Ph.D.s. By developing such software, companies like Teknowledge and IntelliCorp
would help break the monopoly of what has been "an arcane, esoteric priesthood," Feigenbaum
said.
Others were not so optimistic.
"Businesses are constantly knocking at our door, asking us to do consulting, trying to steal our
students," complained Richard Granger, who, as director of a small Al program at the
University of California at Irvine, was using programs to explore his theory of how human
memory works. "It's hard to turn down a lot of money. People with a long history of doing good
work are now talking about how to build expert systems to find oil fields for Schlumberger.
There are a lot of people that's happening to - a scary amount.
"At the time I chose to go to graduate school in Al [1974-75] the field was almost completely
unknown. I had an opportunity to go to business school. I consciously made the choice of going for
what I thought was the ivory tower, to give up making a lot of money for doing what I loved. Now
I get huge consulting fees - more than I would have gotten if I had gone to business school. I'm
part of a whole generation of students who got into AI when it was unknown and now find that
they are catapulted into the limelight. Recently I had an offer that would have doubled my salary - and I'm not making peanuts. I spent a very bad weekend deciding to turn it down."
He was helped in his decision by knowing that he could continue to moonlight as a consultant.
Granger had learned that the commitment to pure research was rife with compromise. While he
and his colleagues accepted research money from the Defense Department, which funds the
majority of AI research, he worried that the Strategic Computing program was "a dangerous and
distorted thing. It's very sad to see ideas that could be put to good, peaceful uses used to find
better ways to kill people."
After touring the AAAI exhibition hall, computer scientist Douglas Hofstadter was especially
appalled. "You'd think," he complained, "that artificial intelligence had already been invented
and all that you had to do now was decide which machine to run it on." Hofstadter wrote the book
Gödel, Escher, Bach: An Eternal Golden Braid, which won the Pulitzer Prize for general
nonfiction in 1979, when he was thirty-four. Now he was involved in a project to get
computers to understand analogies and solve word-jumble puzzles, unscrambling MEPTUCRO,
for example, to find COMPUTER. He hoped to write a program that would do this not by
generating every possible combination of letters and checking them against a dictionary, but by
a process more akin to intuition. Hofstadter doubted that intuition was magic; therefore, there
must be some way it could be mechanized. He believed that understanding these largely
unconscious processes was more important to AI than developing expert systems. One of the
questions that most intrigued him was how people recognize letters when they can be written in
an almost endless number of ways. Why is A - whether cast in Gothic type,
Helvetica, or Bodoni, bold or italic, whether printed or written in cursive - still the letter A?
What, in other words, is A-ness? And how can a computer learn concepts like that? The
principal problem of AI, he once wrote, is, "What are the letters 'a' and 'i'?" The year before
the Austin convention he published a paper criticizing the field for ignoring such basic
problems, which led to his denunciation by such veterans as Carnegie-Mellon's Allen Newell.
Among his more conservative elders and peers, Hofstadter is considered a maverick. With his
thick dark hair and boyish grin, he has become a familiar presence at AI conferences. In his
quiet, thoughtful way, he serves as a lightning rod for a younger generation of researchers who
share the enthusiasm of AI's pioneers but who believe that the elders are pursuing their dream
in the wrong direction. Hofstadter is also controversial for another reason. An ardent supporter
of the nuclear-freeze campaign, he is one of the few researchers who do not take money from
the Defense Department.
All fields, of course, have their theoretical and applied sides, their philosophers and engineers.
But some participants in the Austin conference felt that these two forces were becoming
dangerously unbalanced, making AI an endangered species. At the same time that the quality and
amount of basic research were being threatened by the competition of the marketplace,
expectations were very high. The businesses and government agencies, which were investing
heavily in the field, expected fast results. So did the public, which was intrigued by the
futuristic scenarios reported in the press - sometimes exaggerated and distorted but often
accurate accounts of the scientists' own predictions or the claims of their companies' public-relations departments.
Some researchers were so concerned about the potentially disastrous effects of this combination
of enthusiasm, hoopla, and inadequate theoretical work that they anticipated an "AI winter,"
when disappointed supporters stopped funding research. At the closing session of the Austin
conference, "The Dark Ages of Al - Can We Avoid Them or Survive Them?", Yale's Roger Schank
made a rhetorical plea:
"Remember Al? ... the first conference and the second conference. We used to sit and argue about
things, not whether or not [our companies] should go public . . . . [W]e as an AI community have
forgotten that we're here to do science, and that we are nowhere near the solution. We used to sit
and fight about these things in public. Now we all sit and talk [as though] it's all solved and give
nice, slick talks with pretty slides .... But I'm very concerned about the fact that people don't
want to do science anymore. It is the least appealing job on the market right now .... It's easier
to go into a start-up company and build products; it's easier to go into a big company and have a
little respite and do some contract work; it's easier to do all those things than to go into a
university and try to organize an AI lab ... and sit there on your own, trying to do science. It's
difficult. But I can say that if we don't do that we'll find that we are in the dark ages of AI."
As founder of Cognitive Systems, Schank was hardly one to harp about
the taming effects of commercialization. In 1977, when neurologist and science writer Richard
Restak interviewed Schank for a book about the brain, the young AI scientist sported black
shoulder-length hair and a bushy beard. He looked, Restak wrote, like Rasputin. Six years
later, with hair and beard trimmed shorter and showing a bit of gray, Schank appeared on the
cover of Psychology Today: "Getting Computers to Understand English: Yale's Roger Schank
Enters the Marketplace." In 1984, at the age of thirty-eight, he was included in one of Esquire's
lists of America's up and coming, "The Best of the New Generation." But Schank recognized the
irony of his role as researcher - his language and learning work at Yale is as seminal as it is
controversial - and entrepreneur. He closed his speech with an attempt to explain why, in this
sense, he and much of the field have become "schizophrenic."
"It is incumbent upon Al, because we've promised so much, to produce - we must produce working
systems. Some of you must devote yourselves to doing that, and part of me devotes myself to
doing that. It is also the case that some of you had better commit to doing science, and part of me
commits to doing that. And if it turns out that our AI conference isn't the place to discuss
science, then we had better start finding a place where we can.
Because this show for all the venture capitalists is very nice . . . but I am concerned that the
people here who are first entering this field will begin to believe that a Ph.D. means building
another expert system. They're wrong."
In his speech, and in the way he has conducted his career, Schank captured the essence of what
is, after all, a very new science. The division between the pure and applied is not so much a
split as a polarity - the voltage that drives the field. Science and technology have always played
off against each other. Theories lead to applications, whose shortcomings suggest directions for
further research. Older sciences, like physics, have had time to divide between the theoretical
and practical. While nuclear physicists discover particles and seek elegant systems to explain
them, nuclear engineers apply some of what has been learned to building bombs and reactors. AI
is only now emerging, and it's moving so fast that the distinction between science and
engineering often blurs.
Philosophers and historians of science look at their subjects in retrospect. The field of
artificial intelligence provides an opportunity to observe, firsthand, a science in the making, to
study the ways idea and necessity - theory and application - intertwine. There is a self-consciousness to the field - a sense of being present at the birth of a new science that makes all
the frustrations and conflicts worth abiding. Randall Davis, a young professor at MIT's AI lab,
explained the allure.
"In college I was studying physics because it was the most fundamental thing there is, where you
figure out how the universe works and where it comes from. In some sense, at least, physics
underlies everything else. It's not enough, but it is, at some level, how the universe works. I got
as far as applying to graduate schools and being accepted at a bunch of different places before
discovering that it was too far to the frontiers. Physics is too old - it's two thousand years old, and
it takes you years to get to the frontiers. In
1970, when I went off to graduate school, it took about two weeks to get to the frontiers of
computer science." AI, he discovered, is as close to that edge as one can get.
It's not only the academicians who share this sense of being pioneers in an earth-shaking
venture. As chief scientist for Teknowledge, Frederick Hayes-Roth is largely concerned with
developing products. But he also feels that he is part of a larger effort that will lead to a day
when, through the use of computers, we have a more intelligently run world.
"I'm not aware of any great human thinking going on. We're not great thinkers," he said. "It's
very difficult for me to imagine what life would be like if every activity were guided by
intelligent decision making. Would the world be like it is now? I doubt it.
"I'll make much greater use of doctors when they're artificial than when they're real, because
I'll be able to get them on the telephone and talk to them - have meaningful interchanges with
them - and they won't be impatient and they won't think I'm an idiot. People should be able to get
much greater access to other scarce expertise. But of course people who are already getting
access to the best experts will probably have an overwhelming ability to exploit expertise. Will
this mean that access to good lawyers will be distributed more widely? Probably the answer is
yes, maybe through Sears. Sears may add paralegal or legal boutiques to their stores and then
they'll also put Sears terminals in your home. But how does that compare to the kind of legal aid
IBM will have? It could go either way. It depends on who brings this stuff to market. And who
they want to sell to. Either way, it should pose a threat to the guilds, to the professional
societies that currently protect the knowledge.
"In terms of what most of us experience day to day, I don't think Al will solve our interpersonal
problems. I don't think it will solve our marital problems or our adolescent drug problems. But
it should be very interesting. I mean this might be the difference between walking around in the
dark and walking around in the light, where the dark is what you get when you're basically
ignorant. And we're all basically ignorant. Because we have very limited experience, very short
lives, a low tolerance for learning and study. So imagine that in various ways we could get
access to real experts to tell us ways to do things that are much more effective than the ways we
stumble on ourselves. I think qualitatively life will be very different. One hundred years from
now we'll look back and think it is absolutely magical where we've gotten. The only thing that
frustrates me is that we're not getting there faster."
When they speculate about the future, AI researchers often switch into what might be called
science-fiction mode. No one imagines more intriguing scenarios than Edward Fredkin, a
computer scientist at MIT. Fredkin, a contemporary and longtime colleague of Minsky and
McCarthy, believes it is inevitable that we will be surpassed by the intelligence of our
creations.
"I just imagine that wherever there's any evolutionary process it culminates with artificial
intelligence. That's sort of obvious if you think about it for a while. We're this latest step - but
what are we? We're a creature that's mostly very similar to a monkey. We have the same
instincts as a monkey. But our brain is just over the threshold that barely lets us think. When
you get things that can really think, they'll put us to shame. One computer will be able to think
billions of times faster than all the thinking of all the people in the world. And it will have more
at its immediate beck and call in its memory. That represents another stage in evolution and it's
a remarkable evolution, because all the evolution up until now has proceeded according to very
strange rules. One of the sad things about the system we have now is that the child doesn't know
what the father learned unless the father laboriously teaches him. All the genes do is give the
design of the device.
"When Al systems construct a new computer system - a new artificial intelligence-they will
obviously tell it everything they already know. So every new creature born will know
everything that all the other creatures have learned. Instantly. They'll be born knowing it all
and start from there. This allows evolution to proceed at a pace that's totally different. People
think, 'Gee, it went so fast with humans, in fifty thousand years humans made loads of progress.'
In fifty thousand seconds AI will make much more progress than that. So it's going to be very
dramatic when it happens."
Eventually, he imagines, artificial intelligences will lead lives of their own.
"My theory is that after some initial flurry of them helping us and them being involved with us,
they won't have anything to do with us. Because they won't be able to convey to us what they're
doing or why. And the reason is simply that we won't be able to understand. If two of them are
talking to each other and you want to know what they're saying - well, in the time it takes you to
ask that question, one of them will have said to the other more words than have been uttered by
all the people who have ever lived on the planet. So now what does it say to you? It's discussed
every subject in such depth that no human could have touched it. So it will say something like,
'Well, you know, things in general.' What else is it going to say? You have to understand that to
it a conversation is like an Encyclopaedia Britannica every picosecond, a Library of Congress
every second. Hopefully, if we make AI right it will have a fond spot in its integrated circuits
for us. The systems may decide that they'll have to take some of our toys away from us, like our
hydrogen bombs. On the other hand, they'll basically leave us alone. They'll probably go off
someplace and - if I were a big computer I'd put myself in orbit and get some sunlight. If I needed
more energy I'd get closer to the sun; otherwise, farther away."
Perhaps, Fredkin said, in other parts of the universe, vast artificial intelligences already have
evolved.
"When you look out with a telescope, you see some very weird things in the sky." Recently, for
example, astronomers discovered a structure that
seems to be blowing giant "smoke rings" of stellar matter. "Those smoke rings are each larger
than our galaxy. Now the assumption of all astronomers is that whatever they see is some
natural thing just happening out there. My theory is that that's total baloney - that the
complicated things you see out there are for the most part something that someone's organized.
There's no reason why if you have great intelligence you're stuck with fiddling with the surface
of the planet."
But it's easy to get carried away. As Fredkin spoke, he was sitting in the living room of his
house in Brookline, Massachusetts, several miles from MIT, where Winston was trying to get a
computer to learn the meaning of a simple concept like "cup." Fredkin was involved in equally
basic endeavors. He walked up two flights of stairs to his computer room where, with a Lisp
machine loaded with Common Lisp, he was designing a learning program of his own - a simulation
of a mouse running a maze. Hiding within the corridors were dangerous creatures - cats, snakes,
mongooses - each represented by structures built from Lisp. From time to time food would
appear in various places and the mouse would try to get it without being eaten. Fredkin hoped to
develop the program into a general learning system. But at this point he was still typing in the
code, line by line, and looking for bugs to repair. Even with the accelerating evolutionary
process he envisioned, it would be a long way from mice and mazes to intergalactic smoke rings.
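A minimal sketch of the kind of program Fredkin described might look like the following Common
Lisp - written for this account rather than taken from his code. The maze, the mouse, and the
predators are ordinary Lisp structures and arrays, the food is a square in the grid, and a random
walk stands in for whatever learning strategy the finished program would use; all of the names
and numbers are illustrative assumptions.

;;; A sketch only, not Fredkin's code: the maze and its creatures as
;;; plain Common Lisp structures, with a random walk in place of learning.

(defstruct creature
  kind        ; :mouse, :cat, :snake, or :mongoose
  row col)    ; position in the maze grid

(defparameter *maze*
  ;; 0 = open corridor, 1 = wall
  (make-array '(5 5) :initial-contents
              '((0 0 0 1 0)
                (1 1 0 1 0)
                (0 0 0 0 0)
                (0 1 1 1 0)
                (0 0 0 0 0))))

(defparameter *mouse* (make-creature :kind :mouse :row 0 :col 0))

(defparameter *predators*
  (list (make-creature :kind :cat      :row 2 :col 4)
        (make-creature :kind :snake    :row 4 :col 1)
        (make-creature :kind :mongoose :row 2 :col 0)))

(defparameter *food* '(4 . 4))   ; (row . col) where food has appeared

(defun open-p (row col)
  "True if ROW, COL is inside the maze and not a wall."
  (and (array-in-bounds-p *maze* row col)
       (zerop (aref *maze* row col))))

(defun eaten-p (mouse)
  "True if the mouse shares a square with any predator."
  (some (lambda (p)
          (and (= (creature-row p) (creature-row mouse))
               (= (creature-col p) (creature-col mouse))))
        *predators*))

(defun step-mouse (mouse)
  "Move the mouse one square at random - a stand-in for a learned policy."
  (destructuring-bind (dr dc)
      (nth (random 4) '((1 0) (-1 0) (0 1) (0 -1)))
    (let ((new-row (+ (creature-row mouse) dr))
          (new-col (+ (creature-col mouse) dc)))
      (when (open-p new-row new-col)
        (setf (creature-row mouse) new-row
              (creature-col mouse) new-col)))))

(defun run-trial (&optional (max-steps 100))
  "Let the mouse wander until it finds the food, is eaten, or time runs out."
  (dotimes (step max-steps :timed-out)
    (declare (ignorable step))
    (step-mouse *mouse*)
    (cond ((eaten-p *mouse*) (return :eaten))
          ((and (= (creature-row *mouse*) (car *food*))
                (= (creature-col *mouse*) (cdr *food*)))
           (return :fed)))))

Calling (run-trial) lets the mouse wander until it is fed, eaten, or out of moves; a general
learning system of the sort Fredkin had in mind would replace the random step with a policy that
improves from one trial to the next.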
But that's the nature of AI. The excitement is not so much in what has been accomplished as in
what has yet to be done. And there is satisfaction in working on one small piece of a puzzle whose
shape is continually unfolding.