For centuries, if not millennia, humanity has concerned itself, in fiction and often in engineering, with the creation of devices meant to mimic human behaviour, or to behave in a seemingly intelligent way. Even before science fiction brought the ideas of superhuman robots and megalomaniacal computers to the masses, people built bowing statues, letter-writing dolls, and player pianos in an effort to make the inanimate seem somehow more human.
Once fiction got hold of the idea, bookshelves filled with ideas that, until recently, were entirely speculative, with no hope of being realised. Bruce Mazlish, at Stanford University, has written an excellent article that further discusses the history of automata and other AI in culture and literature. The article helps the reader appreciate the vast cultural impact that work in AI can have on society; it is well researched and documented, as well as being an interesting read.

The Modern Birth of AI
Like many technological advances of this century, the development of computer science began as a military effort. In the early 1940s, both Germany and the United States were racing to produce electronic computers that could be used in ballistics calculations or in deciphering coded messages. It wasn't until after the war that computing facilities, often several rooms in size, could be spared for less essential tasks. The excess computing power after the war, coupled with some major advances in the design of the machines, provided fertile ground for exploring more esoteric ideas for computing.
Among the first researchers to attempt to build intelligent programs were Newell and Simon. Their first well-known program, Logic Theorist, proved (or attempted to prove) statements using the accepted rules of logic and a problem-solving procedure of their own design. Their results were promising: not only could Logic Theorist reproduce many of the proofs that humans had developed, but in the case of one theorem it actually produced a better (i.e. shorter, more direct) proof than the one commonly found in logic textbooks.
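The flavour of this kind of rule-based proving can be conveyed with a minimal sketch. The axioms, rules, and implementation below are made up for illustration; they are not a reconstruction of Logic Theorist itself. The idea is simply to apply modus ponens (from P and P implies Q, derive Q) until the goal is derived or no new facts appear:

```python
def forward_chain(axioms, rules, goal):
    """axioms: set of known propositions.
    rules: list of (premise, conclusion) pairs representing P -> Q.
    Returns True if the goal becomes derivable."""
    known = set(axioms)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            if premise in known and conclusion not in known:
                known.add(conclusion)   # modus ponens
                changed = True
    return goal in known

# Toy example: from A, and the rules A -> B and B -> C, conclude C.
print(forward_chain({"A"}, [("A", "B"), ("B", "C")], "C"))  # True
```

Even this toy version hints at the search problem real provers face: with many rules, blindly deriving everything derivable explodes quickly, which is why Newell and Simon's program relied on heuristics to guide its search.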
The flurry of enthusiasm that emerged from Logic Theorist's humble accomplishments was astounding. In the summer of 1956, within a year of Newell and Simon's accomplishment, John McCarthy organised "The Dartmouth Summer Research Project on Artificial Intelligence" (it was McCarthy who coined the term 'artificial intelligence'). This conference is considered an important milestone in the development of AI.

Early Successes... and Failures
Hubert Dreyfus, a prominent critic of the AI movement, notes that almost all early work in AI was concerned with one of three areas: language translation, problem solving, or pattern recognition. In each area there were significant early successes. Language translation was an early leader: Dreyfus estimates that in the first ten years of the research program (that is, from 1955 to 1965) five governments spent over $20 million on its development. By the late fifties, programs existed that could do a passable job of translating technical documents, and it was seen as only a matter of extra databases and more computing power to apply the techniques to less formal, more ambiguous texts. In reality, the programs simply could not scale up in the expected ways.
In spite of journalistic claims at various moments that machine translation was at last operational, this research produced primarily a much deeper knowledge of the unsuspected complexity of syntax and semantics. (Hubert Dreyfus, "What Computers Can't Do", p. 92)

One fundamental problem was the inability to mimic the human capacity to use context to disambiguate the meanings of words and sentences. By the mid-sixties, the program had been substantially scaled down. Although more recent research in this area has produced some successes, machine translation is still considered one of the great failures of early AI, and one of the first signs of over-optimism.
Work in problem solving followed the same course of early success and eventual failure that Dreyfus later argued was at the heart of almost every AI attempt. Most problem-solving work revolved around the work of Newell, Shaw and Simon on the General Problem Solver, or GPS. The General Problem Solver employed abstract problem-solving rules (e.g. "If you can, always try to reduce differences between your current state and your goal state") to solve a wide variety of problems. The rules, most of which were heuristic in nature (i.e. they were good rules of thumb, rather than perfect, tedious algorithms), were postulated to be the same as those employed by human problem solvers. Over-optimism was also pervasive in this area. Unfortunately, the GPS did not fulfill its promise, and not because of some simple lack of computing capacity, but rather because of deep theoretical problems. While it could solve some problems with its general techniques, all but the simplest eluded it. The fundamental problem is that general problem-solving strategies are limited. People use domain-specific knowledge and skills to solve problems in different contexts, and in general, domain-specific skills and knowledge do not generalise across areas. For example, knowledge relevant to chess will not be useful in physics. What the GPS had were broad, weak strategies. Strengthening its abilities would mean adding domain-specific knowledge for every possible problem area - an impossible task. In 1967, roughly ten years after it began, Newell announced that the GPS program was being abandoned.

The last of the three early areas of AI was pattern recognition. Once again, there were some early successes. Computers were built that could handle Morse code, provided there was very little line noise and the sender was another computer or a very efficient and precise human operator.
There were even programs that could, with a reasonable (though not human-level) degree of accuracy, decipher handwriting (or at least the letter 'A') in various styles and orientations. None, however, did so by way of some fundamental discovery in pattern recognition. Instead, they used ultra-specific and inflexible templates, and were defeated by any significant distortion of the data. They were also incapable of resolving ambiguity or utilising context.
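The brittleness of the template approach is easy to demonstrate. The sketch below uses a hypothetical 5x5 bitmap of the letter 'A' (not any historical system's actual data) and matches it pixel-for-pixel against an input image; shifting the pattern by a single pixel is enough to defeat recognition:

```python
# Hypothetical 5x5 template for the letter 'A' ('#' = ink, '.' = blank).
TEMPLATE_A = [
    "..#..",
    ".#.#.",
    "#####",
    "#...#",
    "#...#",
]

def matches(image, template):
    """Exact pixel-by-pixel comparison: no tolerance for distortion,
    translation, rotation, or noise."""
    return image == template

exact = list(TEMPLATE_A)
shifted = [row[1:] + "." for row in TEMPLATE_A]   # shift left by one pixel

print(matches(exact, TEMPLATE_A))    # True
print(matches(shifted, TEMPLATE_A))  # False: a one-pixel shift defeats it
```

A human reader recognises the shifted 'A' without effort; the rigid template does not, which is precisely the gap between these early systems and any genuine theory of pattern recognition.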
From overwhelming optimism, the field was reduced to expressions like the following, from Vincent Giuliano, a researcher in the area:
Alas! I feel that many of the hoped-for objectives may well be porcelain eggs; they will never hatch, no matter how long heat is applied to them, because they require pattern discovery purely on the part of machines working alone. The tasks of discovery demand human qualities. (As cited by Dreyfus)