Computers at the dawn of creativity
Ada, Countess of Lovelace, certainly knew how to put computers in their place. Writing in 1843 about her friend Charles Babbage's proposed Analytical Engine, the first computer, she said: 'It is desirable to guard against the possibility of exaggerated ideas that might arise as to the powers of the Analytical Engine. The Analytical Engine has no pretensions whatever to originate anything. It can do (only) whatever we know how to order it to perform.'

One hundred and fifty years later the countess's views might still seem a model of common sense. Computers now perform billions of calculations per second against the few per minute that Babbage's machine would have been capable of, had he ever built it. But computers still do no more than follow the rules programmed into them.

Does that mean computers are forever condemned to be no more than the dumb slaves of humans, who alone possess the powers of creativity? Not at all, according to one small school of researchers, who want to put a spark of creativity into a computer's brain. Other researchers have persevered for years in their search to extract creativity from a computer - and some have even claimed to have seen it. Elsewhere, creativity itself is not the goal, rather the hope that studying the limitations of a computer will cast light on the processes within the mind of a human artist. And some researchers are now scoring successes with a new line of research that gives computers the links which allow creative humans to integrate seemingly unconnected ideas into a new whole.
Mindlessly breaking the rules is not enough, however: a close inspection of any creative work will show the presence of some constraints. In music, for example, most postmodern composers and jazz improvisers do conform to at least some of the traditional rules of harmony and chord progression. Those who apparently defy these rules are in fact sticking to their own constraints, which include avoiding traditional musical forms.
Douglas Hofstadter of Indiana University points out that without some constraints the result is not necessarily better ideas, just more complex ones. The ludicrous wheels-within-wheels of the Ptolemaic model of the Solar System show the dangers.
Philip Johnson-Laird, a psychologist now at Princeton University, has built a program that can improvise modern jazz. Like many of the creative computer projects, this work was a spin-off from research into human creativity. Johnson-Laird had been investigating the memory capacity of jazz players.
At first glance, the creative act of improvisation seems to demand the rapid manipulation of complex chord sequences - acts requiring huge computational and memory resources. But the apparent ease with which good jazz players do their stuff suggested to Johnson-Laird that perhaps some simple rules could do the trick. Using Lisp, a programming language long favoured by AI researchers, he wrote a rule-based program for manipulating chord sequences and melodies which didn't demand gigabytes of memory.
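The flavour of such a rule-based improviser can be suggested by a minimal sketch - in Python rather than Lisp, with a hypothetical chord table far cruder than Johnson-Laird's rules. A handful of allowed chord transitions, walked at random, generates an endless variety of harmonically legal sequences without any large store of memorised music:

```python
import random

# A minimal sketch (not Johnson-Laird's actual program): a tiny rule table
# of allowed chord transitions in C, walked at random to improvise a
# sequence. The rules, not a vast memory of stored solos, do all the work.
TRANSITIONS = {
    "Cmaj7": ["Dm7", "Am7"],   # tonic can move to ii or vi
    "Dm7":   ["G7"],           # ii resolves to V
    "G7":    ["Cmaj7"],        # V resolves to I
    "Am7":   ["Dm7"],          # vi moves to ii
}

def improvise(start="Cmaj7", length=8, seed=None):
    """Generate a chord sequence by following the transition rules."""
    rng = random.Random(seed)
    sequence = [start]
    for _ in range(length - 1):
        sequence.append(rng.choice(TRANSITIONS[sequence[-1]]))
    return sequence

print(improvise(seed=1))
```

Because the choices are random, the same starting chord need not produce the same sequence twice - yet every transition obeys the rules.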
The product of this program, in the form of chord sequences or bass lines, could be fed into a synthesiser and played back. According to Johnson-Laird, himself a keen jazz pianist, the improvisations are often rather grey. Even so, he believes they are up to the standard of a competent beginner - which, considering the relative crudeness of the approach, is very surprising.
But is this creativity? Certainly the program does, in some sense, expand its own field of endeavour: it does not need constant outside supervision, and feeding in the same starting chord sequence does not guarantee the same result. Also, its 'creations' meet the constraints of standard music theory. So, although it may not be about to tour with Wynton Marsalis, it would be hard to deny that it is in some sense creative.
Even more impressive evidence for the claim that computers can be creative is an art-generating program developed by Harold Cohen, the Sixties abstract artist turned computer programmer, who now works in the computer science department at the University of California, San Diego. For almost twenty years Cohen has groomed AARON, a set of programs to produce 'drawings' and 'paintings' (see 'Winner takes all') of ever more sophisticated quality. At first AARON merely produced line drawings, but over the years Cohen has refined the programs to experiment with colour.
Like Johnson-Laird, Cohen's aim was not to build a creative computer, but to cast light on the processes within the mind of a human artist. To this end, he has spent years building up a vast Lisp-based array of rules of drawing, composition and perspective that the program can exploit.
Cynics may ask whether AARON is any more creative than an intelligent paintbox program that prompts human artists through a series of stored templates. In reply, Cohen points to AARON's endless ability to surprise, and the fact that it never comes up with the same work twice. It doesn't even need any inputs, but will happily produce some artistic creation without detailed prompting. So, again, it is hard to deny that AARON is, in some sense, creative.
What it cannot do, however, is to break the bounds set by its rules. In its present state, AARON could not elect to subvert reality, like Dali, or create a new reality, like Picasso. Cohen himself fights shy of the 'creative' label for AARON. He believes the program will deserve that accolade only when it shows some signs of artistic development, of creating something today that it could not have done a year ago. How that might be achieved is, as yet, unclear: formulating some kind of metarules is one possible route.
At first sight computer researchers have done even better at capturing creativity in science than they have in the arts. In the mid-1970s, a young computer scientist at Stanford University named Douglas Lenat created a stir by unveiling AM, a program said to discover fundamental results in mathematics. Again based on the Lisp programming language, AM had been provided with a collection of very basic concepts drawn from mathematical set theory, such as 'intersection' and 'union', combined with about 200 'heuristics' - or rules - for such tasks as proposing new things to do, checking truths and spotting regularities. Once set running, AM used these heuristics to look for new concepts that emerged, and decide what constituted an 'interesting' discovery.
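AM's actual heuristics numbered about 200 and operated on Lisp concept-frames; the following Python toy (all names invented) conveys only the flavour of the loop: apply a heuristic to known concepts, then test the new concept for a regularity worth flagging:

```python
# Toy AM-style discovery loop (hypothetical, not Lenat's code): start with
# set-theory primitives and one heuristic -- "specialise a binary operation
# by forcing its two arguments to be equal" -- then flag the new concept if
# a regularity shows up across the examples.
concepts = {
    "union": lambda a, b: a | b,
    "intersection": lambda a, b: a & b,
}
examples = [({1, 2}, {2, 3}), ({1}, {4, 5}), (set(), {7})]

def specialise(name, op):
    """Heuristic: apply op to identical arguments, making a unary concept."""
    return f"self-{name}", lambda a: op(a, a)

discoveries = {}
for name, op in concepts.items():
    new_name, new_op = specialise(name, op)
    # Regularity check: does the specialised concept just return its input?
    if all(new_op(a) == a for a, _ in examples):
        discoveries[new_name] = "identity-like"  # e.g. the union of A with A is A
print(discoveries)
```

Both specialisations turn out to be 'identity-like' - the kind of small but genuine regularity (A united or intersected with itself is just A) that AM's interestingness heuristics were built to notice.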
VOYAGE OF REDISCOVERY
Among AM's most celebrated finds was the observation that every even number greater than two appears to be the sum of two primes. This is Goldbach's conjecture, a famous unproven idea in number theory first put forward by a Prussian mathematician in the 18th century. Like Goldbach, AM itself failed to prove the conjecture; indeed, shortly after finding the conjecture, its font of ideas dried up. Yet simply discovering it was an achievement that had once won immortality for a human. AM, it seemed, was proof that computers were capable of top-flight scientific creativity.
Inevitably, all was not as it seemed. In 1984, Graeme Ritchie, now a leading AI researcher at the University of Edinburgh, and Keith Hanna, a researcher in the department of electrical engineering at the University of Kent, published a study of AM's supposed operation. They concluded that AM's achievements were genuine, but their real origin was unclear. Ritchie and Hanna found that AM may have benefited from ideas unconsciously built into its code by Lenat which were designed to help it discover new ideas, and also from 'outside help' from Lenat in identifying 'interesting' ideas.
Lenat himself added a further fundamental criticism. Lisp, the language used in AM, has its origins in the so-called lambda calculus - itself invented to probe the foundations of mathematics.
Lenat came to suspect that AM's mathematical successes were chiefly the result of the strong correlations between Lisp and mathematics. Lenat pointed out, however, that this is not necessarily a bad thing. Arguing that the ends justify the means, he and others have seen Lisp as a source of power in computer creativity. Lenat himself went on to develop Eurisko, a program which, unlike AM, could evolve new rules for discovery as it went along. Eurisko proved particularly effective when overseen by a human able to weed out duff ideas. Helped by Lenat, Eurisko even won a war game, using a strategy based on small, fast-moving attack vessels. Human competitors thought it ludicrous - until Eurisko beat them all. Yet in 'serious' mathematics Eurisko got no further than AM. This is probably because it lacked a large enough base of ideas on which to draw - a factor which seems crucial to major breakthroughs in mathematics.
But history also shows that mathematicians and scientists in general have often made spectacular discoveries by scouring not the foundations of their subject, but observational data. Pat Langley, a researcher in the psychology department at Stanford University in California and an eclectic team made up of computer scientists, another psychologist and a philosopher, have taken this different route to computer creativity. Since the early 1980s, they have worked on a family of rule-based programs which analyse experimental results and deduce the general principle behind them. One such program, called Bacon - after the pioneer of empirical science Francis Bacon - can take numerical data and deduce the mathematical law connecting them. Another, named Glauber, takes qualitative results from a chemistry experiment and deduces general principles, such as 'acid plus base gives a salt plus water' from the information.
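The spirit of Bacon's data-driven approach can be suggested by a toy search - not Langley's code, and with the search space pared down to small integer powers. Given planetary data, the program hunts for exponents that make a ratio of the two quantities constant, rediscovering Kepler's third law:

```python
# A toy Bacon-style search (illustrative only): given orbital data, look
# for small integer exponents p, q such that period**p / distance**q is
# constant across the planets -- Kepler's third law, which the real Bacon
# program famously rediscovered.
data = [  # (orbital distance in AU, period in years): Earth, Jupiter, Saturn
    (1.0, 1.0),
    (5.20, 11.86),
    (9.54, 29.46),
]

def find_law(data, max_exp=3, tol=0.01):
    """Return (p, q) making period**p / distance**q constant, or None."""
    for p in range(1, max_exp + 1):
        for q in range(1, max_exp + 1):
            ratios = [period**p / dist**q for dist, period in data]
            if max(ratios) / min(ratios) < 1 + tol:
                return p, q
    return None

print(find_law(data))  # → (2, 3): period squared varies as distance cubed
```

The real program's heuristics were subtler - forming products and ratios of terms and watching how they trend - but the principle is the same: the law is deduced from the numbers alone, with no physics built in.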
The result of such analysis, which would be quite beyond any human, could be the discovery of new chemical reactions, better economic forecasts, even new laws of nature. Yet by their very design, all these programs are inherently incapable of breaking out of 'standard' ways of thinking - the hallmark of the ultimate type of creativity. They can only ever be Balmer-level creative.
For many AI researchers, this is enough. Much of the research into computer creativity aims only to cast some light on a small aspect of human creativity. Even so, the idea of genuine Newton-level creativity from a computer continues to tantalise, and it has led some AI researchers to move away from rigid programs based on 'discovery rules'. Instead, they are aiming to create something more flexible, a program allowing potentially creative ideas to emerge, be tested out, modified and then pursued to greater truths.
Just such an approach has been attempted by Derek Partridge of the University of Exeter and Jon Rowe, now at the University of Buckingham. Their starting point has been the 'Society of Mind' concept of AI guru Marvin Minsky at the Massachusetts Institute of Technology. Roughly speaking, this is based on the idea of forming different representations of a problem, and then bringing together those that were useful in solving some part of it. These collections are then used to solve more of the problem, until, guided by their success or otherwise, a way of tackling the whole problem emerges.
The approach chimes with accounts of breakthroughs by humans. In 1880, the great French mathematician Henri Poincaré was struggling to find a proof for a theorem involved in solving general algebraic equations. Day after day he would engage in a few hours of futile calculation, then give up. Then one evening, the answer came.
'Contrary to my custom, I drank black coffee and could not sleep,' he recalled. 'Ideas rose in crowds. I felt them collide until pairs interlocked, so to speak, making stable combinations.' By the next morning, he could see his way to the proof - all he had to do was write down the results.
A crude simulation of this process has been captured in the Genesis program, developed by Partridge and Rowe as a way of implementing Minsky's concept of the Society of Mind. They used the 'emergent representation' idea to play a card game in which the players have to discover the rules being used by the dealer. It turned out to work surprisingly well: the Genesis program worked out the rules as more cards were dealt and showed some signs of creative thinking en route, such as changing tack to avoid getting stuck in ruts. As more information arrived, the representations evolved and sometimes underwent radical changes - producing far better results. This is perhaps the nearest a computer has yet come to mimicking the 'Aha!' of discovery - that moment when barriers suddenly fall.
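Genesis itself evolves its representations rather than merely filtering them, but the card-game task can be suggested by a much cruder hypothesis-elimination sketch (rules and cards invented for illustration): keep a pool of candidate rules and discard any that disagree with the dealer's verdict on each card:

```python
# A far cruder sketch than Genesis (which evolves emergent representations):
# hold a pool of candidate rules and discard any rule that disagrees with
# the dealer's accept/reject verdict as each card is played.
CANDIDATE_RULES = {
    "red only":   lambda card, prev: card[1] in "HD",        # hearts/diamonds
    "ascending":  lambda card, prev: prev is None or card[0] > prev[0],
    "even ranks": lambda card, prev: card[0] % 2 == 0,
}

def play(deals):
    """deals: list of (card, accepted) pairs; a card is (rank, suit)."""
    alive = dict(CANDIDATE_RULES)
    prev = None
    for card, accepted in deals:
        alive = {name: rule for name, rule in alive.items()
                 if rule(card, prev) == accepted}
        if accepted:
            prev = card  # only accepted cards extend the sequence
    return set(alive)

# Dealer's secret rule: each accepted card must outrank the last one.
deals = [((3, "H"), True), ((7, "S"), True), ((5, "D"), False)]
print(play(deals))  # → {'ascending'}
```

Three deals suffice to eliminate the wrong hypotheses here; the harder trick, which Genesis attempts, is to build new candidate representations when none of the existing ones fits.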
At Yale University, AI researcher David Gelernter believes we should take accounts of human creative inspiration even more seriously. He points out that such accounts often mention that the breakthrough occurred when the mind was least focused ('Trainers for your brain', New Scientist, 3 September). At such times, says Gelernter, the mind conjures up memories associated not with logical processes, but emotional experience. This 'affect linking' of memories, as Gelernter calls it, can bring together apparently 'illogical', unconnected ideas - and lead to radical breaks with old ways of seeing a problem.
On the face of it, medical diagnosis may not appear to be the best test of creativity. Indeed, most patients would be happier if their doctors stuck to tried and tested rules rather than inspiration. But Gelernter points out that in some ways this is the advantage of testing the idea with medical diagnosis. Rule-based systems that carry out this task have been around for years, so it is easier to see if the program has provided 'creative' input or not.
When it is presented with a specific medical case, the current version of Gelernter's program, FGP, 'files' the case among all its other memories like a single 'sheet' in a pile of transparencies. The program then 'peers through' the whole pile, looking for commonalities. Its final diagnosis is then based on what the program has seen in the past. This gives FGP the ability to learn from past experience, and to back its diagnoses by citing specific cases - features giving it advantages over conventional, rule-based diagnostic programs. But Gelernter sees this as just the start: he is now trying to equip FGP with some sense of affect linking, in the hope that this will enable it to come up with the occasional brilliant insight as well as routine decisions.
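The 'pile of transparencies' idea amounts to similarity-based retrieval over stored cases. A minimal sketch (the cases and symptom names are invented, and this is not Gelernter's FGP code) scores each remembered case by the features it shares with the new one, then lets the closest cases vote:

```python
from collections import Counter

# A minimal sketch of 'peering through the pile': score each stored case
# by how many symptoms it shares with the new case, then let the closest
# cases vote on a diagnosis. (Hypothetical toy data, not FGP itself.)
CASE_BASE = [
    ({"fever", "cough", "aches"}, "flu"),
    ({"fever", "cough"}, "flu"),
    ({"sneezing", "cough"}, "cold"),
    ({"rash", "fever"}, "measles"),
]

def diagnose(symptoms, k=2):
    """Rank stored cases by overlap with the new one; top k cases vote."""
    ranked = sorted(CASE_BASE,
                    key=lambda case: len(symptoms & case[0]),
                    reverse=True)
    votes = Counter(label for _, label in ranked[:k])
    return votes.most_common(1)[0][0]

print(diagnose({"fever", "cough"}))  # → flu
```

Unlike a rule-based diagnostic system, such a program can justify its answer by pointing at the specific past cases that carried the vote.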
But will these new 'rule-free' approaches give us a computerised Newton, capable of the very highest forms of creativity? The answer, ultimately, is a matter of definition. While some would argue that programs like AARON are genuinely creative, no one would claim that they approach the level of the great human creators in their respective fields. Even if a computer does approach a human level of creativity, the history of technology suggests that the first insightful statement generated solely by computer will be derided by pundits and public alike. Such doubters, however, should ask themselves a question: 'If you're so smart, how come you didn't think of it?'
Winner takes all (box)
In 1950, the mathematician and code-breaker Alan Turing published his famous test for deciding whether or not a computer could be deemed to be 'thinking'. You simply ask the computer as many questions as you like. If, by the end of the interrogation, you are unable to tell whether the responses were any different from what you might expect from a human, then, argued Turing, the computer must be deemed to be 'thinking'.
If the interrogation stuck to questions about mathematics or logic, even today's computers might pass the Turing test. But how many would get past a request like 'Come up with a joke'? Are there any programs capable of fooling a human interrogator with their responses to such requests? Now you can decide: below are two examples of both human and computer creativity in different fields. Can you tell who, or what, did which?
1 A sense of humour: At Edinburgh University, AI researcher Kim Binsted has developed Jape-1, a program for telling jokes. The program builds up the jokes according to simple 'templates', such as 'What do you get if you cross an X with a Y ?', and chooses words for X, Y and the pay-off word Z according to properties of the words, such as their sound and associations. Can you spot the Jape-1 jape, and the two from The Crack-a-joke Book by human joke-merchant Kaye Webb?
A What do you give a hurt lemon? Lemonade.
B What kind of tree can you wear? A fir coat.
C What runs around a forest making other animals yawn? A wild boar.
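The template trick behind such jokes can be suggested by a toy sketch (the tiny lexicon is invented, not Binsted's): each word carries an association, and a joke fires when two associations glue together into a known compound phrase, giving the pun its pay-off:

```python
# A toy sketch of Jape-1-style template filling (invented lexicon, not
# Binsted's program): a joke is emitted when the associations of two
# animals combine into a recognised compound phrase.
ASSOCIATIONS = {
    "sheep": "woolly",
    "kangaroo": "jumper",   # kangaroos jump; a 'jumper' is also a sweater
    "cow": "milk",
}
COMPOUNDS = {"woolly jumper"}  # phrases the pun can land on

def cross_joke():
    for x, a in ASSOCIATIONS.items():
        for y, b in ASSOCIATIONS.items():
            if x != y and f"{a} {b}" in COMPOUNDS:
                return (f"What do you get if you cross a {x} "
                        f"with a {y}? A {a} {b}.")
    return None

print(cross_joke())
```

The real program's word properties - sounds, spellings, associations - are richer, but the principle is the same: the humour emerges from the lexicon, with the template doing the scaffolding.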
2 Philosophical insights: In 1984, the New York-based writer and computer programmer William Chamberlain published 'The Policeman's Beard is Half-constructed', the collected works of Racter, a program that uses the basic rules of syntax to bolt together random words and phrases. Some of Racter's output is eerily reminiscent of the musings of French philosophers. Can you decide which of these quotes is from Racter, and which are pearls of wisdom from the influential French social philosopher Simone Weil?
A 'Distance is the soul of reality'
B 'Reflections are images of tarnished aspirations'
C 'Love is not consolation, it is light'
(Answers below:)
1 B is by Jape-1
2 B is by Racter
Robert MATTHEWS is science correspondent of The Sunday Telegraph.
Further reading: The Creative Mind by Margaret Boden (Cardinal 1992); Computers and Creativity by Derek Partridge and Jon Rowe (Intellect 1994); The Muse in the Machine by David Gelernter (Fourth Estate 1994).
New Scientist Volume 144. Issue 1955.
© Copyright IPC Magazines 1995 / 31 August 1995 issue