The workplace and the school are potent arenas where creativity is discouraged. In most jobs, daydreaming (or creative thinking) is disparaged as a waste of time. Schoolchildren know that the teacher expects the "Right Answer" and will punish them unless they produce it on the test. They begin to think that a Right Answer always exists. The message is that it does not make sense to waste time trying to be creative. Instead, one should memorize the answers others have provided. Only the few who manage to climb up onto the pedestal of creativity need worry about generating new answers.
This message is false. The world is not full of standard problems amenable to standard solutions. Everybody needs to be somewhat creative simply to get through a typical day and deal with the innumerable shifts from the ordinary that arise. These small acts of creativity, though they differ in scope, are not different in kind from the brilliant leaps of an Einstein. Creativity is commonplace in cognition, not an esoteric gift bequeathed only to a few.
The social cost of this false message is immense. Because of it, many people do not take advantage of their potential for creativity. Living in an environment that exalts the creativity of a few while discouraging it in most, they internalize the belief that they simply do not have the ability to be creative. Or, they become fearful, afraid of taking the risk involved in being creative.
The scientific cost is similarly large. For much of the history of AI and cognitive science, creativity was viewed as an esoteric and perhaps somewhat magical process that was above and beyond "normal" processing. As a result, few researchers have risked tackling it.
Instead of being banished to the untouchable heights of cognition, creativity belongs squarely in its center. Far from being esoteric, creativity arises from relatively simple mental processes. Far from being magical, it depends on pre-existing, though complex, mental structures. The creative process is not above and beyond "normal" reasoning, but rather is central to it.
Besides being interesting in its own right, creativity is important to study for extrinsic reasons. In order to construct intelligent machines, we need to understand learning. And to build viable learning machines, we need to build machines that can be creative. Creativity is a critical component of the learning process.
What is creativity? It is the "intelligent misuse" of the knowledge structures underlying routine cognition. People depend on "scripted" knowledge in much of what they do. They can use a pay phone, for example, because they have dealt with pay phones many times before. They have developed an internal script which tells them, among other things, to pick up the receiver before putting in money. But people are also able to function in situations in which they either have no scripts or they want to look beyond them. How do people operate when their scripted knowledge does not directly apply? The answer is that they find some knowledge that does not quite apply and then see how they can modify it. In other words, they intelligently misuse it.
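The notion of a script can be made concrete with a small sketch. The representation below is an illustrative invention of ours, not the structure used in any actual system: a script is reduced to an ordered list of expected steps, and the names (PAY_PHONE_SCRIPT, next_step) are hypothetical.

```python
# A hypothetical, minimal rendering of "scripted" knowledge: a script is an
# ordered sequence of expected steps, and routine processing just asks for
# the next step that has not yet been performed.

PAY_PHONE_SCRIPT = [
    "pick up receiver",
    "insert money",
    "dial number",
    "talk",
    "hang up",
]

def next_step(script, done):
    """Return the next expected action, given the steps already performed."""
    for step in script:
        if step not in done:
            return step
    return None  # the script has run to completion
```

On this rendering, routine competence is simply a walk down the list; the interesting questions arise when no stored script fits the situation.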
We are not proposing that to be creative, people simply loosen the constraints they use when searching for and applying knowledge. A system which worked in this way would not be creative, but would instead progress from schizophrenia (as it leapt from one random idea to another) to catatonia (as it found itself buried under a combinatorial avalanche of attempts). Creativity may be hard work, but it is also knowledge-based. A creative system must know which knowledge structures to retrieve and how to modify them. To paraphrase Thomas Edison, "creativity is 99% perspiration and 1% inspiration."
In order to build a cognitive theory of creativity, we should not stop at the boundaries of the individual when asking what a creative act is creative with respect to. People use a variety of mental constructs in routine understanding and they intelligently misuse a similar variety in creative thinking. When we want to explain how some agent generated some creative act, we should ask ourselves: "With respect to what knowledge structures inside the agent is the act creative?" We should ask which types of knowledge structures were applied directly in the act, and which types were intelligently misused.
By taking this stance, we can understand that creativity, instead of being an all-or-nothing affair, comes in different types and levels. It is one type of creativity when someone responds to a new interruption in one of their standard routines with a type of repair they have used many times before. An office worker faced with a failure of the phone network, for example, might decide to go to the corner store to make a call by applying the standard repair of finding a different source for an interrupted resource. It is another type of creativity when someone invents a type of repair which is new to them. The same office worker might puzzle for a while and generate a new perspective, such as considering the situation to be a blessing in disguise. The worker might then use the lack of interruptions to get some work done. The worker did different processing in each case and engaged in different forms of creativity. The fact that either of these plans seems like creative behavior is precisely the point. The creation of new plans by adapting old ones is what creativity is all about.
Creativity does not flow from some elegant, unitary process. Rather, there is a different type of creativity for each type of knowledge structure that we are able to use for a purpose beyond that for which the structure was built. In other words, creativity is a set of processes each of which is creative with respect to some specific type of knowledge structure. Because people are able to use a number of different types of knowledge structures in a variety of ways, creativity can be achieved through a number of different processes. A theory of creativity should describe these processes and the knowledge structures each employs.
It is only when routine knowledge structures fail that we need to be creative. If my car died on a major street, if my planned route were clogged with construction, or if the driver in the car next to me began to make frantic hand signals that I could not interpret, then I would need to be creative.
At least, I would need to be creative if I had not experienced such problems many times before. If I had, I would be able to handle them routinely. People have knowledge structures that lay out plans for the goals they frequently encounter as well as typical paths for those plans to follow. If someone simply wants to make sense of his world and never encounters events out of the realm covered by his pre-coded knowledge structures, then he need never be creative.
But, what happens when something new or unexpected happens? What happens if I do not happen to know much about auto mechanics but am called on to restart a stalled car in the middle of traffic? I am faced with an anomaly, something that my top-down routine structures do not tell me how to handle. Such an anomaly indicates that I have a missing element in my knowledge or a flaw in my beliefs about the world. In order to make sense of the world, repair my beliefs, and expand my knowledge, I should try to explain the anomaly. The processes of anomaly detection and explanation are the link between routine and creative processing.
How do we perform such explanations? Often, we can apply routine explanations that we have used before for similar anomalies. For example, I might know why my lawnmower suffers its occasional stalling fits: the wire leading to the mower's spark plug gets loose and needs to be jiggled. When my car stalls, I might react by jiggling the wires leading to its spark plugs. This action would be an example of using routine explanation to be creative with respect to routine processing.
Routine explanation is not the only process people use to be creative. We cannot always find routine explanations and, even when we do find them, they do not always work. In the case of the stalled car, for example, what might I do if I have jiggled the wires and the car still does not start? In such a case, routine explanation has failed and I have experienced another type of anomaly. I need a different type of creativity. I need to generate a new explanation -- I need to be creative with respect to my explanation knowledge structures themselves.
In both of these types of creativity, anomaly-detection provides the motivation to be creative and explanation provides the mechanism. We experience a problem in routine processing, we characterize the anomaly underlying the problem, we search for knowledge to explain the anomaly, and we see whether the explanation suits our needs.
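The loop just described can be sketched schematically. The explanation store and its keys below are invented for illustration; only the control structure matters: characterize the anomaly, retrieve candidate explanations, evaluate them, and fall through to creative work when routine explanation fails.

```python
# A hedged sketch of routine explanation. EXPLANATIONS maps an anomaly
# characterization to cached candidate explanations; all entries here are
# invented examples.

EXPLANATIONS = {
    "engine stalled": ["a wire has come loose", "the tank is empty"],
}

def explain(anomaly, acceptable):
    """Search cached explanations for an anomaly and return the first one
    that passes the evaluation test; None signals that routine explanation
    has failed and a more creative process is needed."""
    for candidate in EXPLANATIONS.get(anomaly, []):
        if acceptable(candidate):
            return candidate
    return None
```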
In 1975 we built such a system, called MARGIE (Rieger 1975). We did not design it to be a creative understander. We simply wanted to have a routine understander, which we felt was a significant advance at that time. MARGIE worked by performing bottom-up inference to try to relate sentences in an input text. Regardless of our intentions, MARGIE did indeed produce creative explanations. Given a simple story such as "John hit Mary." and a long time to work, it would generate, among other understandings, the sadomasochistic hypothesis that "John wanted to be hit. He wanted Mary to be mad at him so she would hit him. So, he hit her."
While we felt the program to be quite a success at the time, as a theory of understanding, MARGIE failed on psychological grounds. First, people can understand situations of much greater complexity than any system based on MARGIE's bottom-up inferencing would be able to. The MARGIE program suffered from combinatorial explosion -- the number of inferences that the program could derive at each step was so large that the program quickly drowned relevant inferences in a swamp of irrelevant ones. Second, it is clear that people understand familiar situations more easily than unfamiliar situations. MARGIE, with its exhaustive chaining, never got any better at understanding stories. It would read the same story 100 times, do the same processing each time, and never get bored.
As a theory of understanding, MARGIE ranks as a useful failure. As a theory of creativity, it fares somewhat better. Given enough resources, the program would generate every way of understanding an input that its inference rules licensed. In other words, MARGIE could hypothesize every interpretation, creative or not, that it could comprehend. Any program that exhaustively applies whatever inference rules it contains will be able to generate similarly "creative" outputs. In one sense, assuming no magical insights, no program could ever do better. No system will ever be able to generate outputs that do not come from some inference rule or knowledge structure the program contains. MARGIE's exhaustive inference provides a zeroth-order theory of creativity. The question is, how can we do better? We managed to build a program based on 99% perspiration. Where is the 1% inspiration?
The primary problem in MARGIE was uncontrolled inference. Once the problem is described in this way, the solution is obvious: provide some method of controlling inference. Much of the work done at the Yale AI lab while Schank was director was aimed at determining what sorts of knowledge structures could play this role. This work revolved around the fundamental idea that when people build useful inference chains, they generalize them and cache them away. When later faced with situations similar to those they have reasoned through before, people apply this cached knowledge without having to work through the detailed inferencing all over again.
Others have studied such notions of "chunking" (e.g., Newell 1990). A concern with two questions sets the Yale work apart: "What is the actual content of such structures?" and "What is the organization of a memory composed of them?" We did a significant amount of work describing what people actually know about the situations and tasks they face every day and determining how an intelligent system might represent that knowledge in a way that would allow it to function efficiently. This work developed three types of knowledge structures: scripts, plans, and MOPs (Memory Organization Packets). Below, we describe how these structures contribute to routine understanding.
We also found that, unlike the bottom-up inference used in MARGIE, script-based processing was efficient. For instance, systems using scripts could quickly infer that actions in stories occurred even if they had not been explicitly mentioned. To take an example, when told that Lucy went to a restaurant and had left a big tip, the SAM (Script Applier Mechanism) program could readily infer that "Lucy had ordered," "Lucy had eaten," and "Lucy had enjoyed what she had eaten."
Script-based systems could also quickly determine when something happened that was not expected. As an example, here's a small story:
Lucy went to a restaurant. She did not like her meal. But she left a big tip.
When processing this story, SAM was able to note that the big tip was not expected. But scripts did not indicate how to proceed from there. Once SAM had isolated this anomaly, it had no method of dealing with it. So, it ignored it and moved on.
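The two abilities just illustrated, filling in unstated steps and flagging unexpected events, can be sketched together. This toy is not SAM itself; the script contents and function names are invented for illustration.

```python
# A toy script applier: story events that match the script license the
# inference that the unstated earlier steps also occurred; events outside
# the script are collected as anomalies rather than explained.

RESTAURANT_SCRIPT = ["enter", "order", "eat", "pay", "leave"]

def understand(story_events, script):
    inferred, anomalies = [], []
    last_index = -1
    for event in story_events:
        if event in script:
            i = script.index(event)
            inferred.extend(script[last_index + 1:i])  # unstated steps
            last_index = i
        else:
            anomalies.append(event)  # e.g., a big tip after a bad meal
    return inferred, anomalies
```

Like SAM, this sketch can only isolate an anomaly; nothing in the script says what to do with it.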
To enable computer systems to generate such understandings, we developed additional knowledge structures that allowed them to explain stories by referring to the themes, goals, and plans of the actors in the story (Schank and Abelson 1977). The PAM (Plan Applier Mechanism) program embodied these ideas (Wilensky 1981). When scripts fail, these knowledge structures provide an alternative method for an intelligent system to link actions in the world to mental constructs.
Like the earlier structures, MOPs served primarily to provide top-down expectations. But MOPs had additional advantages. They provided a method for sharing knowledge between structures that was lacking in earlier theories. They also provided an organization that allowed us to understand how an intelligent system might learn new MOPs when old ones failed.
But MOPs retained a critical shortcoming of the earlier structures, a shortcoming that is endemic to top-down knowledge structures. What happens when the expectations provided by MOPs fail? While MOP-based programs such as Cyrus (Kolodner 1980) and IPP (Lebowitz 1980) could process stories of much greater complexity than MARGIE could, they could not duplicate MARGIE's flexibility. Because they were used for top-down processing, MOPs provided no bottom-up method to repair expectation failures.
The way people understand and act in the world is strongly influenced by top-down structures. When someone experiences something for the first time, they understand it in terms of similar experiences they have previously had. When they experience it a second time, they understand it in terms of what happened the first time. This perspective on human reasoning, called case-based reasoning, has been a focus of work done both in previous years at the Yale Artificial Intelligence Project and currently at Northwestern's Institute for the Learning Sciences (e.g., Riesbeck and Schank 1989).
Work in case-based reasoning has shown that not everything can be understood directly in terms of previous experiences. Sometimes, none of a person's pre-packaged top-down structures immediately apply to the current situation. Other times, a person may have top-down structures which could apply, yet not know to use them. In yet other cases, a person might locate structures that seem to apply, but which propose an incorrect theory of the world and need to be corrected. In all of these situations, the person must find a way to make do with his or her top-down structures. Said differently, the person must find a way to use those top-down structures creatively.
Some form of bottom-up inferencing must be part of the answer. The tricky part about developing a theory of creativity is to target the use of bottom-up inference in a plausible way. A theory that suggests simply dropping into MARGIE-like exhaustive inference whenever an expectation failure is encountered is not plausible. Such a theory fails for the same reasons MARGIE failed. Exhaustive inference is too expensive.
The 1% inspiration, the secret to developing a theory of creativity, lies in understanding what knowledge we use when faced with an expectation failure. For example, we use additional knowledge to characterize anomalies, determine which knowledge structure caused the anomaly, adapt the failed structure, and judge the appropriateness of the repair. In order to understand creativity, we must understand what these additional knowledge structures look like and how they are used. They are what allows us to recover gracefully from processing failures without dropping into a computational catatonia.
Below, we discuss two types of knowledge structures used in creative cognition. The first, called Explanation Patterns (XPs), is useful for adapting MOPs. The second group includes Analogy Molecules and Restructuring Rules. These are useful for adapting XPs.
... the most interesting stories often contain anomalies -- events that are not handled by active expectations. When we encounter anomalies, we feel the need to explain them. Explaining an anomaly means bringing knowledge to bear that will tie the anomalous action to something we understand, and thus render it non-anomalous. (p. 193)
Schank (1982) provides a model of how people respond to expectation failures. In brief, when such an expectation fails, we try to explain the failure. Then, we put the explained failure in a place in memory that will allow it to come to the fore if we encounter such a failure again. In particular, we attach it to the component of the MOP that generated the expectation, using the proposed explanation as an index. If we later encounter a failure involving the same component and the same type of explanation, we begin to believe that the combined failures reflect a problem with our understanding of how the world works. To update our understanding, we create a new MOP that includes an expectation predicting what was previously the anomalous event. We use the explanations we generated for the anomalies to indicate when to use the new MOP instead of the old one.
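The indexing cycle described above can be rendered schematically. The threshold, the key structure, and all the names below are our own illustrative assumptions, not details of the model itself.

```python
# A hedged sketch of failure-driven learning: each explained expectation
# failure is indexed under the MOP component that generated the expectation,
# with the explanation as part of the index. When the same failure recurs,
# a revised MOP is proposed that expects the formerly anomalous event.

from collections import defaultdict

FAILURE_THRESHOLD = 2  # invented stand-in for "a number of times"
failure_index = defaultdict(int)

def record_failure(mop, component, explanation, event):
    key = (mop, component, explanation)
    failure_index[key] += 1
    if failure_index[key] >= FAILURE_THRESHOLD:
        # repeated failures suggest our model of the world is wrong
        return {"mop": mop + "-revised", "expects": event}
    return None  # a single failure is merely indexed, not generalized
```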
Given this model, it becomes clear how closely related creativity is to learning. People depend on top-down structures to understand the world. Creativity is required to stretch those structures in the cases in which they don't quite fit. When a structure gets stretched in the same way a number of times, it tells us that it's time to create a new top-down structure to compensate. Further, the explanations that stretch MOPs (i.e., that are creative with respect to MOPs) are also required to index memory so that recurrent failures can be efficiently matched. Without creativity, a person could neither stretch nor index.
How do people generate explanations? "Quite easily" is the quick answer. Think back to "The Mystery of Big-Tipping Lucy." Try to generate a couple of explanations for Lucy's behavior. Doing so is not difficult, and so we tend to think that such hypothesis-making is not creative. But unless these explanations followed some standard sequence of events you already had cached in memory, they are creative with respect to your restaurant-meal MOP. They are stretching the MOP to cover a situation beyond those for which it was built.
How can people perform this stretching without falling into a MARGIE-like morass of bottom-up inferencing? As before, the answer is that we use additional knowledge to guide our work. Schank (1986) hypothesized that XPs were central in this process. XPs provide "frozen explanations," pre-packaged chains of inferences that can be used to explain a specified anomaly. The table below shows how XPs are used in routine explanation.
XPs allow for creativity with respect to MOPs. But, XPs themselves are just standard, routine explanations. And, as standard MOPs do not always allow us to understand the world, so standard explanations do not always allow us to resolve the anomalies that result. What happens when an XP fails?
In such cases, we must generate a new explanation. We must achieve a higher level of creativity -- we must be creative with respect to the XP. Imagine you are asked to explain Big-Tipping Lucy's behavior, but are told that each of the first five explanations you generate does not apply. You probably generated these first five directly from XPs. How would you go about generating a sixth? A tenth? At some point, you will run dry of standard explanations and will need to start creating new ones. Maybe Lucy enjoyed going to restaurants that served bad food because it allowed her to tell great stories later. Maybe she owned a competing restaurant and wanted to reward this restaurant for a bad performance, thereby training them to set their sights too low. What additional work does creating such new explanations entail?
Table 2 shows how the explanation process described above can be enhanced to allow explanations to be creative with respect to XPs. The lines that are italicized refer to processes that are creative from this new point of view.
We coined the term "analogy molecules" to describe the class of knowledge structures used in reformulation. The reason is that the process of reformulation is really the process of drawing a pointed analogy (an analogy which serves a specific goal) between something that does not fit into an explanation as currently conceived and something that does. The purpose of the analogy is to enable the system to use the misfit explanation in the current situation. The role of analogy molecules is to provide a set of rules describing the conditions under which such a redescription is feasible. In the abstract, analogy molecules describe when some X may be seen as similar to some Y in order to correct some shortcoming Z in a current explanation.
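That abstract form suggests a simple rule shape: a molecule licenses viewing some X as a Y in order to repair a specific shortcoming Z. The particular molecule, predicate, and names below are invented for illustration.

```python
# Each analogy molecule is sketched as a triple: the shortcoming Z it
# repairs, a predicate saying which X it applies to, and the Y that X may
# be redescribed as. The single molecule here echoes the stalled-car
# example: a car may be seen as a lawnmower when the explanation at hand
# was built for lawnmowers.

ANALOGY_MOLECULES = [
    ("xp-expects-lawnmower", lambda x: x == "car", "lawnmower"),
]

def reformulate(x, shortcoming):
    """Return a Y that x may be viewed as, if some molecule licenses the
    redescription for this shortcoming; otherwise None."""
    for z, applies, y in ANALOGY_MOLECULES:
        if z == shortcoming and applies(x):
            return y
    return None
```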
Researchers at the Yale AI lab and at ILS have constructed two programs that are able to perform creative explanation using reformulation: SWALE and BRAINSTORMER. The SWALE program used creative explanation to hypothesize explanations for a mystery surrounding the racehorse Swale. Swale was a champion three-year-old racehorse who was found dead in his stall days after winning one of the biggest races of the year. For more details on the SWALE program, see Kass (1990), Leake (1990), Owens (1990), and Schank (1986). The BRAINSTORMER program used creative explanation in a planning task: generating proposals for how to react to terrorist attacks (Jones 1992).
This view provides a picture of a waterfall of corrective knowledge structures. Each stage in the waterfall consists of a type of knowledge structure. When that knowledge structure is applied to some situation that it does not immediately fit, it needs to be adapted. The structures at the next stage then supply suggestions for how to do the adaptation. Knowledge about how to use one stage creatively is contained in the next stage.
Our working hypothesis is that this waterfall is not very many levels deep. The waterfall bottoms out when an intelligent system is forced to use weak methods (like MARGIE's exhaustive search) in the attempt to creatively use some knowledge structure. The need to use weak methods indicates that no additional knowledge can be applied, or, in other words, that no lower-level knowledge structures exist. In such cases, there is no role open for a lower level of the waterfall to fill. For example, we have found that the knowledge structures used in the relatively simple process of restructuring XPs are bottom-level. Because restructuring uses weak methods, it is one place where the waterfall terminates.
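The waterfall itself can be sketched as a chain of stages, each of which either applies its own knowledge directly or hands the problem to the next stage, bottoming out in a weak method when no further knowledge exists. The stages and their contents below are invented examples.

```python
# A schematic waterfall: each stage holds some knowledge; when its
# structures do not fit, it defers to the next stage; the bottom stage
# falls back on an expensive, knowledge-free weak method.

def weak_method(problem):
    # stand-in for exhaustive, MARGIE-like search
    return "exhaustive search on: " + problem

def make_stage(knowledge, next_stage):
    def stage(problem):
        if problem in knowledge:
            return knowledge[problem]    # the structure fits directly
        if next_stage is not None:
            return next_stage(problem)   # adapt via the next stage down
        return weak_method(problem)      # the waterfall bottoms out
    return stage

xp_stage = make_stage({"stalled car": "jiggle the wires"}, None)
mop_stage = make_stage({"restaurant visit": "apply restaurant MOP"}, xp_stage)
```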
This is not to say that the products of creative thought must be as simple as the waterfall that produces them. One might think up a creative explanation for the story of Big-Tipping Lucy that would take pages to explain. But, any such explanation would consist mostly of a linked set of sub-explanations. Our hypothesis about the depth of the waterfall does not have to do with any notion of how complex a network of explanations a person can construct as an output, but rather with how many different sorts of knowledge structures might be involved in building any single component of that network.
Creative power does not come from these simple processes, but rather from the complex knowledge structures (or, better said, the complex of knowledge structures) on which these processes operate. To improve our understanding of creativity, we need to build content theories that describe what these knowledge structures contain and how they are organized in memory. Yes, we need to know what types of knowledge people have. But, we need to know more: what specific knowledge they have and how to represent that knowledge.
Accordingly, several projects at The Institute for the Learning Sciences are concerned with representing knowledge in a way that allows an intelligent system to get reminded of that knowledge at appropriate times. The Dear Abby program (Domeshek 1992), for example, is concerned with how to represent knowledge about social plans. When told about a problem of the type for which one might write to Ann Landers (or her sister Abigail Van Buren), the program uses this knowledge to retrieve relevant stories. The Creanimate program (Edelson 1993) provides another example. This program is concerned with how to represent relationships between form and function to support case-based design decisions in the realm of animal adaptation.
In doing such work both at Yale and at Northwestern, we have come to realize the importance of questions in triggering and channeling the need for explanations. As we have already seen with Big-Tipping Lucy, once we ask why Lucy left a big tip, hypotheses are not hard to construct. Once we ask what alternative relationships Lucy might have to the restaurant, we can readily build a whole new set of creative hypotheses. Because questions have buried within them the seeds of the answers they will permit, asking the right question sets the stage for a creative answer. But how do we generate a question?
One way, the way we discussed above, is from the failure of expectations derived from scripts or MOPs. The SWALE program generated questions from the processing failures it experienced, as did the IVY program (Hunter 1989), which improved its performance on a diagnostic task over time by learning more about the task. It pursued questions generated by knowledge acquisition planning. The central question it posed itself was "What kinds of knowledge will help me avoid diagnostic failures I have previously experienced and are therefore worth looking for?"
Pursuing processing failures is not the only way people generate questions. People have external goals that require them to understand things. They have open questions left over from before that they were never able to answer. They simply become interested in some subject and delve in. To be more creative, our computer systems will need to be able to actively question, rather than simply react to expectation failures. They will need to actively seek out anomalies.
Two other programs actively generate questions, extending the range of questions computer systems can ask. The AQUA program (Asking Questions and Understanding Answers (Ram 1989)) developed a range of questions beyond those generated by script- or MOP-based understanders and illustrated how those questions can form the basis for a memory that extends itself over time. The Interview Coach program, currently under construction at ILS, takes a somewhat different approach to the task of constructing interesting questions. This program, aimed at helping a person perform a knowledge-acquisition interview, suggests compelling questions for the interviewer to ask. The primary question it asks itself is "Given what I know so far, what is it that I would like to know next?" To answer, the program analyzes the structure of the knowledge in its memory, focusing its attention on what it views as important holes in that structure. Essentially, it is an experiment in building a system with a primitive sense of curiosity.
These differences raise an important question. What are the implications of this theory in the real world? One important area that this theory relates to is education. Since theories of creativity are so intimately bound up with learning, they allow us to move out of the academic realm of cognitive science and into the practical realm of one of society's most pressing problems -- the troubled education system. Theories of creativity can be viewed not only as descriptions of how people learn, but as prescriptions for how we should structure the schools to help people learn. As Schank and Farrell (1987) said:
To teach students to be creative, we must teach them to become aware of just how wrong everything is. They must notice when things around them don't work. They must seek out anomalies in the world around them, in people's behavior, in their own behavior. They must wonder why they do what they do every day. If they have been going through school thinking that everything is fine, this might be a shock to them.
Both creativity and learning start with questions. The work described above on routine understanding shows that people often cannot even understand an answer unless they have first generated the underlying question by themselves. Why, then, do schools typically emphasize answers? To teach our students to be creative, we must first teach them to ask good questions. Or, better yet, we must help them retain the natural talent all children have for generating questions. Too often, this talent is squelched in today's schools and hence needs rekindling.
Once students are able to generate questions that interest them, we should teach them how to creatively answer those questions. The creative process consists of a number of skills that are learnable -- they may be strengthened and improved. But how? Like other skills, they cannot be taught directly. We cannot simply describe the laws of creativity to a student, have the student memorize them, and then expect the student to be creative. Rather, we must provide an environment in which students may practice and hone their creativity.
The tendency to squelch creative question asking which many learn in school carries through to adult life. Many people do not take advantage of their potential for creativity. Living in an environment that exalts the creativity of a select few while discouraging it in most, people become afraid of taking the risk of being creative. Or, they internalize the false message that they simply do not have the ability to be creative.
Everybody needs to be creative in response to anomalies. In this chapter, we have spoken about creativity mostly as a reaction to expectation failures. But what we have perhaps not made clear is that it is often useful to create understanding failures (Schank 1988). When we wish to understand something more deeply, develop a counterargument to the opinions we hold, or simply create something new, it is often necessary for us to distance ourselves from our routine understandings. If I want to surprise my spouse on Valentine's Day, I know that I must disregard the first idea that comes into my mind for what to do. I will probably have to disregard the second as well. In order to be creative in routine situations, we must get beyond our pre-cached standard knowledge structures and force ourselves to generate something new.
Such "intentional creativity" is not essentially different from the type of creativity which is forced on us by understanding failures. The differences lie in the way we choose questions to creatively pursue (i.e., the input to the system) and the way in which we evaluate the options we generate (i.e., the output of the system). The actual act of creativity itself is similar in both cases.
Intentional creativity is a skill all students should learn. To become intentionally creative, one must develop the ability to engage in a kind of inner dialogue with oneself while working through the creative process. To help students become intentionally creative, we must provide environments in which they can engage in such inner dialogues and become adept at them. We must give them the opportunity to propose new problems, fail in their attempted solutions, ask questions, explore anomalies, and create explanations.
Boden, Margaret A. The Creative Mind: Myths & Mechanisms. New York: BasicBooks, 1990.
Cullingford, R. "Script Application: Computer Understanding of Newspaper Stories." Ph.D. Thesis, Yale University, 1978.
Domeshek, Eric A. "Do the Right Thing: A Component Theory for Indexing Stories as Social Advice." Ph.D. Thesis, Northwestern University, Institute for the Learning Sciences, 1992.
Edelson, Daniel Choy. "Learning from Stories: Indexing and Reminding in a Case-Based Teaching System for Elementary School Biology." Ph.D. Thesis, Northwestern University, Institute for the Learning Sciences, 1993.
Gick, Mary L. and Susan J. McGarry. "Learning from Mistakes: Inducing Analogous Solution Failures to a Source Problem Produces Later Successes in Analogical Transfer." Journal of Experimental Psychology: Learning, Memory, and Cognition 18, 3 (1992): 623-639.
Hunter, Lawrence E. "Knowledge Acquisition Planning: Gaining Expertise through Experience." Ph.D. Thesis, Yale University, Department of Computer Science, 1989.
Jones, Eric K. "The Flexible Use of Abstract Knowledge in Planning." Ph.D. Thesis, Northwestern University, Institute for the Learning Sciences, 1992.
Kass, Alex M. "Developing Creative Hypotheses By Adapting Explanations." Ph.D. Thesis, Northwestern University, Institute for the Learning Sciences, 1990.
Kolodner, J.L. "Retrieval and Organizational Strategies in Conceptual Memory: A Computer Model." Ph.D. Thesis, Yale University, Department of Computer Science, 1980.
Leake, David Browder. "Evaluating Explanations." Ph.D. Thesis, Yale University, Department of Computer Science, 1990.
Lebowitz, M. "Generalization and Memory in an Integrated Understanding System." Ph.D. Thesis, Yale University, Department of Computer Science, 1980.
Newell, Allen. Unified Theories of Cognition. Cambridge, MA: Harvard University Press, 1990.
Owens, Christopher Charles. "Indexing and Retrieving Abstract Planning Knowledge." Ph.D. Thesis, Yale University, Department of Computer Science, 1990.
Raaijmakers, J.G.W. and R.M. Shiffrin. "SAM: A Theory of Probabilistic Search of Associative Memory." The Psychology of Learning and Motivation 14 (1980): 207-262.
Ram, Ashwin. "Question-Driven Understanding: An Integrated Theory of Story Understanding, Memory, and Learning." Ph.D. Thesis, Yale University, Department of Computer Science, 1989.
Rieger, Charles J. "Conceptual Memory and Inference." In Conceptual Information Processing, ed. R.C. Schank. 157-288. Amsterdam: North-Holland, 1975.
Riesbeck, Christopher K. and Roger C. Schank. Inside Case-Based Reasoning. Hillsdale, NJ: Lawrence Erlbaum Associates, 1989.
Schank, Roger C. and Robert P. Abelson. Scripts, Plans, Goals, and Understanding. Hillsdale, NJ: Lawrence Erlbaum Associates, 1977.
Schank, Roger C. and Robert Farrell. Creativity in Education: A Standard for Computer-Based Teaching. Computer Science Department, Yale University, 1987.
Schank, Roger and Alex Kass. "Knowledge Representation in People and Machines." VS 44/45 (1986): 181-200.
Schank, Roger C. Dynamic Memory: A theory of reminding and learning in computers and people. Cambridge, England: Cambridge University Press, 1982.
Schank, Roger C. Explanation Patterns: Understanding Mechanically and Creatively. Hillsdale, NJ: Lawrence Erlbaum Associates, 1986.
Schank, Roger with Peter Childs. The Creative Attitude. New York: Macmillan, 1988.
Wilensky, Robert. "PAM." In Inside Computer Understanding, eds. R.C. Schank and C.K. Riesbeck. 136-179. Hillsdale, NJ: Lawrence Erlbaum Associates, 1981.
It is difficult to judge, of course, from external behavior what type of creativity is involved in some act. In the example above, we have assumed that our office worker already had a knowledge structure encoding the "find an alternate source" fix, but would have had to generate one for the "make the curse into a blessing" fix. Another office worker with different experiences might habitually use "make the curse into a blessing" but need to be creative to generate "find an alternate source."
We implemented a "stopping rule" such as that later discussed by Raaijmakers and Shiffrin (1980), which allowed the program to give up and move on when faced with conceptualizations it was unable to link. But the stopping rule only allowed the program to quit trying -- what it needed was a way to try better, a way to target its inferencing more effectively.
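The idea of a stopping rule can be sketched as a bounded search for an inference chain between two conceptualizations. This is a hypothetical illustration only, not the original program: the names try_to_link, expand_inferences, and MAX_EXPANSIONS are invented, and the budget of fifty expansions is an arbitrary stand-in.

```python
# Hypothetical sketch of a stopping rule in an inference search.
# All names here are illustrative, not from the original program.

MAX_EXPANSIONS = 50  # arbitrary budget: give up after this many steps


def try_to_link(source, target, expand_inferences, linked):
    """Breadth-first search for an inference chain from source to
    target, abandoning the attempt once the budget is exhausted."""
    frontier = [source]
    seen = {source}
    expansions = 0
    while frontier and expansions < MAX_EXPANSIONS:
        concept = frontier.pop(0)
        for inferred in expand_inferences(concept):
            if linked(inferred, target):
                return True  # the conceptualizations connect
            if inferred not in seen:
                seen.add(inferred)
                frontier.append(inferred)
        expansions += 1
    return False  # stopping rule fired: give up and move on
```

The limitation noted above is visible in the sketch: when the budget runs out, the search simply stops, rather than redirecting its remaining effort toward more promising inference paths.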
Gick and McGarry (1992) provide experimental evidence supporting the position that people use an analysis of their processing failures to help store items in, and retrieve items from, memory.
This process outlines only the basic method of explaining an anomaly -- accessing explanation patterns. An intelligent system must have a variety of more complex methods, such as coordinating anomalies or retrieving cases that contain similar failures. All of these methods, however, call on this basic method at some point. Essentially, the complex methods reduce the conceptual distance that the explanations produced by the basic method must cross. See Schank (1986) for a description of some of these other methods.
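The basic method can be sketched as a retrieval step over stored explanation patterns: find patterns indexed by features of the anomaly, then keep those whose premises hold in the current situation. The pattern contents and feature names below are invented for illustration; a real system would of course use far richer representations.

```python
# Minimal sketch of retrieving and applying explanation patterns.
# The patterns, features, and premises are invented examples.

explanation_patterns = [
    {"index": {"late", "employee"},
     "premises": {"owns-car"},
     "explanation": "car broke down"},
    {"index": {"late", "employee"},
     "premises": {"commutes-by-train"},
     "explanation": "train was delayed"},
]


def explain(anomaly_features, situation_facts):
    """Return candidate explanations whose indices match features of
    the anomaly and whose premises hold in the current situation."""
    candidates = []
    for xp in explanation_patterns:
        if (xp["index"] <= anomaly_features
                and xp["premises"] <= situation_facts):
            candidates.append(xp["explanation"])
    return candidates
```

The more complex methods mentioned above would, on this picture, manipulate the inputs -- narrowing the anomaly's features or enriching the situation's facts -- so that a stored pattern lies within reach of this basic retrieval step.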
Kass and Jones discuss two different sets of such structures. Kass proposes structures called component specifiers, component generalizers, and tweaks which contain the knowledge required to reformulate by specifying, generalizing, or substituting components. Jones proposes a knowledge structure called a viewing schema to guide a system in inferring that one concept may be seen as an instance of another.
* Please note that the electronic version of this paper has no figures.