

Int. J. Human-Computer Studies (1998) 49, 895-926. Article No. hc980231

Taking up the situated cognition challenge with ripple down rules

DEBBIE RICHARDS AND PAUL COMPTON

Department of Artificial Intelligence, School of Computer Science and Engineering, University of New South Wales, Sydney, NSW 2052, Australia. Email: debbier, compton@cse.unsw.edu.au

Situated cognition poses a challenge that requires a paradigm shift in the way we build symbolic knowledge-based systems. Current approaches require complex analysis and modelling and the intervention of a knowledge engineer. They rely on building knowledge-level models which often result in static models that suffer from the frame of reference problem. This approach has also resulted in an emphasis on knowledge elicitation rather than user requirements elicitation. The situated nature of knowledge necessitates a review of how we build, maintain and validate knowledge-based systems. We need systems that are flexible, intuitive and that interact directly with the end-user. We need systems that are designed with maintenance in mind, allowing incremental change and on-line validation. This will require a technique that captures knowledge in context and assists the user to distinguish between contexts. We take up this challenge with a knowledge acquisition and representation method known as Ripple-down Rules. Context in Ripple-down Rules is handled by its exception structure and the storing of the case that prompted a rule to be added. A rule is added as a refinement to an incorrect rule by assigning the correct conclusion and picking the salient features in the case that differentiate the current case from the case associated with the wrong conclusion. Thus, knowledge acquisition and maintenance are simple tasks, designed to be performed incrementally while the system is in use. Knowledge acquisition, maintenance and inferencing are offered in modes that can be performed reflexively without a knowledge engineer. We further describe the addition of modelling tools to assist the user to reflect on their knowledge for such purposes as critiquing, explanation, ‘‘what-if’’ analysis and tutoring. Our aim is to provide a system that lets the user choose the mode of interaction and view of the knowledge according to the situation in which they find themselves and their own personal preferences.

© 1998 Academic Press

1. Introduction

The challenge proposed by situated cognition for symbolic knowledge-based systems (KBS) constitutes a scientific conflict that, in Norman’s (1993) words, is not solved by redoubling one’s efforts but by adopting a different paradigm. The attitude that more knowledge or deeper models (Swartout & Moore, 1993) will solve the KBS brittleness problem is a relic of the knowledge-acquisition-as-transfer mistake (Clancey, 1993, p. 106). With Greeno and Moore (1993), we believe that ‘‘breaking completely from symbolic cognitive theories would be the wrong thing to do, but … that something like departing fundamentally is required’’ (1993, p. 57).

There has been some debate as to whether symbolic systems are able to support situated cognition. Some (Agre & Chapman, 1987; Brooks, 1991) have chosen a

1071-5819/98/120895+32 $30.00/0 © 1998 Academic Press

896 D. RICHARDS AND P. COMPTON

non-representationalist approach which avoids encoding, manipulation and decoding of symbols. Vera and Simon (1993) argue that such approaches are really using representation anyway and ‘‘that the goals set forth by the proponents of [situated action] SA can be attained only within the framework of symbolic systems’’ (1993, p. 7). The key to this argument is what is viewed as a representation. In the earlier AI view, the use of a representation referred to the use of descriptive models and maps based on verbalizations. Through the work of people such as Brooks it is now more widely accepted that representation includes more than words, but this has confused the issue of representation vs. non-representation. We are not really interested in what constitutes a symbol or what category of symbol* we are using. Our attitude is that much more research into human cognition, supported by empirical evidence, is needed before we can say how humans think and the extent to which humans make use of representations. Using the extreme definitions of Norman (1993), symbolic processing focuses on what goes on inside the head and situated cognition focuses on the influence of historical, social, cultural and environmental factors. These two approaches ‘‘emphasize different behaviours and different methods of study’’ (Norman, 1993, p. 3). Progress in understanding human cognition has been slow due to: dense information, the complexity of the world and the impossibility of observing all the relevant aspects of human cognition (Norman, 1993). Situated cognition necessitates interaction with the real world, which Vera and Simon (1993) characterize as having real-time involvement, immediate responses to external stimuli and complexity. But the situated view is more than just taking into account interaction between the individual’s inner state and the external environment and trying to record all the influencing factors. The problem is that thinking and acting are not two separate activities but thinking is itself an act that modifies further action (Clancey, editorial comment). Context in a situated sense is more than just the environment but occurs at a conceptual level that exists within a social setting involving such things as activities, participation, roles, contribution and norms (Clancey, editorial comment).

Since we do not fully understand what goes on inside the head and it is not possible to capture all that influences human action, we have focused on building symbolic KBS that support situated cognition by being grounded in the real world through the use of cases. We also use a direct interaction environment between the expert and the machine which is based on observed behaviour of the expert when they manually interact with those cases. Our approach makes no claims on how the expert achieves the task.

The situated way in which a human arrives at a conclusion suggests that many of the mainstream approaches to knowledge acquisition (KA), explanation and teaching are unnecessarily complex in some situations and need to be redefined. Given the complexity of human cognition it may seem inconsistent to advocate simpler techniques. However, we believe that capturing a simple model of the

*For an interesting description of the various possible categories of symbols see the discussion by Vera and Simon (1993) in their reply to Clancey. These categories are further developed by Clancey (1997), who shows how a taxonomic distinction can be made between symbols in computer programs, other artifacts, and in human heads. This distinction reveals that symbol systems differ in how the symbols developmentally relate to one another and how they change when the system acts.

RIPPLE DOWN RULES AND SITUATED COGNITION 897

knowledge based on expert behaviour is a way of overcoming or at least avoiding the difficulties associated with the development of high-level models. We do not mean breaking the task down into smaller components in the way that many other approaches have done, such as the Generic Task framework (Chandrasekaran, 1986), KADS and CommonKADS (Schreiber, Wielinga & Breuker, 1993), role-limiting methods (McDermott, 1988) and components of expertise and the componential methodology (Steels, 1993). We have shown (Edwards, Compton, Malor, Srinivasan & Lazarus, 1993; Compton et al., 1998) that we can avoid in-depth analysis and the need to develop high-level models in complex domains through the use of a simple pre-processor to abstract features from the data. The model built can be simple because RDR provides for refinement of features with respect to raw data in local contexts. In the next section, we consider the mainstream approaches to KBS development and reuse and make further comparison between these approaches and RDR.

2. Current approaches to KBS and the reuse of knowledge

One of our major research interests is the reuse of knowledge and the implications of situated cognition for reuse. There have been two mainstream approaches to the reuse of knowledge. One focuses on the reuse of problem-solving methods (McDermott, 1988; Puerta, Egar, Tu & Musen, 1992; Chandrasekaran & Johnson, 1993; Schreiber et al., 1993; Steels, 1993) and the other focuses on reusable ontologies (Guha & Lenat, 1990; Patil et al., 1992; Pirlein & Studer, 1994). These approaches are based on Newell’s (1982) knowledge-level model. The problem-solving method approach imposes a particular structure on the knowledge to enable the current problem to be mapped into the appropriate class of problem situation. The structure chosen will depend on the features of the task and the domain of expertise. The development of ontologies allows the domain to be better understood and defined and includes the terms used, the relevant concepts, the relationships between concepts and the structure of the knowledge. Both approaches offer a framework within which knowledge can be acquired.

While each of the methods mentioned above is different, Van de Velde (1993, p. 1218) considers three concepts to be generally included as part of the knowledge-level model. These are the domain model, the task model and the problem-solving method. Clancey defines such symbolic models as ‘‘a description and generator of behaviour patterns over time, not a mechanism equivalent to human capacity’’ (Clancey, 1993, p. 89). If models are at best imperfect representations that vary between users and for the same user over time (Gaines & Shaw, 1989), and if we take a situated viewpoint of knowledge as something that is reconstructed to fit a particular situation, then the view implied by many KBS approaches, ‘‘if we get the model right in the first place the rest will fall into place’’, seems somewhat unrealistic.

The knowledge level describes knowledge and goals and assumes certain actions will be performed by rational agents. This is not necessarily so and does not differentiate adequately between different possible situations or contexts. This may be one of the reasons for the difficulty experienced in determining which problem-solving method to apply. Breuker (1994) points out that most problems will require a suite of problem-solving methods. Selection and combination of this suite is difficult (Zdrahal & Motta,


1995) and made more difficult by the situated nature of problem solving. What one person perceives as the solution to a problem will often differ from what is perceived by another person. Even what constitutes a problem in the first place will vary (Lave, 1988; Schon, 1979).

While there is widespread support for the development and use of generic solutions to problem solving, Rademakers and Vanwelkenhuysen (1993) describe some of the problems that can occur with generic models and their appropriate use. They define generic models as ‘‘predefined partial or abstract models about tasks, problem-solving methods or domain ontologies’’ (Rademakers & Vanwelkenhuysen, 1993, p. 354). As they point out, generic models can assist as templates for top-down knowledge acquisition and as skeletons for abstracting raw data. The problems occur because the knowledge level is a model and is therefore subjective and imprecise. In fact, we are dealing with two models that need to be kept separate: the expert’s problem-solving behaviour and the intended system behaviour. Schmalhofer, Aitken and Bourne (1994) describe four misconceptions upon which the knowledge level is based.

1. ‘‘Knowledge and goals are in themselves inadequate to fully characterize intelligent systems.
2. Knowledge-level descriptions are developed as if intelligent systems were causal systems.
3. For this level of abstract description, the distinction between an agent and its environment is artificial.
4. The knowledge level does not lie directly above the symbol level and there is no tight connection between them. As noted by Clancey (1991a), therefore knowledge-level descriptions cannot be reduced to the symbol level’’ (Schmalhofer, Aitken & Bourne, 1994, p. 87).

They further cite Clancey’s criticisms concerning the frame of reference problem (Clancey, 1991a), which basically argues that a description of knowledge is not the same as the knowledge and that definition and interpretation of that knowledge will require entering inside the agent so that the situation is framed in the same way. Another problem is that as systems are embedded in larger systems their appropriateness changes. The knowledge level does not accommodate change, which results in knowledge-level ontologies that are static and difficult to reuse. The limits of such descriptive models, and the effort needed to interpret such a model when it is applied, are summed up by Chapman when he states:

‘‘Representation is generally thought to make problems easy; if a problem seems hard, you probably need more representation. In concrete activity, however, representation mostly just gets in the way’’ (Chapman, 1989, p. 48).

Schmalhofer, Aitken and Bourne (1994) propose an extension to the knowledge-level model using behaviour-level descriptions that take into account goals, knowledge, skills and performance and the relationships between them. Behaviour descriptions can be adjusted to support new uses. Context is important in allowing sharing and reuse of behaviour descriptions. We discuss context in more detail in Section 6.2.

We do not wish to imply that there is no merit in general problem-solving methods and ontological approaches to reuse. The point we have tried to make is that they have


sufficient limitations to warrant consideration of alternative approaches. We argue, and later show, that complex descriptive models are not necessary prerequisites for building KBS. If we use the first and third person interpretation analogy used by Clancey (1993), we want to build a system which performs KA in the first person. Clancey describes the first person as primary learning that is ‘‘inventive’’, requiring reperceiving and reshaping previous experiences. Primary learning involves automatic recoordination and is a reflexive type of action. We regard an expert viewing a pathology report, assigning a conclusion and picking the salient features in the case to form the rules as reflexive actions. On the other hand, the explanation of why a conclusion and features were chosen is a reflective action that requires third person interpretation. This is analogous to other situations, such as in social interaction, where models may not be necessary prerequisites but use of them in retrospect is often beneficial for understanding what has occurred (Clancey, 1993).

Further, we do not wish to imply that RDR KBS do not constitute a model, but that the nature and use of the model is different to mainstream approaches. Modelling applies to RDR in three ways. Firstly, any KBS, including an RDR KBS, is a qualitative model in Clancey’s (1985) sense. The work described in Section 6.5 of this paper on formal concept analysis (FCA) (Wille, 1982) and the earlier work by Lee and Compton (1995) on deriving causal models from RDR implicitly recognizes that RDR are models of the domain, and so can usefully be reorganized. Secondly, RDR depends on an attribute-value data representation and decisions have to be made about modelling in this way. Thirdly, RDR KBS invariably contain a pre-processor to abstract the raw data into features that are more meaningful to the expert. These latter two modelling stages still require a conventional software engineering/knowledge engineering exercise. However, while some KE effort is required, we deemphasize modelling as a critical activity in building RDR KBS compared to other approaches (for example KADS). Unlike other approaches, neither the expert nor the knowledge engineer thinks or knows about the organization of the rules in an RDR KBS.

All human thought (or expression of thought) is modelling. However, in human thought our models can be thrown away and replaced at the drop of a hat. Modelling in KBS is a much more critical activity, with the emphasis on getting it right. RDR is an attempt to move to a simpler, more human-like approach. The whole point of the FCA work, and the earlier work on transforming rules to causal models, is to introduce some of the flexibility in changing between models that people use to better express some insight.
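As a hedged illustration of why a rule base can be treated as a reorganizable model, the following sketch derives the formal concepts of Wille’s (1982) FCA from a small binary context. The naive subset enumeration and the toy rule and attribute names are ours for illustration, not the tool described in Section 6.5; practical FCA implementations use more efficient algorithms such as NextClosure.

```python
# Naive formal concept enumeration over a binary context mapping
# objects (here, toy rules) to the attributes they mention.
from itertools import combinations

def common_attributes(objs, context, attributes):
    # Attributes shared by every object in objs (the derivation objs').
    return {a for a in attributes if all(a in context[o] for o in objs)}

def common_objects(attrs, context):
    # Objects possessing every attribute in attrs (the derivation attrs').
    return {o for o, has in context.items() if attrs <= has}

def concepts(context):
    """All formal concepts (extent, intent): pairs closed under the two
    derivation operators. O(2^n) enumeration, fine for tiny contexts."""
    attributes = set().union(*context.values())
    objects = list(context)
    found = set()
    for r in range(len(objects) + 1):
        for objs in combinations(objects, r):
            intent = common_attributes(set(objs), context, attributes)
            extent = common_objects(intent, context)
            found.add((frozenset(extent), frozenset(intent)))
    return found

# Toy context: three rules described by the conditions and conclusions they use.
context = {
    "rule1": {"tsh=high", "hypothyroid"},
    "rule2": {"tsh=high", "t4=low", "hypothyroid"},
    "rule3": {"tsh=low", "hyperthyroid"},
}
lattice = concepts(context)
```

One derived concept groups rule1 and rule2 under their shared features, an abstraction hierarchy that is nowhere explicit in the rule base itself; this is the sense in which the knowledge can be ‘‘usefully reorganized’’ without changing the rules.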

Finally, there is one critical area where the importance of modelling must be acknowledged. Systems like KADS emphasize modelling because they attempt to provide a methodology for all system development for all problems. The problem-solving method (PSM) approaches seek to develop a general PSM for each type of problem. For these systems, modelling the problem is critical. RDR avoids this by concentrating on soluble problems: classification and, more recently, configuration, which we have shown can be covered by the same approach. It is our goal to see if the RDR PSM, with possible modifications to the existing PSM, can be applied to cover all problem types. This would be the ultimate reuse of a PSM and a demonstration that we can build useful symbolic KBS despite the situatedness of knowledge.


3. The implications of situated cognition for human-computer interaction issues

The need for complex modelling of the domain before KA can take place has resulted in a preoccupation with knowledge elicitation rather than user requirements (Salle & Hunter, 1990). This has further resulted in systems designed for knowledge engineers rather than experts or end-users. By simplifying the KA technique and making the user responsible for KA and maintenance we are able to concentrate more on user requirements elicitation rather than knowledge elicitation.

A more appropriate term for the type of reuse focussed on in this paper is adapting or repurposing, a term Clancey has taken from the hypermedia field. The ability to capture knowledge for one purpose, such as consultation, and reuse it for other purposes, such as teaching, critiquing or ‘‘what-if’’ analysis, is more akin to the situated view that knowledge is not a fixed structure that is retrieved and applied from fixed storage positions in memory, as many AI programs presume, but that knowledge will need to be reperceived and reconstructed for each situation. If we take the view that ‘‘perceiving, behaving and learning are one process’’ (Rosenfield, 1988) then it makes sense to build a system that does not separate these processes but allows the user to perform KA, find concepts, perform inferencing, learn about and explore the knowledge, and so on, all within the one session. To make the type of reuse we are pursuing clearer we will refer to it as activity-reuse.

The ability of humans to adapt to different situations is considered ‘‘perceptive and intelligent’’ (Clancey, 1991b, p. 256). We view the inability of first generation expert systems (ES) to provide systems that match the decision situation or style of users as a major problem and a cause of the poor acceptance of first generation ES by end-users. The KA bottleneck, maintenance and brittleness problems are often cited as the main reasons for the decline in the initial popularity of ES, but we would argue that they are not the main reasons. Decision support systems (DSS), such as spreadsheets and database packages, were also launched into the market in the 1980s and gained widespread acceptance even though these systems are often unreliable due to user bias (Phillipakis, 1988). We attribute acceptance of DSS to the high degree of user control, system flexibility and ‘‘user-friendliness’’ of these systems. We see ‘‘user-friendliness’’ as a good match between the things the user sees and does and what they feel to be appropriate. In other words, interaction is intuitive. This intuitiveness is better described as ‘‘transparency of interaction’’ by Winograd and Flores (1986, p. 164). It has been found that even when an ES has been shown to provide good advice, the systems have not been utilized because of lack of attention to computer-user cooperation issues (Langlotz & Shortliffe, 1983; Salle & Hunter, 1990). Cooperation is more than the user interface. The interface is concerned with usability. Cooperation also includes the mode of interaction, which is more concerned with usefulness (Rector, 1989). It is this latter aspect that we have concentrated on in our activity-reuse research. More recently, the human-computer interaction (HCI) community has moved away from the concept of cooperation, which implies equality of machine and user. Currently, the greater emphasis is on the user as the key participant and the burden is on the system developer to build systems that are both useful and usable.

We agree with Winograd and Flores (1986) and Suchman (1987) that HCI methods should be re-evaluated and changed to support the lessons of situated cognition. This


requires understanding the context in which the system is situated (Winograd & Flores, 1986). Vera and Simon describe what they term ‘‘the soft form of investigation of SA [that] builds AI systems that incorporate principles of representing objects functionally and interacting in a direct and unmediated way’’ (1993, p. 11). The system that we describe later has taken this form by using an object-oriented interface environment and has minimized the need for a priori analysis of the domain or the use of a knowledge engineer in the KA process.

Not only do we need to study how humans use computers but also their uses and what problems, or ‘‘breakdowns’’, occur (Winograd & Flores, 1986). We view the underutilization of ES as a breakdown.

A system that provides a limited imitation of human facilities will intrude with apparently irregular and incomprehensible breakdowns. On the other hand, we can create tools that are designed to make the maximal use of human perception and understanding without projecting human capabilities onto the computer. (Winograd & Flores, 1987, p. 137).

This last point is noteworthy. By creating systems that are more usable by humans we are not saying that we are making systems that are more like humans (Winograd & Flores, 1987). This is not necessary.

‘‘The ability to resolve breakdowns adaptively is a basic human cognitive capacity’’ (Vera & Simon, 1993, p. 15). Progress in building systems that are truly reflective and adaptive (e.g. Davis, 1977; Laird, Newell & Rosenbloom, 1987; Stroulia & Goel, 1994) is slow. Our goal is less ambitious. The system we describe later offers a wide range of interaction modes and leaves the user to select the tools appropriate for their purpose. Rather than self-adaptive systems we have built a user-adaptive system. With Lave (1988) we would like to move away from a ‘‘learning transfer’’ attitude, and we argue that many approaches to explanation or tutoring do not take the situated nature of learning into account. We see allowing the user to decide how to use the system as a better alternative to systems that anticipate what the human wants to do, such as systems that include user models (Cawsey, 1993; Swartout & Moore, 1993).

The simplicity of KA in the system we describe aims at minimizing system breakdowns by providing an intuitive environment. As we describe later, the expert is required to look at a case, decide whether the conclusion is correct and, if not, assign a new conclusion and pick which features in the case warrant the new conclusion. The user is further assisted in their decision making by being shown a list of differences between the current case and the case associated with the rule that incorrectly fired.
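The steps above can be sketched as a single-classification RDR exception structure. This is an illustrative reconstruction, not the authors’ implementation: the class and function names and the toy thyroid-style attribute values are invented for the example. Each rule stores the cornerstone case that prompted it; a wrong conclusion is corrected by attaching an exception rule whose conditions the expert picks from the difference list.

```python
# Illustrative sketch of single-classification Ripple-down Rules (SCRDR).
# Names and the toy thyroid attributes are invented for this example.

class Rule:
    def __init__(self, conditions, conclusion, cornerstone):
        self.conditions = conditions    # {attribute: value} that must all hold
        self.conclusion = conclusion
        self.cornerstone = cornerstone  # the case that prompted this rule
        self.if_true = None             # exception branch: refinements of this rule
        self.if_false = None            # branch tried when this rule does not fire

    def satisfied_by(self, case):
        return all(case.get(a) == v for a, v in self.conditions.items())


def infer(root, case):
    """Conclusion of the deepest satisfied rule on the path through
    the exception structure, plus the rule that gave it."""
    rule, fired = root, None
    while rule is not None:
        if rule.satisfied_by(case):
            fired = rule
            rule = rule.if_true     # look for a refinement of this rule
        else:
            rule = rule.if_false
    return (fired.conclusion if fired else None), fired


def difference_list(case, cornerstone):
    """Features distinguishing the current case from the cornerstone of the
    rule that fired incorrectly; the expert picks conditions from this list."""
    return {a: v for a, v in case.items() if cornerstone.get(a) != v}


def add_refinement(fired_rule, picked_conditions, conclusion, case):
    """Attach the correction as an exception to the rule that fired wrongly."""
    new_rule = Rule(picked_conditions, conclusion, case)
    if fired_rule.if_true is None:
        fired_rule.if_true = new_rule
    else:                           # walk the existing exception chain
        node = fired_rule.if_true
        while node.if_false is not None:
            node = node.if_false
        node.if_false = new_rule
    return new_rule


# The KA loop described in the text: run a case, and if the expert
# disagrees, add a refinement using features from the difference list.
root = Rule({}, "normal", cornerstone={})           # default rule
case = {"tsh": "high", "t4": "low"}
conclusion, fired = infer(root, case)               # default conclusion: wrong
differences = difference_list(case, fired.cornerstone)
add_refinement(fired, {"tsh": "high"}, "hypothyroid", case)
```

Because a correction is local to the rule that fired, cases that never reach the new exception are unaffected, which is how RDR supports incremental maintenance while the system is in use.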

The system allows the user to interact in a number of modes, switching between different screens according to their requirements. If the user prefers to input their proposed solution to a case and have that critiqued by the system, they can. Alternatively, the user can perform ‘‘what-if’’ analysis by adjusting attribute-values to test the outcome on the recommendation. If they wish to learn something about the domain or receive an explanation of the recommendation given, they can look at a rule trace, browse the knowledge base rules or build an abstraction hierarchy of the part/s of the KBS they are interested in using formal concept analysis. We have endeavoured to build a system which supports direct and intuitive interaction in a wide range of user-selected modes and matches the user’s desire for system flexibility, system usability and user control.


4. The implications of situated cognition for system development, maintenance, verification and validation

Situated cognition offers some exciting as well as some potentially discouraging implications for the building of KBS. Much knowledge is non-verbal and cannot be realistically ‘‘inventoried’’ or shared (Clancey, 1993, p. 109). The immediacy of much human action, the ever-changing nature of contexts, the uniqueness of each act and the complexity of factors that affect an action may lead us to abandon the endeavour to build useful KBS, or at least to ignore the need for system verification and validation. This perhaps accounts for the research emphasis on building KBS with little thought for how to maintain these systems once they are in operation (Menzies & Compton, 1995; Soloway, Bachant & Jensen, 1987). Most approaches to verification and validation support a system development life cycle that assumes the system will be built, verified and validated before it is put into routine use (Kang, Gambetta & Compton, 1996). Part of the reason why verification and validation (V&V) approaches focus on pre-implementation is the problem that in a typical rule-based system each time a modification is made there may be unwanted side-effects (Grossner, Preece, Chandler, Radhakrishnan & Suen, 1993). It is common in the literature to read of systems that were literally abandoned once the rules became too hard to maintain (Mittal, Bobrow & de Kleer, 1988). Gomez-Perez (1994) particularly criticises the lack of an approach to verification and validation of Knowledge Sharing Technology (KST). He finds this situation has occurred due to a vicious circle: no standards so no evaluation, no evaluation so no need for standards. Hemmann (1992) argues that widely accepted standards are necessary if reusability is to be possible. Gomez-Perez suggests the use of competency questions or counting the number of applications that successfully reuse the ontology. He provides an extensive table of approaches to evaluation by other researchers. These attitudes, however, are reminiscent of the platonic view of knowledge-as-an-artifact that can be described once and for all and disseminated for universal application.

As mentioned, many KBS approaches assume the domain must be fully described before we can begin to acquire knowledge, and seem to take the view that ‘‘there is no perception without prior learning’’ (Rosenfield, 1988, p. 7). However, the focus of KBS should be on systems that support change and evolution of the knowledge, since new perceptualisations of categories are continually reconstructed. This also means that it is not necessary or feasible to try to encode all the knowledge before it can be put into use, as most approaches to KBS development require (Kang et al., 1996). We can, and already have, put systems into routine use with a minimal set of rules and develop the KBS online. It also means that rather than using KBS methods that rely on identification of the task structure or problem-solving method/s before the appropriate method can be applied, it is desirable to have a KA and representation technique that allows a system to be built incrementally and ‘‘gradually evolve into whatever type of … system is best suited to the problem as new cases are seen’’ (Kang et al., 1996, p. 267), which was first suggested in Compton, Kang, Preston and Mulholland (1993). With our recent work on RDR for configuration (Ramadan et al., 1997) we have made a small refinement to the MCRDR inference process that allows the same inference engine to be used for classification and configuration tasks. The major focus of current


knowledge acquisition research is on PSMs. This emphasis tends to view domain knowledge and the PSM as two quite separate issues and little attention seems to be paid to actual acquisition of the domain knowledge. In contrast, RDR are a family of PSMs where the major focus is on facilitating acquisition of domain knowledge, so that a domain expert is able to add the bulk of the knowledge without knowledge engineering support or skills. RDR have been used successfully for classification, configuration, control and heuristic search tasks. The key difference between RDR and other approaches seems to be that RDR support the addition of domain knowledge specifically to overcome the limitations of the problem solver. This raises the question (or rather returns to one of the earliest questions for KBS) of whether PSM reuse may be better achieved by facilitating knowledge acquisition with a few coarse-grained PSMs rather than developing libraries of highly specific PSMs. It also leads to the further suggestion that modelling should be secondary to KBS development rather than being a necessary precursor.

5. The implications of situated cognition for RDR and vice versa

The focus on complex analysis and modelling as a precondition to KA may be because most research has been based on laboratory experiments which have shown the regularities between expert behaviour and the use of reflection and abstraction (Shalin, Geddes, Bertram, Szczepkowski & Dubois, 1997). However, such settings are missing the broader context and many of the features and constraints that typically affect expert behaviour in natural settings (Lave, 1988). Laboratory settings usually involve an individual in a predefined task that has already been analysed and the problem-space already structured. The results from these studies characterize expert behaviour as reflective, making use of "abstract, accepted methods that constrain the activities of practitioners and render the activities of practitioners predictable" (Shalin et al., 1997, p. 195). Given the limitations of laboratory experiments, we may question the value of building KBS to be used in the real world. However, Shalin et al. (1997) go on to report studies of behaviour in real-world tasks which show use of accepted methods, but the decision of when to employ which method is based on "perceptual experience and temporal awareness obtained through engagement with the physical world" (Shalin et al., 1997, p. 196). This finding is significant to the discussion of whether symbolic KBS can support the situated view. Shalin et al. (1997) have studied some contextually rich environments that cover what Woods (1988) calls "complex, dynamic and physical domains". Such domains often involve large systems, multiple experts, quick turn-around from problem to solution and critical decision making where a wrong decision may have direct or indirect catastrophic results. The good news is that although at the low level the domains may be "complex, ill-structured and unpredictable", at some abstract level human behaviour can be seen as predictable. Shalin et al. (1997) state: "because accepted methods with well-defined conditions of applicability appear to dominate performance in these domains, we are optimistic about the viability of artificial aids for certain functions ordinarily thought to be intelligent" (p. 213). Collins (1987) also comes to the conclusion that it is viable to use symbolic systems but we must acknowledge the role of the human in filtering the input and output according to their social context. Similarly, while Suchman (1987) showed that plans are adapted to fit the situation and the adaptations are executed reflexively,

904 D. RICHARDS AND P. COMPTON

she does not argue that we should not make any plans because they will change. Plans are useful but we must be able to adapt them.

RDR were developed in response to the observation that experts could not fully describe why they reached a particular conclusion. It was apparent that knowledge was not something that could be fully captured and reproduced, but that certain aspects of an expert's behaviour could be emulated, particularly when cases were provided. RDR does not attempt to deal with the nature of knowledge. RDR is primarily a response to situated cognition by recognising the situatedness of knowledge through the use of grounded cases and an emphasis on system maintenance and evolution. However, RDR does act as a proof that symbolic systems can be used to address the problems that the situated cognition stance raises. Rosenfield claims that categories are not "stored things" but "categorization occurs at runtime" (Clancey, 1991b, p. 243). To build symbolic classification systems it is necessary to capture these perceptual categories. Although perceptual categorization is dynamic, from our experience with systems built using RDR experts are able to state some rules that can accurately predict to which class an object belongs, since the act can be considered predictable or stable behaviour (Clancey, 1991b, p. 249). We have observed that the rules developed tend to be overgeneralizations which take two to three refinements, resulting in an unbalanced tree.

The system we describe in Section 6 has been used in domains that qualify as "complex, dynamic, physical" and has been built not only to support but to encourage continual adaptation or refinement. We have avoided the complex analysis route and opted for capturing a simple model focused on the observed stable behaviours of the expert. In the system described, the role of the user is paramount in building, maintaining and using the system, and the expert acts as the filter that controls the inputs and outputs, supporting customization of the knowledge to suit the particular environment (Edwards, 1996).

6. From theory to practice

With such a lengthy discussion of what should or could be done we could be deserving of Vera and Simon's criticism: "although the situated action studies are valuable in drawing our attention to important phenomena, they offer little new theoretical constructs that are not already present in previous work" (Norman, 1993, p. 4). We take up their challenge and describe a knowledge acquisition and representation technique, Ripple-down Rules (RDR), that does represent a paradigm shift in the building, maintenance and use of KBS, and much of this shift is due to our belief in the situated cognition view. RDR has been used in a number of different domains, the most notable being chemical pathology where the Pathology Expert Interpretative Reporting System (PEIRS) (Edwards et al., 1993) went into routine use with only 200 rules and grew online to over 2000 rules over a four-year period (1991-1994). The system was built by experts without the intervention of a knowledge engineer after initial set up of the system. PEIRS provided comments for about 20% of the 500 reports issued each day. Each report was reviewed by a medical expert, resulting in four to five corrections each day, meaning the system was 95% accurate. Each rule took about 3 min to add, most of that time being taken up in deciding on the wording of the conclusion or in locating an existing suitable conclusion. This constitutes a development time of around 100 h for the


2000+ rule-base. This result is in marked contrast to the two-to-three rules per day typically associated with the maintenance of medium to large KBS.

The KA bottleneck and maintenance problems that prevail in most KBS have been addressed by RDR. In addition, we claim that RDR have addressed a number of the issues that situated cognition has raised. These issues are context, validation and development of an intuitive and adaptable interaction environment. The latter two issues have led us to incorporate tools for reflection on the performance model to assist exploration and understanding of the underlying higher concepts represented in our simple model. First we give a brief description of RDR and then consider the way RDR handles context, validation and reflection.

6.1. RIPPLE DOWN RULES

RDR looks at differences between cases and is conceptually close to research based on Personal Construct Psychology (Kelly, 1955) using Repertory Grids (Gaines & Shaw, 1989) and the use of a discernibility matrix in Rough Sets (Pawlak, 1991). In support of the use of differences by these approaches, Clancey states: "apparently, the very business of perception is to view the world conservatively (noticing only what is different) in order to adopt previous successful ways of behaving" (Clancey, 1991b, p. 256).

RDR were first developed to handle classification tasks where only a single classification per case was required. A binary tree structure linked the rules. Each rule had a single case associated with it. This case was known as the cornerstone case. A rule was added in response to an incorrect conclusion and was added as a refinement to the rule that fired incorrectly. KA involved assigning the correct conclusion and picking the salient features in the new case that differentiated it from the cornerstone case associated with the rule that gave the wrong conclusion (Compton & Jansen, 1990). We can define a single-classification RDR as a triple ⟨rule, X, N⟩, where X are the exception rules and N are the if-not rules (Scheffer, 1996), see Figure 1. When a rule is satisfied the exception rules are evaluated and none of the lower rules are tested. There has been concern that such a structure would result in extensive repetition in the knowledge base, but simulation studies have shown that knowledge bases produced by correcting errors as they occur are at least as compact and accurate as those produced by induction (Compton, Preston & Kang, 1994, 1995).
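The exception/if-not traversal just described can be sketched in a few lines of Python (a hypothetical illustration; the class and function names are ours, not taken from any RDR implementation):

```python
# Sketch of single-classification RDR inference over the triple <rule, X, N>:
# X (exception) is tried when a rule fires, N (if-not) when it fails.
# Hypothetical representation: a case is a set of attribute values and a
# rule's conditions are the subset of values it requires.

class Rule:
    def __init__(self, conditions, conclusion, exception=None, if_not=None):
        self.conditions = set(conditions)  # conditions required to fire
        self.conclusion = conclusion
        self.exception = exception         # X: refines this rule when it fires
        self.if_not = if_not               # N: tried only when this rule fails

def infer(rule, case, last_conclusion=None):
    """Return the conclusion of the last satisfied rule on the path."""
    if rule is None:
        return last_conclusion
    if rule.conditions <= case:
        # Rule satisfied: only its exception branch is evaluated;
        # none of the lower if-not rules are tested.
        return infer(rule.exception, case, rule.conclusion)
    return infer(rule.if_not, case, last_conclusion)

# Rule 0 (the default rule) always fires; refinements hang off it.
kb = Rule((), "Class 0",
          exception=Rule({"a"}, "Class 1",
                         exception=Rule({"a", "d"}, "Class 2"),
                         if_not=Rule({"h"}, "Class 3")))
```

For a case such as {a, d, h, i}, rule 0 fires, its exception on {a} fires, and the deeper exception on {a, d} fires last, so that rule's conclusion is returned.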

A more recent development is multiple classification RDR (MCRDR) which handles classification tasks that need multiple independent classifications (Kang, Compton & Preston, 1995; Kang, 1996). Instead of a binary tree, n-ary trees are produced. There are no false branches and each rule may be seen as a pathway against which the data is evaluated. In MCRDR there may be multiple cornerstone cases associated with each rule. Thus, KA requires the user to distinguish between the present case and all the stored cases. The user does this firstly by constructing a rule which distinguishes between the new case and one of the stored cases. If other stored cases satisfy the rule, further conditions are required to be added to exclude a further case, and so on until no stored cases satisfy the rule. It has been shown that a sufficiently precise rule typically requires the user to consider two or three cases (Kang et al., 1995). Incorrect conclusions can be stopped in the same way by specifying a rule with a null conclusion.

FIGURE 1. A single classification RDR KBS. Each rule can be seen as a pathway that leads from itself back to the top node which is rule 0. The highlighted boxes represent rules that are satisfied for the case {a, d, h, i}.


MCRDR is defined as the triple ⟨rule, C, S⟩, where C are the children/exception rules and S are the siblings. For inferencing, all siblings at the first level are evaluated and, if true, the list of children are evaluated until all children from true parents have been exhausted. The last true rule on each pathway forms the conclusions for the case. Figure 2 shows an example of an MCRDR. MCRDR is a more compact representation (Kang, 1996) and matures more quickly than a single classification RDR system (Kang et al., 1995). Due to the use of cases, RDR is a type of case-based reasoning (CBR), but unlike conventional CBR, RDR recognizes that despite any apparent generality of the knowledge provided, it is given in the context of a particular case and is prompted by the particular case. In conventional CBR there is an over-emphasis on cases and there is no clear approach for properly dealing with the knowledge from experts that is required, such as transforming cases, selection of salient features, indexing and so on. RDR focuses on explicitly handling these issues through the involvement of experts. For example, in RDR the expert identifies the salient features in the case which form the new rule and in a CBR sense provides the index by which to retrieve that case.
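A minimal sketch of this inference regime follows (hypothetical names, not the MCRDR for Windows code): every first-level sibling is tested, the children of true rules are explored, and the last true rule on each pathway contributes a conclusion.

```python
# Sketch of MCRDR inference over the triple <rule, C, S>: S are siblings,
# all of which are evaluated; C are children/exception rules, explored
# only under rules that fired. Cases and conditions are sets of values.

class MRule:
    def __init__(self, conditions, conclusion, children=()):
        self.conditions = set(conditions)
        self.conclusion = conclusion
        self.children = list(children)   # C: refinement rules

def mcrdr_infer(siblings, case):
    """Return the conclusions of the last true rule on each pathway."""
    conclusions = []
    for rule in siblings:                # S: every sibling is evaluated
        if rule.conditions <= case:
            deeper = mcrdr_infer(rule.children, case)
            # If no child fired, this rule is the last true rule on its path.
            conclusions.extend(deeper if deeper else [rule.conclusion])
    return conclusions

kb = [MRule({"a"}, "Class 1", children=[MRule({"d"}, "Class 2")]),
      MRule({"g"}, "Class 3"),
      MRule({"x"}, "Class 4")]
```

Here the case {a, d, g} yields two independent conclusions: Class 2 (the child refining Class 1) and Class 3.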

6.2. THE PRIMACY OF CONTEXT IN RDR

As discussed in Section 5, the existence of stable behaviours and accepted practices by experts supports the argument that we can build symbolic KBS despite the challenge that situated cognition poses. The issue of context is highly relevant to stable behaviours. If we consider the phenomenon of only being able to remember a person's phone number

FIGURE 2. An MCRDR KBS. The highlighted boxes represent rules that are satisfied for the case {a, d, g, h, k}. We can see that there are three conclusions, Class 2 (Rule 2), Class 3 (Rule 4) and Class 8 (Rule 10).


whilst in the process of dialing that number (Norman, 1988), stable behaviours can be seen as context dependent and they tend to become subconscious or reflexive actions. RDR stresses the importance of context on the appropriateness of conclusions through the use of grounded cases.

Without consideration of context, attempts to collect commonsense knowledge "like so many butterflies" would be advocating a return to the 1960s where the AI community held the views "memory as storage" and "knowledge is power" (Clancey, 1991b, p. 245). To avoid such errors, context is becoming an important factor in the reuse and sharing community (McCarthy, 1991; Chandrasekaran & Johnson, 1993). Research is being done by the knowledge sharing effort (Patil et al., 1992) into adding contexts into the Knowledge Interchange Format (KIF) to facilitate the translation of facts from one context to another. In the area of natural language, context plays a major role as the context of a word will often determine its meaning. In answer to the growing problem of generality in AI, Guha, under the supervision of McCarthy, began looking at the need to provide context. To facilitate sharing and reuse of knowledge, large KBS such as CYC require context, in addition to knowledge, to be understood and described so that it is known when the knowledge should be applied and how it can be adapted, if necessary, to fit the new situation.

Context in RDR is preserved by the storing of the case that prompted a rule to be added. Difference lists are generated between the current case and the case/s associated with the rule/s that incorrectly fired. The use of difference lists in RDR allows the user to see how the two contexts differ and decide whether one or more of the differences justifies an alternate conclusion. In Figure 3 we see the MAKE screen in MCRDR for Windows

FIGURE 3. The Make screen in MCRDR for Windows where the user makes new rules.


where the user selects the conclusion and is then required to select attribute-value pairs to form the rule conditions. The current case is presented together with the case associated with the rule that gave the misclassification (known as the cornerstone case). The new rule cannot be installed until all relevant cornerstone cases have been presented and a sufficiently specific rule developed to distinguish between the current case and all cornerstone cases in the case list. By taking this approach we are asking "What is the best Recommendation for this Case?" instead of the more general question "What is the best Recommendation for these types of Cases?" In conventional KA the second question is being asked, so it is necessary to try to prespecify all of the contexts in which that particular recommendation would apply. The creation of global rules implies that the knowledge is explicit (Tyler, 1978). This gives primacy to the classification, rather than the context, and means that the expert has the task of deciding whether the system's output fits the actual problem (Edwards, 1996). The way that RDR only considers an individual and real case moves the focus back to context and makes the expert's decision applicable to that particular case. The emphasis in RDR is on local knowledge† or knowledge in context rather than global knowledge.

† This view of local knowledge is not to be confused with the localization hypothesis (Geertz, 1993) which holds that memory is a storage place for things such as words and images which are retrieved unchanged as required.
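Difference-list generation, as described above, can be sketched as follows (a hypothetical illustration in which a case is simply a dict of attribute-value pairs; the function name is ours):

```python
# Sketch of difference-list generation between the current case and a
# cornerstone case. The user picks from this list the salient
# attribute-value pair(s) that justify the alternate conclusion.

def difference_list(current_case, cornerstone_case):
    """Return (attribute, cornerstone value, current value) triples
    for every attribute on which the two cases differ."""
    attrs = sorted(set(current_case) | set(cornerstone_case))
    return [(a, cornerstone_case.get(a), current_case.get(a))
            for a in attrs
            if current_case.get(a) != cornerstone_case.get(a)]
```

For example, comparing a current case {ph: low, bic: high} with a cornerstone case {ph: low, bic: low} yields a single difference on bic, which could then be picked as the condition of the refining rule.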


Given that humans are affected by environmental, emotional, social and historical factors, storage of the case may only provide one small aspect of what makes up a context. Emotions appear to be the most difficult aspect of context. On the one hand, emotions colour our perception and affect our recollections. On the other hand, the systems we build should give consistent recommendations. This does not mean ignoring emotions will result in more objective systems, but that emotions pose too much of a problem and the alternatives to treating them as just another attribute are not clear.

Clancey states that regarding the modelling of diagnostic strategies "classifications and production rules are fine for stating behavioural patterns" (1993, p. 278). The main problem he points to is providing some explanation of how the diagnosis is developed. He found that for explanation or learning purposes ES "need to articulate how rules fit together, how they are constructed" (Clancey, 1984, p. 59). To some extent this is supported by the structure of an RDR KBS, see Figure 4. The RDR pathways and associated cases provide a contextual explanation of why a conclusion has been reached. In contrast, the knowledge base structure in conventional KBS is difficult to determine due to the complex interaction of rules and the numerous possible pathways to arrive at a conclusion.

6.3. VALIDATION IN AN RDR KBS

RDR were originally developed to deal with the situated nature of knowledge provided by experts, particularly as observed during KBS maintenance (Compton & Jansen, 1990). It was observed that experts provide justifications rather than explanations of

FIGURE 4. A rule trace in RDR.


their actions, and the justification given would differ depending on the context, namely the audience to whom the justification was directed. This led to the exception structure and storage of the cases that prompted rules to be added, as discussed in Section 6.1. RDR is described as providing validated KA because the method does not allow the expert to add any rules which would result in any of the cornerstone cases being given a different conclusion from that stored. This is not total validation, but it ensures that the new rule is sufficiently specific and different from the wrong rule to assign the new classification to the current case but not to cover previously correctly classified cases.
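This cornerstone-case check can be sketched as follows (hypothetical helper; rule conditions and cases are modelled as plain sets of attribute values):

```python
# Sketch of validated KA: a proposed refinement rule is accepted only if
# it covers the current case while leaving every stored cornerstone case
# untouched (so no previously correct conclusion is changed).

def rule_is_valid(conditions, current_case, cornerstone_cases):
    """True when the rule fires on the current case but on no cornerstone case."""
    if not conditions <= current_case:
        return False                     # must assign the new conclusion
    return all(not conditions <= corner for corner in cornerstone_cases)
```

When the check fails, the expert is shown the offending cornerstone case and adds further differentiating conditions until the rule is sufficiently specific.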

In contrast to most approaches to KBS verification and validation (Kang et al., 1996), RDR does not distinguish between initial KA and system maintenance. RDR develops the whole system on a case by case basis and automatically structures the KBS in such a way as to ensure changes are incremental. Even KA techniques that try to minimize the amount of knowledge engineering support and the need for a priori modelling, such as Repertory Grids (Gaines & Shaw, 1989) and FCA, require consideration of the whole domain and do not support incremental maintenance. If amendments are made, implications must be regenerated.

We have also been looking at providing KA in a critiquing mode so that when a user selects a certain conclusion it is evaluated against all other paths (rules) that reach that conclusion. If the proposed rule is inconsistent with the current case the user is notified of the anomaly. In Figure 5, using the data from the SISYPHUS III (Shadbolt, 1996)

FIGURE 5. The critiquing screen in MCRDR for Windows allows the user to see if any existing rule pathways for the selected conclusion would conflict with the current case.


experiments, we have attempted to add a rule that states that a particular rock, already classified as volcanic, should also be classified as plutonic. These conclusions are mutually exclusive, therefore it is inconsistent to assign both conclusions to the one case. In Figure 5, this inconsistency has been detected by the system. The user is shown the rules in conflict and can then change their conclusion, select an existing rule to modify or add a new rule. Once the changes are made the case is run again and the user is given all conclusions for that case. This should now show the new conclusion, and any conclusions that were not being altered should be given again.

Once a conclusion has been decided on, there is also assistance in forming the rule. The user is able to compare the proposed rule against existing rule pathways using a nearest-neighbour algorithm which assigns a similarity score, or find where the new concept fits into the hierarchy of concepts derived using FCA. The purpose of showing the user this information is to give them an understanding of how the new rule fits in with the existing knowledge. If the new rule is identified with concepts that seem inappropriate, this is a warning to the user that the knowledge in the new rule or an existing rule is incorrect. Figure 6 shows how rule pathways can be compared. Firstly, the user selects a pathway, which could be a proposed new rule. If the nearest-neighbour algorithm (NNA) option is taken, the user is shown a list of scores between 0 and 1 of the closeness of the specified pathway to all other pathways. If the formal concept analysis (FCA) option is taken, the user is presented with a listbox of all the other pathways in the knowledge base that are matches, sub- or superconcepts.
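The actual NNA formula is not given here, so the sketch below uses Jaccard overlap of rule conditions purely as an illustrative stand-in for a similarity score between 0 and 1:

```python
# Illustrative pathway similarity: the fraction of rule conditions the
# two pathways share (Jaccard index), giving 1.0 for identical pathways
# and 0.0 for pathways with no condition in common. This is an assumed
# measure, not the one implemented in MCRDR/FCA.

def pathway_similarity(path_a, path_b):
    """Jaccard similarity between two sets of rule conditions."""
    if not path_a and not path_b:
        return 1.0
    return len(path_a & path_b) / len(path_a | path_b)
```

Ranking all stored pathways by such a score gives the list the user sees when the NNA option is taken.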

FIGURE 6. Comparing rule pathways in MCRDR/FCA.


6.5. REFLECTING ON KNOWLEDGE IN AN RDR KBS

Although we only needed to build a simple model that encoded the performance knowledge for KA, inferencing or maintenance of the knowledge, we wanted to find the higher-level explanation model contained in our KBS due to their "explanatory value as psychological descriptions" (Clancey, 1993, p. 89) and their usefulness in instruction (Schon, 1987). Since we already had a successful way of managing the rule-base, we wanted to find a way of using our assertional RDR KBS to build a terminological† KBS that would reveal the abstractions and structure of the concepts in our KBS. By understanding the relationships between concepts we would be able to cope with changing contexts, which includes changes in the usage of the knowledge, because they would provide the "continuity" or applicability of a context to a new situation (Lave, 1988, p. 20). Agre (1993) also identifies relationships between agents and worlds as a critical issue in the use of categories for locating things. When applied to a KBS we can view this as the need to capture the relationships between the objects in our KBS, which are the rules and conclusions, in order to provide different views and activity-uses of knowledge (Richards, Gambetta & Compton, 1996). This involved taking the primitive concepts in our RDR KBS and finding abstractions that represent the higher concepts in the model. The work by Edwards (1996) on reflective KBS considered how the behaviour of an RDR KBS could be modified using what he termed "exceptions, prudence and credentials". By collecting various statistics regarding rule and conclusion usage the system could determine when it was reaching its limits. In our search for a mechanism to identify relations between the attribute-value pairs and conclusions we have taken a different tack from Edwards in providing reflective systems, through the incorporation of retrospective modelling based on formal concept analysis.

Formal concept analysis, first developed by Wille (1982), is a mathematically based‡ method of finding, ordering and displaying formal concepts. FCA is "based on the philosophical understanding of a concept as a unit of thought consisting of two parts: the extension and intension (comprehension); the extension covers all objects (entities) belonging to the concept while the intension comprises all attributes (or properties) valid for all those objects" (Wille, 1992, p. 493). The set of objects and their attributes, known as the extension and intension respectively, constitute a formal context which may be used to derive a set of ordered concepts. The lattice structure is similar to a semantic network but it provides a hierarchy of objects and attributes. Concept labelling has been reduced in the lattices in Figures 9, 10, 12 and 13 to reduce screen clutter. To determine the full set of attributes and objects that belong to a concept we need to traverse the lattice. All attributes inherited by a concept are reached by ascending paths and all objects belonging to a concept are reached by descending paths.

Clancey points out that semantic networks can embody a cognitive model thatexhibits patterns of human behaviour. However, since they are limited to words they

† Terminological KBs consist of terms structured into inheritance networks (Brachman, 1979). Their main building blocks are concepts and roles and they reason by determination of subsumption between concepts (Nebel, 1991). Assertional KBS are made up of executable assertions (such as rules) that assert the relationships between terms.

‡ We do not give the mathematical foundation here but the interested reader is directed to Wille (1982).


constitute a "grammatical model of cognition" (Clancey, 1991b, p. 251) and do not capture non-verbal conceptualizations or model the perceptual-conceptual learning that occurs when humans attach meanings and reinterpretations to the words. We acknowledge these limitations and the fact that to some extent we are treating concepts as "things" rather than "processes of perceiving and processes of behaving" (Clancey, 1991b, p. 252). Again we stress that while RDR systems can exhibit behaviour similar to a human expert, we do not say that the system we have developed matches the way that the human mind works, although we think it is closer than systems that depend on the creation of ontologies and problem solving methods for knowledge acquisition. A major criticism of programs that use a grammatical model is that because they are bounded by the terminology used they are unable to learn at the knowledge level. In our case we are making use of a grammatical model to uncover concepts that may not be so easy for an expert to identify. This occurs because in many cases an expert will perform tasks at a subconscious level and may have difficulty explaining why they have acted thus. As explained by Clancey (1988) concerning the process of extracting conceptual and procedural abstractions from MYCIN into NEOMYCIN, the most famous reuse of knowledge, "we are stating a model that goes well beyond what experts state without our help" (Clancey, 1991b, p. 261).

RDR and FCA share a number of situated views. Both see that KA should be a task primarily performed by the expert and reduce modelling to the tasks of classifying cases (objects) and the identification of the salient features (attributes). FCA also places a strong emphasis on the importance of knowledge in context and is "guided by the conviction that human thinking and communication always take place in contexts which determine the specific meaning of the concepts used" (Wille, 1996, p. 23).

A formal context is a crosstable made up of rows of objects and columns of attributes. The crosses indicate that a particular object has the corresponding attribute, see Figure 7. Since we are interested in finding the higher concepts in our rule-base we interpret the MCRDR rule pathways as objects, treating each rule condition as an attribute. In the tool we have developed, known as MCRDR/FCA, we are able to select various views of the knowledge base to model. Our motivation for this is twofold. Practically, we are limited by the amount of information we can depict on a screen before it becomes incomprehensible. Thus, by restricting the view we can present a clearer diagram. Secondly, users are also limited in the amount of information that they can comprehend at a time and it is useful to concentrate on particular aspects of the KBS. This approach has previously been found to be a useful strategy by Wille (1989) and Ganter (1988), who shorten the context by finding subcontexts and subrelations. In the example in Figures 7, 8 and 9 we have selected the conclusion "%MC002-Metabolic compensation.2" from the blood gases domain. In Figure 7 the set of objects G (for Gegenstände in German) = {9-%MC002, 10-%MC002, 14-%MC002, 15-%MC002, 19-%MC002, 49-%MC002}, where each object is referred to by the rule number and the conclusion. The set of attributes M (for Merkmale in German) = {Normal(Blood_Ph), Low(Blood_Bic), Low(Blood_PCO2), 1=1,† High(Blood_Ph), High(Blood_PCO2), Low(Blood_Ph), High(Blood_Bic), Incr(Blood_Ph), Decr(Blood_Bic), Curr(Blood_Ph) ≤ 7.36}. The set

† 1=1 is the default condition for the default rule, which is inherited by all rules.

[Figure 7 crosstable: rows are the six rules 9-%MC002, 10-%MC002, 14-%MC002, 15-%MC002, 19-%MC002 and 49-%MC002; columns are the eleven attributes Normal(Blood_Ph), Low(Blood_Bic), Low(Blood_PCO2), 1=1, High(Blood_Ph), High(Blood_PCO2), Low(Blood_Ph), High(Blood_Bic), Incr(Blood_Ph), Decr(Blood_Bic) and Curr(Blood_Ph) ≤ 7.36; crosses mark the conditions used by each rule.]

FIGURE 7. A formal context for the MCRDR rules which conclude %MC002-"Metabolic compensation.2" in the blood gases domain.


of objects G is related to the set of attributes M by the binary relation I which defines a formal context as K = (G, M, I). The set of relations I = {(9-%MC002, Normal(Blood_Ph)), … , (49-%MC002, Curr(Blood_Ph) ≤ 7.36)}.

Each row in our crosstable constitutes a formal concept, which is a set of attributes and the object/s which have those attributes. Each row represents the primitive concepts in our KBS. We can derive higher level concepts from the low-level concepts by taking the intersections of sets of attributes and the set of objects which share those attributes. For example, we can see in Figure 7 that objects (rules) 14, 19 and 49 all share the attributes Normal(Blood_Ph), 1=1 and High(Blood_PCO2), which is concept No. 3 in Figure 8. The concepts found are ordered using the subsumption relation ≥ and can be drawn as a line, or Hasse, diagram as in Figure 9. For comparison of a proposed rule with the existing concepts, as described in Section 6.3, we are only using the intensional definition of the concepts derived using Wille's technique, because an intensional definition implies an extensional definition but the converse is possible but not necessarily true (Zalta, 1988); thus the extensional definition was too restrictive.
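Deriving the higher concepts by intersecting attribute sets can be sketched as follows (a toy context rather than the blood gases data; the brute-force enumeration and the function name are ours, and top/bottom completion of the lattice is omitted):

```python
# Sketch of concept derivation from a formal context K = (G, M, I):
# every intersection of object intents is a candidate intent, and its
# extent is the set of objects possessing all attributes in that intent.

from itertools import combinations

def derive_concepts(context):
    """context: object -> frozenset of attributes.
    Returns (extent, intent) pairs closed under attribute intersection."""
    objs = list(context)
    intents = {frozenset.intersection(*(context[o] for o in group))
               for r in range(1, len(objs) + 1)
               for group in combinations(objs, r)}
    return {(frozenset(o for o in objs if intent <= context[o]), intent)
            for intent in intents}
```

A two-rule context in which both rules share one condition yields three concepts: one per rule plus the more general shared concept, mirroring the way rules 14, 19 and 49 combine into concept No. 3 above.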

We currently have 13 different selection options that can be used to generate a formal context. For example, a user is able to select a conclusion or a family of conclusions so that all rules that give that conclusion are added to the context. A family of conclusions refers to the situation where an expert has chosen a naming convention that shows that conclusions are related to one another, such as %SH000 may mean "wear shorts" and %SH001 may mean "do not wear shorts". Another option is to select a particular rule and add all rules that share any of the selected rule's conditions. Alternatively, the user can select a rule condition or specify an attribute as the basis for picking up related rules. We are still in the process of refining and evaluating these views of the knowledge to provide sufficient flexibility to allow the user to work with the knowledge according to their needs. After generating a context by one or a combination of the many methods of selection, we generate concepts as briefly described. A more detailed description is given in Richards and Compton (1997). The result is a concept matrix as in Figures 8 and 11 which can also be shown as a concept lattice as in Figures 9, 10, 12 and 13.

FIGURE 8. The concept matrix in MCRDR/FCA for Windows for the conclusion %MC002-"Metabolic compensation.2" in the blood gases domain. Ten (10) concepts have been found. Each row represents a concept. The columns show the eleven attributes, which are listed first, followed by the six objects as shown in the formal context in Figure 7. The attribute labels have been converted to sequential numbers and the object labels correspond to the rule number to allow the relationships between concepts and the possible patterns to be more readily seen. Full labelling can be obtained by using the pop-up windows as shown in this figure or by clicking on the attribute, object or concept number. The concepts have been ordered to show the subsumption relations that exist. The extent of the top concept, No. 1, includes all objects. The intent of the bottom concept, No. 10, includes all attributes.


To date MCRDR/FCA has been used in three different domains. The first domain was a 60-rule Blood Gases Domain that had been developed from the cornerstone cases associated with the 2000+ PEIRS rules. The line diagram in Figure 10 has been generated from a context based on attributes only, without regard for the values, and is interesting as a high level abstraction of the knowledge base. We can see that conclusions from the same conclusion family are generally grouped together. For example, the attribute BLOOD_BIC is used by all conclusions for the %MC conclusion family and BLOOD_PO2 is only used by the %OX conclusion family. These groupings of conclusions into families are to be expected since the expert has already identified that there is a relationship between them. It is interesting to note that while the expert had not explicitly stated the higher-level relationship between various attributes and conclusions (the user had provided lower level A-V pairs in the form of rule conditions), the model drawn by MCRDR/FCA shows which attributes are the key ones for determining certain types of conclusions.

FIGURE 9. The line diagram in MCRDR/FCA that shows the rule conditions and the relationships between them for the conclusion %MC002 ("Metabolic compensation.2").

FIGURE 10. The line diagram in MCRDR/FCA for Windows using a formal context based on the attributes in the Blood Gases KBS without regard for the values of those attributes.

916 D. RICHARDS AND P. COMPTON


The second domain was known as LOTUS and concerned the adaptation and management of Lotus uliginosus cv. Grasslands Maku for pastures in the Australian state of New South Wales (Hochman, Compton, Blumenthal & Preston, 1996). Four KBS were developed by four independent agricultural advisors. Initially, each expert entered cases, and rules which covered them, as they occurred in the field. Periodically the KBS and advisors were brought together. Where cases offered new insights they were shown to the group and used for KA in the other KBS. We used the concept matrices and line diagrams in Figures 11 and 12 to compare the conceptual models of the advisors. This technique was seen as a useful way to identify and reconcile any differences as well as to identify the main concepts associated with this domain.

The concept matrices for the four LOTUS KBS are shown in Figure 11. The matrices are too small to be comprehensible, but the four matrices together demonstrate that patterns between the KBS can be found. By looking at full-size diagrams a number of observations are possible. We can see that all KBS share a number of concepts (the first nine attributes and 10 objects are the same in each KBS). We can see that the fifth concept in the Lotus3 KB is not shared by any of the others and that the advisor considers that when the LOTUS-RATE is >= 3 the conclusion of Ryegrass no longer holds and should be replaced by No Conclusion. We can also see that concepts 12 and 13 in the Lotus3 KB are not shared by any of the other KBs. The concept intent (SCARABS = YES) and the concept extent (%SCARA), which represent a rule condition and a rule conclusion respectively, have been introduced by this advisor. The conclusion %SCARA is an abbreviation for "Maku OK, but persistence limited by scarabs". By looking at the matrix the experts are able to see not only what attributes (intents) and conclusions (extents) others consider important but also the relationships between them and how they affect other conclusions.

FIGURE 11. The concept matrices for the four LOTUS KBS in MCRDR/FCA.

FIGURE 12. The line diagrams for Lotus1 and Lotus2 KBS.


Extents are typically cases. Through the use of rules in the formal context, the rules become our extents and are denoted by the rule number and conclusion code. However, since a number of rules may be given the same conclusion, there is a one-to-many relationship between a case in the world and a conclusion in our model. When using multiple classification RDR this mapping may be many-to-many, where one case may give rise to more than one conclusion and conclusions may be valid for a number of cases. We have replaced the labels of the intents and extents with numbers to fit the whole concept matrix on the screen at the same time. However, it is difficult to understand the knowledge being modelled without labelling. To assist the user it is possible to drop down a list of attributes and/or objects with the corresponding numbers, as shown in Figure 8, or to click on a number to get the corresponding full label.
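The derivation of concepts from such a formal context can be sketched as follows. This is a minimal illustration of standard FCA concept enumeration, not the MCRDR/FCA implementation; the rule names and attribute-value labels are invented:

```python
from itertools import combinations

# A formal context: objects are rules (potential extents), attributes are
# rule conditions (potential intents). All names here are illustrative.
context = {
    "rule1": {"BLOOD-BIC=low", "PH=low"},
    "rule2": {"BLOOD-BIC=low", "PH=normal"},
    "rule3": {"PH=low", "PCO2=high"},
}

def concepts(ctx):
    """Enumerate all formal concepts (extent, intent) of a context.

    Every closed intent is either the full attribute set (the bottom
    concept) or an intersection of some object intents; the matching
    extent is every object possessing all attributes of the intent.
    """
    all_attrs = frozenset().union(*ctx.values())
    intents = {all_attrs}  # intent of the bottom concept
    # Add the intersection of every non-empty subset of object intents.
    for r in range(1, len(ctx) + 1):
        for objs in combinations(ctx, r):
            intents.add(frozenset.intersection(
                *(frozenset(ctx[o]) for o in objs)))
    result = []
    for intent in intents:
        extent = frozenset(o for o, attrs in ctx.items() if intent <= attrs)
        result.append((extent, intent))
    # Largest extent first: the top concept covers all objects.
    return sorted(result, key=lambda c: -len(c[0]))

for extent, intent in concepts(context):
    print(sorted(extent), sorted(intent))
```

For this toy context the enumeration yields seven concepts, with the top concept's extent containing all three rules and the bottom concept's intent containing all five conditions, mirroring the ordering shown in Figure 8.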

The line diagrams for Lotus1 and Lotus2 are shown in Figure 12. Although the labelling clutters the screen, the extra information is important in understanding the diagram presented. The line diagram provides a more hierarchical understanding of the sub- and super-relationships in the domain than the concept matrix. At a glance we can see that Lotus1 and 2 have 11 and 14 concepts, respectively, and that the three concepts that differ are concepts number 9, 10 and 11 in the Lotus2 KBS. These concepts have introduced new attributes and conclusions (as part of the rule-object) not used by the Lotus1 KBS. The structure of the knowledge in both KBS is very similar, with four levels of concepts in both. Even though concepts 2, 3 and 4 in both KBS appear to be slightly different structurally, due to inheritance of attributes on higher paths, both advisors consider that when (LOW-PH = YES) and (RYEGRASS >= 15) the conclusion should be %NC000 No Conclusion. The examples shown used all the rules in each LOTUS KBS, since the number of rules ranged from 11 to 18. As described earlier

FIGURE 13. The line diagram for the SISYPHUS III domain from the context for the conclusions %PL000 (Plutonic) and %VC000 (Volcanic).


we could have narrowed our focus of attention by selecting only certain objects (rules or parts of rules) to be included in our context.

The third domain was the SISYPHUS III (Shadbolt, 1996) geology domain, which also involved knowledge from multiple experts. Initially we have used the tools to understand the key concepts of the domain. We can see in Figure 13 that when GRAIN-SIZE = COARSE a rock is plutonic (%PL000) and if GRAIN-SIZE = FINE a rock is volcanic (%VC000). However, when GRAIN-SIZE = MEDIUM, then if SILICA = LOWISH it is a volcanic rock; otherwise, if SILICA = VERY-HIGH or INTERMEDIATE, it is a plutonic rock. The line diagram has shown us which attribute-value pairs are the critical ones for these conclusions.
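The classification logic read off the line diagram can be written down directly. The following is a sketch of the rules as described in the text, not the MCRDR KBS itself, and the function name is invented:

```python
def classify_rock(grain_size, silica=None):
    """Classify a rock as plutonic (%PL000) or volcanic (%VC000)
    using the attribute-value pairs read off the Figure 13 line
    diagram. Returns None for combinations the diagram does not cover."""
    if grain_size == "COARSE":
        return "%PL000"          # coarse-grained rocks are plutonic
    if grain_size == "FINE":
        return "%VC000"          # fine-grained rocks are volcanic
    if grain_size == "MEDIUM":
        if silica == "LOWISH":
            return "%VC000"      # medium grain, lowish silica: volcanic
        if silica in ("VERY-HIGH", "INTERMEDIATE"):
            return "%PL000"      # medium grain, higher silica: plutonic
    return None

print(classify_rock("COARSE"))            # %PL000
print(classify_rock("MEDIUM", "LOWISH"))  # %VC000
```

Note that SILICA only discriminates when GRAIN-SIZE is MEDIUM, which is exactly the kind of dependency the line diagram makes visible at a glance.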

The models shown above in the line diagrams are static and must be regenerated when the knowledge is updated. We do not see this as a major problem, since modifying the rules in the MCRDR KBS upon which they are based is a simple task and the line diagrams are quickly and automatically recreated.

From the examples, we can see that the concept matrix, and even more so the line diagram, provide succinct but powerful tools for analysing and comparing conceptual models. If the purpose of analysis is learning about the domain, then the ability to derive an abstraction hierarchy from primitive concepts is useful in gaining an understanding of the higher principles in the domain. To support use of the tool for reconciliation of conflict between viewpoints we have made minor extensions to MCRDR/FCA to support requirements engineering (RE). A general RE framework has been developed which can be applied to any knowledge representation that can be mapped into a decision table (Richards & Menzies, 1998). This framework supports detection of conflict and identification of the nature of the conflict in terms of Gaines and Shaw's (1989) four-state model of consensus, correspondence, contrast and conflict. In our implementation of the RE framework, we have defined resolution operators based on the existing modification capabilities of RDR. These operators allow the assertional RDR KBS to be changed according to conflict detected in the FCA-derived terminological KBS presented to the stakeholders of each viewpoint. Additionally, the framework and tool offer


the negotiation strategies: reconcile, ignore, delay, circumvent and ameliorate. We also offer a number of techniques that evaluate the extent to which the resolution operators are reducing the initial degree of conflict, so that we know we are moving towards consensus.
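The four-state comparison can be sketched as follows. This is a toy illustration of Gaines and Shaw's (1989) model, assuming a deliberately simplified representation in which a viewpoint is a term plus the distinction attached to it (here, a set of rule conditions); the function and its inputs are illustrative, not part of the RE framework described above:

```python
def compare_viewpoints(term_a, meaning_a, term_b, meaning_b):
    """Classify a pair of expert viewpoints using Gaines and Shaw's
    four-state model. 'meaning' stands in for the distinction an expert
    attaches to a term; this flat representation is an assumption of
    the sketch."""
    same_term = term_a == term_b
    same_meaning = meaning_a == meaning_b
    if same_term and same_meaning:
        return "consensus"       # same term, same distinction
    if not same_term and same_meaning:
        return "correspondence"  # different terms, same distinction
    if same_term and not same_meaning:
        return "conflict"        # same term, different distinctions
    return "contrast"            # different terms, different distinctions

# Two advisors attaching the same conditions to the same conclusion code:
print(compare_viewpoints("%SCARA", frozenset({"SCARABS=YES"}),
                         "%SCARA", frozenset({"SCARABS=YES"})))  # consensus
```

Detecting which of the four states holds between two viewpoints then determines which negotiation strategy and resolution operators are worth applying.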

The ability to reuse knowledge which was originally captured for one purpose, such as consultation, for other purposes, such as explanation and learning, was the motivation for incorporating FCA into RDR. To support our claim that the line diagrams are useful for explanation and learning, we have performed preliminary evaluation studies with novices and experts in each of the three domains mentioned, which cover the fields of medicine, geology and agriculture. An initial survey using twelve computer science postgraduates and academics has shown that the diagrams can provide insight into the underlying knowledge and that they provide more structure than a number of other graphic representations of the same knowledge. We have also used experts in each of the three domains, who have found that the knowledge learnt by a novice using the line diagram is representative of thinking in these domains. In both studies the line diagram has proven to be a valuable communication tool.

6. Conclusion

The Ripple-down Rules approach described has been developed to cope with the situated nature of human cognition and action. This is seen in: the emphasis on knowledge obtained in the context of a grounded case; incremental KA, maintenance and validation; and a KA method that is natural for experts in many domains and designed to be performed without the mediation of a knowledge engineer. These features have been previously discussed in the literature and proven with real systems. However, the lack of terminological models in RDR KBS which provide higher-level concepts has at times been considered a limitation, due to their potential usefulness for explanation, documentation and learning. We have sought to address this limitation by using the performance knowledge in our assertional RDR KBS to retrospectively uncover explanation knowledge in the form of a terminological KBS. This is similar to how a human may act reflexively and then later reflect on their actions, seeing the models and patterns that may exist. Since the assertional knowledge is grounded by cases we are able to validate and maintain the performance knowledge. We believe that trying to build generally applicable models that are not grounded in cases is problematic because of their unreliability and variability due to situational influences.

We have also sought to offer a system that can be used for a number of purposes, rather than separate systems for each one. The goal is a user-adaptable system that can be utilized in whichever way the user desires. The tool we offer supports a wide range of modes from which the user can select according to their personal preferences and their current situation. This seems more realistic than a system that forces the expert to interact in one mode based on a preconceived expectation of the user's requirements.

Recently, RDR have been shown to handle not only classification tasks but also configuration tasks. The ability to support both tasks without changing the PSM is a significant result and further supports the claim that, despite the situated nature of knowledge, symbolic KBS can be useful systems. Within the RDR research group we are actively seeking to further simplify KA by allowing an expert to add or modify feature


definitions at any stage, as with human practice, and by seeing what other problem types can be handled by RDR.

In this paper, we have acknowledged that knowledge is socially situated, but we have not explored the social aspect of knowledge construction in our research. We and Clancey (personal communication) feel that the RDR approach places us in a good position to further explore this aspect of knowledge. The RE framework and extensions are steps in the direction of providing a collaborative environment for the development of RDR. We intend to turn our attention to this area in the future.

Another possible role of RDR in the situated cognition debate has been suggested by Menzies (personal communication). His suggestion is to use RDR to assess symbolic responses to situated cognition by contrasting the knowledge from different experts, then tracking changes in that knowledge over time. The argument is that if:

- there is no conflict between knowledge from experts over the same well-known problem; and

- there is little change in that knowledge over time;

then the strong situated cognition stance (see Menzies, 1996) is absolutely not required.

In conclusion, situated cognition has been seen as a major stumbling block for conventional KBS because of the emphasis on getting the system right in a one-off development. While our current RDR research does not contribute to an understanding of situated cognition, the systems built using RDR act as a proof that, while experts cannot provide complete knowledge, they do make useful generalizations. Further, RDR provides a way of addressing the problem of building systems in a fairly simple symbolic way. However, these systems, while performing better (it seems) than early approaches, rest on fairly simple key expectations; that is, that a complete system cannot be built. The RDR contribution to situated cognition in the future may be that situated cognition is not such a problem for producing useful systems; we simply have to change our expectations.

RDR research is funded by various ARC grants. Our thanks go to Phil Preston for his assistance. Thanks to the reviewers for their useful comments, particularly William Clancey for his detailed remarks and suggestions.

References

AGRE, P. E. (1993). The symbolic worldview: reply to Vera and Simon. Cognitive Science, 17, 61-69.

AGRE, P. E. & CHAPMAN, D. (1987). Pengi: an implementation of a theory of activity. Proceedings of the 6th National Conference on Artificial Intelligence, pp. 268-272. Menlo Park, CA: American Association for Artificial Intelligence.

BRACHMAN, R. J. (1979). On the epistemological status of semantic networks. In N. V. FINDLER, Ed. Associative Networks: Representation and Use of Knowledge by Computers, pp. 3-50. New York: Academic Press.

BREUKER, J. (1994). Components of problem solving and types. In L. STEELS, G. SCHREIBER & W. VAN DE VELDE, Eds. A Future for Knowledge Acquisition: Proceedings of the 8th European Knowledge Acquisition Workshop, EKAW'94, Lecture Notes in Artificial Intelligence, Vol. 867, pp. 118-136. Berlin: Springer.


BROOKS, R. (1991). Intelligence without representation. Artificial Intelligence, 47, 139-159.

CARBONELL, J. G., KNOBLOCK, C. A. & MINTON, S. (1989). Prodigy: an intelligent architecture for planning and learning. In Architectures for Intelligence. Hillsdale, NJ: Lawrence Erlbaum Associates.

CAWSEY, A. (1993). User modelling in interactive explanations. Journal of User Modelling and User Adapted Interaction.

CHAPMAN, D. (1989). Penguins can make cake. AI Magazine, 10, 45-50.

CHANDRASEKARAN, B. (1986). Generic tasks in knowledge-based reasoning: high level building blocks for expert system design. IEEE Expert, 23-30.

CHANDRASEKARAN, B. & JOHNSON, T. (1993). Generic tasks and task structures. In J. M. DAVID, J.-P. KRIVINE & R. SIMMONS, Eds. Second Generation Expert Systems, pp. 232-272. Berlin: Springer.

CLANCEY, W. J. (1984). Methodology for building an intelligent tutoring system. In W. KINTSCH, J. MILLER & P. POLSON, Eds. Methods and Tactics in Cognitive Science, pp. 51-83. Hillsdale, NJ: Lawrence Erlbaum Associates.

CLANCEY, W. J. (1985). Heuristic classification. Artificial Intelligence, 27, 289-350.

CLANCEY, W. J. (1986). From GUIDON to NEOMYCIN and HERACLES in twenty short lessons: ONR Final Report 1979-1985. AI Magazine, 40-60.

CLANCEY, W. J. (1988). Acquiring, representing and evaluating a competence model of diagnosis. In M. T. H. CHI, R. GLASER & M. J. FARR, Eds. The Nature of Expertise, pp. 343-418. Hillsdale, NJ: Erlbaum.

CLANCEY, W. J. (1991a). The frame of reference problem in the design of intelligent machines. In K. VAN LEHN, Ed. Architectures for Intelligence. Hillsdale, NJ: Erlbaum.

CLANCEY, W. J. (1991b). Book review: Israel Rosenfield, The Invention of Memory: A New View of the Brain. Artificial Intelligence, 50, 241-284.

CLANCEY, W. J. (1992). Model construction operators. Artificial Intelligence, 53, 1-115.

CLANCEY, W. J. (1993). Situated action: a neurological interpretation response to Vera and Simon. Cognitive Science, 17, 87-116.

CLANCEY, W. J. (1997). Situated Cognition: On Human Knowledge and Computer Representation. New York: Cambridge University Press.

COMPTON, P., EDWARDS, G., KANG, B., LAZARUS, L., MALOR, R., MENZIES, T., PRESTON, P., SRINIVASAN, A. & SAMMUT, C. (1991). Ripple down rules: possibilities and limitations. Proceedings of the 6th Banff AAAI Knowledge Acquisition for Knowledge Based Systems Workshop, pp. 6.1-6.18. Banff.

COMPTON, P., EDWARDS, G., KANG, B., LAZARUS, L., MALOR, R., MENZIES, T., PRESTON, P. & SRINIVASAN, A. (1991). Ripple down rules: turning knowledge acquisition into knowledge maintenance. Artificial Intelligence in Medicine, 4, 463-475.

COMPTON, P. & JANSEN, R. (1990). A philosophical basis for knowledge acquisition. Knowledge Acquisition, 2, 241-257.

COMPTON, P., PRESTON, P. & KANG, B. (1995). The use of simulated experts in evaluating knowledge acquisition. Proceedings of the 9th Banff Knowledge Acquisition for Knowledge Based Systems Workshop, pp. 12.1-12.18. Banff, 26 February-3 March 1995.

COMPTON, P., RAMADAN, Z., PRESTON, P., LE-GIA, T., CHELLEN, V., MULHOLLAND, M., HIBBERT, D. B., HADDAD, P. R. & KANG, B. (1998). A trade-off between domain knowledge and problem-solving method power. Proceedings of the 11th Workshop on Knowledge Acquisition, Modeling and Management (KAW'98), SHARE-7. Banff, Canada, 18-23 April. Calgary: SRDG Publications, University of Calgary.

CRAGUN, B. J. & STEUDEL, H. J. (1987). A decision-table-based processor for checking completeness and consistency in rule-based expert systems. International Journal of Man-Machine Studies, 26, 633-648.

DAVIS, R. (1977). Interactive transfer of expertise: acquisition of new inference rules. Artificial Intelligence, 12, 121-157.

EDWARDS, G. (1996). Reflective expert systems in clinical pathology. MD thesis, University of New South Wales.

EDWARDS, G., COMPTON, P., MALOR, R., SRINIVASAN, A. & LAZARUS, L. (1993). PEIRS: a pathologist maintained expert system for the interpretation of chemical pathology reports. Pathology, 25, 27-34.

GAINES, B. R. (1989). An ounce of knowledge is worth a tonne of data: quantitative studies of the trade-off between expertise and data based on statistically well-founded empirical induction. Proceedings of the 6th International Workshop on Machine Learning, pp. 156-159. San Mateo, CA: Morgan Kaufmann.

GAINES, B. R. & SHAW, M. L. G. (1989). Comparing the conceptual systems of experts. Proceedings of the International Joint Conference on Artificial Intelligence, pp. 633-638.

GAINES, B. R. & SHAW, M. L. G. (1995). Collaboration through concept maps. CSCL'95 Proceedings.

GANTER, B. (1988). Composition and decomposition of data. In H. BOCK, Ed. Classification and Related Methods of Data Analysis, pp. 561-566. Amsterdam: North-Holland.

GANTER, B. & WILLE, R. (1989). Conceptual scaling. In F. ROBERTS, Ed. Applications of Combinatorics and Graph Theory to the Biological Sciences, pp. 139-167. New York: Springer.

GOMEZ-PEREZ, A. (1994). From knowledge based systems to knowledge sharing: evaluation and assessment. Knowledge Systems Laboratory, Stanford University.

GEERTZ, C. (1993). Local Knowledge: Further Essays in Interpretive Anthropology. London: Fontana.

GREENO, J. G. & MOORE, J. L. (1993). Situativity and symbols: response to Vera and Simon. Cognitive Science, 17, 49-59.

GROSSNER, C., PREECE, A. D., CHANDLER, P. G., RADHAKRISHNAN, T. & SUEN, C. Y. (1993). Exploring the structure of rule based systems. Proceedings of the 11th National Conference on Artificial Intelligence, pp. 704-709. Washington, DC: MIT Press.

GUHA, R. V. & LENAT, D. B. (1990). CYC: a mid-term report. AI Magazine, 11, 32-59.

HEMMANN, T. (1993). Reusable frameworks of problem solving. In C. PEYRALBE, Ed. IJCAI-93 Workshop on Knowledge Sharing and Information Interchange. Chambery, France, 29 August.

HOCHMAN, Z., COMPTON, P., BLUMENTHAL, M. & PRESTON, P. (1996). Ripple-down rules: a potential tool for documenting agricultural knowledge as it emerges. Proceedings of the 8th Australian Agronomy Conference, pp. 313-316. Toowoomba.

KANG, B. (1996). Validating knowledge acquisition: multiple classification ripple down rules. Ph.D. thesis, School of Computer Science and Engineering, University of NSW, Australia.

KANG, B., COMPTON, P. & PRESTON, P. (1995). Multiple classification ripple down rules: evaluation and possibilities. Proceedings of the 9th Banff Knowledge Acquisition for Knowledge Based Systems Workshop, Vol. 1, pp. 17.1-17.20. Banff, 26 February-3 March.

KANG, B., GAMBETTA, W. & COMPTON, P. (1996). Verification and validation with ripple-down rules. International Journal of Human-Computer Studies, 44, 257-269.

KELLY, G. A. (1955). The Psychology of Personal Constructs. New York: Norton.

LAIRD, J., NEWELL, A. & ROSENBLOOM, P. (1987). SOAR: an architecture for general intelligence. Artificial Intelligence, 33, 1-64.

LANGLOTZ, C. P. & SHORTLIFFE, E. H. (1983). Adapting a consultation system to critique user plans. International Journal of Man-Machine Studies, 19, 479-496.

LAVE, J. (1988). Cognition in Practice: Mind, Mathematics, and Culture in Everyday Life. Cambridge, UK: Cambridge University Press.

LEE, M. & COMPTON, P. (1995). From heuristic knowledge to causal explanations. In X. YAO, Ed. Proceedings of the 8th Australian Joint Conference on Artificial Intelligence, AI'95, pp. 83-90. Canberra, 13-17 November. Singapore: World Scientific.

McCARTHY, J. (1991). Notes on formalizing context. Technical Report, Computer Science Department, Stanford University.

McDERMOTT, J. (1988). Preliminary steps toward a taxonomy of problem-solving methods. In S. MARCUS, Ed. Automating Knowledge Acquisition for Expert Systems, pp. 225-256. Dordrecht: Kluwer Academic Publishers.

MENZIES, T. J. & COMPTON, P. (1995). The (extensive) implications of evaluation on the development of knowledge-based systems. Proceedings of the 9th Banff Knowledge Acquisition for Knowledge Based Systems Workshop, pp. 18.1-18.20. Banff, 26 February-3 March.

MENZIES, T. J. (1996). Assessing responses to situated cognition. Proceedings of the 10th Knowledge Acquisition Workshop, Banff, Canada.

MITTAL, S., BOBROW, D. G. & DE KLEER, J. (1988). DARN: toward a community memory for diagnosis and repair tasks. In J. A. HENDLER, Ed. Expert Systems: The User Interface. Norwood, NJ: Ablex Publishing Corp.

NEBEL, B. (1991). Terminological cycles: semantics and computational properties. In J. SOWA, Ed. Principles of Semantic Networks: Explorations in the Representation of Knowledge, pp. 331-361. Los Altos, CA: Morgan Kaufmann.

NEWELL, A. (1982). The knowledge level. Artificial Intelligence, 18, 87-127.

NORMAN, D. A. (1988). The Psychology of Everyday Things. New York: Basic Books.

NORMAN, D. A. (1993). Cognition in the head and in the world: an introduction to the special issue on situated action. Cognitive Science, 17, 1-6.

O'KEEFE, R. M. & O'LEARY, D. E. (1993). Expert system verification and validation: a survey and tutorial. Artificial Intelligence Review, 7, 3-42.

PATIL, R. S., FIKES, R. E., PATEL-SCHNEIDER, P. F., McKAY, D., FININ, T., GRUBER, T. R. & NECHES, R. (1992). The DARPA knowledge sharing effort: progress report. In C. RICH, B. NEBEL & W. SWARTOUT, Eds. Principles of Knowledge Representation and Reasoning: Proceedings of the 3rd International Conference. Cambridge, MA: Morgan Kaufmann.

PAWLAK, Z. (1982). Rough sets. International Journal of Information and Computer Sciences, 11, 341-356.

PAWLAK, Z. (1991). Rough Sets: Theoretical Aspects of Reasoning about Data. Dordrecht: Kluwer Academic Publishers.

PHILIPPAKIS, A. S. (1988). Structured what-if analysis in DSS models. Proceedings of the 21st Annual Hawaii International Conference on System Sciences. Washington: The Computer Society of the IEEE.

PIRLEIN, T. & STUDER, R. (1994). KARO: an integrated environment for reusing ontologies. In L. STEELS, G. SCHREIBER & W. VAN DE VELDE, Eds. A Future for Knowledge Acquisition: Proceedings of the 8th European Knowledge Acquisition Workshop, EKAW'94, Lecture Notes in Artificial Intelligence, Vol. 867, pp. 200-225. Berlin: Springer-Verlag.

PREECE, A. D., SHINGHAL, R. & BATAREKH, A. (1992). Principles and practice in verifying rule-based systems. The Knowledge Engineering Review, 7, 115-141.

PUERTA, A. R., EGAR, J. W., TU, S. W. & MUSEN, M. A. (1992). A multiple-method knowledge acquisition shell for automatic generation of knowledge acquisition tools. Knowledge Acquisition, 4, 171-196.

RADEMAKERS, P. & VANWELKENHUYSEN, J. (1993). Generic models and their support in modeling problem solving behaviour. In J. M. DAVID, J.-P. KRIVINE & R. SIMMONS, Eds. Second Generation Expert Systems, pp. 350-375. Berlin: Springer.

RECTOR, A. L. (1989). Helping with a humanly impossible task: integrating KBS into clinical care. Proceedings of the 2nd Scandinavian Conference on AI, SCAI'89.

RICHARDS, D. & COMPTON, P. (1996). Building knowledge based systems that match the decision situation using ripple down rules. Intelligent Decision Support '96, pp. 114-126. Monash University, 9 September.

RICHARDS, D. & COMPTON, P. (1997). Combining formal concept analysis and ripple down rules to support the reuse of knowledge. Proceedings of Software Engineering Knowledge Engineering, SEKE'97, pp. 177-184. Madrid, 18-20 June.

RICHARDS, D., GAMBETTA, W. & COMPTON, P. (1996). Using rough set theory to verify production rules and support reuse. Proceedings of the Verification, Validation and Refinement of KBS Workshop, PRICAI'96, pp. 57-66. Griffith University, Cairns, Australia, 26-30 August.

RICHARDS, D. & MENZIES, T. (1998). Extending the SISYPHUS III experiment from a knowledge engineering to a requirements engineering task. Proceedings of the 11th Workshop on Knowledge Acquisition, Modeling and Management, Vol. 1, SIS-6. Banff, Canada. Calgary: SRDG Publications, Department of Computer Science, University of Calgary.

ROSENFIELD, I. (1988). The Invention of Memory: A New View of the Brain. New York: Basic Books.

SALLE, J. M. & HUNTER, J. (1990). Computer/user cooperation issues for knowledge-based systems: a review. Technical Report AUCS/TR9003, Aberdeen University.

SCHEFFER, T. (1996). Algebraic foundation and improved methods of induction of ripple down rules. Proceedings of the Pacific Knowledge Acquisition Workshop, PKAW'96, pp. 279-292. Coogee, Australia, 23-25 October.

SCHMALHOFER, F., AITKEN, J. S. & BOURNE, L. E. Jr. (1992). Beyond the knowledge level: descriptions of rational behaviour for sharing and reuse. In L. STEELS, G. SCHREIBER & W. VAN DE VELDE, Eds. A Future for Knowledge Acquisition: Proceedings of the 8th European Knowledge Acquisition Workshop, EKAW'94, Lecture Notes in Artificial Intelligence, Vol. 867, pp. 83-103. Berlin: Springer-Verlag.

SCHON, D. A. (1987). Educating the Reflective Practitioner. San Francisco, CA: Jossey-Bass.

SCHREIBER, G., WIELINGA, B. & BREUKER, J., Eds. (1993). KADS: A Principled Approach to Knowledge-Based System Development. Knowledge-Based Systems series. London, UK: Academic Press.

SHADBOLT, N. (1996). URL: http://www.psyc.nott.ac.uk/aigr/research/ka/SisIII.

SHALIN, V. L., GEDDES, N. D., BERTRAM, D., SZCZEPKOWSKI, M. A. & DuBOIS, D. (1997). Expertise in dynamic, physical task domains. In P. J. FELTOVICH, K. M. FORD & R. R. HOFFMAN, Eds. Expertise in Context: Human and Machine, pp. 195-217. Cambridge, MA: AAAI Press/MIT Press.

SIMMONS, R. & DAVIS, R. (1993). The roles of knowledge and representation in problem solving. In J. M. DAVID, J.-P. KRIVINE & R. SIMMONS, Eds. Second Generation Expert Systems, pp. 273-298. Berlin: Springer-Verlag.

SOLOWAY, E., BACHANT, J. & JENSEN, K. (1987). Assessing the maintainability of XCON-in-RIME: coping with the problems of a very large rule base. Proceedings of the 6th International Conference on Artificial Intelligence, Vol. 2, pp. 824-829. Seattle, WA: Morgan Kaufmann.

STEELS, L. (1993). The componential framework and its role in reusability. In J. M. DAVID, J.-P. KRIVINE & R. SIMMONS, Eds. Second Generation Expert Systems. Berlin: Springer-Verlag.

STROULIA, E. & GOEL, A. (1994). Reflective, self-adaptive problem solvers. Proceedings of the 8th European Knowledge Acquisition Workshop, EKAW'94, pp. 394-413. Springer-Verlag.

SUCHMAN, L. (1987). Plans and Situated Actions. Cambridge, UK: Cambridge University Press.

SUWA, M., SCOTT, A. & SHORTLIFFE, E. (1982). An approach to verifying completeness and consistency in a rule-based expert system. AI Magazine, 3, 16-21.

SWARTOUT, W. & MOORE, J. (1993). Explanation in second generation ES. In J. M. DAVID, J.-P. KRIVINE & R. SIMMONS, Eds. Second Generation Expert Systems, pp. 543-585. Berlin: Springer-Verlag.

TYLER, S. (1978). The Said and the Unsaid: Mind, Meaning, and Culture. New York: Academic Press.

VAN DE VELDE, W. (1993). Issues in knowledge level modeling. In J. M. DAVID, J.-P. KRIVINE & R. SIMMONS, Eds. Second Generation Expert Systems, pp. 211-231. Berlin: Springer-Verlag.

VERA, A. H. & SIMON, H. A. (1993). Situated action: a symbolic interpretation. Cognitive Science, 17, 7-48.

WEINTRAUB, M. (1991). An explanation-based approach to assigning credit. Ph.D. dissertation, The Ohio State University.

WILLE, R. (1982). Restructuring lattice theory: an approach based on hierarchies of concepts. In I. RIVAL, Ed. Ordered Sets, pp. 445-470. Dordrecht, Boston: Reidel.

WILLE, R. (1989). Lattices in data analysis: how to draw them with a computer. In I. RIVAL, Ed. Algorithms and Order, pp. 33-58. Dordrecht, Boston: Kluwer.

WILLE, R. (1992). Concept lattices and conceptual knowledge systems. Computers and Mathematics with Applications, 23(6-9), 493-515.

WILLE, R. (1996). Conceptual structures of multicontexts. In P. EKLUND, G. ELLIS & G. MANN, Eds. Conceptual Structures: Knowledge Representation as Interlingua, Proceedings of the 4th International Conference on Conceptual Structures, Lecture Notes in Artificial Intelligence, Vol. 987, pp. 23-39. Berlin: Springer-Verlag.

WINOGRAD, T. & FLORES, F. (1986). Understanding Computers and Cognition: A New Foundation for Design. Norwood, NJ: Ablex.

ZALTA, E. N. (1988). Intensional Logic and the Metaphysics of Intentionality. Cambridge, MA: MIT Press.

ZDRAHAL, Z. & MOTTA, E. (1995). An in-depth analysis of propose and revise problem solving methods. Proceedings of the 9th Knowledge Acquisition for Knowledge Based Systems Workshop, pp. 38.1-38.20. Calgary: SRDG Publications, Department of Computer Science, University of Calgary.