Department of Computer Science
College of Engineering
Report No. UIUCDCS-R-89-1538 (UILU-ENG-89-1757)

Sociopathic Knowledge Bases: Correct Knowledge Can Be Harmful Even Given Unlimited Computation

David C. Wilkins and Yong Ma
Knowledge-Based Systems Group
Department of Computer Science
University of Illinois
405 North Mathews Avenue
Urbana, IL 61801

August 1989

Submitted for publication: Artificial Intelligence Journal

Abstract

This paper studies a situation in which correct knowledge is harmful to a problem solver even given unlimited computational resources. A knowledge base is defined to be sociopathic if all the tuples in the knowledge base are individually judged to be correct and a subset of the knowledge base gives better performance than the original knowledge base, independent of the amount of computational resources that are available. Almost all knowledge bases that contain probabilistic rules are shown to be sociopathic, and so this problem is very widespread. Sociopathicity has important consequences for rule induction methods and rule set debugging methods. Sociopathic knowledge bases cannot be properly debugged using the widespread practice of incremental modification and deletion of rules responsible for wrong conclusions a la Teiresias; this approach fails to converge to an optimal solution. The problem of optimally debugging sociopathic knowledge bases is modeled as a bipartite graph minimization problem and shown to be NP-hard. Our heuristic solution approach is called the Sociopathic Reduction Algorithm, and experimental results verify its efficacy.

Contents

1 Introduction
2 Inexact Reasoning and Rule Interactions
3 Debugging Rule Sets and Rule Interactions
  3.1 Types of rule interactions
  3.2 Traditional methods of debugging a rule set
4 Minimizing Sociopathic Interactions
  4.1 Bipartite graph minimization formulation
5 Sociopathic Reduction Algorithm
  5.1 The Sociopathic Reduction Algorithm
  5.2 Example of sociopathic reduction
  5.3 Experience with the Sociopathic Reduction Algorithm
6 Related Work
7 Summary and Conclusion
8 Acknowledgements
Appendix 1: Calculating G
References

1 Introduction

Reasoning under uncertainty has been widely investigated in artificial intelligence. Probabilistic approaches are of particular relevance to rule-based expert systems, where one is interested in modeling the heuristic and evidential reasoning of experts. Methods developed to represent and draw inferences under uncertainty include the certainty factors used in Mycin (Buchanan and Shortliffe, 1984), fuzzy set theory (Zadeh, 1979), and the belief functions of Dempster-Shafer theory (Shafer, 1976; Gordon and Shortliffe, 1985).
In many expert system frameworks, such as Emycin, Expert, MRS, S.1, and Kee, the rule structure permits a conclusion to be drawn with varying degrees of certainty or belief. This paper addresses a concern common to all these methods and systems. In refining and debugging a probabilistic rule set, there are three major causes of errors: missing rules, wrong rules, and deleterious interactions between good rules. The purpose of this paper is to explicate a type of deleterious interaction and to show that it (a) is indigenous to rule sets for reasoning under uncertainty, (b) is of a fundamentally different nature from missing and wrong rules, (c) cannot be handled by traditional methods for correcting wrong and missing rules, and (d) can be handled by the method described in this paper.

In section 2, we describe the type of deleterious rule interactions that we have encountered in connection with automatic induction of rule sets, and explain why most rule modification methods fail to grasp the nature of the problem. In section 3, we discuss approaches to debugging and refining rule sets and explain why traditional rule set debugging methods are inadequate for handling global interactions. In section 4, we formulate the problem of reducing deleterious interactions as a bipartite graph minimization problem and show that it is NP-hard. In section 5, we present a heuristic method called the Sociopathic Reduction Algorithm. Finally, our experiences in using the Sociopathic Reduction Algorithm are described.

A brief description of terminology will be helpful to the reader. Assume there exists a collection of training instances, each represented as a set of feature-value pairs of evidence and a set of hypotheses. Rules are in Horn clause form: conclude(H, CF) :- E, where E is a conjunction of evidence, H is a hypothesis, and CF is a certainty factor or its equivalent. A rule that correctly confirms a hypothesis generates true positive evidence; one that correctly disconfirms a hypothesis generates true negative evidence. A rule that incorrectly confirms a hypothesis generates false positive evidence; one that incorrectly disconfirms a hypothesis generates false negative evidence. False positive and false negative evidence can lead to misdiagnoses of training instances.

2 Inexact Reasoning and Rule Interactions

When operating as an evidence-gathering system (Buchanan and Shortliffe, 1984), an expert system accumulates evidence for and against competing hypotheses. Each rule whose preconditions match the gathered data contributes either positively or negatively toward one or more hypotheses. Unavoidably, the preconditions of probabilistic rules succeed on instances where the rule contributes false positive or false negative evidence for conclusions. For example, consider the following rule:

    conclude(klebsiella, 0.77) :-                      (R1)
        finding(surgery, yes),
        finding(gram_neg_infection, yes)

The frequency with which R1 generates false positive evidence has a major influence on its CF of 0.77, where -1 <= CF <= 1.
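To make the evidence-gathering behaviour concrete, the following is a minimal sketch, in Python, of how such rules and cases might be represented and how the evidence of applicable rules is pooled. The Rule encoding, the simple additive pooling, and the sample case are illustrative assumptions, not the mechanisms of any of the systems named above (Mycin, for instance, uses its own CF combination function; the example in section 5.2 uses a simple sum with a threshold).

    # A minimal sketch of evidence gathering with probabilistic rules of the form
    # conclude(H, CF) :- E.  The Rule encoding, the additive pooling, and the
    # sample case below are illustrative assumptions only.
    from dataclasses import dataclass

    @dataclass
    class Rule:
        hypothesis: str
        cf: float              # certainty factor, -1 <= cf <= 1
        premises: frozenset    # conjunction of (feature, value) findings

        def applies_to(self, case: dict) -> bool:
            # The premise succeeds when every finding is present in the case data.
            return all(case.get(f) == v for f, v in self.premises)

    # Hypothetical encoding of rule R1 from the text.
    R1 = Rule("klebsiella", 0.77,
              frozenset({("surgery", "yes"), ("gram_neg_infection", "yes")}))

    def gather_evidence(rules, case):
        """Pool the CFs of all applicable rules, per hypothesis (additive pooling)."""
        evidence = {}
        for r in rules:
            if r.applies_to(case):
                evidence[r.hypothesis] = evidence.get(r.hypothesis, 0.0) + r.cf
        return evidence

    case = {"surgery": "yes", "gram_neg_infection": "yes"}   # an illustrative case
    print(gather_evidence([R1], case))                        # {'klebsiella': 0.77}

On a case where the premise of R1 succeeds but Klebsiella is absent, the 0.77 pooled here is false positive evidence, and how often that happens is exactly what shapes the CF.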
Indeed, given a representative set of training instances, such as a library of medical cases, the certainty factor of a rule can be given a probabilistic interpretation as a function G(x1, x2, x3), where x1 is the fraction of the positive instances of a hypothesis for which the rule premise succeeds, thus contributing true positive or false negative evidence; x2 is the fraction of the negative instances of a hypothesis for which the rule premise succeeds, thus contributing false positive or true negative evidence; and x3 is the ratio of positive instances of a hypothesis to all instances in the training set. (See Appendix 1 for a description of the function G. The calculations of G give a purely statistical interpretation to CFs, and hence do not incorporate orthogonal utility measures as was done in Mycin (Buchanan and Shortliffe, 1984).) For R1 in our domain, G(.43, .10, .22) = 0.77 by the formulas in Appendix 1, because statistics on the 104 training instances yield the following values:

    x1 : E true among positive instances = 10/23
    x2 : E true among negative instances =  8/81        (1)
    x3 : H true among all instances      = 23/104

Hence, R1 generates false positive evidence on eight instances, some of which may lead to misdiagnoses. But whether they do or not depends on the other rules in the system; hence our emphasis on taking a global perspective.

The usual method of dealing with situations such as this is to make the rule fail less often by specializing its premise (Michalski et al., 1983). For example, surgery could be specialized to neurosurgery, and we could replace R1 with:

    conclude(klebsiella, 0.92) :-                      (R2)
        finding(neurosurgery, yes),
        finding(gram_neg_infection, yes)

On our case library of training instances, G(.26, .02, .22) = 0.92 for R2, so R2 makes erroneous inferences in two instances instead of eight. Nevertheless, modifying R1 to be R2 on the grounds that R1 contributes to a misdiagnosis is not always appropriate; we offer three objections to this frequent practice.

First, both rules are inexact rules that offer advice in the face of limited information, and their relative accuracy and correctness is explicitly represented by their respective CFs. We expect them to fail, hence failure should not necessarily lead to their modification.

Second, all probabilistic rules reflect a trade-off between generality and specificity. An overly general rule provides too little discriminatory power, and an overly specific rule contributes too infrequently to problem solving. A policy on proper grain size is explicitly or implicitly built into rule induction programs; this policy should be followed as much as possible. Specialization produces a rule that usually violates such a policy.

Third, if the underlying problem for an incorrect diagnosis is rule interactions, a more specialized rule, such as the specialization of R1 to R2, can be viewed as creating a potentially more dangerous rule. Although it only makes an incorrect inference in two instead of eight instances, these two instances will now be harder to counteract when they contribute to misdiagnoses because R2 is stronger. Note that a rule with a large CF is more likely to have its erroneous conclusions lead to misdiagnoses. This perspective motivates the prevention of misdiagnoses in ways other than the use of rule specialization or generalization.
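The CF values quoted for R1 and R2 follow directly from such training statistics. The sketch below reproduces them using the probabilistic interpretation of Appendix 1, in the Heckerman (1986) form; the function and variable names are ours, and only the confirming branch is exercised by these examples.

    # Sketch of the probabilistic CF interpretation G(x1, x2, x3) of Appendix 1
    # (Heckerman-style); variable names are our own.
    # x1 = P(E+|H+), x2 = P(E+|H-), x3 = P(H+).
    def G(x1: float, x2: float, x3: float) -> float:
        # Bayes' rule gives the posterior x4 = P(H+ | E+).
        x4 = (x1 * x3) / (x1 * x3 + x2 * (1.0 - x3))
        if x4 >= x3:                     # confirming rule
            return (x4 - x3) / (x4 * (1.0 - x3))
        else:                            # disconfirming rule
            return (x4 - x3) / (x3 * (1.0 - x4))

    # Statistics quoted in the text for R1 and R2 (104 training instances):
    print(round(G(10/23, 8/81, 23/104), 2))   # 0.77  -- CF of R1
    print(round(G(0.26, 0.02, 0.22), 2))      # 0.92  -- CF of R2

Applied to the statistics of the example in section 5.2 (x1 = 3/5, x2 = 2/5, x3 = 1/2), the same function returns 0.33, matching equation (11).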
Besides rule modification, another common method of nullifying the incorrect inference of a rule in an evidence-gathering system is to introduce counteracting rules. In our example, these would be rules with a negative CF that conclude Klebsiella on the false positive training instances that lead to misdiagnoses. But since these new rules are probabilistic, they will introduce false negatives on some other training instances, and these may lead to misdiagnoses. We could add yet more counteracting rules with a positive CF to nullify any problems caused by the original counteracting rules, but these additional rules introduce false positives on yet other training instances, and these may lead to other misdiagnoses. Also, a counteracting rule is often of lower quality than the rules in the original rule set; if it were otherwise, the induction program would have included the counteracting rule in the original rule set. Clearly, adding counteracting rules is not necessarily the best way of dealing with misdiagnoses made by probabilistic rules.

3 Debugging Rule Sets and Rule Interactions

Assume we are given a set of probabilistic rules that were either automatically induced from a set of training cases or created manually by an expert and knowledge engineer. In refining and debugging this probabilistic rule set, there are three major causes of errors: missing rules, wrong rules, and unexpected interactions among good rules. We first describe types of rule interactions, and then show how the traditional approach to debugging is inadequate.

3.1 Types of rule interactions

In a rule-based system, there are many types of rule interactions. Rules interact by chaining together, by using the same evidence for different conclusions, and by drawing the same conclusions from different collections of evidence. Thus one of the lessons learned from research on MYCIN was that complete modularity of rules is not possible to achieve when rules are written manually (Buchanan and Shortliffe, 1984). An expert uses other rules in a set of closely interacting rules in order to define a new rule, in particular to set a CF value relative to the CFs of interacting rules. Automatic rule induction systems encounter the same problems. Moreover, automatic systems lack an understanding of the strong semantic relationships among concepts that would allow judgments about the relative strengths of evidential support. Instead, induction systems use biases to guide the rule search (Michalski et al., 1983).

The rule sets that are later analyzed for sociopathicity in this paper were generated by the induction subsystem of ODYSSEUS. The inductive biases used in this system are rule generality, whereby a rule must cover a certain percentage of instances; rule specificity, whereby a rule must be above a minimum discrimination threshold; rule colinearity, whereby rules must not be too similar in their classification of the instances in the training set; and rule simplicity, whereby a maximum bound is placed on the number of conjunctions and disjunctions (Wilkins, 1987).
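Read as filters on candidate rules, the four biases might be rendered as follows. This is a hypothetical sketch: the thresholds, the colinearity measure, and the statistics assumed to be available for each candidate rule are illustrative assumptions, not the actual ODYSSEUS parameters.

    # Hypothetical filters corresponding to the four inductive biases named above.
    # All thresholds and the colinearity measure are illustrative assumptions.
    def generality_ok(num_covered: int, num_instances: int, min_cover: float = 0.05) -> bool:
        """Rule generality: the rule must cover a certain percentage of instances."""
        return num_covered / num_instances >= min_cover

    def specificity_ok(cf: float, min_discrimination: float = 0.2) -> bool:
        """Rule specificity: the rule must exceed a minimum discrimination threshold."""
        return abs(cf) >= min_discrimination

    def colinearity_ok(covered: set, other_covered: set, max_overlap: float = 0.9) -> bool:
        """Rule colinearity: the rule must not classify the training set too much like
        an already accepted rule (overlap measured here as a Jaccard ratio)."""
        union = covered | other_covered
        return not union or len(covered & other_covered) / len(union) <= max_overlap

    def simplicity_ok(num_conjuncts: int, num_disjuncts: int, max_terms: int = 4) -> bool:
        """Rule simplicity: bound the number of conjunctions and disjunctions."""
        return num_conjuncts + num_disjuncts <= max_terms

Under these assumptions, a candidate rule would be admitted to the initial rule set only if all four predicates hold, the colinearity check being made against every rule already accepted.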
3.2 Traditional methods of debugging a rule set

The standard approach to debugging a rule set consists of iteratively performing the following steps:

• Step 1. Run the system on cases until a false diagnosis is made.

• Step 2. Track down the error and correct it, using one of five methods pioneered by Teiresias (Davis, 1982) and used by knowledge engineers generally:

  - Method 1: Make the preconditions of the offending rules more specific or sometimes more general. (Ways of generalizing and specializing rules are nicely described in (Michalski et al., 1983); they include dropping conditions, changing constants to variables, generalizing by internal disjunction, tree climbing, interval closing, and exception introduction.)

  - Method 2: Make the conclusions of offending rules more general or sometimes more specific.

  - Method 3: Delete offending rules.

  - Method 4: Add new rules that counteract the effects of offending rules.

  - Method 5: Modify the strengths or CFs of offending rules.

This approach may be sufficient for correcting wrong and missing rules. However, it is flawed from a theoretical point of view with respect to its sufficiency for correcting problems resulting from the global behavior of rules over a set of cases. It possesses two serious methodological problems.

First, using all five of these methods is not necessarily appropriate for dealing with global deleterious interactions. In section 2 we explained why, in some situations, modifying the offending rule or adding counteracting rules leads to problems and misses the point of having probabilistic rules; this eliminates methods 1, 2, and 4. If rules are being induced from a representative set of training cases, modifying the strength of a rule is illegal, since the strength of the rule has a probabilistic interpretation derived from frequency information in the training instances; this eliminates method 5. Only method 3 is left to cope with deleterious interactions.

The second methodological problem is that the traditional method picks an arbitrary case to run in its search for misdiagnoses. Such a procedure will often not converge to a good rule set, even if modifications are restricted to rule deletion. The example in section 5.2 illustrates this situation.

Our perspective on this topic evolved in the course of experiments in induction and refinement of knowledge bases. Using "better" induction biases did not always produce rule sets with better performance, and this prompted investigating the possibility of global probabilistic interactions. Our original approach to debugging was similar to the Teiresias approach. Often, correcting a problem led to other cases being misdiagnosed, and in fact this type of automated incremental debugging seldom converged to an acceptable set of rules. It might have if we had engaged in the common practice of "tweaking" the CF strengths of rules. However, this was not permissible, since our CF values were derived from a representative set of training cases and have a precise probabilistic interpretation.

4 Minimizing Sociopathic Interactions

Assume there exists a large set of training instances, and a rule set for solving these instances has been induced that is fairly complete and contains rules that are individually judged to be good. By good, we mean that they individually meet some predefined quality standards, such as the biases described in section 3.1. Further, assume that the rule set misdiagnoses some of the instances in the training set. Given such an initial rule set, the problem is to find a rule set that meets some optimality criteria, such as minimizing the number of misdiagnoses without violating the goodness constraints on individual rules. Now, modifications to rules, except for rule deletion, generally break the predefined goodness constraints. And adding other rules is not desirable, for if they satisfied the goodness constraints they would have been in the original rule set produced by the induction program. Hence, if we are to find a solution that meets the described constraints, the solution must be a subset of the original rule set. (If we discover that this solution is inadequate for our needs, then introducing rules that violate the induction biases is justifiable.)
More formally:

Definition 1 (Sociopathic Knowledge Base) A knowledge base is sociopathic if and only if (1) all the tuples in the knowledge base are individually judged to be good; and (2) a subset of the knowledge base gives better performance than the original knowledge base independent of the amount of available computational resources.

By the definition of a sociopathic knowledge base, the best rule set is viewed as the element of the power set of rules in the initial rule set that yields a global minimum weighted error. A straightforward approach is to examine and compare all subsets of the rule set. However, the power set is almost always too large to work with, especially when the initial set has deliberately been generously generated. The selection process can be modeled as a bipartite graph minimization problem as follows.

4.1 Bipartite graph minimization formulation

A bipartite graph G = (V, E) is a graph whose vertices V can be partitioned into two sets V1 and V2 so that every edge in E joins a vertex in V1 to a vertex in V2. For each hypothesis in the set of training instances, define a directed bipartite graph G = (V, E), with its vertices V partitioned into two sets I and R, as shown in Figure 1. Elements of R represent rules, and the evidential strength of Rj is denoted by CFj. Each vertex in I represents a training instance; for positive instances Mi is 1, and for negative instances Mi is -1. Arcs [Rj, Ii] connect a rule in R with the training instances in I for which its preconditions are satisfied; the weight of arc [Rj, Ii] is CFj. The weighted arcs terminating in a vertex in I are combined using an evidence combination function F, which is defined by the user. An instance is classified as a positive instance if the combined evidence is above a user-specified threshold CFt. In the example in section 5.2, CFt is 0, while for Mycin, CFt is 0.2.

[Figure 1: Bipartite Graph Formulation. The left-hand nodes, I1, ..., Im, represent a case library of m training instances, where Mi indicates whether an instance is a positive or negative example of a hypothesis. The right-hand nodes, R1, ..., Rn, represent a knowledge base of probabilistic rules, where CFj is the strength of the rule. The links show which training instances I1, ..., Im satisfy the preconditions of rule Rj.]

More formally, assume that I1, ..., Im is the training set of instances and R1, ..., Rn are the rules of an initial rule set. Let aij be CFj if instance Ii satisfies the preconditions of rule Rj and 0 otherwise, and let rj be 1 if rule Rj is retained in the final rule set and 0 if it is deleted. Then we want to minimize

    z = sum over i = 1, ..., m of  bi ei

where bi > 0 is a weight expressing the cost of misdiagnosing instance Ii, subject to the constraints

    ei = 0  if F(ai1 r1, ..., ain rn) >  CFt for Mi = 1
    ei = 0  if F(ai1 r1, ..., ain rn) <= CFt for Mi = -1
    ei = 1  otherwise.
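Under the additive combination function and CFt = 0 used in the example of section 5.2, the objective z is easy to evaluate for any candidate subset of rules. The following sketch illustrates the formulation; the data structures, the brute-force enumeration, and the treatment of all instance weights bi as 1 are simplifying assumptions of the sketch.

    # Sketch of evaluating the objective z of the bipartite formulation for a
    # candidate rule subset, assuming F = sum and CFt = 0 as in section 5.2,
    # with all instance weights bi taken as 1.  Data structures are assumptions.
    from itertools import combinations

    def combined_evidence(instance, retained, cf, applies):
        """F(ai1 r1, ..., ain rn): sum of CFs of retained rules applicable to the instance."""
        return sum(cf[j] for j in retained if (j, instance) in applies)

    def z(instances, M, retained, cf, applies, cf_t=0.0):
        """Number of misdiagnosed instances (ei = 1) under the retained rule subset."""
        errors = 0
        for i in instances:
            f = combined_evidence(i, retained, cf, applies)
            correct = (f > cf_t) if M[i] == 1 else (f <= cf_t)
            errors += 0 if correct else 1
        return errors

    def best_subset_exhaustive(rules, instances, M, cf, applies, min_size):
        """Examine all subsets of at least min_size rules -- feasible only for tiny rule sets."""
        best_err, best_rules = len(instances) + 1, None
        for k in range(min_size, len(rules) + 1):
            for subset in combinations(rules, k):
                err = z(instances, M, set(subset), cf, applies)
                if err < best_err:
                    best_err, best_rules = err, subset
        return best_err, best_rules

As noted above, exhaustive enumeration over the power set is impractical for realistically sized rule sets, which motivates the heuristic algorithm of section 5.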
Theorem 1 The bipartite graph minimization problem (BGMP) for heuristic rule set optimization is NP-hard.

The proof is by a reduction from SAT, in which each clause Ci of a SAT instance corresponds to a training instance, each atom Aj to a rule, a truth assignment τ1, ..., τn to a choice of final rule set, and a function g(aij τj) indicates whether the literal of Aj in Ci is satisfied under that assignment; the combination function F is 1 exactly when at least one g(aij τj) is 1, and an output transformation maps a final rule set back to a truth assignment.

If part: If ei = 0, then F(ai1 r1, ..., ain rn) must be 1, so at least one g(aij τj) is 1. By the definition of g(aij τj), either Aj appears in Ci and τj = 1, or the negation of Aj appears in Ci and τj = 0. In either case, according to the output transformation, the corresponding clause Ci is satisfied (true).

Only if part: Assume that Ci is satisfied by the truth assignment in the final rule set. Then there must exist some atom Aj such that either Aj is in Ci and it is assigned true, or the negation of Aj is in Ci and it is assigned false. In either case, g(aij τj) = 1, by the output transformation and the definition of the function. Therefore, F(ai1 r1, ..., ain rn) = 1 and ei = 0.

To summarize, g(aij τj) being 1 corresponds intuitively to the positive contribution made by Aj to Ci. Finally, it is shown that SAT is satisfiable iff the BGMP so constructed has a minimum objective value of 0. If the BGMP has a solution with z = 0, then ei = 0 for all i, because each bi = 1. Therefore each Ci is satisfied and thus SAT is satisfiable. Conversely, if SAT is satisfiable, then each Ci can be satisfied by some truth assignment of the atoms. Clearly, the final rule set of the BGMP formulation (of SAT) can be constructed with z = 0 according to that assignment. □

Corollary 1 Given a positive real number B, the problem of determining if there exists a rule set whose global weighted error z is less than or equal to B in the bipartite graph formulation for heuristic rule set optimization is NP-complete.

Proof: To show that this decision problem is in NP, we notice that it is easy to construct a polynomial algorithm for checking whether or not the weighted number of misdiagnoses made by any given subset of R is less than or equal to B. It is NP-hard by an argument similar to that in the proof of the above theorem. □

5 Sociopathic Reduction Algorithm

In this section, a heuristic method called the Sociopathic Reduction Algorithm is described, and an example is provided based on the graph shown in Table 1.

5.1 The Sociopathic Reduction Algorithm

The following heuristic hill-climbing search method, the Sociopathic Reduction Algorithm, is one that we have developed and used in our experiments:

• Step 1. Assign values to penalty constants. Let p1 be the penalty assigned to a poison rule. A poison rule is a strong rule giving erroneous evidence for a case that cannot be counteracted by the combined weight of all the rules in the rule base that give correct evidence. Let p2 be the penalty for contributing false positive evidence to a misdiagnosed case, p3 the penalty for contributing false negative evidence to a misdiagnosed case, p4 the penalty for contributing false positive evidence to a correctly diagnosed case, p5 the penalty for contributing false negative evidence to a correctly diagnosed case, and p6 the penalty for using weak rules. Let h be the maximum number of rules that are removed at each iteration. Let Rmin be the minimum size of the solution rule set.

• Step 2. Optional step for very large rule sets: given an initial rule set, create a new rule set containing the n strongest rules for each case.

• Step 3. Find all misdiagnosed cases for the rule set. If none exists, stop. Otherwise, collect and rank the rules that contribute evidence toward these erroneous diagnoses. The rank of rule Rj is the sum over i = 1, ..., 6 of pi nij, where:

  - n1j = 1 if Rj is a poison rule or its deletion leads to the creation of another poison rule, and 0 otherwise;
  - n2j = the number of misdiagnoses for which Rj gives false positive evidence;
  - n3j = the number of misdiagnoses for which Rj gives false negative evidence;
  - n4j = the number of correct diagnoses for which Rj gives false positive evidence;
  - n5j = the number of correct diagnoses for which Rj gives false negative evidence;
  - n6j = the absolute value of the CF of Rj.

• Step 4. Eliminate the h highest ranking rules.

• Step 5. If the number of misdiagnoses is decreased, go to step 3.

• Step 6. Else, if the number of misdiagnoses begins to increase and h is not 1, then undo the last deletion (i.e., take back the most recently removed h rules), set h to h - 1, and go to step 3. (It is this step that makes the method a hill-climbing algorithm. Since h is usually small, say about 5, decrementing by 1 is the simplest choice, although a more complicated schedule of step decrements can be implemented for relatively large h.)

• Step 7. Otherwise, i.e., if the number of misdiagnoses is increased and h = 1, then undo the last rule deletion, output the final rule set, and stop.
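A compact rendering of steps 3 through 7 is given below. It is only a sketch: the case and rule representations, the misdiagnosis test, and the ranking helper are assumed to be supplied by the surrounding reasoning system, and the penalty weighting pi = 10^(6-i) follows the setting reported below.

    # A compact sketch of the Sociopathic Reduction Algorithm (steps 3-7 above).
    # `misdiagnosed_cases` and `ranking_vector` are assumed to be supplied by the
    # surrounding expert system, which must be able to explain which rules
    # contributed to each misdiagnosis (case.blamed_rules is such an assumption).

    def rank(rule, ranking_vector, penalties):
        """Step 3: rank of Rj is the penalty-weighted sum of its counts n1j..n6j."""
        return sum(p * n for p, n in zip(penalties, ranking_vector(rule)))

    def sociopathic_reduction(rules, misdiagnosed_cases, ranking_vector,
                              h=5, r_min=None, penalties=None):
        penalties = penalties or [10 ** (6 - i) for i in range(1, 7)]
        r_min = r_min if r_min is not None else int(0.9 * len(rules))
        current = set(rules)
        errors = misdiagnosed_cases(current)
        # Stop if removing h more rules would drop below the minimum size Rmin.
        while errors and len(current) - h >= r_min:
            # Step 3: rank the rules blamed for the current misdiagnoses.
            blamed = sorted({r for case in errors for r in case.blamed_rules if r in current},
                            key=lambda r: rank(r, ranking_vector, penalties), reverse=True)
            # Step 4: tentatively eliminate the h highest-ranking rules.
            candidate = current - set(blamed[:h])
            new_errors = misdiagnosed_cases(candidate)
            if len(new_errors) < len(errors):      # Step 5: improvement, commit and continue.
                current, errors = candidate, new_errors
            elif h > 1:                            # Step 6: undo and shrink the step size.
                h -= 1
            else:                                  # Step 7: no single deletion helps.
                break
        return current

The undo of step 6 is implicit in this sketch: a tentative deletion is simply discarded unless it reduces the number of misdiagnoses.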
Each iteration of the algorithm produces a new rule set, and each rule set must be rerun on all training instances to locate the new set of misdiagnosed instances. If this is particularly difficult to do, the h parameter in step 4 can be increased, but at the potential risk of converging to a suboptimal solution. For each misdiagnosed instance, the automated reasoning system that uses the rule set must be able to explain which rules contributed to the misdiagnosis. Hence, we require a system with good explanation capabilities.

The nature of an optimal rule set differs between domains. The penalty constants pi are the means by which the user can define an optimal policy. For instance, via p2 and p3, the user can favor false positive over false negative misdiagnoses, or vice versa. For medical expert systems, a false negative is often more damaging than a false positive, as false positives generated by a medical program can often be caught by a physician upon further testing. False negatives, however, may be sent home, never to be seen again. In our experiments, the value of the six penalty constants was pi = 10^(6-i). The h constant determines how many rules are removed on each iteration, and its value is about 5. Rmin is the minimum size of the solution rule set, usually about 90% of the original set; its usefulness was described in section 4.1.

[Table 1: An example for the Sociopathic Reduction Algorithm. There are ten training instances, I0, ..., I9, classified as positive (+) or negative (-) instances of the hypothesis; I0 through I4 are positive and I5 through I9 are negative. There are six rules, R1 (+.33), R2 (+.75), R3 (+.33), R4 (-.33), R5 (-.75), and R6 (-.33), shown with their CF strengths. The marks indicate the instances to which each rule applies, i.e., the instances that satisfy the premise clauses of the rule. The starred entries are the misdiagnosed instances I4 and I5 and the rules R1 and R4 that are eventually deleted.]

5.2 Example of sociopathic reduction

In this example, which is illustrated in Table 1, there are ten training instances, I0, ..., I9, classified as positive or negative instances of the hypothesis. There are six rules, R1, ..., R6, shown with their CF strengths. The marks (x) indicate the instances to which the rules apply, i.e., when an instance satisfies the premise clauses of a rule. To simplify the example, define the combined evidence for an instance as the sum of the evidence contributed by all applicable rules, and let CFt = 0. Rules with a CF of one sign that are connected to an instance of the other sign contribute erroneous evidence. Two cases in the example are misdiagnosed: I4 and I5. The objective is to find a subset of the rule set that minimizes the number of misdiagnoses.

Before the details are examined, two points concerning such examples should be made. First, it can be shown that it is impossible to have an example that makes all the points of this one using rules with out-degree less than 5, if there are equal numbers of positive and negative training instances. The argument is trivial for rules with out-degree of 1 and 2.
For a rule with out-degree of 3, assume that it has a positive CF value and is to be deleted. Then it must contribute to the misdiagnosis of some negative instance in order to become a blamed rule. And in order to have a positive CF, it must provide (positive) evidence for two positive instances, given that the number of positive instances equals the number of negative instances. Therefore, the number of correct diagnoses for which it gives false positive evidence must be zero, since the only negative instance it connects to is the misdiagnosed one. Its ranking vector is then (n1j, n2j, n3j, n4j, n5j, n6j) = (0, 1, 0, 0, 0, CF), which yields the smallest ranking quantity that a blamed rule with positive CF can have. Thus the algorithm is not guaranteed to choose it for deletion. The argument for rules with out-degree of 4 is similar, or else the CF values are zero if the rules connect to two positive instances and two negative ones. It may be possible to devise a heuristic algorithm with better computational performance from this observation.

The second point is that the CF values attached to the rules are the real values calculated from the formula given in the appendix. Take R1 (+.33) for example:

    x1 : E true among positive instances = 3/5
    x2 : E true among negative instances = 2/5          (9)
    x3 : H true among all instances      = 5/10

Then

    x4 = x1 x3 / (x1 x3 + x2 (1 - x3)) = 0.60           (10)

Since x4 > x3,

    CF = (x4 - x3) / (x4 (1 - x3)) = 0.33               (11)

Now we can proceed to examine the example. Assume that the final rule set must have at least four rules, hence Rmin = 4. Let pi = 10^(6-i), for i = 1, ..., 6, thus choosing rules in the highest penalty category and using lower categories to break ties. On the first iteration, two misdiagnosed instances are found, I4 and I5, and four rules contribute erroneous evidence toward these misdiagnoses: R1, R2, R4, and R5. Their ranking vectors are shown in Table 2. Clearly, R1 has the highest ranking quantity, the sum of pi nij, and thus it is chosen for deletion.

[Table 2: The ranking vectors (n1j, n2j, n3j, n4j, n5j, n6j) of the blamed rules R1, R2, R4, and R5.]

On the second iteration, one misdiagnosis is found, I4, and two rules contribute erroneous evidence, R4 and R5. The rules are ranked and R4 is deleted. This reduces the number of misdiagnoses to zero and the algorithm successfully terminates.

The same example can be used to illustrate the problem with the traditional method of rule set debugging, where the order in which cases are checked for misdiagnoses influences which rules are deleted. Consider a Teiresias-style program that looks at training instances and discovers that I4 is misdiagnosed. There are two rules that contribute erroneous evidence to this misdiagnosis, R4 and R5. It wisely notices that deleting R4 causes I6 to become misdiagnosed, hence increasing the number of misdiagnoses; so it chooses to delete R5. However, no matter which rule it now deletes, there will always be at least one misdiagnosed case. To its credit, it reduced the number of misdiagnoses from two to one; however, it fails to converge to a rule set that minimizes the number of misdiagnoses.

5.3 Experience with the Sociopathic Reduction Algorithm

A preliminary experiment with the Sociopathic Reduction Algorithm has been completed, using the Mycin case library, a collection of 112 solved cases obtained from records at the Stanford Medical Hospital.
The rule set of about 370 rules was the one obtained after (1) correcting an incorrect domain theory, and (2) using apprenticeship learning to extend an incomplete domain theory (Wilkins and Tan, 1989). The Sociopathic Reduction Algorithm removed 21 rules from the knowledge base after 8 iterations. Table 3 shows that an improvement of about 10% over the knowledge base tested is obtained.

    Disease               Number     Before Reduction     After Reduction
                          of Cases    TP   FN   FP         TP   FN   FP
    Bacterial Meningitis     16       14    2   13         12    4    4
    Brain Abscess             7        1    6               1    6
    Cluster Headache         10        8    2               8    2
    Fungal Meningitis         8        3    5               4    4
    Migraine                 10        6    4               7    3
    Myco-TB Meningitis        4        4         1          4         3
    Primary Brain Tumor      16        3   13              10    6    1
    Subarach Hemorrhage      21       16    5    3         16    5    4
    Tension Headache          9        8    1    3          8    1    1
    Viral Meningitis         11       10    1   12         10    1    6
    None                                         7                   12
    Totals                  112       73   39   39         80   32   32

Table 3: The Sociopathic Reduction Algorithm, when applied to this knowledge base, improves performance by about 10%.

Although our work is primarily theoretical, one experiment is not sufficient by any means. Thus, our ongoing experiments involve two kinds of tests. First, we divide the cases into a training set and a validation set, with a 70% versus 30% split, so that it can be shown that the performance improvement carries over to the validation set. To be more accurate, we would like to randomly split the cases five times and then average the improvements. Second, we would like to apply the method just described to various available knowledge bases, for example, a knowledge base after correction of wrong rules only, a knowledge base after application of case-based learning, and so on.
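The validation protocol just described is straightforward to script. The following sketch is a hypothetical harness for it; the induce_rules, sociopathic_reduction, and accuracy arguments stand in for the real system components and are assumptions of the sketch.

    # Hypothetical harness for the evaluation protocol described above: five random
    # 70%/30% train/validation splits, with the improvements averaged.
    import random

    def evaluate_splits(cases, induce_rules, sociopathic_reduction, accuracy,
                        n_splits=5, train_fraction=0.7, seed=0):
        rng = random.Random(seed)
        improvements = []
        for _ in range(n_splits):
            shuffled = cases[:]
            rng.shuffle(shuffled)
            cut = int(train_fraction * len(shuffled))
            train, validation = shuffled[:cut], shuffled[cut:]
            rules = induce_rules(train)
            reduced = sociopathic_reduction(rules, train)
            # Improvement of the reduced rule set over the original, on held-out cases.
            improvements.append(accuracy(reduced, validation) - accuracy(rules, validation))
        return sum(improvements) / len(improvements)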
6 Related Work

The original contribution of this paper is to show that correct knowledge can be harmful independent of problem-solving efficiency, and that this problem is widespread. Another contribution is to show that the problem of harmful knowledge can be minimized and problem-solving performance improved by a particular form of knowledge base reduction, and that the optimal reduction is NP-hard.

The theme of correct knowledge being harmful has been studied by a number of other investigators. Minton has investigated how the learning of correct search control knowledge can slow down a problem solver; his solution approach is to quantify the potential utility of a new piece of control knowledge and only add those with a high utility (Minton and Carbonell, 1987). Markovitch and Scott have shown that any deductively learned knowledge affects the cost of searching a problem space; their solution approach is to use filter functions that determine whether a piece of deductively learned past knowledge should be used on a current problem (Markovitch and Scott, 1989). Still another approach is to modify learned search control knowledge to increase problem-solving speed (Prieditis and Mostow, 1987).

The theme of improving problem-solving accuracy via knowledge base reduction has been studied in conjunction with eliminating or reducing wrong knowledge. For example, the genetic algorithm used in conjunction with a classifier system eliminates as much as half of a knowledge base; it eliminates rules that have not contributed to past problem-solving successes (Holland, 1986). Another approach is to perform a global analysis of a knowledge base and eliminate those rules that are redundant or inconsistent (Ginsberg et al., 1988). Learning systems that perform induction from noisy training instances have also addressed the problem of wrong knowledge. The RULEMOD program of META-DENDRAL selects a subset of rules that have wide applicability, thereby reducing the number of wrong rules (Buchanan and Mitchell, 1978). RULEMOD also selects rules that jointly form a good global cover, and hence shares our concern for finding rules that work well together. The TRUNC program of AQ15 deletes those disjuncts of non-probabilistic induced rules that cover the fewest cases (Michalski et al., 1986a; Michalski et al., 1986b). The reduced knowledge bases produced by RULEMOD and TRUNC give equal or superior performance.

7 Summary and Conclusion

Traditional methods of debugging a probabilistic rule set are suited to handling missing or wrong rules, but not to handling deleterious interactions between good rules. This paper describes the underlying reason for this phenomenon. We formulated the problem of minimizing deleterious rule interactions as a bipartite graph minimization problem and proved that it is NP-hard. A heuristic method, called the Sociopathic Reduction Algorithm, was described for solving the graph problem. In our experiments, the Sociopathic Reduction Algorithm gave good results.

We believe that the rule set refinement method described in this paper, or its equivalent, is an important component of any learning system for the automatic creation of probabilistic rule sets for automated reasoning systems. All such learning systems will confront the problem of deleterious interactions among good rules, and the problem will require a global solution method, such as the one we have described here. Our future research in this area is to create a theory of sociopathicity that subsumes all AI techniques for uncertainty reasoning, including certainty factors, Bayesian methods, probability methods, Dempster-Shafer theory, fuzzy reasoning, belief networks, and nonmonotonic reasoning. For our progress to date, see (Ma and Wilkins, 1990a; Ma and Wilkins, 1990b; Ma and Wilkins, 1990c).

8 Acknowledgements

We thank Marianne Winslett for suggesting the bipartite graph formulation and for detailed comments, and we thank Bruce Buchanan for earlier major collaboration on this work (Wilkins and Buchanan, 1986). We also express our gratitude for the helpful discussions and critiques provided by Bill Clancey, Ramsey Haddad, David Heckerman, Eric Horvitz, Curt Langlotz, Peter Rathmann, and Devika Subramanian. This work was supported in part by NSF grant MCS-83-12148, ONR grant N00014-88K-0124, and an Arnold O. Beckman research award to the first author. We are grateful for the computer time provided by the Intelligent Systems Lab of Xerox PARC and SUMEX-AIM at Stanford University.

Appendix 1: Calculating G

Consider rules of the form conclude(H, CF) :- E. Then CF = G = G(x1, x2, x3) = the empirical predictive power of rule R, where:

• x1 = P(E+ | H+) = fraction of the positive instances in which the premise of R succeeds (true positive or false negative evidence)
• x2 = P(E+ | H-) = fraction of the negative instances in which the premise of R succeeds (false positive or true negative evidence)
• x3 = P(H+) = fraction of all instances that are positive instances

Given x1, x2, and x3, let

    x4 = P(H+ | E+) = x1 x3 / (x1 x3 + x2 (1 - x3))

If x4 > x3, then G = (x4 - x3) / (x4 (1 - x3)); else G = (x4 - x3) / (x3 (1 - x4)). This probabilistic interpretation reflects the modifications to the certainty factor model proposed by Heckerman (1986).

References

Buchanan, B. G. and Mitchell, T. M. (1978). Model-directed learning of production rules. In Waterman, D. A. and Hayes-Roth, F., editors, Pattern-Directed Inference Systems, pages 297-312.
New York: Academic Press.

Buchanan, B. G. and Shortliffe, E. H. (1984). Rule-Based Expert Systems: The MYCIN Experiments of the Stanford Heuristic Programming Project. Reading, Mass.: Addison-Wesley.

Davis, R. (1982). Application of meta level knowledge in the construction, maintenance and use of large knowledge bases. In Davis, R. and Lenat, D. B., editors, Knowledge-Based Systems in Artificial Intelligence, pages 229-490. New York: McGraw-Hill.

Ginsberg, A., Weiss, S. M., and Politakis, P. (1988). Automatic knowledge base refinement for classification systems. Artificial Intelligence, 35(2):197-226.

Gordon, J. and Shortliffe, E. H. (1985). A method for managing evidential reasoning in a hierarchical hypothesis space. Artificial Intelligence, 26(3):323-358.

Heckerman, D. (1986). Probabilistic interpretations for Mycin's certainty factors. In Kanal, L. and Lemmer, J., editors, Uncertainty in Artificial Intelligence, pages 167-196. New York: North Holland.

Holland, J. H. (1986). Escaping brittleness: the possibilities of general-purpose learning algorithms applied to parallel rule-based systems. In Michalski, R. S., Carbonell, J. G., and Mitchell, T. M., editors, Machine Learning, Volume II, chapter 20, pages 593-624. Los Altos: Morgan Kaufmann.

Ma, Y. and Wilkins, D. C. (1990a). An analysis of Bayesian evidential reasoning. Working Paper KBS-90-001, Department of Computer Science, University of Illinois.

Ma, Y. and Wilkins, D. C. (1990b). Computation of rule probability assignments for Dempster-Shafer theory and the sociopathicity of the theory. Working Paper KBS-90-002, Department of Computer Science, University of Illinois.

Ma, Y. and Wilkins, D. C. (1990c). Sociopathicity properties of evidential reasoning systems. Working Paper KBS-90-016, Department of Computer Science, University of Illinois.

Markovitch, S. and Scott, P. D. (1989). Utilization filtering: a method for reducing the inherent harmfulness of deductively learned knowledge. In Proceedings of the 1989 IJCAI, pages 738-743, Detroit, MI.

Michalski, R. S., Carbonell, J. G., and Mitchell, T. M., editors (1983). Machine Learning: An Artificial Intelligence Approach. Palo Alto: Tioga Press.

Michalski, R. S., Mozetic, I., and Hong, J. (1986a). The AQ15 inductive learning system: An overview and experiments. Technical Report ISG 86-20, UIUCDCS-R-86-1260, Department of Computer Science, University of Illinois.

Michalski, R. S., Mozetic, I., Hong, J., and Lavrac, N. (1986b). The multi-purpose incremental learning system AQ15 and its testing application to three medical domains. In Proceedings of the 1986 National Conference on Artificial Intelligence, pages 1041-1045, Philadelphia, PA.

Minton, S. and Carbonell, J. G. (1987). Strategies for learning search control rules: An explanation-based approach. In McDermott, J., editor, Proceedings of the 1987 IJCAI, pages 228-235, Milan.

Prieditis, A. E. and Mostow, J. (1987). PROLEARN: Towards a Prolog interpreter that learns. In Proceedings of the 1987 National Conference on Artificial Intelligence, pages 494-498.

Shafer, G. (1976). A Mathematical Theory of Evidence. Princeton: Princeton University Press.

Wilkins, D. C. (1987). Apprenticeship Learning Techniques for Knowledge Based Systems. PhD thesis, University of Michigan. Also Knowledge Systems Lab Report KSL-88-14, Dept. of Computer Science, Stanford University, 1988, 153 pp.

Wilkins, D. C. and Buchanan, B. G. (1986). On debugging rule sets when reasoning under uncertainty.
In Proceedings of the 1986 National Conference on Artificial Intelligence, pages 448-454, Philadelphia, PA.

Wilkins, D. C. and Tan, K. (1989). Knowledge base refinement as improving an incorrect, inconsistent, and incomplete domain theory. In Proceedings of the Sixth International Conference on Machine Learning, pages 332-337, Ithaca, NY.

Zadeh, L. A. (1979). Approximate reasoning based on fuzzy logic. In Proceedings of the 1979 IJCAI, pages 1004-1010, Tokyo, Japan.
Katharina Morik GMD F3/XPS P.O. Box 1240 D-5205 St. Augustin WEST GERMANY Prof. John Morton MRC Cognitive Development Unit 17 Gordon Street London WC1H OAH UNITED KINGDOM Dr. William R. Murray FMC Corporation Central Engineering Labs 1205 Coleman Avenue Box 580 Santa Clara, CA 95052 Prof. Makoto Nagao Dept. of Electrical Engineering Kyoto University Yoshida-Honmachi Sakyo-Ku Kyoto JAPAN Mr. J. Nelissen Twente University of Technology Fac. Bibl. Toegepaste Onderwyskurde P. O. Box 217 7500 AE Enschede The NETHERLANDS Dr. T. Niblett The Turing Institute George House 36 North Hanover Street Glasgow Gl 2AD UNITED KINGDOM Dr. Ephraim Nissan Department of Mathematics & Computer Science New Campus Ben Gurion University of Negev P. O. Box 653 84105 Beer-Sheva ISRAEL Dr. A. F. Norcio Code S530 Naval Research Laboratory Washington, DC 20375-5000 Dr. Donald A. Norman C-015 Institute for Cognitive Science University of California La Jolla, CA 92093 Dr. Manton M. Matthews Department of Computer Science University of South Carolina Columbia, SC 29208 Mr. John Mayer University of Michigan 703 Church Street Ann Arbor, MI 48104 Dr. John McDermott DEC Dlb5-3/E2 290 Donald Lynch Blvd. Marlboro, MA 01752 Prof. David D. McDonald Department of Computer & Information Sciences University of Massachusetts Amherst, MA 01003 Dr. Vittorio Midoro CNR-Istituto Tecnologie Didattiche Via All'Opera Pia 11 GENOVA-ITALIA 16145 Dr. James R. Miller MCC 3500 W. Balcones Center Dr. Austin, TX 78750 Prof Perry L. Miller Dept. of Anesthesiology Yale University School of Medicine 333 Cedar Street P. O. Box 3333 New Haven, CT 06510 Dr. Christine M. Mitchell School of Indus- and Sys. Eng. Center for Man-Machine Systems Research Georgia Institute of Technology Atlanta, GA 30532-0205 Dr. Jack Mostow Dept. of Computer Science Rutgers University New Brunswick, NJ 08903 Dr. Randy Mumaw Training Research Division HumRRO 1100 S. Washington Alexandria, VA 22314 Dr. Allen Munro Behavioral Technology Laboratories - USC 1845 S. Elena Ave., 4th Floor Redondo Beach, CA 90277 Dr. Kenneth S. Murray Dept of Computer Sciences University of Texas at Austin Taylor Hall 2.124 Austin, TX 78712 Dr. Harold F. O'Neil, Jr. School of Education - WPH 801 Department of Educational Psychology & Technology University of Southern California Los Angeles, CA 90089-0031 Dr. Paul O'Rorke Department of Information and Computer Science University of California Irvine, CA 92717 Dr. Stellan Ohlsson Learning R and D Center University of Pittsburgh Pittsburgh, PA 15260 Dr. James B. Olsen WICAT Systems 1875 South State Street Orem, UT 84058 ONR DISTRIBUTION LIST [ILLINOIS/WILKINS] Dr. Judith Reitman Olson Graduate School of Business University of Michigan 701 Tappen Ann Arbor, MI 48109-1234 Admiral Piper PM TRADE ATTN: AMCPM-TNO-ET 123S0 Research Parkway Orlando, FL 32126 Dr. Stephen Reder MWREL 101 SW Main, Suite 500 Portland, OR 97204 Dr. Paul S. Rosenbloom University of Southern California Information Science! Inatitnte 4678 Admiralty Way Marina Del Ray, CA 10282 Office of Naval Research, Code 1142CS 800 N. Quincy Street Arlington, VA 22217-5000 (8 Copies) Dr. Peter Pirolli Graduate School of Education EMST Division 4533 Tolman Hall Univeraity of California, Berkeley Berkeley, CA 94702 Dr. James A. Reggia University of Maryland School of Medicine Department of Neurology 22 South Greene Street Baltimore, MD 21201 Dr. Emit I. Rothkopf AT&T Bell Laboratories Room 2D-458 800 Mountain Avenue Murray Hill, NJ 07974 Dr. 
Judith Orasanu Baiic Research Office Army Reaearch Institute 5001 Eisenhower Avenue Alexandria, VA 22333 Dept. of Administrative Sciences Code 54 Naval Postgraduate School Monterey, CA 93943-5028 Dr. J. Wesley Regian AFHRL/IDI Brooks AFB, TX 78235 Dr. Allen A. Rovick Rush Medical College 1853 W. Congress Parkway Chicago, IL 80812-3884 Dr. Jesse Orlansky Institute for Defense Analyses 1801 N. Beauregard St. Alexandria, VA 22311 Prof. Tim O'Shea Institute of Educational Technology The Open University Walton Hall Milton Keynes MK7 6AA Buckinghamshire, U.K. Dr. Tomaso Poggio Massachusetts Institute of Technology E25-201 Center for Biological Information Processing Cambridge, MA 02139 Dr. Peter Poison University of Colorado Department of Psychology Boulder, CO 80309-0345 Dr. Brian Reiser Cognitive Science Lab Princeton University 221 Nassau Street Princeton, NJ 08544 Dr. Lauren Resnick Learning R&D Center University of Pittsburgh 3939 O'Hara Street Pittsburgh, PA 15213 Dr. Stuart J. Russell Computer Science Division University of California Berkeley, CA 94720 Dr. Roger C. Schank Northwestern University Inst, for the Learning Sciences 1890 Maple Evanston, IL 80208 Dr. Everett Palmer Mail Stop 239-3 NASA-Ames Research Center Moffett Field, CA 94035 Dr. Bruce Porter Computer Science Department University of Texas Taylor Hall 2.124 Austin, TX 78712-1188 Dr. J. Jeffrey Richardson Center for Applied AI College of Business University of Colorado Boulder, CO 80309-0419 Lowell Schoer Psychological t Quantitative Foundations College of Education University of Iowa Iowa City, IA 52242 Dr. Okchoon Park Army Research Institute PERI-2 5001 Eisenhower Avenue Alexandria, VA 22333 Mr. Armand E. Prieditis Department of Computer Science Rutgers University New Brunswick, NJ 08903 Prof. Christopher K. Riesbeck Department of Computer Science Yale University P. O. Box 2158, Yale Station New Haven, CT 08520-2158 Dr. Jeffrey C. Schtimmer School of Computer Science Carnegie Mellon University Pittsburgh, PA 15213 Dr. Ramesh Patil MIT Laboratory for Computer Science Room 418 545 Technology Square Cambridge, MA 02139 Dr. Joseph Psotka ATTN: PERI-IC Army Research Institute 5001 Eisenhower Ave. Alexandria, VA 22333-5800 Prof. David C. Rine Deptment of Computer & Information Sciences George Mason University 4400 University Drive Fairfax, VA 22030 Dr. Janet W. Schofield 818 LRDC Building University of Pittsburgh 3939 O'Hara Street Pittsburgh, PA 15260 Dr. Michael J. Passani Department of Computer and Information Science University of California Irvine, CA 92717 Dr. J. Ross Quinlan School of Computing Sciences NSW. Institute of Technology Broadway N.S.W. AUSTRALIA 2007 Dr. Edwina L. Rissland Dept. of Computer and Information Science University of Massachusetts Amherst, MA 01003 Dr. Paul D. Scott University of Essex Dept. of Computer Science Wrvenhoe Park Colchester C043SQ ENGLAND Dr. Roy Pea Institute for Research on Learning 2550 Hanover Street Palo Alto, CA 94304 Dr. Shankar A. Rajamoney Computer Science Department University of Southern California Los Angeles, CA 91030 Dr. Linda G. Roberts Science, Education, and Transportation Program Office of Technology Assessment Congress of the United States Washington, DC 20510 Dr. Alberto Segre Cornell University Computer Science Department Upson Hall Ithaca, NY 14853-7501 Dr. Ray S. Perea ARI (PERI-D) 5001 Eisenhower Avenue Alexandria, VA 22333 Dr. C. Perrino, Chair Dept. of Psychology Morgan State University Cold Spring La.-Hillen Rd. Baltimore, MD 21239 Mr. Paul S. 
Rau Code U-33 Naval Surface Weapons Center White Oak Laboratory Silver Spring, MD 20903 Ms. Margaret Recker Graduate Group in Education EMST Division 4533 Tolman Hall University of California, Berkeley Berkeley, CA 94702 LT CDR Michael N. Rodgers Canadian Forces Personnel Applied Research Unit 4900 Yonge Street, Suite 800 Willowdale, Ontario M2N 6B7 CANADA Dr. Colleen M. Seifert Dept. of Psychology University of Michigan 350 Packard Rd. Ann Arbor, MI 48104 Dr. Oliver G. Selfridge GTE Labs Waltham, MA 02254 ONR DISTRIBUTION LIST [ILLINOIS/WILKINS] Dr. Michael G. Shafto NASA Amei Research Ctr. Mmil Stop 239-1 Moffett Field, CA 94035 Dr. Derek Sleeman Computing Science Department The Unrvenity Aberdeen AB9 2FX Scotland UNITED KINGDOM Dr. Patrick Snppei Stanford Unrvenity Institute for Mathematical Studiei in the Social Science! Stanford, CA 94305-4115 Dr. Kurt Van Lehn Department of Psychology Carnegie-Mellon University Schenley Park Pittsburgh, PA 15213 Dr. Valerie L. Shalin Honeywell S&RC 3810 Technology Drive Minneapolis, MN 55411 Mi. Gail K. Slemon LOGICON, Inc. P.O. Boi 85151 San Diego, CA 92138-5158 Dr. Richard Sntton GTE Labi Waltham, MA 02254 Dr. W. S. Vaughan 800 N. Qnincy Street Arlington, VA 22217 Dr. Jude W. Shavlik Compnter Science! Department University of Wiiconiin Madiion, WI 53708 Mr. Colin Sheppard AXC2 Block 3 Admirality Research Eitabliahment Miniitry of Defence Portadown Portimouth Hanti P064AA UNITED KINGDOM Dr. Ben Shneiderman Dept. of Computer Science Univeriity of Maryland College Park, MD 20742 Dr. Edward E. Smith Department of Psychology Unrvenity of Michigan 330 Packard Road Ann Arbor, MI 48103 Dr. Reid G. Smith Schlnmberger Technologies Lab. Schlomberger Palo Alto Reiearch 3340 Hilhriew Avenue Palo Alto, CA 94304 Dr. Elliot Soloway EE/CS Department Unrvenity of Michigan Ann Arbor, MI 48109-2122 Dr. William Swartout use Information Sciences Institute 4876 Admirality Way Marina Del Rey, CA 90292 Mr. Prasad Tadepalli Department of Computer Science Rutgers University New Brunswick, NJ 08903 Dr. Gheorghe Tecuci Research Institute for Computers and Informatics 71318, Bd. Miciurin 8-10 Bucharest 1 ROMANIA Dr. Adrian Walker IBM P. O. Box 704 Yorlrtown Heights, NY 10598 Dr. Diana Wearne Department of Educational Development University of Delaware Newark, DE 19711 Prof. Sholom M. Weils Department of Computer Science Hill Center for Mathematical Sciences Rutgers University New Brunswick, NY 08903 Dr. Jeff Shrager Xerox PARC 3333 Coyote Hill Rd. Palo Alto, CA 94304 Linda B. Soriaio IBM-Los Angeles Scientific Center 11601 Wilehire Blvd., 4th Floor Los Angeles, CA 90025 Dr. Perry W. Thorndyke FMC Corporation Central Engineering Labs 1205 Coleman Avenue, Box 580 Santa Clara, CA 95052 Dr. Keith T. Wescourt FMC Corporation Central Engineering Labs 1205 Coleman Ave., Box 580 Santa Clara, CA 95052 Dr. Howard Shrobe Symbolics, Inc. Eleven Cambridge Center Cambridge, MA 02142 Dr. Randall Shumaker Naval Research Laboratory Code 5510 4555 Overlook Avenue, S.W. Washington, DC 20375-5000 Dr. Bernard Siler Information Sciences Fundamental Reiearch Laboratory GTE Laboratories, Inc. 40 Sylvan Road Waltham, MA 022S4 Dr. Herbert A. Simon Departments of Computer Science and Psychology Carnegie-Mellon Unrvenity Pittiburgh, PA 15213 Dr. N. S. Sridharan FMC Corporation Box 580 1205 Coleman Avenue Santa Clara, CA 95052 Dr. Frederick Steinheiier CIA-ORD Amei Building Waihington, DC 20605 Dr. Ted Steinke Dept. of Geography Unrvenity of South Carolina Columbia, SC 29208 Dr. Leon Sterling Dept. 
of Computer Engineering and Science Crawford Hall Caie Weitern Reierve Univeriity Cleveland, Ohio 44108 Dr. Chrii Tong Department of Computer Science Rutgen Unrvenity New Brunswick, NJ 08903 Dr. Douglas Towne Behavioral Technology Labs University of Southern California 1845 S. Elena Ave. Redondo Beach, CA 90277 Lt. Col. Edward Trautman Naval Training Systems Center 12350 Research Parkway Orlando, FL 32826 Dr. Paul T. Twohig Army Research Institute ATTN: PERI-RL 5001 Eisenhower Avenue Alexandria, VA 22333-5800 Dr. David C. W.lkins Dept. of Computer Science University of Illinoii 405 N. Mathews Avenue Urbana, IL 81801 Dr. Kent E. Williami Institute for Simulation and Training The University of Central Florida 12424 Research Parkway, Suite 300 Orlando, FL 32828 Dr. Marsha R. Williamj Applic. of Advanced Technologies National Science Foundation SEE/MDRISE 1800 G Street, N.W., Room 835-A Washington, DC 20550 S. H. Wilson Code 5505 Naval Research Laboratory Washington, DC 20375-5000 Robert L. Simpson, Jr. DARPA/ISTO 1400 Wilson Bhrd. Arlington, VA 22209-2308 Dr. Michael J. Strait UMUC Graduate School College Park, MD 20742 Dr. Paul E. Utgoff Department of Computer and Information Science University of Massachusetts Amherst, MA 01003 Dr. Patrick H. Winiton MIT Artificial Intelligence Lab. 545 Technology Square Cambridge, MA 02139 Dr. Zita M. Simutis Chief, Technologies for Skill Acquisition and Retention ARI 5001 Eiienhower Avenue Alexandria, VA 22333 Dr. Devika Subramanian Department of Computer Science Cornell Unrvenity Ithaca, NY 14853 Dr. Harold P. Van Cott Committee on Human Factor! National Academy of Science! 2101 Conititution Avenue Waihington, DC 20413 Dr. Edward Wnniewiki Honeywell S and RC 3680 Technology Drive Minneapolis, MN 55418 ONR DISTRIBUTION LIST [ILLINOIS/WILKINS] Dr. Paul T. Wohig Army Reiearch Institute 5001 Eiitnhowd Avr ATTN: PERI-RL Alexandria, VA 22333-seoo Dr. Joieph Wol.l Alphatech, Inc. 2 Burlington Executive Center 111 Middleiex Turnpike Burlington, MA 01103 Dr. Beverly P. Woolf Dept. of Computer and Information Sciences University of Massachusetts Amherit, MA 01003 Dr. Ronald R. Yager Machine Intelligence Institute Iona College New Rochelle, NY 10801 Dr. Masoud Yaidani Dept. of Computer Science Univenity of Exeter Prince of Walei Road Exeter EX44PT ENGLAND Dr. Joieph L. Young National Science Foundation Room 320 1800 G Street, N.W. Washington, DC 20550 Dr. Maria Zemankova National Science Foundation 1800 G Street N.W. Waihington, DC 20550 Dr. Uri Zernik GE - CRD P. O. Box I Schenectady, NY 12301 UNIVERSITY OF ILLINOIS-URBANA 3 0112 101385430