DEPARTMENT OF COMPUTER SCIENCE
UNIVERSITY OF ILLINOIS

Report No. UIUCDCS-R-89-1537    UILU-ENG-89-1756

Knowledge Base Refinement as Improving an Incorrect, Inconsistent, and Incomplete Domain Theory

by

David C. Wilkins and Kok-Wah Tan

August 1989

Approved for public release; distribution unlimited. Supported by ONR contract N00014-88K-0124. Available as Report UIUCDCS-R-89-1537, Dept. of Computer Science, University of Illinois, Urbana, IL 61801.
To appear in: Proceedings of the Sixth International Workshop on Machine Learning, 1989.

Keywords: artificial intelligence, machine learning, knowledge-based systems, knowledge base refinement, failure-driven learning, explanation-based learning, incomplete domain theory, inconsistent domain theory, incorrect domain theory.

Knowledge Base Refinement as Improving an Incorrect, Inconsistent, and Incomplete Domain Theory

David C. Wilkins and Kok-Wah Tan
Knowledge-Based Systems Group
Department of Computer Science
University of Illinois
Urbana, IL 61801

Abstract

We view the automation of knowledge base refinement as improvements to a domain theory. This paper presents a brief overview of the techniques that we have developed to handle three types of domain theory pathologies: incorrectness, inconsistency, and incompleteness. The major sources of power of our learning method are a confirmation theory that connects the domain theory to underlying domain theories; the use of an explicit representation of the strategy knowledge for a generic problem class (e.g., heuristic classification) that is separate from the domain theory (e.g., medicine) to be improved; and, lastly, an explicit, modular, and declarative knowledge representation for the domain theory.

1 Introduction

One central problem of expert systems is knowledge base (KB) refinement (Buchanan and Shortliffe, 1984). The major research efforts that have directly confronted this problem include TEIRESIAS (Davis, 1982), INDUCE (Michalski, 1983), ID3 (Quinlan, 1983), SEEK (Politakis and Weiss, 1984), AQUINAS (Boose, 1984), RL (Fu and Buchanan, 1985), MORE (Kahn et al., 1985), and ODYSSEUS (Wilkins, 1987). Historically, these efforts evidence an evolution along three important dimensions: automation of the refinement process, refinement of more complex representations, and greater diversity in the types of refinements.

There has been considerable progress in automating the three principal subtasks of KB refinement: deficiency detection, suggestion of a repair, and validation of a repair.
TEIRESIAS, the first refinement program, facilitated these three subtasks for EMYCIN-based expert systems (Davis, 1982) by providing an intelligent editor that allowed a cooperating human expert to accomplish the subtasks manually. In contrast, when ODYSSEUS refines a HERACLES-based expert system (HERACLES is a descendant of EMYCIN), the three refinement subtasks are automated (Wilkins, 1987). Evolution in the refinement of more complex representations is seen in systems that are able to repair many types of rule knowledge (e.g., heuristic, definitional) and frame knowledge (e.g., subsumption, hierarchical, and causal slots). Evolution in the diversity of repairs is seen in systems where many types of domain theory pathologies are correctable.

This paper describes an integrated set of methods that have been developed to automatically correct three types of KB pathologies: incorrectness, inconsistency, and incompleteness. We begin with the necessary background on the architecture and method of knowledge representation of the expert system to be improved. The methods for handling each of the three KB pathologies are then described.

2 ProHC Heuristic Classification Shell

Our learning methods are designed to improve any knowledge base crafted for the ProHCD expert system shell (Tan, 1989). ProHCD is a refinement of HERACLES, based on the experience gained in creating the ODYSSEUS apprenticeship learning program for HERACLES (Wilkins, 1987). HERACLES is itself a refinement of EMYCIN, based on the experience gained in creating the GUIDON case-based tutoring program for EMYCIN (Clancey, 1986). These shells use a problem-solving method called heuristic classification: the process of selecting a solution out of a pre-enumerated solution set using heuristic techniques (Clancey, 1985). The primary application KB for ProHCD and HERACLES is the NEOMYCIN medical KB for diagnosis of meningitis and similar neurological disorders (Clancey, 1984).
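As a minimal illustration of heuristic classification in this sense, the following sketch scores a fixed, pre-enumerated set of hypotheses from observed findings; the findings, hypotheses, and rule strengths are invented for illustration and are not drawn from NEOMYCIN.

```python
# Toy heuristic classification: select a solution from a pre-enumerated
# solution set by accumulating heuristic evidence from findings.
# All rules and weights below are invented for illustration.
RULES = [
    # (finding, hypothesis, strength)
    ("photophobia", "migraine-headache", 0.5),
    ("stiff-neck", "meningitis", 0.7),
    ("fever", "meningitis", 0.4),
    ("stress", "tension-headache", 0.6),
]

def classify(findings):
    """Score each pre-enumerated hypothesis by summing the strengths of
    rules whose finding is present, then return the best-scoring one."""
    scores = {}
    for finding, hypothesis, strength in RULES:
        if finding in findings:
            scores[hypothesis] = scores.get(hypothesis, 0.0) + strength
    return max(scores, key=scores.get) if scores else None

print(classify({"stiff-neck", "fever"}))  # meningitis
print(classify({"photophobia"}))          # migraine-headache
```

Note that the solution always comes from the closed hypothesis set; heuristic classification never constructs a new solution, which is what makes the rule set amenable to tuple-by-tuple refinement.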
This section describes the types of knowledge encoded in ProHCD and HERACLES, and how ProHCD differs from HERACLES.

[Figure 1: ProHC System Architecture. Strategy knowledge (meta-interpreter, tasks and metarules, meta-predicates) is layered above the problem state and the domain knowledge (domain facts, domain rules, derived facts).]

Domain knowledge consists of MYCIN-like rules and simple frame knowledge for an application domain (e.g., medicine, geology). An example of rule knowledge in Horn clause format is conclude(migraine-headache, yes, .5) :- finding(photophobia, yes), meaning "to conclude that the patient has a migraine headache with certainty .5, determine whether the patient has photophobia". An example of frame knowledge is subsumed-by(viral-meningitis, meningitis), meaning "the hypothesis viral meningitis is subsumed by the hypothesis meningitis".

Problem state knowledge is generated during execution of the expert system. Examples of problem state knowledge are rule-applied(rule163), which says that Rule 163 has been applied during this consultation, and differential(migraine-headache, tension-headache), which says that the expert system's active hypotheses are migraine headache and tension headache.

Strategy knowledge is contained in the shell, and approximates a cognitive model of heuristic classification problem solving. The different problem-solving strategies that can be employed during problem solving are explicitly represented, which facilitates use of the model to follow the line of reasoning of a human problem solver. The strategy knowledge determines what domain knowledge is relevant at any given time, and what additional information is needed to solve the problem. The problem state and domain knowledge, including rules, are represented as tuples. Strategy metarules are quantified over the tuples.

Some of the ways in which ProHCD differs from HERACLES at the meta-level are as follows.
First, the task interpreter consists of a logic meta-interpreter that uses a blackboard agenda mechanism to decide which task and metarule to execute next. Second, metarule premises do not change the state of the system, do not call other tasks, do not have procedural attachments to LISP code, and do not call more than one subgoal in their action part. Third, all meta-level control state information, such as task end-condition flags, has been eliminated. The major difference between HERACLES and ProHCD at the domain level is the use of Pearl's method for representing rule uncertainty and for propagating information in a hierarchy of diagnostic hypotheses (Pearl, 1986). These differences facilitate solving the learning global credit assignment problem (comparing the behavior of an expert to that of the expert system and noticing when there is a significant difference) and the learning local credit assignment problem (determining the specific knowledge differences between the expert and the expert system).

3 Incorrect Domain Theory

The KB refinement process described in this paper assumes an initial faulty KB, created, for example, by interviewing experts. Refinement goes through three stages that address the problems of incorrect, inconsistent, and incomplete domain theory knowledge, respectively.

The method used to handle incorrect domain knowledge relies on a confirmation theory and an underlying domain theory. A confirmation theory is a decision procedure that, given an arbitrary candidate tuple of domain knowledge, can decide whether that tuple is true or false; it connects the tuples in the domain theory to the underlying domain theory. An underlying domain theory consists of knowledge that can underpin the knowledge in the domain theory. This allows validation of the initial KB for all types of knowledge for which a confirmation theory exists. Tuples that do not pass the test are deleted or modified.
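As a toy illustration of such a decision procedure, the sketch below validates one type of tuple, a heuristic rule, against a small case library and recomputes its strength. The cases, thresholds, and function names are invented for illustration; they are not ProHCD's actual confirmation theory.

```python
# Sketch of a confirmation-theory check for heuristic-rule tuples: a rule
# conclude(H) :- finding(F) is kept only if a case library supports it.
# The case library and acceptance thresholds are invented.
CASES = [
    # (findings present in the case, correct final diagnosis)
    ({"photophobia"}, "migraine-headache"),
    ({"photophobia", "stress"}, "migraine-headache"),
    ({"stiff-neck", "fever"}, "meningitis"),
    ({"photophobia"}, "tension-headache"),
]

def confirm_rule(finding, hypothesis, min_support=2, min_precision=0.6):
    """Decide whether the rule tuple is acceptable; return its recomputed
    strength from the case library, or None if the tuple should be deleted."""
    fired = [diag for findings, diag in CASES if finding in findings]
    if len(fired) < min_support:
        return None                    # too little evidence: delete tuple
    precision = fired.count(hypothesis) / len(fired)
    if precision < min_precision:
        return None                    # rule usually wrong: delete tuple
    return round(precision, 2)         # recomputed rule strength

print(confirm_rule("photophobia", "migraine-headache"))  # 0.67
print(confirm_rule("stiff-neck", "meningitis"))          # None (support 1 < 2)
```

Because the procedure judges each tuple against the cases alone, it can be applied to a candidate tuple independently of the rest of the KB, which is the property the repair-validation steps below rely on.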
The confirmation theory frequently changes the strength of heuristic rules supplied by experts by recomputing their strength from a case library.

4 Inconsistent Domain Theory

The KB may contain knowledge tuples that are individually correct but that interact deleteriously with other pieces of knowledge during problem solving, and thus give an inconsistent domain theory. A major source of inconsistency for classification expert systems involves heuristic rules that are collinear variants. Such rules fire on almost exactly the same cases. Because higher-order correlation information is usually not available, collinear variants cause hypotheses to have incorrect strengths. Our confirmation theory for heuristic rules detects and removes collinear variants.

Another source of inconsistency arises if a KB is sociopathic (Wilkins and Ma, 1989). By definition, this occurs when the individual rules are good rules, but there exists a subset of the rule set that gives better performance than the entire rule set. Correcting this problem has been proved to be NP-hard. We use a heuristic method, called the sociopathic reduction algorithm, to reduce sociopathicity.

5 Incomplete Domain Theory

We have developed two methods for extending an incomplete domain theory: an apprenticeship learning approach and a case-based reasoning approach. Table 1 shows the major refinement steps and the method of achieving them for apprenticeship and case-based learning. The techniques are elaborated below.

5.1 Apprenticeship Learning Approach

Apprenticeship learning is a form of learning by watching, in which learning occurs as a byproduct of building explanations of human problem-solving actions. An apprenticeship is the most powerful method that human experts use to refine and debug their expertise in knowledge-intensive domains such as medicine.
The major accomplishment of our method of apprenticeship learning is showing how an explicit representation of the strategy knowledge for a general problem class, such as diagnosis, can provide a basis for learning the knowledge that is specific to a particular domain, such as medicine.

Table 1: Comparison of the case-based and apprenticeship learning methods for extending an incomplete domain theory.

  Scope
    Case-based learning (similarity-based): heuristic rules.
    Apprenticeship learning (explanation-based): heuristic rules, plus 4 other types of relations.
  Detect KB deficiency
    Case-based: select and run a case; a deficiency exists if the case is misdiagnosed.
    Apprenticeship: observe an expert solving a case; a deficiency exists if an action of the expert cannot be explained.
  Suggest KB repair
    Case-based: generalize or specialize rules; induce new rules.
    Apprenticeship: find tuples that allow the explanation to be completed under the single fault assumption.
  Validate KB repair
    Both: use the underlying domain theory to validate repairs.

Apprenticeship learning involves the construction of explanations, but it is different from explanation-based learning as formulated in EBG (Mitchell et al., 1986) and EBL (DeJong, 1986). It is also different from explanation-based learning in LEAP (Mitchell et al., 1985), even though LEAP also focuses on the problem of improving a knowledge-based expert system. In EBG, EBL, and LEAP, the domain theory is capable of explaining a training instance, and learning occurs by generalizing an explanation of the training instance. In contrast, in our apprenticeship research, a learning opportunity occurs when the domain theory, which is the domain KB, is incapable of producing an explanation of a training instance. The domain theory is incomplete or erroneous, and all learning occurs by making an improvement to this domain theory.

The first stage of learning involves the detection of a KB deficiency.
Explanations are constructed for each of an expert's observed problem-solving actions. When ODYSSEUS observes the expert asking a "findout" question, such as asking whether the patient has visual problems, it finds all explanations for this action. When none can be found, an explanation failure occurs. This failure suggests that there is a difference between the knowledge of the expert and that of the expert system, and it provides a learning opportunity. ODYSSEUS assumes that deficient domain knowledge is the cause of the explanation failure.

The second step is to conjecture a KB repair. The confirmation theory can judge whether an arbitrary tuple of domain knowledge is erroneous, independently of the other knowledge in the KB (Wilkins, 1987). Hence, when a KB deficiency is detected during apprenticeship learning, we assume the problem is missing knowledge. The search for the missing knowledge begins with the single fault assumption. The missing knowledge is conceptually a single fault, but because of the way the knowledge is encoded, we can learn more than one tuple when we learn rule knowledge. Conceptually, the missing knowledge could eventually be identified by adding a random domain knowledge tuple to the KB and seeing whether an explanation of the expert's findout request can be constructed. How can a promising piece of such knowledge be found effectively? Our approach is to apply backward chaining to the findout question metarule, trying to construct a proof that explains why the question was asked. When the proof fails, it is because a tuple of domain or problem state knowledge needed for the proof is not in the knowledge base. If the proof fails because of problem state knowledge, we look for a different proof of the findout question. If the proof fails because of a missing piece of domain knowledge, we temporarily add this tuple to the domain knowledge base.
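This proof-completion search can be sketched with a toy meta-interpreter over tuples. The single metarule, the predicate names, and the findout pattern below are invented for illustration and are far simpler than ProHCD's actual strategy knowledge.

```python
# Toy proof-completion search: backward-chain through one findout metarule;
# if the proof fails only for a missing *domain* tuple, conjecture that
# tuple as the candidate KB repair. All predicates are invented.
domain_kb = {("subsumed-by", "viral-meningitis", "meningitis")}
problem_state = {("differential", "meningitis"),
                 ("evidence-for", "visual-problems", "viral-meningitis")}

def explain_findout(finding, hypothesis, parent):
    """Metarule: findout(finding) is explained if finding is evidence for
    hypothesis, and hypothesis is subsumed by parent on the differential.
    Return the missing domain tuples ([] = fully explained), or None if
    problem-state knowledge is missing (then try a different proof)."""
    proof = [("evidence-for", finding, hypothesis),   # problem state
             ("differential", parent),                # problem state
             ("subsumed-by", hypothesis, parent)]     # domain knowledge
    missing = []
    for goal in proof:
        if goal in domain_kb or goal in problem_state:
            continue
        if goal[0] in ("differential", "evidence-for"):
            return None          # missing problem state: abandon this proof
        missing.append(goal)     # missing domain tuple: candidate repair
    return missing

# Complete KB: the explanation goes through with no repair needed.
print(explain_findout("visual-problems", "viral-meningitis", "meningitis"))  # []

# Remove the subsumption tuple: the search conjectures it as the repair.
domain_kb.clear()
print(explain_findout("visual-problems", "viral-meningitis", "meningitis"))
# [('subsumed-by', 'viral-meningitis', 'meningitis')]
```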
If the proof then goes through, the temporary piece of knowledge is our conjecture of how to refine the knowledge base.

The third step is to evaluate the proposed repair. To do this, we use a confirmation theory containing a decision procedure for each type of domain knowledge that tells us whether a given tuple is acceptable. The current confirmation theory provides an underpinning for 5 of the 19 domain tuple types. The confirmation theory for heuristic rules uses a case library and a set of biases for judging rule quality.

5.2 Case-Based Learning Approach

The case-based learning approach currently modifies or adds heuristic rules to the KB. It runs all the cases in the library and locates those that are misdiagnosed. Given a misdiagnosed case, the local credit assignment problem is solved as follows. The premises of the rules that concluded the wrong final diagnosis are weakened by specialization, and the premises of the rules that concluded the correct diagnosis are strengthened. If this does not solve the problem, new rules are induced from the patient case library that apply to the misdiagnosed case and that conclude the correct final diagnosis. The verification procedure used to test all KB modifications is identical to that described for apprenticeship learning.

6 Experimental Results

Some preliminary testing has been completed, expanding on results reported earlier (Wilkins, 1988). These tests used the NEOMYCIN KB for neurological disorders (constructed manually by interviewing experts over many years) and a collection of 114 solved cases that were obtained from records at the Stanford Medical Hospital. Table 2 shows the various diseases and their sample sizes in the evaluation set. The results of each test suite are described along three dimensions.
TP (true positive) refers to the number of cases that the expert system correctly diagnosed as present, FN (false negative) to the number of times a disease was not diagnosed as present but was indeed present, and FP (false positive) to the number of times a disease was incorrectly diagnosed as present.

ProHCD with the manually constructed NEOMYCIN KB diagnosed 32 of the 112 cases correctly (28.5% accuracy). The first stage of improvement involves locating and modifying incorrect domain knowledge tuples. Our method modifies 48% of the heuristic rules in the KB. The improvement obtained using the refined KB is shown in column KB2 of Table 2; ProHCD diagnosed 62 cases correctly (55.3% accuracy), an improvement of about 27%. The second stage of improvement involves correcting inconsistent domain knowledge. No experimental results for this stage are reported here, although our methods have been previously shown to lead to significant improvement (Wilkins and Ma, 1989). The third stage of improvement involves extending a correct but incomplete domain KB. Two experiments were conducted.

Table 2: Summary of ProHC experiments. The KB1 column is the performance using the manually constructed domain theory. KB2 shows performance after use of the methods that correct an incorrect domain theory. KB3 and KB4 show the performance after using case-based learning and apprenticeship learning, respectively, to extend the incomplete domain theory. The evaluation set comprised Bacterial Meningitis (16 cases), Brain Abscess (7), Cluster Headache (10), Fungal Meningitis (8), Migraine (10), Myco-TB Meningitis (4), Primary Brain Tumor (16), Subarachnoid Hemorrhage (21), Tension Headache (9), Viral Meningitis (11), and None (6). Totals over the 112 cases:

        TP   FN   FP
  KB1   32   80   80
  KB2   62   50   50
  KB3   68   44   44
  KB4   73   39   39
The first used case-based learning. All the cases were run, and two misdiagnosed cases in areas where the KB was weak were selected. The case-based learning approach was applied to these two cases. This refinement, shown in column KB3 of Table 2, enabled the system to diagnose 68 cases correctly (60.7% accuracy), an aggregate improvement of 32%. The second experiment used apprenticeship learning. The experimental setup involved watching a physician diagnose two cases not in the set of 112. This refinement, shown in column KB4 of Table 2, enabled the system to diagnose 73 cases correctly (65.2% accuracy), an aggregate improvement of about 37%.

More experimental work remains. Our previous experiments with ODYSSEUS suggest that the apprenticeship learning approach is better than a case-based approach for producing a use-independent KB that supports multiple problem-solving goals, such as learning, teaching, problem solving, and explanation generation.

7 Summary and Conclusions

The long-term objectives of this research are the creation of learning methods that can harness an explicit representation of generic shell knowledge, and that can lead to the creation of a use-independent KB that rests on deep underlying domain models. Within this framework, this paper describes specialized methods that address three major types of KB pathologies: incorrect, inconsistent, and incomplete domain knowledge. We believe that the use of specialized methods for the different domain knowledge pathologies, and an ordered sequential correction as described in this paper, will minimize the interactions between pathologies and thereby make the problem much more tractable.

8 Acknowledgements

We would like to express our deep gratitude to Lawrence Chachere, Ziad Najem, and Young-Tack Park for their major role in the design and implementation of the ProHCD shell and for many fruitful discussions. This research was supported by ONR grant N00014-88K-0124 and an Arnold O. Beckman research award from the University of Illinois. Marianne Winslett provided valuable comments on draft versions of this paper.

References

Boose, J. H. (1984). Personal construct theory and the transfer of human expertise. In Proceedings of the 1983 National Conference on Artificial Intelligence, pages 27-33, Washington, D.C.

Buchanan, B. G. and Shortliffe, E. H. (1984). Rule-Based Expert Systems: The MYCIN Experiments of the Stanford Heuristic Programming Project. Reading, Mass.: Addison-Wesley.

Clancey, W. J. (1984). NEOMYCIN: Reconfiguring a rule-based system with application to teaching. In Clancey, W. J. and Shortliffe, E. H., editors, Readings in Medical Artificial Intelligence, chapter 15, pages 361-381. Reading, Mass.: Addison-Wesley.

Clancey, W. J. (1985). Heuristic classification. Artificial Intelligence, 27:289-350.

Clancey, W. J. (1986). From GUIDON to NEOMYCIN to HERACLES in twenty short lessons. AI Magazine, 7:40-60.

Davis, R. (1982). Application of meta-level knowledge in the construction, maintenance and use of large knowledge bases. In Davis, R. and Lenat, D. B., editors, Knowledge-Based Systems in Artificial Intelligence, pages 229-490. New York: McGraw-Hill.

DeJong, G. (1986). An approach to learning from observation. In Michalski, R. S., Carbonell, J. G., and Mitchell, T. M., editors, Machine Learning: An Artificial Intelligence Approach, Volume II, chapter 19, pages 571-590. Los Altos: Morgan Kaufmann.

Fu, L. M. and Buchanan, B. G. (1985). Learning intermediate concepts in constructing a hierarchical knowledge base. In Proceedings of the 1985 IJCAI, pages 659-666, Los Angeles, CA.

Kahn, G., Nowlan, S., and McDermott, J. (1985). MORE: An intelligent knowledge acquisition tool. In Proceedings of the 1985 IJCAI, pages 573-580, Los Angeles, CA.

Michalski, R. S. (1983). A theory and methodology of inductive inference. In Michalski, R. S., Carbonell, J. G., and Mitchell, T. M., editors, Machine Learning: An Artificial Intelligence Approach, chapter 4, pages 83-134. Palo Alto: Tioga Press.

Mitchell, T. M., Keller, R. M., and Kedar-Cabelli, S. T. (1986). Explanation-based generalization: A unifying view. Machine Learning, 1(1):47-80.

Mitchell, T. M., Mahadevan, S., and Steinberg, L. I. (1985). LEAP: A learning apprentice for VLSI design. In Proceedings of the 1985 IJCAI, pages 573-580, Los Angeles, CA.

Pearl, J. (1986). On evidential reasoning in a hierarchy of hypotheses. Artificial Intelligence, 28:9-15.

Politakis, P. and Weiss, S. M. (1984). Using empirical analysis to refine expert system knowledge bases. Artificial Intelligence, 22(1):23-48.

Quinlan, J. R. (1983). Learning efficient classification procedures and their application to chess end games. In Michalski, R. S., Carbonell, J. G., and Mitchell, T. M., editors, Machine Learning: An Artificial Intelligence Approach, chapter 15, pages 463-482. Palo Alto: Tioga Press.

Tan, K. (1989). Knowledge base validation and refinement for a heuristic classification expert system. Master's thesis, University of Illinois at Urbana-Champaign.

Wilkins, D. C. (1987). Apprenticeship Learning Techniques for Knowledge Based Systems. PhD thesis, University of Michigan. Also available as Report No. STAN-CS-88-1242, Dept. of Computer Science, Stanford University, 1988, 153 pp.

Wilkins, D. C. (1988). Knowledge base refinement using apprenticeship learning techniques. In Proceedings of the 1988 National Conference on Artificial Intelligence, pages 646-651, Minneapolis, MN.

Wilkins, D. C. and Ma, Y. (1989). Sociopathic knowledge bases. Technical Report UIUCDCS-R-89-1538, Department of Computer Science, University of Illinois. Submitted to Artificial Intelligence.
of Computer Science Univeriity of Exeter Prince of Walei Road Exeter EX44PT ENGLAND Dr. Joseph L. Young National Science Foundation Room 320 1800 G Street, N.W. Waihington, DC 20550 Dr. Maria Zemankova National Science Foundation 1800 G Street N.W. Waihington, DC 20550 Dr. Uri Zernik GE - CRD P. O. Box S Schenectady, NY 12301 UNIVERSITY OF ILLINOIS-URBANA 3 0112 101385430