Publications: 1994
- Automatic Student Modeling and Bug Library Construction using Theory Refinement
[Details] [PDF]
Paul T. Baffes
PhD Thesis, Department of Computer Sciences, The University of Texas at Austin, Austin, TX, 1994.
The history of computers in education can be characterized by a continuing effort to construct intelligent tutorial programs which can adapt to the individual needs of a student in a one-on-one setting. A critical component of these intelligent tutorials is a mechanism for modeling the conceptual state of the student so that the system is able to tailor its feedback to suit individual strengths and weaknesses. The primary contribution of this research is a new student modeling technique which can automatically capture novel student errors using only correct domain knowledge, and can automatically compile trends across multiple student models into bug libraries. This approach has been implemented as a computer program, ASSERT, using a machine learning technique called theory refinement, which is a method for automatically revising a knowledge base to be consistent with a set of examples. Using a knowledge base that correctly defines a domain and examples of a student's behavior in that domain, ASSERT models student errors by collecting any refinements to the correct knowledge base which are necessary to account for the student's behavior. The efficacy of the approach has been demonstrated by evaluating ASSERT using 100 students tested on a classification task using concepts from an introductory course on the C++ programming language. Students who received feedback based on the models automatically generated by ASSERT performed significantly better on a post-test than students who received simple reteaching.
ML ID: 40
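The core idea of theory refinement, revising a correct rule base just enough to reproduce a student's (possibly buggy) answers, can be sketched in a few lines. This is an illustrative toy, not ASSERT itself: the rule representation, example format, and deletion-only repair operator are all simplifications.

```python
# Toy theory refinement by rule deletion (illustrative, not ASSERT).
# A rule is (conclusion, antecedents). An example is (facts, label):
# the student labels `facts` positive iff the concept holds for them.
# Deleted rules model knowledge the student appears to be missing.

def derives(rules, facts, goal):
    """Forward-chain: can `goal` be proven from `facts` with `rules`?"""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for concl, antes in rules:
            if concl not in known and antes <= known:
                known.add(concl)
                changed = True
    return goal in known

def accuracy(rules, examples, goal):
    """Number of examples whose label the theory reproduces."""
    return sum(derives(rules, facts, goal) == label
               for facts, label in examples)

def refine_by_deletion(rules, examples, goal):
    """Greedily drop the rule whose removal most improves the fit."""
    rules, deleted = list(rules), []
    improved = True
    while improved:
        improved = False
        base = accuracy(rules, examples, goal)
        best = None
        for r in rules:
            rest = [x for x in rules if x is not r]
            gain = accuracy(rest, examples, goal) - base
            if gain > 0 and (best is None or gain > best[0]):
                best = (gain, r)
        if best:
            rules.remove(best[1])
            deleted.append(best[1])
            improved = True
    return rules, deleted
```

For instance, if the correct theory proves "ok" from either "a" or "b" but the student only answers correctly on "a" cases, deleting the "b" rule makes the theory match the student, and that deletion is the student model.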
- Multiple-Fault Diagnosis Using General Qualitative Models with Fault Modes
[Details] [PDF]
Siddarth Subramanian and Raymond J. Mooney
In Working Papers of the Fifth International Workshop on Principles of Diagnosis, 321-325, New Paltz, NY, October 1994.
This paper describes an approach to the diagnosis of systems described by qualitative differential equations represented as QSIM models. We describe an implemented system, QDOCS, that performs multiple-fault, fault-model-based diagnosis of the qualitative behaviors of systems described by such models, using constraint-satisfaction techniques. We demonstrate the utility of this system by accurately diagnosing randomly generated faults using simulated behaviors of a portion of the Reaction Control System of the space shuttle.
ML ID: 39
- Inductive Learning For Abductive Diagnosis
[Details] [PDF]
Cynthia A. Thompson and Raymond J. Mooney
In Proceedings of the Twelfth National Conference on Artificial Intelligence (AAAI-94), 664-669, Seattle, WA, August 1994.
A new inductive learning system, LAB (Learning for ABduction), is presented which acquires abductive rules from a set of training examples. The goal is to find a small knowledge base which, when used abductively, diagnoses the training examples correctly and generalizes well to unseen examples. This contrasts with past systems that inductively learn rules that are used deductively. Each training example is associated with potentially multiple categories (disorders), instead of one as with typical learning systems. LAB uses a simple hill-climbing algorithm to efficiently build a rule base for a set-covering abductive system. LAB has been experimentally evaluated and compared to other learning systems and an expert knowledge base in the domain of diagnosing brain damage due to stroke.
ML ID: 38
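The set-covering abductive reasoning that LAB builds rule bases for can be illustrated with a greedy cover, which also shows how a single case can carry multiple disorders. This is a hedged sketch, not LAB's code; the disorder and symptom names are invented.

```python
# Greedy set-covering abduction (illustrative sketch).
# rules: {disorder: set of symptoms it explains}.

def abduce(rules, symptoms):
    """Greedily pick disorders until all observed symptoms are
    covered; returns (diagnosis, symptoms left unexplained)."""
    uncovered = set(symptoms)
    diagnosis = []
    while uncovered:
        # disorder explaining the most still-uncovered symptoms
        best = max(rules, key=lambda d: len(rules[d] & uncovered))
        if not rules[best] & uncovered:
            break  # nothing explains what remains
        diagnosis.append(best)
        uncovered -= rules[best]
    return diagnosis, uncovered
```

With invented rules such as {"stroke_left": {"right_weakness", "aphasia"}, "migraine": {"headache"}}, a patient showing all three symptoms receives the two-disorder diagnosis, the multiple-category situation the abstract describes.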
- Comparing Methods For Refining Certainty Factor Rule-Bases
[Details] [PDF]
J. Jeffrey Mahoney and Raymond J. Mooney
In Proceedings of the Eleventh International Workshop on Machine Learning (ML-94), 173-180, Rutgers, NJ, July 1994.
This paper compares two methods for refining uncertain knowledge bases using propositional certainty-factor rules. The first method, implemented in the RAPTURE system, employs neural-network training to refine the certainties of existing rules but uses a symbolic technique to add new rules. The second method, based on the one used in the KBANN system, initially adds a complete set of potential new rules with very low certainty and allows neural-network training to filter and adjust these rules. Experimental results indicate that the former method results in significantly faster training and produces much simpler refined rule bases with slightly greater accuracy.
ML ID: 37
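The flavour of certainty-factor refinement can be sketched with MYCIN-style combination of positive certainty factors and a finite-difference gradient step standing in for backpropagation. The code below is illustrative only, taken from neither RAPTURE nor KBANN.

```python
# Illustrative certainty-factor refinement (not RAPTURE's code).

def combine(cfs):
    """MYCIN-style combination of positive certainty factors for
    one conclusion: cf1 + cf2 - cf1*cf2, folded over the list."""
    total = 0.0
    for cf in cfs:
        total = total + cf - total * cf
    return total

def refine(cfs, target, lr=0.5, steps=200):
    """Adjust each rule's certainty so the combined CF approaches
    `target`, via a finite-difference gradient on squared error."""
    cfs = list(cfs)
    eps = 1e-5
    for _ in range(steps):
        err = combine(cfs) - target
        for i in range(len(cfs)):
            bumped = cfs[:]
            bumped[i] += eps
            grad = (combine(bumped) - combine(cfs)) / eps * 2 * err
            # keep certainties in [0, 1]
            cfs[i] = min(1.0, max(0.0, cfs[i] - lr * grad))
    return cfs
```

Two rules with certainty 0.9 each combine to 0.99; gradient steps pull both certainties down until the combined value matches the desired target.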
- Combining Top-Down And Bottom-Up Techniques In Inductive Logic Programming
[Details] [PDF]
John M. Zelle, Raymond J. Mooney, and Joshua B. Konvisser
In Proceedings of the Eleventh International Workshop on Machine Learning (ML-94), 343-351, Rutgers, NJ, July 1994.
This paper describes a new method for inducing logic programs from examples which attempts to integrate the best aspects of existing ILP methods into a single coherent framework. In particular, it combines a bottom-up method similar to GOLEM with a top-down method similar to FOIL. It also includes a method for predicate invention similar to CHAMP and an elegant solution to the "noisy oracle" problem which allows the system to learn recursive programs without requiring a complete set of positive examples. Systematic experimental comparisons to both GOLEM and FOIL on a range of problems are used to clearly demonstrate the advantages of the approach.
ML ID: 36
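The bottom-up half of such a combination can be illustrated with a GOLEM-style least general generalisation of two ground atoms. This sketch is illustrative only and handles single atoms, not full clauses.

```python
# Least general generalisation (LGG) of two ground atoms,
# the core bottom-up ILP operation (illustrative sketch).

def lgg(atom1, atom2, bindings=None):
    """Atoms are tuples like ('parent', 'tom', 'bob'). Differing
    argument pairs become variables; the same pair of terms is
    always mapped to the same variable."""
    if atom1[0] != atom2[0] or len(atom1) != len(atom2):
        return None  # different predicates: no generalisation
    if bindings is None:
        bindings = {}
    out = [atom1[0]]
    for a, b in zip(atom1[1:], atom2[1:]):
        if a == b:
            out.append(a)
        else:
            var = bindings.setdefault((a, b), f"X{len(bindings)}")
            out.append(var)
    return tuple(out)
```

Generalising ('parent', 'tom', 'bob') and ('parent', 'ann', 'sue') yields ('parent', 'X0', 'X1'), while repeated term pairs share one variable, so ('likes', 'tom', 'tom') and ('likes', 'ann', 'ann') generalise to ('likes', 'X0', 'X0').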
- Inducing Deterministic Prolog Parsers From Treebanks: A Machine Learning Approach
[Details] [PDF]
John M. Zelle and Raymond J. Mooney
In Proceedings of the Twelfth National Conference on Artificial Intelligence (AAAI-94), 748-753, Seattle, WA, July 1994.
This paper presents a method for constructing deterministic, context-sensitive, Prolog parsers from corpora of parsed sentences. Our approach uses recent machine learning methods for inducing Prolog rules from examples (inductive logic programming). We discuss several advantages of this method compared to recent statistical methods and present results on learning complete parsers from portions of the ATIS corpus.
ML ID: 35
- Learning Qualitative Models for Systems with Multiple Operating Regions
[Details] [PDF]
Sowmya Ramachandran, Raymond J. Mooney, and Benjamin J. Kuipers
In Proceedings of the Eighth International Workshop on Qualitative Reasoning about Physical Systems, Nara, Japan, 1994.
The problem of learning qualitative models of physical systems from observations of their behaviour has been addressed by several researchers in recent years. Most current techniques limit themselves to learning a single qualitative differential equation to model the entire system. However, many systems have several qualitative differential equations underlying them. In this paper, we present an approach to learning the models for such systems. Our technique divides the behaviours into segments, each of which can be explained by a single qualitative differential equation. The qualitative model for each segment can be generated using any of the existing techniques for learning a single model. We present results of applying our technique to several examples and demonstrate that it is effective.
ML ID: 34
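The segmentation step described above can be sketched as follows, assuming each candidate model is reduced to a simple consistency predicate over observations. This is an illustration of the idea, not the authors' implementation.

```python
# Split a behaviour into maximal segments, each consistent with
# one model (illustrative sketch; models are boolean predicates).

def segment(behaviour, models):
    """behaviour: list of observations; models: {name: predicate}.
    Returns [(model_name, [observations...]), ...]."""
    segments = []
    current_name, current_obs = None, []
    for obs in behaviour:
        if current_name and models[current_name](obs):
            current_obs.append(obs)  # current model still fits
            continue
        # current model fails: close the segment, pick a new model
        if current_obs:
            segments.append((current_name, current_obs))
        current_name = next(n for n, fits in models.items() if fits(obs))
        current_obs = [obs]
    if current_obs:
        segments.append((current_name, current_obs))
    return segments
```

A ball's velocity trace [3, 2, 1, -1, -2] under predicates "rising" (v > 0) and "falling" (v <= 0) splits into two segments, one per operating region, each of which could then be passed to a single-model learner.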
- Modifying Network Architectures For Certainty-Factor Rule-Base Revision
[Details] [PDF]
J. Jeffrey Mahoney and Raymond J. Mooney
In Proceedings of the International Symposium on Integrating Knowledge and Neural Heuristics (ISIKNH-94), 75-85, Pensacola, FL, May 1994.
This paper describes RAPTURE, a system for revising probabilistic rule bases that converts symbolic rules into a connectionist network, which is then trained via connectionist techniques. It uses a modified version of backpropagation to refine the certainty factors of the rule base, and uses ID3's information-gain heuristic (Quinlan) to add new rules. Work is currently under way on improved techniques for modifying network architectures, including adding hidden units using the UPSTART algorithm (Frean). A case is made, via comparison with fully-connected connectionist techniques, for keeping the rule base as close to the original as possible, adding new input units only as needed.
ML ID: 33
- A Multistrategy Approach to Theory Refinement
[Details] [PDF]
Raymond J. Mooney and Dirk Ourston
In Ryszard S. Michalski and G. Tecuci, editors, Machine Learning: A Multistrategy Approach, Vol. IV, 141-164, San Mateo, CA, 1994. Morgan Kaufmann.
This chapter describes a multistrategy system that employs independent modules for deductive, abductive, and inductive reasoning to revise an arbitrarily incorrect propositional Horn-clause domain theory to fit a set of preclassified training instances. By combining such diverse methods, EITHER is able to handle a wider range of imperfect theories than other theory revision systems while guaranteeing that the revised theory will be consistent with the training data. EITHER has successfully revised two actual expert theories, one in molecular biology and one in plant pathology. The results confirm the hypothesis that using a multistrategy system to learn from both theory and data gives better results than using either theory or data alone.
ML ID: 32
- Theory Refinement Combining Analytical and Empirical Methods
[Details] [PDF]
Dirk Ourston and Raymond J. Mooney
Artificial Intelligence:311-344, 1994.
This article describes a comprehensive approach to automatic theory revision. Given an imperfect theory, the approach combines explanation attempts for incorrectly classified examples in order to identify the failing portions of the theory. For each theory fault, correlated subsets of the examples are used to inductively generate a correction. Because the corrections are focused, they tend to preserve the structure of the original theory. Because the system starts with an approximate domain theory, in general fewer training examples are required to attain a given level of performance (classification accuracy) compared to a purely empirical system. The approach applies to classification systems employing a propositional Horn-clause theory. The system has been tested in a variety of application domains, and results are presented for problems in the domains of molecular biology and plant disease diagnosis.
ML ID: 31
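One focused repair such a system can make, generalising a rule that fails to prove positive examples by dropping the antecedents they cannot satisfy, while checking that no negative example becomes provable, can be sketched as follows. The propositional encoding and all names are invented for illustration.

```python
# Focused rule generalisation (illustrative sketch, not the
# system's code): keep only antecedents every positive example
# satisfies, and reject repairs that would cover a negative.

def generalise(antecedents, positives, negatives):
    """antecedents: set of propositions in a failing rule;
    positives/negatives: lists of fact-sets that should/should not
    satisfy the rule. Returns the repaired antecedent set, or None
    if the repair would overgeneralise."""
    kept = {a for a in antecedents if all(a in p for p in positives)}
    if any(kept <= set(n) for n in negatives):
        return None  # a negative example would now be proven
    return kept
```

Dropping only the antecedents that block the positive examples keeps the correction minimal, which is the sense in which such repairs preserve the structure of the original theory.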
- Integrating ILP and EBL
[Details] [PDF]
Raymond J. Mooney and John M. Zelle
SIGART Bulletin (special issue on Inductive Logic Programming), 5(1):12-21, 1994.
This paper presents a review of recent work that integrates methods from Inductive Logic Programming (ILP) and Explanation-Based Learning (EBL). ILP and EBL methods have complementary strengths and weaknesses, and a number of recent projects have effectively combined them into systems with better performance than either of the individual approaches. In particular, integrated systems have been developed for guiding induction with prior knowledge (ML-SMART, FOCL, GRENDEL), refining imperfect domain theories (FORTE, AUDREY, Rx), and learning effective search-control knowledge (AxA-EBL, DOLPHIN).
ML ID: 30