Artificial Intelligence 82 (1996) 393-394

Forthcoming Papers

A. Becker and D. Geiger, Optimization of Pearl's method of conditioning and greedy-like approximation algorithms for the vertex feedback set problem

We show how to find a small loop cutset in a Bayesian network. Finding such a loop cutset is the first step in the method of conditioning for inference. Our algorithm for finding a loop cutset, called MGA, finds a loop cutset which is guaranteed in the worst case to contain fewer than twice the number of variables contained in a minimum loop cutset. The algorithm is based on a reduction to the weighted vertex feedback set problem and a new approximation algorithm for the latter problem. The complexity of MGA is O(m + n log n), where m and n are the number of edges and vertices, respectively. A greedy algorithm, called GA, for the weighted vertex feedback set problem is also analyzed, and bounds on its performance are given. We test MGA on randomly generated graphs and find that the average ratio between the number of instances associated with the algorithm's output and the number of instances associated with an optimum solution is 1.22 for the graphs tested.

P. Walley, Measures of uncertainty in expert systems

This paper compares four measures that have been advocated as models for uncertainty in expert systems. The measures are additive probabilities (used in the Bayesian theory), coherent lower (or upper) previsions, belief functions (used in the Dempster-Shafer theory) and possibility measures (used in fuzzy logic). Special emphasis is given to the theory of coherent lower previsions, in which upper and lower probabilities, expectations and conditional probabilities are constructed from initial assessments through a technique of natural extension. Mathematically, all the measures can be regarded as types of coherent lower or upper previsions, and this perspective gives some insight into the properties of belief functions and possibility measures. The measures are evaluated according to six criteria: clarity of interpretation; ability to model partial information and imprecise assessments, especially judgements expressed in natural language; rules for combining and updating uncertainty, and their justification; consistency of models and inferences; feasibility of assessment; and feasibility of computations. Each of the four measures seems to be useful in special kinds of problems, but only lower and upper previsions appear to be sufficiently general to model the most common types of uncertainty.

Y. Moses and M. Tennenholtz, Off-line reasoning for on-line efficiency: knowledge bases

The complexity of reasoning is a fundamental issue in AI. In many cases, the fact that an intelligent system needs to perform reasoning on-line contributes to the difficulty of this reasoning. This paper considers the case in which an intelligent system computes whether a query is entailed by the system's knowledge base. It investigates how an initial phase of off-line preprocessing and design can improve the on-line complexity.
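
As a rough, illustrative sketch of the "greedy-like" idea described in the Becker and Geiger abstract above, and not their MGA or GA algorithms, the following Python fragment repeatedly removes a vertex with a high degree-to-weight ratio until the remaining graph is acyclic. The graph representation, the scoring rule and the example graph are assumptions made here for illustration only.

# Illustrative greedy heuristic for the weighted vertex feedback set problem.
# This is NOT the MGA or GA algorithm of Becker and Geiger; it is a generic
# sketch of the greedy-like idea: repeatedly remove a vertex with a high
# degree-to-weight ratio until no cycles remain.

from collections import defaultdict


def _strip_low_degree(adj):
    """Repeatedly delete vertices of degree <= 1; they lie on no cycle."""
    changed = True
    while changed:
        changed = False
        for v in list(adj):
            if len(adj[v]) <= 1:
                for u in adj[v]:
                    adj[u].discard(v)
                del adj[v]
                changed = True


def greedy_feedback_vertex_set(edges, weight):
    """Return a set of vertices whose removal leaves an acyclic graph.

    edges  -- iterable of undirected edges (u, v)
    weight -- dict mapping each vertex to a positive weight
    """
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)

    cutset = set()
    _strip_low_degree(adj)
    while adj:  # a non-empty 2-core still contains a cycle
        v = max(adj, key=lambda x: len(adj[x]) / weight[x])  # greedy choice
        cutset.add(v)
        for u in adj[v]:
            adj[u].discard(v)
        del adj[v]
        _strip_low_degree(adj)
    return cutset


# Two triangles sharing vertex "c": removing "c" alone breaks both cycles.
example_edges = [("a", "b"), ("b", "c"), ("c", "a"),
                 ("c", "d"), ("d", "e"), ("e", "c")]
example_weights = {v: 1.0 for v in "abcde"}
print(greedy_feedback_vertex_set(example_edges, example_weights))  # {'c'}

Stripping vertices of degree at most one before each greedy choice ensures the choice is always made on a subgraph that still contains a cycle, so every selected vertex does useful work.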
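
As a toy numerical illustration of lower and upper previsions, unrelated to the natural-extension construction analysed in Walley's paper, the sketch below takes the minimum and maximum expectation of a gamble over a small, invented set of candidate probability mass functions.

# Toy illustration of lower and upper previsions via a finite "credal set":
# the lower (upper) prevision of a gamble is its minimum (maximum) expected
# value over a set of admissible probability mass functions.  The pmfs and
# the gamble below are invented for the example.

def expectation(pmf, gamble):
    """Expected payoff of a gamble under a single probability mass function."""
    return sum(pmf[w] * gamble[w] for w in gamble)


def lower_prevision(credal_set, gamble):
    return min(expectation(p, gamble) for p in credal_set)


def upper_prevision(credal_set, gamble):
    return max(expectation(p, gamble) for p in credal_set)


# The probability of rain is only judged to lie between 0.25 and 0.5; three
# representative distributions stand in for that imprecise assessment.
credal_set = [{"rain": p, "dry": 1.0 - p} for p in (0.25, 0.375, 0.5)]
gamble = {"rain": 10.0, "dry": -2.0}  # payoff of a bet that it rains

print(lower_prevision(credal_set, gamble))  # 1.0 (worst-case expectation)
print(upper_prevision(credal_set, gamble))  # 4.0 (best-case expectation)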
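
The following sketch is a generic illustration of the off-line/on-line trade-off mentioned in the Moses and Tennenholtz abstract, not their framework: a propositional Horn knowledge base (invented for the example) is compiled off-line by forward chaining, after which on-line atomic entailment queries become constant-time membership tests.

# Generic illustration of trading off-line preprocessing for on-line speed
# (not the framework of Moses and Tennenholtz): a propositional Horn
# knowledge base is compiled off-line by forward chaining, so that on-line
# entailment queries for single atoms reduce to a set-membership test.

def compile_horn_kb(rules, facts):
    """Off-line phase: compute every atom entailed by the Horn knowledge base.

    rules -- list of (body, head) pairs meaning body_1 & ... & body_k -> head
    facts -- iterable of atoms known to be true
    """
    entailed = set(facts)
    changed = True
    while changed:  # naive forward chaining to a fixpoint
        changed = False
        for body, head in rules:
            if head not in entailed and all(b in entailed for b in body):
                entailed.add(head)
                changed = True
    return entailed


def entails(compiled_kb, atom):
    """On-line phase: answer an atomic query with a constant-time lookup."""
    return atom in compiled_kb


# A small invented knowledge base.
rules = [(("rain",), "wet_ground"),
         (("wet_ground",), "slippery"),
         (("rain", "slippery"), "drive_slowly")]
compiled_kb = compile_horn_kb(rules, facts=["rain"])
print(entails(compiled_kb, "drive_slowly"))  # True
print(entails(compiled_kb, "sunny"))         # False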