Arrow Research search

Author name cluster

Fahiem Bacchus

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

70 papers
2 author rows

Possible papers (70)

IJCAI Conference 2022 Conference Paper

Large Neighbourhood Search for Anytime MaxSAT Solving

  • Randy Hickey
  • Fahiem Bacchus

Large Neighbourhood Search (LNS) is an algorithmic framework for optimization problems that can yield good performance in many domains. In this paper, we present a method for applying LNS to improve anytime maximum satisfiability (MaxSAT) solving by introducing a neighbourhood selection policy that shows good empirical performance. We show that our LNS solver can often improve the suboptimal solutions produced by other anytime MaxSAT solvers. When starting with a suboptimal solution of reasonable quality, our approach often finds a better solution than the original anytime solver can achieve. We demonstrate that implementing our LNS solver on top of three different state-of-the-art anytime solvers improves the anytime performance of all three solvers within the standard time limit used in the incomplete tracks of the annual MaxSAT Evaluation.
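
To make the LNS loop concrete, here is a rough Python sketch of the generic scheme the abstract describes; the neighbourhood policy shown (a random subset of variables) and the helper callables `solve_neighbourhood` and `cost` are illustrative placeholders, not the paper's actual solver.

```python
import random

def lns_maxsat(soft_clauses, incumbent, solve_neighbourhood, cost,
               iterations=1000, neighbourhood_size=50):
    """Generic large-neighbourhood-search loop for anytime MaxSAT.

    `incumbent` is a complete assignment (dict: var -> bool), e.g. one produced
    by another anytime solver.  `solve_neighbourhood(fixed, free)` is assumed to
    return the best assignment it can find with the variables in `fixed` frozen
    to the incumbent's values and the variables in `free` re-optimized.
    """
    best = incumbent
    best_cost = cost(soft_clauses, best)
    variables = list(incumbent)
    for _ in range(iterations):
        # Neighbourhood selection: here just a random subset of variables;
        # the paper's contribution is a more informed selection policy.
        free = set(random.sample(variables, min(neighbourhood_size, len(variables))))
        fixed = {v: best[v] for v in variables if v not in free}
        candidate = solve_neighbourhood(fixed, free)
        if candidate is not None and cost(soft_clauses, candidate) < best_cost:
            best, best_cost = candidate, cost(soft_clauses, candidate)
    return best, best_cost
```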

IJCAI Conference 2021 Conference Paper

Abstract Cores in Implicit Hitting Set MaxSat Solving (Extended Abstract)

  • Jeremias Berg
  • Fahiem Bacchus
  • Alex Poole

Maximum satisfiability (MaxSat) solving is an active area of research motivated by numerous successful applications to solving NP-hard combinatorial optimization problems. Among the most successful approaches for solving MaxSat instances from real-world domains are the so-called implicit hitting set (IHS) solvers. IHS solvers decouple MaxSat solving into separate core-extraction (i.e., reasoning) and optimization steps which are tackled by a Boolean satisfiability (SAT) and an integer linear programming (IP) solver, respectively. While the approach shows state-of-the-art performance on many industrial instances, it is known that there exist instances on which IHS solvers need to extract an exponential number of cores before terminating. Motivated by the simplest of these problematic instances, we propose abstract cores, a compact representation for a potentially exponential number of regular cores. We demonstrate how to incorporate abstract core reasoning into the IHS algorithm and report on an empirical evaluation demonstrating that including abstract cores into a state-of-the-art IHS solver improves its performance enough to surpass the best performing solvers of the 2019 MaxSat Evaluation.
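
The decoupling into a SAT-based reasoning step and an IP-based optimization step can be pictured with the following minimal Python skeleton of the IHS loop; `sat_core` and `min_hitting_set` are hypothetical helpers standing in for the SAT and IP solvers, and weighted/abstract-core details are omitted.

```python
def ihs_maxsat(hard_clauses, soft_clauses, weights, sat_core, min_hitting_set):
    """Skeleton of the implicit hitting set (IHS) approach to MaxSat.

    `sat_core(hard, assume)` is assumed to return None if the hard clauses are
    satisfiable with every soft clause in `assume` enforced, or an unsatisfiable
    core (a subset of `assume`) otherwise.  `min_hitting_set(cores, weights)` is
    assumed to return a minimum-weight set of soft clauses hitting every core
    (in practice computed by an IP solver).
    """
    cores = []
    while True:
        # Optimization step: cheapest set of soft clauses hitting all known cores.
        to_relax = min_hitting_set(cores, weights)
        # Reasoning step: try to satisfy everything except the relaxed softs.
        assume = [c for c in soft_clauses if c not in to_relax]
        core = sat_core(hard_clauses, assume)
        if core is None:
            # The hitting set's weight is both a lower bound and achieved,
            # so the chosen relaxation is optimal.
            return to_relax
        cores.append(core)
```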

AAAI Conference 2021 Conference Paper

Learning Branching Heuristics for Propositional Model Counting

  • Pashootan Vaezipoor
  • Gil Lederman
  • Yuhuai Wu
  • Chris Maddison
  • Roger B Grosse
  • Sanjit A. Seshia
  • Fahiem Bacchus

Propositional model counting, or #SAT, is the problem of computing the number of satisfying assignments of a Boolean formula. Many problems from different application areas, including many discrete probabilistic inference problems, can be translated into model counting problems to be solved by #SAT solvers. Exact #SAT solvers, however, are often not scalable to industrial size instances. In this paper, we present Neuro#, an approach for learning branching heuristics to improve the performance of exact #SAT solvers on instances from a given family of problems. We experimentally show that our method reduces the step count on similarly distributed held-out instances and generalizes to much larger instances from the same problem family. It is able to achieve these results on a number of different problem families having very different structures. In addition to step count improvements, Neuro# can also achieve orders of magnitude wall-clock speedups over the vanilla solver on larger instances in some problem families, despite the runtime overhead of querying the model.

SAT Conference 2020 Conference Paper

Abstract Cores in Implicit Hitting Set MaxSat Solving

  • Jeremias Berg
  • Fahiem Bacchus
  • Alex Poole

Maximum Satisfiability (MaxSat) solving is an active area of research motivated by numerous successful applications to solving NP-hard combinatorial optimization problems. One of the most successful approaches to solving MaxSat instances arising from real world applications is the Implicit Hitting Set (IHS) approach. IHS solvers are complete MaxSat solvers that harness the strengths of both Boolean Satisfiability (SAT) and Integer Linear Programming (IP) solvers by decoupling core-extraction and optimization. While such solvers show state-of-the-art performance on many instances, it is known that there exist MaxSat instances on which IHS solvers need to extract an exponential number of cores before terminating. Motivated by the structure of the simplest of these problematic instances, we propose a technique we call abstract cores that provides a compact representation for a potentially exponential number of regular cores. We demonstrate how to incorporate abstract core reasoning into the IHS algorithm and report on an empirical evaluation demonstrating that including abstract cores into a state-of-the-art IHS solver improves its performance enough to surpass the best performing solvers of the most recent 2019 MaxSat Evaluation.

SAT Conference 2020 Conference Paper

Clause Size Reduction with all-UIP Learning

  • Nick Feng
  • Fahiem Bacchus

Almost all CDCL SAT solvers use the 1-UIP clause learning scheme for learning new clauses from conflicts, and our current understanding of SAT solving provides good reasons for using that scheme. In particular, the 1-UIP scheme yields asserting clauses, and these asserting clauses have minimum LBD among all possible asserting clauses. As a result of these advantages, other clause learning schemes, like i-UIP and all-UIP, that were proposed in early work are not used in modern solvers. In this paper, we propose a new technique for exploiting the all-UIP clause learning scheme. Our technique is to employ all-UIP learning under the constraint that the learnt clause’s LBD does not increase (over the minimum established by the 1-UIP clause). Our method can learn clauses that are significantly smaller than the 1-UIP clause while preserving the minimum LBD. Unlike previous clause minimization methods, our technique is not limited to learning a sub-clause of the 1-UIP clause. We show empirically that our method can improve the performance of state-of-the-art solvers.
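
For reference, the LBD (literal block distance) of a clause mentioned above is simply the number of distinct decision levels among its literals; a minimal sketch, assuming a hypothetical `level` map from variables to their decision levels:

```python
def lbd(clause, level):
    """Literal block distance: the number of distinct decision levels among the
    literals of `clause`.  `level[abs(lit)]` is assumed to be the decision level
    at which that literal's variable was assigned."""
    return len({level[abs(lit)] for lit in clause})
```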

SAT Conference 2020 Conference Paper

Trail Saving on Backtrack

  • Randy Hickey
  • Fahiem Bacchus

A CDCL SAT solver can backtrack a large distance when it learns a new clause, e.g., when the new learnt clause is a unit clause the solver has to backtrack to level zero. When the length of the backtrack is large, the solver can end up reproducing many of the same decisions and propagations when it redescends the search tree. Different techniques have been proposed to reduce this potential redundancy, e.g., partial/chronological backtracking and trail saving on restarts. In this paper we present a new trail saving technique that is not restricted to restarts, unlike prior trail saving methods. Our technique makes a copy of the part of the trail that is backtracked over. This saved copy can then be used to improve the efficiency of the solver’s subsequent redescent. Furthermore, the saved trail also provides the solver with the ability to look ahead along the previous trail, which can be exploited to improve its efficiency. Our new trail saving technique offers different tradeoffs in comparison with chronological backtracking and often yields superior performance. We also show that our technique is able to improve the performance of state-of-the-art solvers.
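
A minimal sketch of the saving step described above (simplified; the solver's actual data structures and the replay machinery used on redescent are not shown):

```python
def backtrack_with_saving(trail, level_of, target_level):
    """Backtrack to `target_level`, splitting the trail into the kept prefix and
    a saved copy of the (literal, reason) pairs that were undone.  The saved
    segment can later be consulted to cheaply reproduce propagations, or to look
    ahead along the previous trail, when the solver redescends."""
    keep = [(lit, reason) for lit, reason in trail if level_of(lit) <= target_level]
    saved = [(lit, reason) for lit, reason in trail if level_of(lit) > target_level]
    return keep, saved
```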

SAT Conference 2019 Conference Paper

Speeding Up Assumption-Based SAT

  • Randy Hickey
  • Fahiem Bacchus

Assumption-based SAT solving is an essential tool in many applications of SAT solving, especially in incremental SAT solving. For example, assumption-based SAT solving is used when solving MaxSat, when computing minimal unsatisfiable subsets and minimal correction sets, and in various inductive verification applications. The MiniSat SAT solver introduced a simple technique for extending a SAT solver to allow it to handle assumptions by forcing the SAT solver to make the assumed literals its initial decisions. This approach persists in almost all current SAT solvers, making it the most commonly used technique for handling assumptions. In this paper we explain some deficiencies in this approach that can hinder its efficiency, and provide a very simple modification that fixes these deficiencies. We show that our modification makes a non-trivial difference in practice, e.g., allowing two tested state-of-the-art MaxSat solvers to solve 50+ new instances. This improvement is particularly useful since our modification is extremely simple to implement. We also examine the issue of repeated work when the solver backtracks over the assumptions, e.g., on restarts or when a new unit clause is learnt, and develop a new method for avoiding this repeated work that addresses some deficiencies of prior approaches.
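
The MiniSat-style mechanism the abstract refers to amounts to consuming the assumptions as the solver's first decisions; a hedged sketch of that decision routine (hypothetical helper names, with the handling of already-satisfied or conflicting assumptions elided):

```python
def pick_branch_literal(assumptions, value, decision_level, default_heuristic):
    """MiniSat-style assumption handling: while unassigned assumption literals
    remain, they are returned as the next decisions in order; only afterwards
    does the normal decision heuristic take over."""
    if decision_level < len(assumptions):
        lit = assumptions[decision_level]
        if value(lit) is None:
            return lit
        # If `lit` is already implied or falsified, a real solver either skips
        # the level or reports the conflicting assumption; omitted here.
    return default_heuristic()
```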

IJCAI Conference 2018 Conference Paper

Reduced Cost Fixing for Maximum Satisfiability

  • Fahiem Bacchus
  • Antti Hyttinen
  • Matti Järvisalo
  • Paul Saikko

Maximum satisfiability (MaxSAT) offers a competitive approach to solving NP-hard real-world optimization problems. While state-of-the-art MaxSAT solvers rely heavily on Boolean satisfiability (SAT) solvers, a recent trend, brought on by MaxSAT solvers implementing the so-called implicit hitting set (IHS) approach, is to integrate techniques from the realm of integer programming (IP) into the solving process. This allows for making use of additional IP solving techniques to further speed up MaxSAT solving. In this line of work, we investigate the integration of the technique of reduced cost fixing from the IP realm into IHS solvers, and empirically show that reduced cost fixing considerably speeds up a state-of-the-art MaxSAT solver implementing the IHS approach.

AAMAS Conference 2016 Conference Paper

Strategy-Proofness in the Stable Matching Problem with Couples

  • Andrew Perrault
  • Joanna Drummond
  • Fahiem Bacchus

Stable matching problems (SMPs) arising in real-world markets often have extra complementarities in the participants’ preferences. These complementarities break many of the theoretical properties of SMP and make it computationally hard to find a stable matching. A common complementarity is the introduction of couples in labor markets, which gives rise to the stable matching problem with couples (SMP-C). A major concern in markets is strategy-proofness, since markets that are easily manipulated often unravel. In this paper we provide some key insights into the issue of strategy-proofness in SMP-C. We provide theoretical results that relate the set of resident Pareto optimal stable matchings (RPopt) admitted by an SMP-C instance to the ability of the residents to manipulate. We show that a mechanism returning an RPopt matching is, in certain cases, strategy-proof against residents attempting to manipulate by truncating their preference lists. We provide an algorithm for finding an RPopt matching when one exists. And finally, we study empirically the frequency of multiple stable and multiple RPopt matchings as the market size grows, and under different proportions of couples in the market. Our empirical results indicate that SMP-C becomes less susceptible to manipulation as both the size of the market grows and the fraction of couples in the market shrinks.

LPAR Conference 2015 Conference Paper

Enhancing Search-Based QBF Solving by Dynamic Blocked Clause Elimination

  • Florian Lonsing
  • Fahiem Bacchus
  • Armin Biere
  • Uwe Egly
  • Martina Seidl

Among preprocessing techniques for quantified Boolean formula (QBF) solving, quantified blocked clause elimination (QBCE) has been found to be extremely effective. We investigate the power of dynamically applying QBCE in search-based QBF solving with clause and cube learning (QCDCL). This dynamic application of QBCE is in sharp contrast to its typical use as a mere preprocessing technique. In our dynamic approach, QBCE is applied eagerly to the formula interpreted under the assignments that have been enumerated in QCDCL. The tight integration of QBCE in QCDCL results in a variant of cube learning which is exponentially stronger than the traditional method. We implemented our approach in the QBF solver DepQBF and ran experiments on instances from the QBF Gallery 2014. On application benchmarks, QCDCL with dynamic QBCE substantially outperforms traditional QCDCL. Moreover, our approach is compatible with incremental solving and can be combined with preprocessing techniques other than QBCE.

IJCAI Conference 2015 Conference Paper

SAT Is an Effective and Complete Method for Solving Stable Matching Problems with Couples

  • Joanna Drummond
  • Andrew Perrault
  • Fahiem Bacchus

Stable matchings can be computed by deferred acceptance (DA) algorithms. However such algorithms become incomplete when complementarities exist among the agent preferences: they can fail to find a stable matching even when one exists. In this paper we examine stable matching problems arising from labour markets with couples (SMP-C). The classical problem of matching residents into hospital programs is an example. Couples introduce complementarities under which DA algorithms become incomplete. In fact, SMP-C is NP-complete. Inspired by advances in SAT and integer programming (IP) solvers, we investigate encoding SMP-C into SAT and IP and then using state-of-the-art SAT and IP solvers to solve it. We also implemented two previous DA algorithms. After comparing the performance of these different solution methods we find that encoding to SAT can be surprisingly effective, but that our encoding to IP does not scale as well. Using our SAT encoding we are able to determine that the DA algorithms fail on a non-trivial number of cases where a stable matching exists. The SAT and IP encodings also have the property that they can verify that no stable matching exists, something that the DA algorithms cannot do.

SAT Conference 2014 Conference Paper

Cores in Core Based MaxSat Algorithms: An Analysis

  • Fahiem Bacchus
  • Nina Narodytska

A number of maxsat algorithms are based on the idea of generating unsatisfiable cores. A common approach is to use these cores to construct cardinality (or pseudo-boolean) constraints that are then added to the formula. Each iteration extracts a core of the modified formula that now contains cardinality constraints. Hence, the cores generated are not just cores of the original formula, they are cores of more complicated formulas. The effectiveness of core based algorithms for maxsat is strongly affected by the structure of the cores of the original formula. Hence it is natural to ask the question: how are the cores found by these algorithms related to the cores of the original formula? In this paper we provide a formal characterization of this relationship. Our characterization allows us to identify a possible inefficiency in these algorithms. Hence, finding ways to address it may lead to performance improvements in these state-of-the-art maxsat algorithms.

AAAI Conference 2014 Conference Paper

Maximum Satisfiability Using Core-Guided MaxSAT Resolution

  • Nina Narodytska
  • Fahiem Bacchus

Core-guided approaches to solving MAXSAT have proved to be effective on industrial problems. These approaches solve a MAXSAT formula by building a sequence of SAT formulas, where in each formula a greater weight of soft clauses can be relaxed. The soft clauses are relaxed via the addition of blocking variables, and the total weight of soft clauses that can be relaxed is limited by placing constraints on the blocking variables. In this work we propose an alternative approach. Our approach also builds a sequence of new SAT formulas. However, these formulas are constructed using MAXSAT resolution, a sound rule of inference for MAXSAT. MAXSAT resolution can in the worst case cause a quadratic blowup in the formula, so we propose a new compressed version of MAXSAT resolution. Using compressed MAXSAT resolution our new core-guided solver improves the state-of-the-art, solving significantly more problems than other state-of-the-art solvers on the industrial benchmarks used in the 2013 MAXSAT Solver Evaluation.

AAAI Conference 2014 Conference Paper

Relaxation Search: A Simple Way of Managing Optional Clauses

  • Fahiem Bacchus
  • Jessica Davies
  • Maria Tsimpoukelli
  • George Katsirelos

A number of problems involve managing a set of optional clauses. For example, the soft clauses in a MAXSAT formula are optional—they can be falsified for a cost. Similarly, when computing a Minimum Correction Set for an unsatisfiable formula, all clauses are optional—some can be falsified in order to satisfy the remaining. In both of these cases the task is to find a subset of the optional clauses that achieves some criteria, and whose removal leaves a satisfiable formula. Relaxation search is a simple method of using a standard SAT solver to solve this task. Relaxation search is easy to implement, sometimes requiring only a simple modification of the variable selection heuristic in the SAT solver; it offers considerable flexibility and control over the order in which subsets of optional clauses are examined; and it automatically exploits clause learning to exchange information between the two phases of finding a suitable subset of optional clauses and checking if their removal yields satisfiability. We demonstrate how relaxation search can be used to solve MAXSAT and to compute Minimum Correction Sets. In both cases relaxation search is able to achieve state-of-the-art performance and solve some instances other solvers are not able to solve.
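
The "simple modification of the variable selection heuristic" can be sketched as follows (illustrative only; `relaxation_vars` stands for the blocking variables attached to the optional clauses, and the polarity convention is an assumption):

```python
def relaxation_search_decision(relaxation_vars, value, default_heuristic):
    """Variable selection for relaxation search: decide every relaxation
    (blocking) variable before any ordinary variable, preferring the polarity
    that keeps its optional clause enforced.  Once all relaxation variables are
    assigned, the remaining search simply tests satisfiability of the selected
    subset of optional clauses, with clause learning shared across both phases."""
    for b in relaxation_vars:
        if value(b) is None:
            return (b, False)  # False = "do not relax this optional clause"
    return default_heuristic()
```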

SAT Conference 2013 Conference Paper

Exploiting the Power of mip Solvers in maxsat

  • Jessica Davies
  • Fahiem Bacchus

maxsat is an optimization version of satisfiability. Since many practical problems involve optimization, there is a wide range of potential applications for effective maxsat solvers. In this paper we present an extensive empirical evaluation of a number of maxsat solvers. In addition to traditional maxsat solvers, we also evaluate the use of a state-of-the-art Mixed Integer Program (mip) solver, cplex, for solving maxsat. mip solvers are the most popular technology for solving optimization problems and are also theoretically more powerful than sat solvers. In fact, we show that cplex is quite effective on a range of maxsat instances. Based on these observations we extend a previously developed hybrid approach for solving maxsat, that utilizes both a sat solver and a mip solver. Our extensions aim to take better advantage of the power of the mip solver. The resulting improved hybrid solver is shown to be quite effective.

SAT Conference 2013 Conference Paper

Recovering and Utilizing Partial Duality in QBF

  • Alexandra Goultiaeva
  • Fahiem Bacchus

Quantified Boolean Formula (QBF) solvers that utilize non-CNF representations are able to reason dually about conflicts and solutions by accessing structural information contained in the non-CNF representation. This structure is not as easily accessed from a CNF representation, hence CNF based solvers are not able to perform the same kind of reasoning. Recent work has shown how this additional structure can be extracted from a non-CNF representation and encoded in a form that can be fed directly to a CNF-based QBF solver without requiring major changes to the solver’s architecture. This combines the benefits of specialized CNF-based techniques and dual reasoning. This approach, however, only works if one has access to a non-CNF representation of the problem, which is often not the case in practice. In this paper we address this problem and show how working only with the CNF encoding we can successfully extract partial structural information in a form that can be soundly given to a CNF-based solver. This yields performance benefits even though the information extracted is incomplete, and allows CNF-based solvers to obtain some of the benefits of dual reasoning in a more general context. To further increase the applicability of our approach we develop a new method for extracting structure from a CNF generated with the commonly used Plaisted-Greenbaum transformation.

AAAI Conference 2012 Conference Paper

MAXSAT Heuristics for Cost Optimal Planning

  • Lei Zhang
  • Fahiem Bacchus

The cost of an optimal delete relaxed plan, known as h+, is a powerful admissible heuristic but is in general intractable to compute. In this paper we examine the problem of computing h+ by encoding it as a MAXSAT problem. We develop a new encoding that utilizes constraint generation to support the computation of a sequence of increasing lower bounds on h+. We show a close connection between the computations performed by a recent approach to solving MAXSAT and a hitting set approach recently proposed for computing h+. Using this connection we observe that our MAXSAT computation can be initialized with a set of landmarks computed via cheaper methods like LM-cut. By judicious use of MAXSAT solving along with a technique of lazy heuristic evaluation we obtain speedups for finding optimal plans over LM-cut on a number of domains. Our approach enables the exploitation of continued progress in MAXSAT solving, and also makes it possible to consider computing or approximating heuristics that are even more informed than h+ by, for example, adding some information about deletes back into the encoding.

SAT Conference 2012 Conference Paper

Off the Trail: Re-examining the CDCL Algorithm

  • Alexandra Goultiaeva
  • Fahiem Bacchus

Most state-of-the-art SAT solvers for industrial problems are based on the Conflict Driven Clause Learning (CDCL) paradigm. Although this paradigm evolved from the systematic DPLL search algorithm, modern techniques of far backtracking and restarts make CDCL solvers non-systematic. CDCL solvers do not systematically examine all possible truth assignments as does DPLL. Local search solvers are also non-systematic and in this paper we show that CDCL can be reformulated as a local search algorithm: a local search algorithm that through clause learning is able to prove UNSAT. We show that the standard formulation of CDCL as a backtracking search algorithm and our new formulation of CDCL as a local search algorithm are equivalent, up to tie breaking. In the new formulation of CDCL as local search, the trail no longer plays a central role in the algorithm. Instead, the ordering of the literals on the trail is only a mechanism for efficiently controlling clause learning. This changes the paradigm and opens up avenues for further research and algorithm design. For example, in QBF the quantifier places restrictions on the ordering of variables on the trail. By making the trail less important, an extension of our local search algorithm to QBF may provide a way of reducing the impact of these variable ordering restrictions.

IJCAI Conference 2011 Conference Paper

A Uniform Approach for Generating Proofs and Strategies for Both True and False QBF Formulas

  • Alexandra Goultiaeva
  • Allen Van Gelder
  • Fahiem Bacchus

Many important problems can be compactly represented as quantified boolean formulas (QBF) and solved by general QBF solvers. To date, QBF solvers have mainly focused on determining whether the input QBF is true or false. However, additional important information about an application can be gathered from its QBF formulation. In this paper we demonstrate that a circuit-based QBF solver can be exploited to obtain a Q-Resolution proof of the truth or the falsity of a QBF. QBFs have a natural interpretation as a two person game and our main result is to show how, via a simple computation, the moves for the winning player can be computed directly from these proofs. This result shows that the proof is a representation of the winning strategy. In previous approaches the winning strategy has often been represented in a way that makes it hard to verify. In our approach the correctness of the strategy follows directly from the correctness of the proof, which is relatively easy to verify.

SAT Conference 2010 Conference Paper

Exploiting Circuit Representations in QBF Solving

  • Alexandra Goultiaeva
  • Fahiem Bacchus

Previous work has shown that circuit representations can be exploited in QBF solvers to obtain useful performance improvements. In this paper we examine some additional techniques for exploiting a circuit representation. We discuss the techniques of propagating a dual set of values through the circuit, conversion from simple negation normal form to a more optimized circuit representation, and adding phase memorization during search. We have implemented these techniques in a new QBF solver called CirQit2 and evaluated their impact experimentally. The solver has also displayed superior performance in the non-prenex non-CNF track of the QBFEval’10 competition.

AAAI Conference 2010 Conference Paper

Exploiting QBF Duality on a Circuit Representation

  • Alexandra Goultiaeva
  • Fahiem Bacchus

Search based solvers for Quantified Boolean Formulas (QBF) have adapted the SAT solver techniques of unit propagation and clause learning to prune falsifying assignments. The technique of cube learning has been developed to help them prune satisfying assignments. Cubes, however, have not been able to achieve the same degree of effectiveness as clauses. In this paper we demonstrate how a circuit representation for QBF can support the propagation of dual truth values. The dual values support the identical techniques of unit propagation and clause learning, except now it is satisfying assignments rather than falsifying assignments that are pruned. Dual value propagation thus exploits the circuit representation and the duality of QBF formulas so that the same effective SAT techniques can now be used to prune both falsifying and satisfying assignments. We show empirically that dual propagation yields significant performance improvements and advances the state of the art in QBF solving.

IJCAI Conference 2009 Conference Paper

  • Matthew Kitching
  • Fahiem Bacchus

Branch and bound is an effective technique for solving constraint optimization problems (COPs). However, its search space expands very rapidly as the domain sizes of the problem variables grow. In this paper, we present an algorithm that clusters the values of a variable’s domain into sets. Branch and bound can then branch on these sets of values rather than on individual values, thereby reducing the branching factor of its search space. The aim of our clustering algorithm is to construct a collection of sets such that branching on these sets will still allow effective bounding. In conjunction with the reduced branching factor, the size of the explored search space is thus significantly reduced. We test our method and show empirically that it can yield significant performance gains over existing state-of-the-art techniques.

IJCAI Conference 2009 Conference Paper

  • Matthew Kitching
  • Fahiem Bacchus

Decomposition is an effective technique for solving discrete Constraint Optimization Problems (COPs) with low tree-width. On problems with high tree-width, however, existing decomposition algorithms offer little advantage over branch and bound search (B&B). In this paper we propose a method for exploiting decomposition on problems with high tree-width. Our technique involves modifying B&B to detect and exploit decomposition on a selected subset of the problem’s objectives. Decompositions over this subset, generated during search, are exploited to compute tighter bounds allowing B&B to prune more of its search space. We present a heuristic for selecting an appropriate subset of objectives—one that readily decomposes during search and yet can still provide good bounds. We demonstrate empirically that our approach can significantly improve B&B’s performance and outperform standard decomposition algorithms on a variety of high tree-width problems.

SAT Conference 2009 Conference Paper

Beyond CNF: A Circuit-Based QBF Solver

  • Alexandra Goultiaeva
  • Vicki Iverson
  • Fahiem Bacchus

State-of-the-art solvers for Quantified Boolean Formulas (QBF) have employed many techniques from the field of Boolean Satisfiability (SAT) including the use of Conjunctive Normal Form (CNF) in representing the QBF formula. Although CNF has worked well for SAT solvers, recent work has pointed out some inherent problems with using CNF in QBF solvers. In this paper, we describe a QBF solver, called CirQit (Cir-Q-it) that utilizes a circuit representation rather than CNF. The solver can exploit its circuit representation to avoid many of the problems of CNF. For example, we show how this approach generalizes some previously proposed techniques for overcoming the disadvantages of CNF for QBF solvers. We also show how important techniques like clause and cube learning can be made to work with a circuit representation. Finally, we empirically compare the resulting solver against other state-of-the-art QBF solvers, demonstrating that our approach can often outperform these solvers.

IJCAI Conference 2007 Conference Paper

  • Matthew Kitching
  • Fahiem Bacchus

Caching, symmetries, and search with decomposition are powerful techniques for pruning the search space of constraint problems. In this paper we present an innovative way of efficiently combining these techniques with branch and bound for solving certain types of constraint optimization problems (COPs). Our new method significantly reduces the overhead of performing decomposition during search when dynamic variable orderings are employed. In addition, it supports the exploitation of dynamic symmetries that appear only during search. Symmetries have not previously been combined with decomposition. Finally, we achieve a superior integration of decomposition and caching with branch and bound than previous approaches. We test our methods on the Maximum Density Still Life problem and show that each of our ideas yields a significant gain in search performance.

IJCAI Conference 2007 Conference Paper

  • Jorge A. Baier
  • Fahiem Bacchus
  • Sheila McIlraith

In this paper we propose a suite of techniques for planning with temporally extended preferences (TEPs). To this end, we propose a method for compiling TEP planning problems into simpler domains containing only final-state (simple) preferences and metric functions. With this simplified problem in hand, we propose a variety of heuristic functions for planning with final-state preferences, together with an incremental best-first planning algorithm. A key feature of the planning algorithm is its ability to prune the search space. We identify conditions under which our planning algorithm will generate optimal plans. We implemented our algorithm as an extension to the TLPlan planning system and report on extensive testing performed to evaluate the effectiveness of our heuristics and algorithm. Our planner, HPlan-P, competed in the 5th International Planning Competition, achieving distinguished performance in the qualitative preferences track.

SAT Conference 2007 Conference Paper

Dynamically Partitioning for Solving QBF

  • Horst Samulowitz
  • Fahiem Bacchus

In this paper we present a new technique to solve Quantified Boolean Formulas (QBF). Our technique applies the idea of dynamic partitioning to QBF solvers. Dynamic partitioning has previously been utilized in #SAT solvers that count the number of models of a propositional formula. One of the main differences with the #SAT case comes from the solution learning techniques employed in search based QBF solvers. Extending solution learning to a partitioning solver involves some considerable complexities which we show how to resolve. We have implemented our ideas in a new QBF solver, and demonstrate that dynamic partitioning is able to increase the performance of search based solvers, sometimes significantly. Empirically our new solver offers performance that is superior to other search based solvers and in many cases superior to non-search based solvers.

AAAI Conference 2007 Conference Paper

Using Expectation Maximization to Find Likely Assignments for Solving CSP’s

  • Eric Hsu
  • Fahiem Bacchus

We present a new probabilistic framework for finding likely variable assignments in difficult constraint satisfaction problems. Finding such assignments is key to efficient search, but practical efforts have largely been limited to random guessing and heuristically designed weighting systems. In contrast, we derive a new version of Belief Propagation (BP) using the method of Expectation Maximization (EM). This allows us to differentiate between variables that are strongly biased toward particular values and those that are largely extraneous. Using EM also eliminates the threat of non-convergence associated with regular BP. Theoretically, the derivation exhibits appealing primal/dual semantics. Empirically, it produces an “EMBP”-based heuristic for solving constraint satisfaction problems, as illustrated with respect to the Quasigroup with Holes domain. EMBP outperforms existing techniques for guiding variable and value ordering during backtracking search on this problem.

SAT Conference 2006 Conference Paper

Binary Clause Reasoning in QBF

  • Horst Samulowitz
  • Fahiem Bacchus

Binary clause reasoning has found some successful applications in SAT, and it is natural to investigate its use in various extensions of SAT. In this paper we investigate the use of binary clause reasoning in the context of solving Quantified Boolean Formulas (QBF). We develop a DPLL based QBF solver that employs extended binary clause reasoning (hyper-binary resolution) to infer new binary clauses both before and during search. These binary clauses are used to discover additional forced literals, as well as to perform equality reduction. Both of these transformations simplify the theory by removing one of its variables. When applied during DPLL search this stronger inference can offer significant decreases in the size of the search tree, but it can also be costly to apply. We are able to show empirically that despite the extra costs, binary clause reasoning can improve our ability to solve QBF.

SAT Conference 2006 Invited Paper

CSPs: Adding Structure to SAT

  • Fahiem Bacchus

One way of viewing the difference between SAT and CSPs is to think of programming in assembler vs programming in C. It can be considerably simpler to program in C than assembler. Similarly it can be considerably simpler to model real world problems in CSP than in SAT. On the other hand C’s machine model is still rather close to the underlying hardware model accessed directly in assembler. Similarly, in CSPs the main method of reasoning, backtracking search, can be viewed as being an extension of DPLL, the main method of reasoning for SAT. Where the analogy breaks down is that unlike C and assembler, whose machine models are computationally equivalent, some CSP techniques offer a considerable boost in inferential power over the resolution inferences performed in DPLL. An interesting question is how to combine this additional inferential power with the more powerful forms of resolution performed in modern DPLL solvers. One approach for achieving such a combination will be presented.

IJCAI Conference 2005 Conference Paper

Propagating Logical Combinations of Constraints

  • Fahiem Bacchus
  • Toby Walsh

Many constraint toolkits provide logical connectives like disjunction, negation and implication. These permit complex constraint expressions to be built from primitive constraints. However, the propagation of such complex constraint expressions is typically limited. We therefore present a simple and lightweight method for propagating complex constraint expressions. We provide a precise characterization of when this method enforces generalized arc-consistency. In addition, we demonstrate that with our method many different global constraints can be easily implemented.

ICAPS Conference 2004 Conference Paper

Extending the Knowledge-Based Approach to Planning with Incomplete Information and Sensing

  • Ronald P. A. Petrick
  • Fahiem Bacchus

In (Petrick and Bacchus 2002), a "knowledge-level" approach to planning under incomplete knowledge and sensing was presented. In comparison with alternate approaches based on representing sets of possible worlds, this higher-level representation is richer, but the inferences it supports are weaker. Nevertheless, because of its richer representation, it is able to solve problems that cannot be solved by alternate approaches. In this paper we examine a collection of new techniques for increasing both the representational and inferential power of the knowledge-level approach. These techniques have been fully implemented in the PKS (Planning with Knowledge and Sensing) planning system. Taken together they allow us to solve a range of new types of planning problems under incomplete knowledge and sensing.

FOCS Conference 2003 Conference Paper

Algorithms and Complexity Results for #SAT and Bayesian Inference

  • Fahiem Bacchus
  • Shannon Dalmao
  • Toniann Pitassi

Bayesian inference is an important problem with numerous applications in probabilistic reasoning. Counting satisfying assignments is a closely related problem of fundamental theoretical importance. In this paper, we show that plain old DPLL equipped with memoization (an algorithm we call #DPLLCache) can solve both of these problems with time complexity that is at least as good as state-of-the-art exact algorithms, and that it can also achieve the best known time-space tradeoff. We then proceed to show that there are instances where #DPLLCache can achieve an exponential speedup over existing algorithms.
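
A toy version of DPLL-based counting with caching, to illustrate the idea (this memoizes whole residual formulas rather than performing the component decomposition and cache management the paper analyzes):

```python
def count_models(clauses, variables, cache=None):
    """Count satisfying assignments of a CNF by DPLL-style splitting, caching
    counts of residual formulas.  `clauses` is a frozenset of frozensets of
    integer literals; `variables` is a frozenset of the still-unassigned vars."""
    if cache is None:
        cache = {}
    if frozenset() in clauses:
        return 0                    # an empty clause: no models down this branch
    if not clauses:
        return 2 ** len(variables)  # all remaining variables are unconstrained
    key = (clauses, variables)
    if key in cache:
        return cache[key]
    var = next(iter(variables))
    total = 0
    for lit in (var, -var):
        # Drop clauses satisfied by `lit` and remove the falsified literal -lit.
        reduced = frozenset(
            frozenset(l for l in c if l != -lit)
            for c in clauses if lit not in c
        )
        total += count_models(reduced, variables - {var}, cache)
    cache[key] = total
    return total

# e.g. count_models(frozenset({frozenset({1, 2})}), frozenset({1, 2})) == 3
```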

ICAPS Conference 2003 Conference Paper

Conformant Probabilistic Planning via CSPs

  • Nathanael Hyafil
  • Fahiem Bacchus

We present a new algorithm for the conformant probabilistic planning problem. This is a planning problem in which we have probabilistic actions and we want to optimize the probability of achieving the goal, but we have no observations available to us during the course of the plan’s execution. Our algorithm is based on a CSP encoding of the problem, and a new more efficient caching scheme. The result is a gain in performance of several orders of magnitude over previous AI planners that have addressed the same problem. We also compare our algorithm to algorithms for decision theoretic planning. There our algorithm is faster on small problems but does not scale as well. We identify the reasons for this, and show that the two types of algorithms are able to take advantage of distinct types of problem structure. Finding an algorithm that can leverage both types of structure simultaneously is posed as an interesting open problem.

SAT Conference 2003 Conference Paper

Effective Preprocessing with Hyper-Resolution and Equality Reduction

  • Fahiem Bacchus
  • Jonathan Winter

HypBinRes, a particular form of hyper-resolution, was first employed in the SAT solver 2cls+eq. In 2cls+eq, HypBinRes and equality reduction are used at every node of a DPLL search tree, pruning much of the search tree. This allowed 2cls+eq to display the best all-around performance in the 2002 SAT solver competition. In particular, it was the only solver to qualify for the second round of the competition in all three benchmark categories. In this paper we investigate the use of HypBinRes and equality reduction in a preprocessor that can be used to simplify a CNF formula prior to SAT solving. We present empirical evidence demonstrating that such a preprocessor can be extremely effective on large structured problems, making some previously unsolvable problems solvable. The preprocessor is also able to solve a number of non-trivial instances by itself. Since the preprocessor does not have to worry about undoing changes on backtrack, nor about keeping track of reasons for intelligent backtracking, we are able to develop a new algorithm for applying HypBinRes that can be orders of magnitude more efficient than the algorithm employed inside of 2cls+eq. The net result is a technique that improves our ability to solve hard SAT problems.
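
For reference, the HypBinRes rule used by the preprocessor can be stated as a single inference (standard formulation: an n-ary clause plus binary clauses resolving away all but one of its literals against a common literal h yield a new binary clause; when h equals the remaining literal the result is a forced unit):

```latex
\[
\frac{(l_1 \lor l_2 \lor \cdots \lor l_k \lor l) \qquad
      (\lnot l_1 \lor h) \quad (\lnot l_2 \lor h) \quad \cdots \quad (\lnot l_k \lor h)}
     {(h \lor l)}
\]
```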

ICAPS Conference 2002 Conference Paper

A Knowledge-Based Approach to Planning with Incomplete Information and Sensing

  • Ronald P. A. Petrick
  • Fahiem Bacchus

In this paper we present a new approach to the problem of planning with incomplete information and sensing. Our approach is based on a higher level, "knowledge-based", representation of the planner’s knowledge and of the domain actions. In particular, in our approach we use a set of formulas from a first-order modal logic of knowledge to represent the planner’s incomplete knowledge state. Actions are then represented as updates to this collection of formulas. Hence, actions are being modelled in terms of how they modify the knowledge state of the planner rather than in terms of how they modify the physical world. We have constructed a planner to utilize this representation and we use it to show that on many common problems this more abstract representation is perfectly adequate for solving the planning problem, and that in fact it scales better and supports features that make it applicable to much richer domains and problems.

AAAI Conference 2002 Conference Paper

Enhancing Davis Putnam with Extended Binary Clause Reasoning

  • Fahiem Bacchus

The backtracking based Davis Putnam (DPLL) procedure remains the dominant method for deciding the satisfiability of a CNF formula. In recent years there has been much work on improving the basic procedure by adding features like improved heuristics and data structures, intelligent backtracking, clause learning, etc. Reasoning with binary clauses in DPLL has been a much discussed possibility for achieving improved performance, but to date solvers based on this idea have not been competitive with the best unit propagation based DPLL solvers. In this paper we experiment with a DPLL solver called 2CLS+EQ that makes more extensive use of binary clause reasoning than has been tried before. The results are very encouraging—2CLS+EQ is competitive with the very best DPLL solvers. The techniques it uses also open up a number of other possibilities for increasing our ability to solve SAT problems.

UAI Conference 2001 Conference Paper

UCP-Networks: A Directed Graphical Representation of Conditional Utilities

  • Craig Boutilier
  • Fahiem Bacchus
  • Ronen I. Brafman

We propose a new directed graphical representation of utility functions, called UCP-networks, that combines aspects of two existing graphical models: generalized additive models and CP-networks. The network decomposes a utility function into a number of additive factors, with the directionality of the arcs reflecting conditional dependence of preference statements, in the underlying (qualitative) preference ordering, under a ceteris paribus (all else being equal) interpretation. This representation is arguably natural in many settings. Furthermore, the strong CP-semantics ensures that computation of optimization and dominance queries is very efficient. We also demonstrate the value of this representation in decision making. Finally, we describe an interactive elicitation procedure that takes advantage of the linear nature of the constraints on “tradeoff weights” imposed by a UCP-network. This procedure allows the network to be refined until the regret of the decision with minimax regret (with respect to the incompletely specified utility function) falls below a specified threshold (e.g., the cost of further questioning).

ICAPS Conference 1998 Conference Paper

Making Forward Chaining Relevant

  • Fahiem Bacchus
  • Yee Whye Teh

Planning by forward chaining through the world space has long been dismissed as being “obviously” infeasible. Nevertheless, this approach to planning has many advantages. Most importantly, forward chaining planners maintain complete descriptions of the intermediate states that arise during the course of the plan’s execution. These states can be utilized to provide highly effective search control. Another advantage is that such planners can support richer planning representations that can model, e.g., resources and resource consumption. Forward chaining planners are still plagued, however, by their traditional weaknesses: a lack of goal direction, and the fact that they search totally ordered action sequences. In this paper we address the issue of goal direction. We present two algorithms that provide a forward chaining planner with more information about the goal, and allow it to avoid certain types of irrelevant state information and actions.

AAAI Conference 1998 Conference Paper

On the Conversion between Non-Binary and Binary Constraint Satisfaction Problems

  • Fahiem Bacchus

It is well known that any non-binary discrete constraint satisfaction problem (CSP) can be translated into an equivalent binary CSP. Two translations are known: the dual graph translation and the hidden variable translation. However, there has been little theoretical or experimental work on how well backtracking algorithms perform on these binary representations in comparison to their performance on the corresponding non-binary CSP. We present both theoretical and empirical results to help understand the tradeoffs involved. In particular, we show that translating a non-binary CSP into a binary representation can be a viable solution technique in certain circumstances. The ultimate aim of this research is to give guidance for when one should consider translating between non-binary and binary representations. Our results supply some initial answers to this question.
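
Of the two translations, the hidden variable encoding is easy to state as code; a minimal sketch (the data representation used here is chosen for illustration, not taken from the paper):

```python
def hidden_variable_encoding(constraints):
    """Translate a non-binary CSP into a binary one via the hidden variable
    encoding.  `constraints` maps a constraint name to (scope, allowed_tuples),
    where `scope` is a tuple of original variable names and `allowed_tuples` is
    the set of value tuples the constraint permits.

    Returns (hidden_domains, binary_constraints): each new hidden variable's
    domain is its constraint's set of allowed tuples, and each binary constraint
    (h, x, pos) requires the tuple assigned to h to agree with the value
    assigned to x at position pos of the scope."""
    hidden_domains = {}
    binary_constraints = []
    for name, (scope, allowed_tuples) in constraints.items():
        h = f"hidden_{name}"
        hidden_domains[h] = set(allowed_tuples)
        for pos, x in enumerate(scope):
            binary_constraints.append((h, x, pos))
    return hidden_domains, binary_constraints
```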

AAAI Conference 1997 Conference Paper

Structured Solution Methods for Non-Markovian Decision Processes

  • Fahiem Bacchus

Markov Decision Processes (MDPs), currently a popular method for modeling and solving decision theoretic planning problems, are limited by the Markovian assumption: rewards and dynamics depend on the current state only, and not on previous history. Non-Markovian decision processes (NMDPs) can also be defined, but then the more tractable solution techniques developed for MDPs cannot be directly applied. In this paper, we show how an NMDP, in which temporal logic is used to specify history dependence, can be automatically converted into an equivalent MDP by adding appropriate temporal variables. The resulting MDP can be represented in a structured fashion and solved using structured policy construction methods. In many cases, this offers significant computational advantages over previous proposals for solving NMDPs.

AAAI Conference 1996 Conference Paper

Planning for Temporally Extended Goals

  • Fahiem Bacchus

In planning, goals have traditionally been viewed as specifying a set of desirable final states. Any plan that transforms the current state to one of these desirable states is viewed to be correct. Goals of this form are limited as they do not allow us to constrain the manner in which the plan achieves its objectives. We propose viewing goals as specifying desirable sequences of states, and a plan to be correct if its execution yields one of these desirable sequences. We present a logical language, a temporal logic, for specifying goals with this semantics. Our language is rich and allows the representation of a range of temporally extended goals, including classical goals, goals with temporal deadlines, quantified goals (with both universal and existential quantification), safety goals, and maintenance goals. Our formalism is simple and yet extends previous approaches in this area. We also present a planning algorithm that can generate correct plans for these goals. This algorithm has been implemented, and we provide some examples of the formalism at work. The end result is a planning system which can generate plans that satisfy a novel and useful set of conditions.

AAAI Conference 1996 Conference Paper

Rewarding Behaviors

  • Fahiem Bacchus

Markov decision processes (MDPs) are a very popular tool for decision theoretic planning (DTP), partly because of the well-developed, expressive theory that includes effective solution techniques. But the Markov assumption, that dynamics and rewards depend on the current state only and not on history, is often inappropriate. This is especially true of rewards: we frequently wish to associate rewards with behaviors that extend over time. Of course, such reward processes can be encoded in an MDP should we have a rich enough state space (where states encode enough history). However it is often difficult to “hand craft” suitable state spaces that encode an appropriate amount of history. We consider this problem in the case where non-Markovian rewards are encoded by assigning values to formulas of a temporal logic. These formulas characterize the value of temporally extended behaviors. We argue that this allows a natural representation of many commonly encountered non-Markovian rewards. The main result is an algorithm which, given a decision process with non-Markovian rewards expressed in this manner, automatically constructs an equivalent MDP (with Markovian reward structure), allowing optimal policy construction using standard techniques.

TIME Conference 1996 Invited Paper

Using Temporal Logics for Planning and Control

  • Fahiem Bacchus

Traditionally, planning work in artificial intelligence has focused on primitive actions and instantaneous states. Innovative work has been done on composing primitive actions so as to bring about desired final states. The thesis advanced in this paper is that instead of simply focusing on primitive actions, it is also useful to use representation and reasoning tools whose primitive objects are sequences of actions (or the associated sequences of states that they generate). Temporal logics are a useful tool for representing and reasoning about action sequences. The author has examined some different applications of temporal logics to problems of planning and control.

UAI Conference 1995 Conference Paper

Graphical models for preference and utility

  • Fahiem Bacchus
  • Adam J. Grove

Probabilistic independence can dramatically simplify the task of eliciting, representing, and computing with probabilities in large domains. A key technique in achieving these benefits is the idea of graphical modeling. We survey existing notions of independence for utility functions in a multi-attribute space, and suggest that these can be used to achieve similar advantages. Our new results concern conditional additive independence, which we show always has a perfect representation as separation in an undirected graph (a Markov network). Conditional additive independencies entail a particular functional for the utility function that is analogous to a product decomposition of a probability function, and confers analogous benefits. This functional form has been utilized in the Bayesian network and influence diagram literature, but generally without an explanation in terms of independence. The functional form yields a decomposition of the utility function that can greatly speed up expected utility calculations, particularly when the utility graph has a similar topology to the probabilistic network being used.

AAAI Conference 1994 Conference Paper

Forming Beliefs about a Changing World

  • Fahiem Bacchus
  • Joseph Y. Halpern

The situation calculus is a popular technique for reasoning about action and change. However, its restriction to a first-order syntax and pure deductive reasoning makes it unsuitable in many contexts. In particular, we often face uncertainty, due either to lack of knowledge or to some probabilistic aspects of the world. While attempts have been made to address aspects of this problem, most notably using nonmonotonic reasoning formalisms, the general problem of uncertainty in reasoning about action has not been fully dealt with in a logical framework. In this paper we present a theory of action that extends the situation calculus to deal with uncertainty. Our framework is based on applying the random-worlds approach of [BGHK94] to a situation calculus ontology, enriched to allow the expression of probabilistic action effects. Our approach is able to solve many of the problems imposed by incomplete and probabilistic knowledge within a unified framework. In particular, we obtain a default Markov property for chains of actions, a derivation of conditional independence from irrelevance, and a simple solution to the frame problem.

UAI Conference 1994 Conference Paper

Generating New Beliefs from Old

  • Fahiem Bacchus
  • Adam J. Grove
  • Joseph Y. Halpern
  • Daphne Koller

In previous work [BGHK92, BGHK93], we have studied the random-worlds approach, a particular (and quite powerful) method for generating degrees of belief (i.e., subjective probabilities) from a knowledge base consisting of objective (first-order, statistical, and default) information. But allowing a knowledge base to contain only objective information is sometimes limiting. We occasionally wish to include information about degrees of belief in the knowledge base as well, because there are contexts in which old beliefs represent important information that should influence new beliefs. In this paper, we describe three quite general techniques for extending a method that generates degrees of belief from objective information to one that can make use of degrees of belief as well. All of our techniques are based on well-known approaches, such as cross-entropy. We discuss general connections between the techniques and in particular show that, although conceptually and technically quite different, all of the techniques give the same answer when applied to the random-worlds method.

UAI Conference 1994 Conference Paper

Using New Data to Refine a Bayesian Network

  • Wai Lam
  • Fahiem Bacchus

We explore the issue of refining an existent Bayesian network structure using new data which might mention only a subset of the variables. Most previous works have only considered the refinement of the network's conditional probability parameters, and have not addressed the issue of refining the network's structure. We develop a new approach for refining the network's structure. Our approach is based on the Minimal Description Length (MDL) principle, and it employs an adapted version of a Bayesian network learning algorithm developed in our previous work. One of the adaptations required is to modify the previous algorithm to account for the structure of the existent network. The learning algorithm generates a partial network structure which can then be used to improve the existent network. We also present experimental evidence demonstrating the effectiveness of our approach.

IJCAI Conference 1993 Conference Paper

Statistical Foundations for Default Reasoning

  • Fahiem Bacchus
  • Adam J. Grove
  • Joseph Y. Halpern
  • Daphne Koller

We describe a new approach to default reasoning, based on a principle of indifference among possible worlds. We interpret default rules as extreme statistical statements, thus obtaining a knowledge base KB comprised of statistical and first-order statements. We then assign equal probability to all worlds consistent with KB in order to assign a degree of belief to a statement φ. The degree of belief can be used to decide whether to defeasibly conclude φ. Various natural patterns of reasoning, such as a preference for more specific defaults, indifference to irrelevant information, and the ability to combine independent pieces of evidence, turn out to follow naturally from this technique. Furthermore, our approach is not restricted to default reasoning; it supports a spectrum of reasoning, from quantitative to qualitative. It is also related to other systems for default reasoning. In particular, we show that the work of [Goldszmidt et al., 1990], which applies maximum entropy ideas to ε-semantics, can be embedded in our framework.
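
In its finite-domain form, the degree of belief produced by this indifference construction is simply a ratio of world counts (shown here for orientation; the random-worlds method additionally takes limits as the domain size grows):

```latex
\[
\Pr(\varphi \mid KB) \;=\;
\frac{\#\{\, w : w \models KB \land \varphi \,\}}
     {\#\{\, w : w \models KB \,\}}
\]
```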

UAI Conference 1993 Conference Paper

Using Causal Information and Local Measures to Learn Bayesian Networks

  • Wai Lam
  • Fahiem Bacchus

In previous work we developed a method of learning Bayesian Network models from raw data. This method relies on the well-known minimal description length (MDL) principle. The MDL principle is particularly well suited to this task as it allows us to trade off, in a principled way, the accuracy of the learned network against its practical usefulness. In this paper we present some new results that have arisen from our work. In particular, we present a new local way of computing the description length. This allows us to make significant improvements in our search algorithm. In addition, we modify our algorithm so that it can take into account partial domain information that might be provided by a domain expert. The local computation of description length also opens the door for local refinement of an existent network. The feasibility of our approach is demonstrated by experiments involving networks of a practical size.
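The point about local computation can be made concrete with a small sketch: when the description length decomposes into one term per family (a node and its parents), the total score is a sum of independent terms, so re-choosing the parents of one node only requires re-evaluating that node's term. The helper local_dl below is assumed to return a single-family description length (for example, along the lines of the mdl_score sketch above); this is illustrative and is not the paper's actual search algorithm.

    def refine_node(var, candidate_parent_sets, data, arities, local_dl):
        """Choose the parent set for `var` with the smallest local description
        length. The rest of the network's score is a sum of the other
        families' terms and is unchanged, so it never needs recomputing."""
        return min(candidate_parent_sets,
                   key=lambda parents: local_dl(var, parents, data, arities))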

UAI Conference 1993 Conference Paper

Using First-Order Probability Logic for the Construction of Bayesian Networks

  • Fahiem Bacchus

We present a mechanism for constructing graphical models, specifically Bayesian networks, from a knowledge base of general probabilistic information. The unique feature of our approach is that it uses a powerful first-order probabilistic logic for expressing the general knowledge base. This logic allows for the representation of a wide range of logical and probabilistic information. The model construction procedure we propose uses notions from direct inference to identify pieces of local statistical information from the knowledge base that are most appropriate to the particular event we want to reason about. These pieces are composed to generate a joint probability distribution specified as a Bayesian network. Although there are fundamental difficulties in dealing with fully general knowledge, our procedure is practical for quite rich knowledge bases and it supports the construction of a far wider range of networks than allowed for by current template technology.

AAAI Conference 1992 Conference Paper

From Statistics to Beliefs

  • Fahiem Bacchus
  • Daphne Koller

An intelligent agent uses known facts, including statistical knowledge, to assign degrees of belief to assertions it is uncertain about. We investigate three principled techniques for doing this. All three are applications of the principle of indifference, because they assign equal degree of belief to all basic "situations" consistent with the knowledge base. They differ because there are competing intuitions about what the basic situations are. Various natural patterns of reasoning, such as the preference for the most specific statistical data available, turn out to follow from some or all of the techniques. This is an improvement over earlier theories, such as work on direct inference and reference classes, which arbitrarily postulate these patterns without offering any deeper explanations or guarantees of consistency. The three methods we investigate have surprising characterisations: there are connections to the principle of maximum entropy, a principle of maximal independence, and a "center of mass" principle. There are also unexpected connections between the three that help us understand why the specific language chosen (for the knowledge base) is much more critical in inductive reasoning of the sort we consider than it is in traditional deductive reasoning.
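For reference, the maximum-entropy characterization mentioned above is the standard principle of selecting, among all distributions consistent with the knowledge-base constraints C, the one with the greatest entropy; the formula below states only this generic principle, not the paper's specific correspondence result:

    p^{*} \;=\; \arg\max_{p \in C} \Bigl( -\sum_{w} p(w)\,\log p(w) \Bigr).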

AAAI Conference 1992 Conference Paper

The Expected Value of Hierarchical Problem-Solving

  • Fahiem Bacchus

In the best case, using an abstraction hierarchy in problem-solving can yield an exponential speed-up in search efficiency. Such a speed-up is predicted by various analytical models developed in the literature, and efficiency gains of this order have been confirmed empirically. However, these models assume that the Downward Refinement Property (DRP) holds. When this property holds, backtracking never need occur across abstraction levels. When it fails, search may have to consider many different abstract solutions before finding one that can be refined to a concrete solution. In this paper we provide an analysis of the expected search complexity without assuming the DRP. We find that our model predicts a phase boundary where abstraction provides no benefit: if the probability that an abstract solution can be refined is very low or very high, search with abstraction yields a significant speed-up. However, in the phase boundary area, where the probability takes on an intermediate value, search efficiency is not necessarily improved. The phenomenon of a phase boundary where search is hardest agrees with recent empirical studies of Cheeseman et al. [CKT91].

IJCAI Conference 1991 Conference Paper

The Downward Refinement Property

  • Fahiem Bacchus
  • Qiang Yang

Using abstraction in planning does not guarantee an improvement in search efficiency; it is possible for an abstract planner to display worse performance than one that does not use abstraction. Analysis and experiments have shown that good abstraction hierarchies have, or are close to having, the downward refinement property, whereby, given that a concrete-level solution exists, every abstract solution can be refined to a concrete-level solution without backtracking across abstract levels. Working within a semantics for ABSTRIPS-style abstraction, we provide a characterization of the downward refinement property. After discussing its effect on search efficiency, we develop a semantic condition sufficient for guaranteeing its presence in an abstraction hierarchy. Using the semantic condition, we then provide a set of sufficient and polynomial-time checkable syntactic conditions that can be used for checking a hierarchy for the downward refinement property.

UAI Conference 1989 Conference Paper

Lp: A Logic for Statistical Information

  • Fahiem Bacchus

This extended abstract presents a logic, called Lp, that is capable of representing and reasoning with a wide variety of both qualitative and quantitative statistical information. The advantage of this logical formalism is that it offers a declarative representation of statistical knowledge; knowledge represented in this manner can be used for a variety of reasoning tasks. The logic differs from previous work in probability logics in that it uses a probability distribution over the domain of discourse, whereas most previous work (e.g., Nilsson [2], Scott et al. [3], Gaifman [4], Fagin et al. [5]) has investigated the attachment of probabilities to the sentences of the logic (also, see Halpern [6] and Bacchus [7] for further discussion of the differences). The logic Lp possesses some further important features. First, Lp is a superset of first order logic, hence it can represent ordinary logical assertions. This means that Lp provides a mechanism for integrating statistical information and reasoning about uncertainty into systems based solely on logic. Second, Lp possesses transparent semantics, based on sets and probabilities of those sets. Hence, knowledge represented in Lp can be understood in terms of the simple primitive concepts of sets and probabilities. And finally, there is a sound proof theory that has wide coverage (the proof theory is complete for certain classes of models). The proof theory captures a sufficient range of valid inferences to subsume most previous probabilistic uncertainty reasoning systems. For example, linear constraints like those generated by Nilsson's probabilistic entailment [2] can be generated by the proof theory, and the Bayesian inference underlying belief nets [8] can be performed. In addition, the proof theory integrates quantitative and qualitative reasoning as well as statistical and logical reasoning. In the next section we briefly examine previous work in probability logics, comparing it to Lp. Then we present some of the varieties of statistical information that Lp is capable of expressing. After this we present, briefly, the syntax, semantics, and proof theory of the logic. We conclude with a few examples of knowledge representation and reasoning in Lp, pointing out the advantages of the declarative representation offered by Lp. We close with a brief discussion of probabilities as degrees of belief, indicating how such probabilities can be generated from statistical knowledge encoded in Lp. The reader who is interested in a more complete treatment should consult Bacchus [7].
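As a flavour of the kind of statement Lp is designed to express, a conditional proportion over the domain can be written with a proportion term alongside ordinary first-order assertions; the notation below only approximates the syntax used in Bacchus's work and is intended purely as an illustration:

    [\,\mathit{Fly}(x) \mid \mathit{Bird}(x)\,]_{x} \;\ge\; 0.9    ("at least 90% of birds fly")

    \forall x \,(\mathit{Penguin}(x) \rightarrow \neg \mathit{Fly}(x))    (an ordinary first-order assertion in the same knowledge base)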