Arrow Research search

Author name cluster

Barry O'Sullivan

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

46 papers
2 author rows

Possible papers

46

AAAI Conference 2026 Conference Paper

Reasoning Transfer for an Extremely Low-Resource and Endangered Language: Bridging Languages Through Sample-Efficient Language Understanding

  • Khanh-Tung Tran
  • Barry O'Sullivan
  • Hoang D. Nguyen

Recent advances have enabled Large Language Models (LLMs) to tackle reasoning tasks by generating chain-of-thought (CoT) rationales, yet these gains have largely applied to high-resource languages, leaving low-resource languages underperforming. In this work, we first investigate CoT techniques in extremely low-resource scenarios through existing prompting, model editing, and fine-tuning approaches. We introduce English-Pivoted CoT Training, leveraging the insight that LLMs internally operate in a latent space aligned toward the dominant language. Given input in a low-resource language, we perform supervised fine-tuning to generate the CoT in English and output the final response in the target language. Across mathematical reasoning benchmarks, our approach outperforms other baselines, with up to a 28.33% improvement in low-resource scenarios. Our analyses and additional experiments, including Mixed-Language CoT and Two-Stage Training, show that explicitly separating language understanding from reasoning enhances crosslingual reasoning abilities. To facilitate future work, we also release LC2024, the first mathematical reasoning benchmark in Irish, an extremely low-resource and endangered language. Our results and resources highlight a practical pathway to multilingual reasoning without extensive retraining in every extremely low-resource language, despite data scarcity.
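The abstract's training recipe pairs a question in the low-resource language with an English rationale followed by a target-language answer. A minimal sketch of what one supervised fine-tuning example might look like under that scheme; the function name, field names, and `<think>` delimiters are assumptions for illustration, not the paper's actual data format:

```python
# Hypothetical sketch of an English-Pivoted CoT fine-tuning example:
# the prompt is in the low-resource language (Irish here), the rationale
# is in English, and the final answer is in the target language.

def build_training_example(question_ga: str, cot_en: str, answer_ga: str) -> dict:
    """Pair an Irish question with an English chain-of-thought followed
    by an Irish final answer, for supervised fine-tuning."""
    target = f"<think>{cot_en}</think>\n{answer_ga}"
    return {"prompt": question_ga, "target": target}

example = build_training_example(
    question_ga="Cad é 12 + 30?",        # Irish question
    cot_en="12 plus 30 equals 42.",      # English chain-of-thought
    answer_ga="Is é 42 an freagra.",     # Irish final answer
)
```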

ECAI Conference 2024 Conference Paper

UCCIX: Irish-eXcellence Large Language Model

  • Khanh-Tung Tran
  • Barry O'Sullivan
  • Hoang D. Nguyen

The development of Large Language Models (LLMs) has predominantly focused on high-resource languages, leaving extremely low-resource languages like Irish with limited representation. This work presents UCCIX, a pioneering effort on the development of an open-source Irish-based LLM. We propose a novel framework for continued pre-training of LLMs specifically adapted for extremely low-resource languages, requiring only a fraction of the textual data typically needed for training LLMs according to scaling laws. Our model, based on Llama 2-13B, outperforms much larger models on Irish language tasks with up to 12% performance improvement, showcasing the effectiveness and efficiency of our approach. We also contribute comprehensive Irish benchmarking datasets, including IrishQA, a question-answering dataset, and an Irish version of MT-bench. These datasets enable rigorous evaluation and facilitate future research in Irish LLM systems. Our work aims to preserve and promote the Irish language, knowledge, and culture of Ireland in the digital era while providing a framework for adapting LLMs to other indigenous languages.

IJCAI Conference 2023 Conference Paper

Assessing and Enforcing Fairness in the AI Lifecycle

  • Roberta Calegari
  • Gabriel G. Castañé
  • Michela Milano
  • Barry O'Sullivan

A significant challenge in detecting and mitigating bias is creating a mindset amongst AI developers to address unfairness. The current literature on fairness is broad, and the learning curve for deciding where to use existing metrics and techniques for bias detection or mitigation is steep. This survey systematises the state of the art on distinct notions of fairness and the related techniques for bias mitigation according to the AI lifecycle. Gaps and challenges identified during the development of this work are also discussed.

ECAI Conference 2023 Conference Paper

Partial Compilation of SAT Using Selective Backbones

  • Andrea Balogh
  • Guillaume Escamocher
  • Barry O'Sullivan

Our goal in this paper is to significantly decrease the compiled size of a given Boolean instance with a large representation, while preserving as much information about the instance as possible. We achieve this by assigning values to a subset of the variables in such a way that the resulting instance has a much smaller representation than the original one, and its number of solutions is almost as high as the starting one. We call the set of variable instantiations that we make the selective backbone of the solutions that we keep. Large selective backbones allow for smaller representations, but also eliminate more solutions. We compare different methods of computing the selective backbone that offer the best compromise.
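The abstract describes a selective backbone as a set of variable assignments that shrinks the representation while eliminating as few solutions as possible. A toy brute-force sketch of that trade-off (our construction for illustration, not the paper's method of computing selective backbones): greedily fix the assignment that keeps the most solutions, which only scales to tiny instances.

```python
from itertools import product

def solutions(cnf, n, fixed):
    """Enumerate models of a CNF over n variables, restricted to
    assignments consistent with the partially fixed variables."""
    sols = []
    for bits in product([False, True], repeat=n):
        if any(bits[v] != val for v, val in fixed.items()):
            continue
        if all(any(bits[abs(l) - 1] == (l > 0) for l in clause) for clause in cnf):
            sols.append(bits)
    return sols

def greedy_selective_backbone(cnf, n, size):
    """Fix `size` variables, each time choosing the assignment that
    preserves the largest number of remaining solutions."""
    fixed = {}
    for _ in range(size):
        best = None
        for v in range(n):
            if v in fixed:
                continue
            for val in (False, True):
                kept = len(solutions(cnf, n, {**fixed, v: val}))
                if best is None or kept > best[0]:
                    best = (kept, v, val)
        _, v, val = best
        fixed[v] = val
    return fixed

# (x1 or x2) and (not x1 or x3) and (x2 or not x3): 3 solutions.
cnf = [[1, 2], [-1, 3], [2, -3]]
backbone = greedy_selective_backbone(cnf, n=3, size=1)
```

On this instance, fixing x2 = True keeps all three solutions while halving the search space, illustrating a large selective backbone with no solution loss.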

SoCS Conference 2023 Conference Paper

SAT Feature Analysis for Machine Learning Classification Tasks

  • Marco Dalla
  • Benjamin Provan-Bessell
  • Andrea Visentin
  • Barry O'Sullivan

The extraction of meaningful features from CNF instances is crucial to applying machine learning to SAT solving, enabling algorithm selection and configuration for solver portfolios and satisfiability classification. While many approaches have been proposed for feature extraction, their relevance to these tasks is unclear. Comparing the information they extract and the computational effort they require is complicated by the lack of working or updated implementations, which negatively affects reproducibility. In this paper, we analyse the performance of five sets of features presented in the literature on SAT/UNSAT and problem category classification over a dataset of 3000 instances across ten problem classes distributed equally between SAT and UNSAT. To increase reproducibility and encourage research in this area, we release a Python library containing an updated and clear implementation of structural, graph-based, statistical and probing features presented in the literature for SAT CNF instances, and we define a clear pipeline to compare feature sets robustly in a given learning task. We analyse which of the computed features are relevant for the specific task and the tradeoff they provide between accuracy and computational effort. The results of the analysis provide insights into which features most affect an instance's classification.

SoCS Conference 2023 Conference Paper

Using Machine Learning Classifiers in SAT Branching [Extended Abstract]

  • Ruth Helen Bergin
  • Marco Dalla
  • Andrea Visentin
  • Barry O'Sullivan
  • Gregory M. Provan

The Boolean Satisfiability Problem (SAT) can be framed as a binary classification task. Recently, numerous machine and deep learning techniques have been successfully deployed to predict whether a CNF has a solution. However, these approaches do not provide a variable assignment when the instance is satisfiable and have not been used as part of SAT solvers. In this work, we investigate the possibility of using a machine-learning SAT/UNSAT classifier to assign a truth value to a variable. A heuristic solver can be created by iteratively assigning one variable to the value that leads to higher predicted satisfiability. We test our approach with and without probing features and compare it to a heuristic assignment based on the variable
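The branching loop the abstract describes can be sketched in a few lines: fix each variable to whichever value a SAT/UNSAT classifier scores as more likely satisfiable. The classifier below is a crude stand-in (a clause-count heuristic), not the paper's trained model; all names are our own.

```python
def simplify(cnf, var, value):
    """Assign var=value: drop satisfied clauses, shrink the rest."""
    lit = var if value else -var
    out = []
    for clause in cnf:
        if lit in clause:
            continue                       # clause satisfied, drop it
        out.append([l for l in clause if l != -lit])
    return out

def sat_score(cnf):
    """Stand-in for a learned satisfiability predictor: fewer, longer
    clauses look 'more satisfiable'; an empty clause is a conflict."""
    if any(len(c) == 0 for c in cnf):
        return float("-inf")
    return -len(cnf) + 0.1 * sum(len(c) for c in cnf)

def greedy_assign(cnf, variables):
    """Iteratively fix each variable to the value with the higher
    predicted satisfiability of the simplified formula."""
    assignment = {}
    for var in variables:
        scores = {v: sat_score(simplify(cnf, var, v)) for v in (False, True)}
        value = max(scores, key=scores.get)
        assignment[var] = value
        cnf = simplify(cnf, var, value)
    return assignment

assignment = greedy_assign([[1, 2], [-1, 2], [-2, 3]], variables=[1, 2, 3])
```

On this small formula the greedy loop finds a satisfying assignment; with a real classifier, `sat_score` would be replaced by the model's predicted probability of satisfiability.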

IJCAI Conference 2021 Conference Paper

Explanation in Constraint Satisfaction: A Survey

  • Sharmi Dev Gupta
  • Begum Genc
  • Barry O'Sullivan

Much of the focus on explanation in the field of artificial intelligence has been on machine learning methods and, in particular, on concepts produced by advanced methods such as neural networks and deep learning. However, there has been a long history of explanation generation in the general field of constraint satisfaction, one of AI's most ubiquitous subfields. In this paper we survey the major seminal papers on explanation and constraints, as well as some more recent works. The survey sets out to unify many disparate lines of work in areas such as model-based diagnosis, constraint programming, Boolean satisfiability, truth maintenance systems, quantified logics, and related areas.

IJCAI Conference 2017 Conference Paper

Finding Robust Solutions to Stable Marriage

  • Begum Genc
  • Mohamed Siala
  • Barry O'Sullivan
  • Gilles Simonin

We study the notion of robustness in stable matching problems. We first define robustness by introducing (a, b)-supermatches. An (a, b)-supermatch is a stable matching in which if a pairs break up it is possible to find another stable matching by changing the partners of those a pairs and at most b other pairs. In this context, we define the most robust stable matching as a (1, b)-supermatch where b is minimum. We show that checking whether a given stable matching is a (1, b)-supermatch can be done in polynomial time. Next, we use this procedure to design a constraint programming model, a local search approach, and a genetic algorithm to find the most robust stable matching. Our empirical evaluation on large instances shows that local search outperforms the other approaches.

AAAI Conference 2017 Short Paper

Robust Stable Marriage

  • Begum Genc
  • Mohamed Siala
  • Barry O'Sullivan
  • Gilles Simonin

Stable Marriage (SM) is a well-known matching problem, where the aim is to match a set of men and women. The resulting matching must satisfy two properties: there is no unassigned person, and there are no two people of opposite gender who prefer each other to their current assignments. We propose a new version of SM called Robust Stable Marriage (RSM) that combines stability and robustness. We define robustness by introducing (a, b)-supermatches, inspired by (a, b)-supermodels (Ginsberg, Parkes, and Roy 1998). An (a, b)-supermatch is a stable matching where, if at most a pairs want to break up, it is possible to find another stable matching by breaking at most b other pairs.

IS Journal 2017 Journal Article

The Inductive Constraint Programming Loop

  • Christian Bessiere
  • Luc De Raedt
  • Tias Guns
  • Lars Kotthoff
  • Mirco Nanni
  • Siegfried Nijssen
  • Barry O'Sullivan
  • Anastasia Paparrizou

Constraint programming is used for a variety of real-world optimization problems, such as planning, scheduling, and resource allocation problems, all while we continuously gather vast amounts of data about these problems. Current constraint programming software doesn't exploit such data to update schedules, resources, and plans. The authors propose a new framework that they call the inductive constraint programming loop. In this approach, data is gathered and analyzed systematically to dynamically revise and adapt constraints and optimization criteria. Inductive constraint programming aims to bridge the gap between the areas of data mining and machine learning on one hand and constraint programming on the other.

AAAI Conference 2016 Conference Paper

A CP-Based Approach for Popular Matching

  • Danuta Chisca
  • Mohamed Siala
  • Gilles Simonin
  • Barry O'Sullivan

We propose a constraint programming approach to the popular matching problem. We show that one can use the Global Cardinality Constraint to encode the problem even in cases that involve ties in the ordinal preferences of the applicants.

SoCS Conference 2016 Conference Paper

An Improved Metaheuristic Algorithm for Maximizing Demand Satisfaction in the Population Harvest Cutting Stock Problem

  • Laura Climent
  • Barry O'Sullivan
  • Richard J. Wallace

We present a greedy version of an existing metaheuristic algorithm for a special version of the Cutting Stock Problem (CSP). For this version, it is only possible to have indirect control over the patterns via a vector of continuous values which we refer to as a weights vector. Our algorithm iteratively generates new weights vectors by making local changes over the best weights vector computed so far. This allows us to achieve better solutions much faster than is possible with the original metaheuristic.

IJCAI Conference 2016 Conference Paper

Towards Fast Algorithms for the Preference Consistency Problem Based on Hierarchical Models

  • Anne-Marie George
  • Nic Wilson
  • Barry O'Sullivan

In this paper, we construct and compare algorithmic approaches to solve the Preference Consistency Problem for preference statements based on hierarchical models. Instances of this problem contain a set of preference statements that are direct comparisons (strict and non-strict) between some alternatives, and a set of evaluation functions by which all alternatives can be rated. An instance is consistent based on hierarchical preference models, if there exists a hierarchical model on the evaluation functions that induces an order relation on the alternatives by which all relations given by the preference statements are satisfied. Deciding if an instance is consistent is known to be NP-complete for hierarchical models. We develop three approaches to solve this decision problem. The first involves a Mixed Integer Linear Programming (MILP) formulation, the other two are recursive algorithms that are based on properties of the problem by which the search space can be pruned. Our experiments on synthetic data show that the recursive algorithms are faster than solving the MILP formulation and that the ratio between the running times increases extremely quickly.

IJCAI Conference 2015 Conference Paper

Computation and Complexity of Preference Inference Based on Hierarchical Models

  • Nic Wilson
  • Anne-Marie George
  • Barry O'Sullivan

Preference Inference involves inferring additional user preferences from elicited or observed preferences, based on assumptions regarding the form of the user's preference relation. In this paper we consider a situation in which alternatives have an associated vector of costs, each component corresponding to a different criterion, and are compared using a kind of lexicographic order, similar to the way alternatives are compared in a Hierarchical Constraint Logic Programming model. It is assumed that the user has some (unknown) importance ordering on criteria, and that to compare two alternatives, the combined costs of each alternative with respect to the most important criteria are compared first; only if these combined costs are equal are the next most important criteria considered. The preference inference problem then consists of determining whether a preference statement can be inferred from a set of input preferences. We show that this problem is coNP-complete, even if one restricts the cardinality of the equal-importance sets to at most two elements, and one only considers non-strict preferences. However, it is polynomial if it is assumed that the user's ordering of criteria is a total ordering; it is also polynomial if the sets of equally important criteria are all equivalence classes of a given fixed equivalence relation. We give an efficient polynomial algorithm for these cases, which also throws light on the structure of the inference.
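The comparison the abstract describes is easy to make concrete: criteria are grouped by importance, the summed cost of the most important group is compared first, and later groups are consulted only on ties. A minimal sketch; the particular grouping is an illustrative assumption, not from the paper.

```python
def hierarchical_compare(costs_a, costs_b, importance_groups):
    """Lexicographic comparison over importance-ordered criterion groups.
    Returns -1 if a is preferred (lower combined cost), 1 if b, 0 if tied
    on every group."""
    for group in importance_groups:            # most important group first
        total_a = sum(costs_a[i] for i in group)
        total_b = sum(costs_b[i] for i in group)
        if total_a != total_b:
            return -1 if total_a < total_b else 1
    return 0

# Criterion 0 alone is most important; criteria 1 and 2 are equally
# important and consulted only if the first comparison ties.
groups = [[0], [1, 2]]
result = hierarchical_compare([1, 5, 5], [1, 4, 7], groups)
# Tied on criterion 0 (1 vs 1); group {1, 2} decides: 10 < 11, so a wins.
```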

IJCAI Conference 2015 Conference Paper

ReACTR: Realtime Algorithm Configuration through Tournament Rankings

  • Tadhg Fitzgerald
  • Yuri Malitsky
  • Barry O'Sullivan

It is now readily accepted that automated algorithm configuration is a necessity for ensuring optimized performance of solvers on a particular problem domain. Even the best developers who have carefully designed their solver are not always able to manually find the best parameter settings for it. Yet, the opportunity for improving performance has been repeatedly demonstrated by configuration tools like ParamILS, SMAC, and GGA. However, all these techniques currently assume a static environment, where demonstrative instances are procured beforehand, potentially unlimited time is provided to adequately search the parameter space, and the solver would never need to be retrained. This is not always the case in practice. The ReACT system, proposed in 2014, demonstrated that a solver could be configured during runtime as new instances arrive in a steady stream. This paper further develops that approach and shows how a ranking scheme, like TrueSkill, can further improve the configurator’s performance, making it able to quickly find good parameterizations without adding any overhead on the time needed to solve any new instance, and then continuously improve as new instances are evaluated. The enhancements to ReACT that we present enable us to even outperform existing static configurators like SMAC in a non-dynamic setting.

IJCAI Conference 2015 Conference Paper

Statistical Regimes and Runtime Prediction

  • Barry Hurley
  • Barry O'Sullivan

The last decade has seen a growing interest in solver portfolios, automated solver configuration, and runtime prediction methods. At their core, these methods rely on a deterministic, consistent behaviour from the underlying algorithms and solvers. However, modern state-of-the-art solvers have elements of stochasticity built in such as randomised variable and value selection, tie-breaking, and randomised restarting. Such features can elicit dramatic variations in the overall performance between repeated runs of the solver, often by several orders of magnitude. Despite the success of the aforementioned fields, such performance variations in the underlying solvers have largely been ignored. Supported by a large-scale empirical study employing many years of industrial SAT Competition instances including repeated runs, we present statistical and empirical evidence that such a performance variation phenomenon necessitates a change in the evaluation of portfolio, runtime prediction, and automated configuration methods. In addition, we demonstrate that this phenomenon can have a significant impact on empirical solver competitions. Specifically, we show that the top three solvers from the 2014 SAT Competition could have been ranked in any permutation. These findings demonstrate the need for more statistically well-founded regimes in empirical evaluations.
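The ranking instability the abstract reports can be illustrated with a small simulation (synthetic numbers, not the paper's data): when each solver's per-instance runtime varies across randomised repeated runs, sampling one run per instance, as a single competition effectively does, can flip the overall winner between repetitions.

```python
import random

# solver -> per-instance runtimes observed over three randomised runs.
# Solver A is erratic (large run-to-run variation); B is steady.
runs = {
    "A": [[10, 90, 50], [60, 5, 40]],
    "B": [[40, 45, 42], [38, 44, 41]],
}

def one_competition(rng):
    """Sample one run per instance and rank solvers by total runtime."""
    totals = {}
    for solver, per_instance in runs.items():
        totals[solver] = sum(rng.choice(samples) for samples in per_instance)
    return min(totals, key=totals.get)         # winner: lowest total

rng = random.Random(0)
winners = {one_competition(rng) for _ in range(200)}
```

Across 200 simulated competitions, both solvers win at least once, so a single competition run cannot distinguish them reliably.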

ECAI Conference 2014 Conference Paper

A Decomposition Approach for Discovering Discriminative Motifs in a Sequence Database

  • David Lesaint
  • Deepak Mehta 0001
  • Barry O'Sullivan
  • Vincent Vigneron

This paper addresses the discovery of discriminative n-ary motifs in databases of labeled sequences. We consider databases made up of positive and negative sequences and define a motif as a set of patterns embedded in all positive sequences and subject to alignment constraints. We formulate constraints to eliminate redundant motifs and present a general constraint optimization framework to compute motifs that are exclusive to the positive sequences. We cast the discovery of closed and replication-free motifs in this framework and propose a two-stage approach whose last stage reduces to a minimum set covering problem. Experiments on protein sequence datasets demonstrate its efficiency.

SoCS Conference 2014 Conference Paper

Latent Features for Algorithm Selection

  • Yuri Malitsky
  • Barry O'Sullivan

The success and power of algorithm selection techniques have been empirically demonstrated on numerous occasions, most noticeably in competition settings like those for SAT, CSP, MaxSAT, QBF, etc. Yet while there is now a plethora of competing approaches, all of them depend on the quality of the set of structural features they use to distinguish amongst the instances. Over the years, each domain has defined and refined its own set of features, yet at their core they are mostly a collection of everything that was considered useful in the past. As an alternative to this shotgun generation of features, this paper instead proposes a more systematic approach. Specifically, the paper shows how latent features gathered from matrix decomposition are enough for a linear model to achieve a level of performance comparable to a perfect Oracle portfolio. This information can, in turn, help guide researchers to the kinds of structural features they should be looking for, or even just identify when such features are missing.
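A minimal sketch of the matrix-decomposition idea, under the assumption (ours, for illustration) that the decomposed matrix is an instance-by-solver runtime matrix factored with a truncated SVD; the data is synthetic.

```python
import numpy as np

# Synthetic instance-by-solver runtime matrix: 20 instances, 4 solvers.
rng = np.random.default_rng(0)
runtimes = rng.uniform(1, 100, size=(20, 4))

# Truncated SVD: keep k latent dimensions. Each instance gets a k-dim
# latent feature vector (left singular vectors scaled by singular values).
k = 2
U, s, Vt = np.linalg.svd(runtimes, full_matrices=False)
latent_features = U[:, :k] * s[:k]

# A portfolio could now fit a simple linear model from latent_features to
# per-solver performance; here we just form the rank-k reconstruction.
approx = latent_features @ Vt[:k, :]
```

In the paper's setting the latent features replace hand-crafted structural features; the rank-k reconstruction shows how much of the runtime matrix the latent space captures.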

ECAI Conference 2014 Conference Paper

Optimisation for the Ride-Sharing Problem: a Complexity-based Approach

  • Gilles Simonin
  • Barry O'Sullivan

The dial-a-ride problem is a classic challenge in transportation and continues to be relevant across a large spectrum of applications, e.g. door-to-door transportation services, patient transportation, etc. Recently a new variant of the dial-a-ride problem, called ride-sharing, has received attention due to the emergence of smartphone-based applications that support location-aware transportation services. The general dial-a-ride problem involves complex constraints on a time-dependent network. In ride-sharing, riders (resp. drivers) specify transportation requests (resp. offers) between journey origins and destinations. The two sets of participants, namely riders and drivers, have different constraints; the riders have time windows for starting and finishing the journey, while drivers have a starting time window, a destination, and a vehicle capacity. The challenge is to maximise the overall utility of the participants in the system, which can be defined in a variety of ways. In this paper we study variations of the ride-sharing problem, under different notions of utility, from a computational complexity perspective, and identify a number of tractable and intractable cases. These results provide a basis for the development of efficient methods and heuristics for solving problems of real-world scale.

SoCS Conference 2013 Conference Paper

Evolving Instance Specific Algorithm Configuration

  • Yuri Malitsky
  • Deepak Mehta 0001
  • Barry O'Sullivan

Combinatorial problems are ubiquitous in artificial intelligence and related areas. While there has been a significant amount of research into the design and implementation of solvers for combinatorial problems, it is well-known that there is still no single solver that performs best across a broad set of problem types and domains. This has motivated the development of portfolios of solvers. A portfolio typically comprises either many different solvers, instances of the same solver tuned in different ways, or some combination of these. However, current approaches to portfolio design take a static view of the process. Specifically, the design of the portfolio is determined offline, and then deployed in some setting. In this paper we propose an approach to evolving the portfolio over time based on the problem instances that it encounters. We study several challenges raised by such a dynamic approach, such as how to re-tune the portfolio over time. Our empirical results demonstrate that our evolving portfolio approach significantly outperformed the standard static approach in the case when the type of instances observed changes over time.

AAAI Conference 2012 Conference Paper

Opportunities and Challenges for Constraint Programming

  • Barry O'Sullivan

Constraint programming has become an important technology for solving hard combinatorial problems in a diverse range of application domains. It has its roots in artificial intelligence, mathematical programming, operations research, and programming languages. This paper gives a perspective on where constraint programming is today, and discusses a number of opportunities and challenges that could provide focus for the research community into the future.

AAAI Conference 2010 Conference Paper

Automated Modelling and Solving in Constraint Programming

  • Barry O'Sullivan

Constraint programming can be divided very crudely into modeling and solving. Modeling defines the problem, in terms of variables that can take on different values, subject to restrictions (constraints) on which combinations of variables are allowed. Solving finds values for all the variables that simultaneously satisfy all the constraints. However, the impact of constraint programming has been constrained by a lack of “user-friendliness”. Constraint programming has a major “declarative” aspect, in that a problem model can be handed off for solution to a variety of standard solving methods. These methods are embedded in algorithms, libraries, or specialized constraint programming languages. To fully exploit this declarative opportunity however, we must provide more assistance and automation in the modeling process, as well as in the design of application-specific problem solvers. Automated modelling and solving in constraint programming presents a major challenge for the artificial intelligence community. Artificial intelligence, and in particular machine learning, is a natural field in which to explore opportunities for moving more of the burden of constraint programming from the user to the machine. This paper presents technical challenges in the areas of constraint model acquisition, formulation and reformulation, synthesis of filtering algorithms for global constraints, and automated solving. We also present the metrics by which success and progress can be measured.

ECAI Conference 2010 Conference Paper

Data Mining for Biodiversity Prediction in Forests

  • Barry O'Sullivan
  • Steven Keady
  • Enda Keane
  • Sandra Irwin
  • John O'Halloran

There is international consensus on the key elements of sustainable forest management. Biological diversity has been recognised as one of them. This paper investigates the usefulness of terrestrial laser scanning technology in forest biodiversity assessment. Laser scanning is a rapidly emerging technology that captures high-resolution, 3-D structural information about forests and presently has applications in standing timber measurement. Forest biodiversity is influenced by structural complexity in the forest although precise repeatable measures are difficult to achieve using traditional methods. The aim of the research presented here is to apply laser scanning technology to the assessment of forest structure and deadwood, and relate this information to the diversity of plants, invertebrates and birds in a range of forest types including native woodlands and commercial plantations. Procedures for forest biodiversity assessment are known to be expensive due to their reliance on labour-intensive field visits. We describe our progress on the application of terrestrial laser scanning in an automated approach to biodiversity assessment. We apply regression techniques from the field of data mining to predict several biodiversity measures using physical attributes of the forest with very promising results.

ECAI Conference 2010 Conference Paper

Improving the Global Constraint SoftPrec

  • David Lesaint
  • Deepak Mehta 0001
  • Barry O'Sullivan
  • Luis Quesada 0001
  • Nic Wilson

A soft global constraint SOFTPREC has been proposed recently for solving optimisation problems involving precedence relations. In this paper we present new pruning rules for this global constraint. We introduce a pruning rule that improves propagation from the objective variable to the decision variables, which is believed to be harder to achieve. We further introduce a pruning rule based on linear programming, and thereby make SOFTPREC a hybrid of constraint programming and linear programming. We present results demonstrating the efficiency of the pruning rules.

ECAI Conference 2010 Conference Paper

Knowledge Compilation for Itemset Mining

  • Hadrien Cambazard
  • Tarik Hadzic
  • Barry O'Sullivan

We present a novel approach to itemset mining whereby the set of all itemsets is compiled into a compact form, closely related to binary decision diagrams. While there have been previous attempts to utilize decision diagrams for storing the set of frequent itemsets, this is the first approach that does not rely on backtrack search to generate such a set. Our empirical evaluation demonstrates that our approach is complementary to current approaches.

ECAI Conference 2008 Conference Paper

A BDD Approach to the Feature Subscription Problem

  • Tarik Hadzic
  • David Lesaint
  • Deepak Mehta 0001
  • Barry O'Sullivan
  • Luis Quesada 0001
  • Nic Wilson

Modern feature-rich telecommunications services offer significant opportunities to human users. To make these services more usable, facilitating personalisation is very important since it enhances the users' experience considerably. However, regardless of how service providers organise their catalogues of features, they cannot achieve complete configurability due to the existence of feature interactions. Distributed Feature Composition (DFC) provides a comprehensive methodology, underpinned by a formal architecture model, to address this issue. In this paper we present an approach based on using Binary Decision Diagrams (BDD) to find optimal reconfigurations of features when a user's preferences violate the technical constraints defined by a set of DFC rules. In particular, we propose hybridizing constraint programming and standard BDD compilation techniques in order to scale the construction of a BDD to larger catalogues. Our approach outperforms the standard BDD techniques by reducing the memory requirements by as much as five orders of magnitude, and compiles catalogues for which the standard techniques ran out of memory.

IJCAI Conference 2007 Conference Paper

  • Emmanuel Hebrard
  • Barry O'Sullivan
  • Toby Walsh

Users can often naturally express their preferences in terms of ideal or non-ideal solutions. We show how to reason about logical combinations of distance constraints on ideals and non-ideals using a novel global constraint. We evaluate our approach on both randomly generated and real-world configuration problem instances.

IJCAI Conference 2007 Conference Paper

  • Alan Holland
  • Barry O'Sullivan

Given a winning-bid withdrawal in a combinatorial auction, finding an alternative repair solution of adequate revenue without causing undue disturbance to the remaining winning bids in the original solution may be difficult or even impossible. This "bid-takers exposure problem" may be preemptively addressed by finding a solution that is robust to winning-bid withdrawal. We introduce the concept of monotonicity-in-expectation. We provide impossibility results concerning truthful mechanisms for robust solutions with bounded social-welfare losses in which the bid-taker cannot rescind items from winning bidders to repair a solution. We also show that this result extends to combinatorial auctions that include a form of leveled-commitment contract. However, we present a positive result regarding truthfulness for combinatorial auctions in a restricted setting that comprises a computationally efficient allocation algorithm that seeks to maximize expected social welfare.

IJCAI Conference 2007 Conference Paper

  • Christian Bessiere
  • Remi Coletta
  • Barry O'Sullivan
  • Mathias Paulin

The modelling and reformulation of constraint networks are recognised as important problems. The task of automatically acquiring a constraint network formulation of a problem from a subset of its solutions and non-solutions has been presented in the literature. However, the choice of such a subset was assumed to be made independently of the acquisition process. We present an approach in which an interactive acquisition system actively selects a good set of examples. We show that the number of examples required to acquire a constraint network is significantly reduced using our approach.

IJCAI Conference 2007 Conference Paper

  • Alex Ferguson
  • Barry O'Sullivan

The Quantified Constraint Satisfaction Problem (QCSP) is a generalisation of the classical CSP in which some of the variables can be universally quantified. In this paper, we extend two well-known concepts in classical constraint satisfaction to the quantified case: problem relaxation and explanation of inconsistency. We show that the generality of the QCSP allows for a number of different forms of relaxation not available in classical CSP. We further present an algorithm for computing a generalisation of conflict-based explanations of inconsistency for the QCSP.

IS Journal 2007 Journal Article

Configuration

  • Carsten Sinz
  • Albert Haag
  • Nina Narodytska
  • Toby Walsh
  • Esther Gelle
  • Mihaela Sabin
  • Ulrich Junker
  • Barry O'Sullivan

Over the years, a whole sector of AI dealing with configuration problems has emerged, and since 1996, an annual configuration workshop has been held in affiliation with a major AI conference. This installment of Trends & Controversies presents essays from the configuration workshop held in August 2006 as part of ECAI in Riva del Garda, Italy.

AAAI Conference 2007 System Paper

Generating and Solving Logic Puzzles through Constraint Satisfaction

  • Barry O'Sullivan

Solving logic puzzles has become a very popular pastime, particularly since the Sudoku puzzle started appearing in newspapers all over the world. We have developed a puzzle generator for a modification of Sudoku, called Jidoku, in which clues are binary disequalities between cells on a 9 × 9 grid. Our generator guarantees that puzzles have unique solutions, have graded difficulty, and can be solved using inference alone. This demonstration provides a fun application of many standard constraint satisfaction techniques, such as problem formulation, global constraints, search and inference. It is ideal as both an educational and outreach tool. Our demonstration will allow people to generate and interactively solve puzzles of user-selected difficulty, with the aid of hints if required, through a specifically built Java applet.
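The kind of constraint model described above can be sketched in a few lines. This is a hedged illustration, not the authors' generator or applet: we assume the usual Sudoku all-different constraints on rows, columns and boxes, with clues given as binary disequalities between cells rather than filled-in digits, and we use a 4 × 4 grid (2 × 2 boxes) for brevity.

```python
def solve(n, box, diseq, grid=None, cell=0):
    """Backtracking solver. n: grid side; box: box side; diseq: set of
    ((r1,c1),(r2,c2)) cell pairs whose values must differ (the clues).
    Returns a completed grid or None."""
    if grid is None:
        grid = [[0] * n for _ in range(n)]
    if cell == n * n:
        return [row[:] for row in grid]
    r, c = divmod(cell, n)
    for v in range(1, n + 1):
        if consistent(grid, n, box, diseq, r, c, v):
            grid[r][c] = v
            result = solve(n, box, diseq, grid, cell + 1)
            if result:
                return result
            grid[r][c] = 0
    return None

def consistent(grid, n, box, diseq, r, c, v):
    # Row and column all-different.
    if v in grid[r] or any(grid[i][c] == v for i in range(n)):
        return False
    # Box all-different.
    br, bc = box * (r // box), box * (c // box)
    if any(grid[i][j] == v
           for i in range(br, br + box) for j in range(bc, bc + box)):
        return False
    # Disequality clues involving (r, c) and an already-filled cell.
    for a, b in diseq:
        other = b if a == (r, c) else a if b == (r, c) else None
        if other is not None and grid[other[0]][other[1]] == v:
            return False
    return True
```

A real generator would additionally search for clue sets that force a unique, inference-solvable solution; this sketch only shows the underlying constraint check.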

AAAI Conference 2007 Conference Paper

Representative Explanations for Over-Constrained Problems

  • Barry O'Sullivan

In many interactive decision making scenarios there is often no solution that satisfies all of the user’s preferences. The decision process can be helped by providing explanations. Relaxations show sets of consistent preferences and, thus, indicate which preferences can be enforced, while exclusion sets show which preferences can be relaxed to obtain a solution. We propose a new approach to explanation based on the notion of a representative set of explanations. The set of explanations we compute is exponentially more compact than that found using common approaches from the literature based on finding all minimal conflicts.
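For context, the baseline the abstract contrasts against rests on minimal conflicts. The sketch below is not the paper's representative-explanation method; it shows only the classic deletion filter for extracting one minimal conflict from an inconsistent constraint set (approaches that enumerate all such conflicts are what the paper improves on).

```python
def minimal_conflict(constraints, consistent):
    """Shrink a known-inconsistent constraint set to a minimal conflict:
    permanently drop a constraint iff the rest remains inconsistent."""
    core = list(constraints)
    for c in list(core):
        trial = [x for x in core if x is not c]
        if not consistent(trial):
            core = trial
    return core

# Demo: unary constraints on one integer variable; only x>5 and x<3 clash.
cons = [("x>5", lambda x: x > 5), ("x<3", lambda x: x < 3),
        ("x>0", lambda x: x > 0), ("x<10", lambda x: x < 10)]

def consistent(cs):
    # Toy consistency check: does any x in a small range satisfy all of cs?
    return any(all(p(x) for _, p in cs) for x in range(-20, 20))

core = minimal_conflict(cons, consistent)
```

Each deletion test costs one consistency check, so a single minimal conflict is cheap; the combinatorial cost lies in enumerating all of them, which motivates representative sets.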

AAAI Conference 2006 Conference Paper

Approximate Compilation for Embedded Model-based Reasoning

  • Barry O'Sullivan

The use of embedded technology has become widespread. Many complex engineered systems comprise embedded features to perform self-diagnosis or self-reconfiguration. These features require fast response times in order to be useful in domains where embedded systems are typically deployed. Researchers often advocate the use of compilation-based approaches to store the set of environments (resp. solutions) to a diagnosis (resp. reconfiguration) problem, in some compact representation. However, the size of a compiled representation may be exponential in the treewidth of the problem. In this paper we propose a novel method for compiling the most preferred environments in order to reduce the large space requirements of our compiled representation. We show that approximate compilation is an effective means of generating the highest-valued environments, while obtaining a representation whose size can be tailored to any embedded application. The method also provides a graceful way to trade off space requirements against the completeness of our coverage of the environment space.

ECAI Conference 2006 Conference Paper

Guiding Search Using Constraint-Level Advice

  • Radoslaw Szymanek
  • Barry O'Sullivan

Constraint satisfaction problems are traditionally solved using some form of backtrack search that propagates constraints after each decision is made. The efficiency of search relies heavily on the use of good variable and value ordering heuristics. In this paper we show that constraints can also be used to guide the search process by actively proposing the next choice point to be branched on. We show that search effort can be reduced significantly.

AAAI Conference 2005 Conference Paper

Finding Diverse and Similar Solutions in Constraint Programming

  • Emmanuel Hebrard
  • Barry O'Sullivan

It is useful in a wide range of situations to find solutions which are diverse (or similar) to each other. We therefore define a number of different classes of diversity and similarity problems. For example, what is the most diverse set of solutions of a constraint satisfaction problem with a given cardinality? We first determine the computational complexity of these problems. We then propose a number of practical solution methods, some of which use global constraints for enforcing diversity (or similarity) between solutions. Empirical evaluation on a number of problems shows promising results.
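The "most diverse set of a given cardinality" problem can be illustrated with a simple baseline. This is a hedged sketch, not one of the paper's methods: assuming solutions can be enumerated, it greedily picks the solution farthest (in minimum Hamming distance) from those chosen so far.

```python
from itertools import product

def hamming(a, b):
    """Number of positions at which two solutions differ."""
    return sum(x != y for x, y in zip(a, b))

def greedy_diverse(solutions, k):
    """Greedy farthest-point selection of k solutions from a solution list."""
    chosen = [solutions[0]]
    while len(chosen) < k:
        rest = [s for s in solutions if s not in chosen]
        chosen.append(max(rest, key=lambda s: min(hamming(s, c)
                                                  for c in chosen)))
    return chosen

# Toy CSP: three 0/1 variables subject to x0 + x1 + x2 >= 1.
sols = [s for s in product((0, 1), repeat=3) if sum(s) >= 1]
picked = greedy_diverse(sols, 2)
```

Greedy selection over an enumerated solution set is only feasible for small problems; the paper's point is precisely that global constraints can enforce diversity during search instead.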