AIJ Journal 2016 Journal Article
ASlib: A benchmark library for algorithm selection
- Bernd Bischl
- Pascal Kerschke
- Lars Kotthoff
- Marius Lindauer
- Yuri Malitsky
- Alexandre Fréchette
- Holger Hoos
- Frank Hutter
Author name cluster
Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.
AIJ Journal 2016 Journal Article
AAAI Conference 2016 Conference Paper
It is well established that in many scenarios there is no single solver that provides optimal performance across a wide range of problem instances. Taking advantage of this observation, research into algorithm selection aims to identify the best approach for each problem at hand. This selection is usually based on carefully constructed features, designed to quickly summarize the overall structure of an instance as a fixed-size numeric vector. Based on these features, a variety of machine learning techniques can be applied to predict the appropriate solver to execute, leading to significant improvements over relying solely on any one solver. However, because they are constructed manually, creating good features is an arduous task requiring a great deal of knowledge of the problem domain of interest. To alleviate this costly yet crucial step, this paper presents an automated methodology for producing an informative set of features using a deep neural network. We show that the presented approach completely automates the algorithm selection pipeline and achieves significantly better performance than the single best solver across multiple problem domains.
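As a toy illustration of the selection step this abstract describes (not the paper's deep network), the sketch below picks a solver for a new instance from its feature vector. A 1-nearest-neighbour rule stands in for the learned predictor, and all feature vectors and solver names are made up for the example:

```python
import math

def select_solver(train_feats, train_best, query_feat):
    # Pick the solver that was best on the training instance whose feature
    # vector is closest to the query (math.dist: Euclidean distance, Python 3.8+).
    nearest = min(range(len(train_feats)),
                  key=lambda i: math.dist(train_feats[i], query_feat))
    return train_best[nearest]

# Two hand-made training instances with 2-D features and their best solvers;
# a query near the first instance gets that instance's solver.
choice = select_solver([[0.0, 0.0], [10.0, 10.0]],
                       ["solver_a", "solver_b"],
                       [1.0, 1.0])
```

In a real portfolio the feature vectors would come from a (learned or hand-crafted) feature extractor and the labels from per-solver runtime measurements.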
IJCAI Conference 2015 Conference Paper
It is now readily accepted that automated algorithm configuration is a necessity for ensuring optimized performance of solvers on a particular problem domain. Even the best developers, who have carefully designed their solver, are not always able to manually find the best parameter settings for it. Yet the opportunity for improving performance has been repeatedly demonstrated by configuration tools like ParamILS, SMAC, and GGA. However, all of these techniques currently assume a static environment, where representative instances are procured beforehand, potentially unlimited time is provided to adequately search the parameter space, and the solver never needs to be retrained. This is not always the case in practice. The ReACT system, proposed in 2014, demonstrated that a solver could be configured during runtime as new instances arrive in a steady stream. This paper further develops that approach and shows how a ranking scheme like TrueSkill can further improve the configurator's performance, making it able to quickly find good parameterizations without adding any overhead to the time needed to solve any new instance, and then continuously improve as new instances are evaluated. The enhancements to ReACT that we present enable it to outperform even existing static configurators like SMAC in a non-dynamic setting.
AAAI Conference 2015 Conference Paper
A Data Scientist typically performs a number of tedious and time-consuming steps to derive insight from a raw data set. The process usually starts with data ingestion, cleaning, and transformation (e.g., outlier removal, missing-value imputation), then proceeds to model building, and finally a presentation of predictions that align with the end-user's objectives and preferences. It is a long, complex, and sometimes artful process requiring substantial time and effort, especially because of the combinatorial explosion in choices of algorithms (and platforms), their parameters, and their compositions. Tools that can help automate steps in this process have the potential to accelerate the time-to-delivery of useful results, expand the reach of data science to non-experts, and offer a more systematic exploration of the available options. This work presents a step towards this goal.
SoCS Conference 2014 Conference Paper
The success and power of algorithm selection techniques have been empirically demonstrated on numerous occasions, most notably in competition settings such as those for SAT, CSP, MaxSAT, QBF, etc. Yet while there is now a plethora of competing approaches, all of them depend on the quality of the set of structural features they use to distinguish amongst the instances. Over the years, each domain has defined and refined its own set of features, yet at their core they are mostly a collection of everything that was considered useful in the past. As an alternative to this shotgun generation of features, this paper instead proposes a more systematic approach. Specifically, the paper shows how latent features gathered from matrix decomposition are enough for a linear model to achieve a level of performance comparable to a perfect Oracle portfolio. This information can, in turn, help guide researchers to the kinds of structural features they should be looking for, or to identify when such features are missing.
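A minimal sketch of the latent-feature idea, under stated assumptions: a rank-1 component of the (column-centered) instance-by-solver runtime matrix is extracted by power iteration, standing in for a full matrix decomposition. The function name and the tiny runtime matrix are illustrative, not from the paper:

```python
def latent_scores(runtimes, iters=50):
    # Leading latent instance feature from an instance-by-solver runtime
    # matrix M, via power iteration on M^T M for the top right singular vector.
    n, m = len(runtimes), len(runtimes[0])
    means = [sum(row[j] for row in runtimes) / n for j in range(m)]
    M = [[row[j] - means[j] for j in range(m)] for row in runtimes]  # center columns
    v = [1.0 / (j + 2) for j in range(m)]  # arbitrary non-degenerate start vector
    for _ in range(iters):
        Mv = [sum(M[i][k] * v[k] for k in range(m)) for i in range(n)]
        w = [sum(M[i][j] * Mv[i] for i in range(n)) for j in range(m)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    # Each instance's latent feature: its projection onto the leading component.
    return [sum(M[i][j] * v[j] for j in range(m)) for i in range(n)]

# Four instances, two solvers: the first two instances favour solver 0,
# the last two favour solver 1; the latent score separates the two groups.
scores = latent_scores([[1, 10], [2, 9], [10, 1], [9, 2]])
```

The sign of the recovered component is arbitrary (as with any singular vector), so only the grouping of instances, not the sign of the scores, is meaningful.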
AAAI Conference 2014 Conference Paper
Our objective is to boost the state-of-the-art performance in MaxSAT solving. To this end, we employ the instance-specific algorithm configurator ISAC, and improve it with the latest in portfolio technology. Experimental results on SAT show that this combination marks a significant step forward in our ability to tune algorithms instance-specifically. We then apply the new methodology to a number of MaxSAT problem domains and show that the resulting solvers consistently outperform the best existing solvers on the respective problem families. In fact, the solvers presented here were independently evaluated at the 2013 MaxSAT Evaluation where they won six of the eleven categories.
AAAI Conference 2014 Conference Paper
SoCS Conference 2014 Conference Paper
The success or failure of a solver is oftentimes closely tied to the proper configuration of the solver.
ECAI Conference 2014 Conference Paper
IJCAI Conference 2013 Conference Paper
Different solution approaches for combinatorial problems often exhibit incomparable performance that depends on the concrete problem instance to be solved. Algorithm portfolios aim to combine the strengths of multiple algorithmic approaches by training a classifier that selects or schedules solvers dependent on the given instance. We devise a new classifier that selects solvers based on a cost-sensitive hierarchical clustering model. Experimental results on SAT and MaxSAT show that the new method outperforms the most effective portfolio builders to date.
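The group-then-select idea can be sketched with a flat, k-means-style simplification. Note this is only the cost-aware group-to-solver assignment; the paper's actual model is a cost-sensitive *hierarchical* clustering, and all data and names here are illustrative:

```python
def train_cluster_selector(feats, costs, k=2, iters=20):
    # Group training instances by feature similarity (k-means-style), then
    # assign each group the solver with the lowest total cost on that group.
    centers = feats[:k]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for i, f in enumerate(feats):
            g = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(f, centers[c])))
            groups[g].append(i)
        centers = [
            [sum(feats[i][d] for i in g) / len(g) for d in range(len(feats[0]))]
            if g else centers[c]
            for c, g in enumerate(groups)
        ]
    n_solvers = len(costs[0])
    choice = [min(range(n_solvers), key=lambda s: sum(costs[i][s] for i in g))
              if g else 0
              for g in groups]
    return centers, choice

def select(centers, choice, feat):
    # Route a new instance to its nearest group's designated solver.
    g = min(range(len(centers)),
            key=lambda c: sum((a - b) ** 2 for a, b in zip(feat, centers[c])))
    return choice[g]

# Toy data: 1-D features; solver 0 is cheap on the first two instances,
# solver 1 on the last two.
centers, choice = train_cluster_selector([[0], [1], [10], [11]],
                                         [[1, 5], [1, 5], [5, 1], [5, 1]])
```

Making the clustering itself cost-sensitive, as the paper does, means the grouping is driven by which solver wins rather than by feature geometry alone.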
SoCS Conference 2013 Conference Paper
Combinatorial problems are ubiquitous in artificial intelligence and related areas. While there has been a significant amount of research into the design and implementation of solvers for combinatorial problems, it is well-known that there is still no single solver that performs best across a broad set of problem types and domains. This has motivated the development of portfolios of solvers. A portfolio typically comprises either many different solvers, instances of the same solver tuned in different ways, or some combination of these. However, current approaches to portfolio design take a static view of the process. Specifically, the design of the portfolio is determined offline, and then deployed in some setting. In this paper we propose an approach to evolving the portfolio over time based on the problem instances that it encounters. We study several challenges raised by such a dynamic approach, such as how to re-tune the portfolio over time. Our empirical results demonstrate that our evolving portfolio approach significantly outperformed the standard static approach when the types of instances observed change over time.
AAAI Conference 2012 Conference Paper
SAT Conference 2011 Conference Paper
When tackling a computationally challenging combinatorial problem, one often observes that some solution approaches work well on some instances, while other approaches work better on other instances. This observation has given rise to the idea of building algorithm portfolios [5]. Leyton-Brown et al. [1], for instance, proposed to select one of the algorithms in the portfolio based on some features of the instance to be solved. This approach has been blessed with tremendous success in the past. Especially in SAT, the SATzilla portfolios [7] have performed extremely well in past SAT Competitions [6].
AAAI Conference 2010 Conference Paper
We present a highly efficient incremental algorithm for propagating bounded knapsack constraints. Our algorithm is based on the sublinear filtering algorithm for binary knapsack constraints by Katriel et al. and achieves similar speed-ups of one to two orders of magnitude when compared with its linear-time counterpart. We also show that the representation of bounded knapsacks as binary knapsacks leads to ineffective filtering behavior. Experiments on standard knapsack benchmarks show that the new algorithm significantly outperforms existing methods for handling bounded knapsack constraints.
ECAI Conference 2010 Conference Paper
We present a new method for instance-specific algorithm configuration (ISAC). It is based on the integration of the algorithm configuration system GGA and the recently proposed stochastic offline programming paradigm. ISAC is provided a solver with categorical, ordinal, and/or continuous parameters, a training benchmark set of input instances for that solver, and an algorithm that computes a feature vector that characterizes any given instance. ISAC then provides high quality parameter settings for any new input instance. Experiments on a variety of different constrained optimization and constraint satisfaction solvers show that automatic algorithm configuration vastly outperforms manual tuning. Moreover, we show that instance-specific tuning frequently leads to significant speed-ups over instance-oblivious configurations.