Arrow Research search

Author name cluster

Dietmar Jannach

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

16 papers
2 author rows

Possible papers (16)

AAAI Conference 2026 Conference Paper

Choosing Abstraction Levels for Model-Based Software Debugging: A Theoretical and Empirical Analysis for Spreadsheet Programs (Abstract Reprint)

  • Patrick Rodler
  • Birgit Hofer
  • Dietmar Jannach
  • Iulia Nica
  • Franz Wotawa

Model-based diagnosis is a generally applicable, principled approach to the systematic debugging of a wide range of system types such as circuits, knowledge bases, physical devices, or software. Based on a formal description of the system, it enables precise and deterministic reasoning about potential faults responsible for observed misbehavior. In software, such a formal system description can often even be extracted from the buggy program fully automatically. As logical reasoning is central to diagnosis, the performance of model-based debuggers is largely influenced by reasoning efficiency, which in turn depends on the complexity and expressivity of the system description. Since highly detailed models capturing exact semantics often exceed the capabilities of current reasoning tools, researchers have proposed more abstract representations. In this work, we thoroughly analyze system modeling techniques with a focus on fault localization in spreadsheets—one of the most widely used end-user programming paradigms. Specifically, we present three constraint model types characterizing spreadsheets at different abstraction levels, show how to extract them automatically from faulty spreadsheets, and provide theoretical and empirical investigations of the impact of abstraction on both diagnostic output and computational performance. Our main conclusions are that (i) for the model types, there is a trade-off between the conciseness of generated fault candidates and computation time, (ii) the exact model is often impractical, and (iii) a new model based on qualitative reasoning yields the same solutions as the exact one in up to more than half the cases while being orders of magnitude faster. Due to their ability to restrict the solution space in a sound way, the explored model-based techniques, rather than being used as standalone approaches, are expected to realize their full potential in combination with iterative sequential diagnosis or indeterministic but more performant statistical debugging methods.
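
The core idea of model-based debugging can be illustrated with a small, self-contained sketch (purely illustrative, not one of the paper's three constraint model types): each formula cell contributes a constraint that only has to hold if the cell is assumed correct, and a diagnosis is a smallest set of cells whose formulas must be dropped to make the model consistent with the user's expected output. The toy spreadsheet, cell names, and value domain below are assumptions made for the example.

```python
from itertools import combinations, product

# Toy value-based illustration of model-based spreadsheet debugging.
# Faulty sheet: C1 was typed as A1 * B1 instead of A1 + B1.
inputs   = {"A1": 2, "B1": 3}
formulas = {"C1": lambda v: v["A1"] * v["B1"],   # buggy formula
            "D1": lambda v: v["C1"] * 2}
expected = {"D1": 10}                            # the user expects D1 = 10
DOMAIN   = range(0, 21)                          # small search domain for the toy

def consistent(abnormal):
    """True if some assignment to the abnormal cells satisfies all 'ok'
    formulas together with the user's expectation."""
    abnormal = list(abnormal)
    for values in product(DOMAIN, repeat=len(abnormal)):
        env = dict(inputs)
        env.update(zip(abnormal, values))
        for cell in formulas:                    # dict order = dependency order here
            if cell in abnormal:
                continue                         # abnormal cell: formula imposes nothing
            env[cell] = formulas[cell](env)
        if all(env.get(cell) == val for cell, val in expected.items()):
            return True
    return False

# Enumerate candidate diagnoses by increasing cardinality.
cells = list(formulas)
for size in range(len(cells) + 1):
    diags = [set(c) for c in combinations(cells, size) if consistent(c)]
    if diags:
        print("minimal diagnoses:", diags)       # -> [{'C1'}, {'D1'}]
        break
```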

TIST Journal 2025 Journal Article

Considering Time and Feature Entropy in Calibrated Recommendations

  • Diego Corrêa da Silva
  • Dietmar Jannach
  • Frederico Araújo Durão

The essence of calibration in recommender systems is to generate recommendations that match the distribution of a given user’s past preferences regarding certain item features—e.g., in terms of preferred genres in the case of movies—while preserving relevance. The user’s past preference distribution is usually derived by considering the features of all items that the user previously liked. However, the most common approach in the literature to derive this distribution has certain limitations. First, it does not consider that user preferences may change over time. Second, there are domains where the relevant item features are set-valued, e.g., a movie can have several genres. In such cases, existing calibration approaches may represent the user’s true preference distribution in a suboptimal way. In this work, we therefore propose two novel approaches to derive the preference distributions of users for the purpose of calibration. The first method allows us to decrease the relevance of possibly outdated preference information. The second method is an entropy-based approach, which aims to better capture the user’s true preferences toward certain item features. Extensive experimental evaluations on four distinct datasets confirm that the proposed techniques are more effective in reducing the level of miscalibration than the common state-of-the-art calibration approach.
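
As background, calibration is commonly quantified by comparing the feature distribution of the user's history with that of the recommendation list, e.g., via a KL-divergence-style miscalibration measure. The sketch below illustrates this, together with a simple exponential recency weighting and an equal split of weight across set-valued genres; the decay scheme and all data are illustrative assumptions and not necessarily the paper's exact formulation.

```python
import math
from collections import defaultdict

def genre_distribution(items, alpha=0.01):
    """items: list of (timestamp, [genres]); newer interactions weigh more."""
    if not items:
        return {}
    latest = max(t for t, _ in items)
    dist = defaultdict(float)
    for t, genres in items:
        w = math.exp(-alpha * (latest - t))       # recency weight (assumption)
        for g in genres:                          # set-valued features: split the weight
            dist[g] += w / len(genres)
    total = sum(dist.values())
    return {g: v / total for g, v in dist.items()}

def miscalibration(p, q, eps=1e-9):
    """KL(p || q) with light smoothing so unseen genres do not blow up."""
    genres = set(p) | set(q)
    return sum(p.get(g, 0) * math.log((p.get(g, 0) + eps) / (q.get(g, 0) + eps))
               for g in genres if p.get(g, 0) > 0)

history = [(1, ["drama"]), (5, ["drama", "crime"]), (9, ["comedy"])]
reclist = [(10, ["comedy"]), (10, ["comedy"]), (10, ["drama"])]
print(miscalibration(genre_distribution(history), genre_distribution(reclist)))
```

A calibrated reranker would pick the recommendation list that keeps this value low while preserving the relevance scores of the base recommender.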

KR Conference 2021 Short Paper

Randomized Problem-Relaxation Solving for Over-Constrained Schedules

  • Patrick Rodler
  • Erich Teppan
  • Dietmar Jannach

Optimal production planning in the form of job shop scheduling problems (JSSP) is a vital problem in many industries. In practice, however, it can happen that the volume of jobs (orders) exceeds the production capacity for a given planning horizon. A reasonable aim in such situations is the completion of as many jobs as possible in time (while postponing the rest). We call this the Job Set Optimization Problem (JOP). Technically, when constraint programming is used for solving JSSPs, the formulated objective in the constraint model can be adapted so that the constraint solver addresses JOP, i.e., searches for schedules that maximize the number of timely finished jobs. However, even highly specialized solvers that have proved very powerful for JSSPs may struggle with the increased complexity of the reformulated problem and may fail to generate a JOP solution given practical computation timeouts. As a remedy, we suggest a framework for solving multiple randomly modified instances of a relaxation of the JOP, which makes it possible to gradually approach a JOP solution. The main idea is to have one module compute subset-minimal job sets to be postponed, and another one ensure that randomly varied job sets are found. Different algorithms from the literature can be used to realize these modules. Experiments with IBM’s cutting-edge CP Optimizer suite on well-known JSSP benchmark problems show that the proposed framework consistently leads to more scheduled jobs for various computation timeouts than a standalone constraint solver approach.
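
The randomized problem-relaxation idea can be sketched in a few lines (a toy illustration, not the paper's CP Optimizer setup): repeatedly compute a subset-minimal set of jobs to postpone, randomizing the removal order on each restart, and keep the smallest postponement set found within the restart budget. The single-machine feasibility test below stands in for the constraint solver and is an assumption made for the example.

```python
import random

HORIZON = 10
jobs = {"j1": 4, "j2": 3, "j3": 5, "j4": 2, "j5": 6}   # job -> duration

def feasible(active):
    """Toy stand-in for the scheduler: all active jobs fit into the horizon."""
    return sum(jobs[j] for j in active) <= HORIZON

def minimal_postponement(order):
    """Deletion-based relaxation: postpone jobs in the given order until the
    schedule is feasible, then re-add jobs to make the set subset-minimal."""
    active, postponed = set(jobs), []
    for j in order:
        if feasible(active):
            break
        active.discard(j)
        postponed.append(j)
    for j in list(postponed):                 # minimality check
        if feasible(active | {j}):
            active.add(j)
            postponed.remove(j)
    return postponed

best = list(jobs)                             # worst case: postpone everything
for _ in range(20):                           # randomized restarts
    order = random.sample(list(jobs), len(jobs))
    candidate = minimal_postponement(order)
    if len(candidate) < len(best):
        best = candidate
print("jobs to postpone:", best, "-> scheduled:", len(jobs) - len(best))
```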

IJCAI Conference 2020 Conference Paper

Methodological Issues in Recommender Systems Research (Extended Abstract)

  • Maurizio Ferrari Dacrema
  • Paolo Cremonesi
  • Dietmar Jannach

The development of continuously improved machine learning algorithms for personalized item ranking lies at the core of today's research in the area of recommender systems. Over the years, the research community has developed widely-agreed best practices for comparing algorithms and demonstrating progress with offline experiments. Unfortunately, we find that this accepted research practice can easily lead to phantom progress for the following reasons: limited reproducibility, comparison with complex but weak and non-optimized baseline algorithms, and over-generalization from a small set of experimental configurations. To assess the extent of such problems, we analyzed 18 research papers published recently at top-ranked conferences. Only 7 were reproducible with reasonable effort, and 6 of them could often be outperformed by relatively simple heuristic methods, e.g., nearest neighbors. In this paper, we discuss these observations in detail, and reflect on the related fundamental problem of over-reliance on offline experiments in recommender systems research.
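
As an illustration of what a "relatively simple heuristic method" looks like in this context, the following is a minimal item-based nearest-neighbor scorer on an implicit-feedback matrix (an assumed toy setup, not one of the exact baselines evaluated in the paper).

```python
import numpy as np

# rows = users, cols = items, entries = implicit feedback (1 = interaction)
R = np.array([[1, 1, 0, 0],
              [0, 1, 1, 0],
              [1, 0, 1, 1],
              [0, 0, 1, 1]], dtype=float)

# Cosine similarity between item columns, self-similarity removed.
norms = np.linalg.norm(R, axis=0, keepdims=True) + 1e-9
S = (R / norms).T @ (R / norms)
np.fill_diagonal(S, 0.0)

def recommend(user, k=2, n=2):
    """Score items by summing similarities to the user's interacted items,
    keeping only each item's top-k neighbors, and hide already-seen items."""
    topk = np.argsort(-S, axis=1)[:, :k]              # per-item neighbor lists
    Sk = np.zeros_like(S)
    rows = np.repeat(np.arange(S.shape[0]), k)
    Sk[rows, topk.ravel()] = S[rows, topk.ravel()]
    scores = R[user] @ Sk
    scores[R[user] > 0] = -np.inf                     # do not re-recommend seen items
    return np.argsort(-scores)[:n]

print(recommend(user=0))
```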

IJCAI Conference 2016 Conference Paper

Efficient Sequential Model-Based Fault-Localization with Partial Diagnoses

  • Kostyantyn Shchekotykhin
  • Thomas Schmitz
  • Dietmar Jannach

Model-Based Diagnosis is a principled approach to identify the possible causes when a system under observation behaves unexpectedly. In case the number of possible explanations for the unexpected behavior is large, sequential diagnosis approaches can be applied. The strategy of such approaches is to iteratively take additional measurements to narrow down the set of alternatives in order to find the true cause of the problem. In this paper we propose a sound and complete sequential diagnosis approach which does not require any information about the structure of the diagnosed system. The method is based on the new concept of "partial" diagnoses, which can be efficiently computed given a small number of minimal conflicts. As a result, the overall time needed for determining the best next measurement point can be significantly reduced. An experimental evaluation on different benchmark problems shows that our sequential diagnosis approach needs considerably less computation time when compared with an existing domain-independent approach.
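
For readers less familiar with model-based diagnosis, the classical relationship the paper builds on is that (minimal) diagnoses are subset-minimal hitting sets of the minimal conflicts. A brute-force sketch of that relationship, with a hypothetical conflict set, is shown below; the paper's contribution (partial diagnoses computed from few conflicts) refines this basic picture.

```python
from itertools import combinations

conflicts = [{"c1", "c2"}, {"c2", "c3"}, {"c1", "c4"}]   # hypothetical minimal conflicts
components = set().union(*conflicts)

def is_hitting_set(h):
    return all(h & c for c in conflicts)

# Enumerate candidates by increasing size so only minimal hitting sets are kept.
diagnoses = []
for size in range(1, len(components) + 1):
    for cand in map(set, combinations(sorted(components), size)):
        if is_hitting_set(cand) and not any(d <= cand for d in diagnoses):
            diagnoses.append(cand)

print(diagnoses)   # -> [{'c1', 'c2'}, {'c1', 'c3'}, {'c2', 'c4'}]
```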

JAIR Journal 2016 Journal Article

Parallel Model-Based Diagnosis on Multi-Core Computers

  • Dietmar Jannach
  • Thomas Schmitz
  • Kostyantyn Shchekotykhin

Model-Based Diagnosis (MBD) is a principled and domain-independent way of analyzing why a system under examination is not behaving as expected. Given an abstract description (model) of the system's components and their behavior when functioning normally, MBD techniques rely on observations about the actual system behavior to reason about possible causes when there are discrepancies between the expected and observed behavior. Due to its generality, MBD has been successfully applied in a variety of application domains over the last decades. In many application domains of MBD, testing different hypotheses about the reasons for a failure can be computationally costly, e.g., because complex simulations of the system behavior have to be performed. In this work, we therefore propose different schemes of parallelizing the diagnostic reasoning process in order to better exploit the capabilities of modern multi-core computers. We propose and systematically evaluate parallelization schemes for Reiter's hitting set algorithm for finding all or a few leading minimal diagnoses using two different conflict detection techniques. Furthermore, we perform initial experiments for a basic depth-first search strategy to assess the potential of parallelization when searching for one single diagnosis. Finally, we test the effects of parallelizing "direct encodings" of the diagnosis problem in a constraint solver.
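
One generic reason MBD parallelizes well is that independent fault hypotheses can be checked concurrently. The sketch below shows this with Python's concurrent.futures and a simulated, expensive consistency check; it is a simplified illustration, not one of the paper's specific parallelization schemes for Reiter's algorithm.

```python
from concurrent.futures import ProcessPoolExecutor
import time

def expensive_consistency_check(candidate):
    """Stand-in for a costly solver call or simulation; the verdict is a dummy."""
    time.sleep(0.2)
    return candidate, len(candidate) % 2 == 0

candidates = [frozenset(c) for c in ({"a"}, {"b"}, {"a", "b"}, {"c"}, {"a", "c"})]

if __name__ == "__main__":
    # Independent hypothesis checks run on separate cores.
    with ProcessPoolExecutor(max_workers=4) as pool:
        for cand, ok in pool.map(expensive_consistency_check, candidates):
            print(sorted(cand), "consistent" if ok else "inconsistent")
```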

IJCAI Conference 2015 Conference Paper

MergeXplain: Fast Computation of Multiple Conflicts for Diagnosis

  • Kostyantyn Shchekotykhin
  • Dietmar Jannach
  • Thomas Schmitz

The computation of minimal conflict sets is a central task when the goal is to find relaxations or explanations for overconstrained problem formulations, in particular in the context of Model-Based Diagnosis (MBD) approaches. In this paper we propose MergeXplain, a non-intrusive conflict detection algorithm which implements a divide-and-conquer strategy to decompose a problem into a set of smaller independent subproblems. Our technique allows us to efficiently determine multiple minimal conflicts during one single problem decomposition run, which is particularly helpful in MBD problem settings. An empirical evaluation on various benchmark problems shows that our method can lead to a significant reduction of the required diagnosis times.
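
For context, the simplest way to extract a single minimal conflict from an inconsistent constraint set is a deletion-based pass, sketched below with a toy inconsistency test. MergeXplain's divide-and-conquer strategy improves on this by returning several minimal conflicts in one decomposition run; the code shows only the baseline idea, not the MergeXplain algorithm itself.

```python
def minimal_conflict(constraints, inconsistent):
    """constraints: list; inconsistent(subset) -> True if the subset conflicts."""
    conflict = list(constraints)
    for c in list(conflict):
        trimmed = [x for x in conflict if x != c]
        if inconsistent(trimmed):      # c is not needed for the inconsistency
            conflict = trimmed
    return conflict                    # subset-minimal conflict

# Toy example: integer "facts" conflict if they contain both 1 and -1,
# or both 2 and -2 (two independent conflicts hidden in the set).
def inconsistent(subset):
    s = set(subset)
    return ({1, -1} <= s) or ({2, -2} <= s)

print(minimal_conflict([3, 1, 2, -2, -1, 4], inconsistent))   # -> [2, -2]
```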

AAAI Conference 2015 Conference Paper

Parallelized Hitting Set Computation for Model-Based Diagnosis

  • Dietmar Jannach
  • Thomas Schmitz
  • Kostyantyn Shchekotykhin

Model-Based Diagnosis techniques have been successfully applied to support a variety of fault-localization tasks both for hardware and software artifacts. In many applications, Reiter’s hitting set algorithm has been used to determine the set of all diagnoses for a given problem. In order to construct the diagnoses with increasing cardinality, Reiter proposed a breadth-first search scheme in combination with different tree-pruning rules. Since many of today’s computing devices have multi-core CPU architectures, we propose techniques to parallelize the construction of the tree to better utilize the computing resources without losing any diagnoses. Experimental evaluations using different benchmark problems show that parallelization can help to significantly reduce the required running times. Additional simulation experiments were performed to understand how the characteristics of the underlying problem structure impact the achieved performance gains.
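
A compact way to picture the parallelized construction is level-wise (breadth-first) expansion of the hitting-set tree with all nodes of a level processed concurrently. The sketch below does that for a fixed toy conflict list and omits Reiter's pruning and node-reuse rules; a thread pool is used only for brevity, whereas real conflict computations would typically run in separate processes or release the GIL inside a solver.

```python
from concurrent.futures import ThreadPoolExecutor

CONFLICTS = [{"c1", "c2"}, {"c2", "c3"}, {"c1", "c4"}]   # toy conflict oracle data

def next_conflict(hitting_set):
    for c in CONFLICTS:
        if not (c & hitting_set):
            return c          # conflict not hit yet -> node must be expanded
    return None               # all conflicts hit -> hitting_set is a diagnosis

def parallel_hs_tree(max_workers=4):
    level, diagnoses = [frozenset()], []
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        while level:
            results = list(pool.map(next_conflict, level))   # one level in parallel
            next_level = []
            for node, conflict in zip(level, results):
                if conflict is None:
                    if not any(d <= node for d in diagnoses):
                        diagnoses.append(node)               # minimal (breadth-first)
                else:
                    next_level.extend(node | {c} for c in conflict)
            level = [n for n in set(next_level)
                     if not any(d <= n for d in diagnoses)]
    return diagnoses

print(parallel_hs_tree())   # -> the three minimal diagnoses {c1,c2}, {c1,c3}, {c2,c4}
```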

TIST Journal 2013 Journal Article

Improving recommendation accuracy based on item-specific tag preferences

  • Fatih Gedikli
  • Dietmar Jannach

In recent years, different proposals have been made to exploit Social Web tagging information to build more effective recommender systems. The tagging data, for example, were used to identify similar users or were viewed as additional information about the recommendable items. Recent research has indicated that “attaching feelings to tags” is experienced by users as a valuable means to express which features of an item they particularly like or dislike. When following such an approach, users would therefore not only add tags to an item as in usual Web 2.0 applications, but also attach a preference (affect) to the tag itself, expressing, for example, whether or not they liked a certain actor in a given movie. In this work, we show how this additional preference data can be exploited by a recommender system to make more accurate predictions. In contrast to previous work, which also relied on so-called tag preferences to enhance the predictive accuracy of recommender systems, we argue that tag preferences should be considered in the context of an item. We therefore propose new schemes to infer and exploit context-specific tag preferences in the recommendation process. An evaluation on two different datasets reveals that our approach is capable of providing more accurate recommendations than previous tag-based recommender algorithms and recent tag-agnostic matrix factorization techniques.
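
The notion of item-specific (in-context) tag preferences can be illustrated with a small scoring heuristic (an assumption-laden toy, not the paper's inference scheme): when scoring an item, a preference the user attached to a tag for that particular item overrides the user's global preference for the same tag.

```python
# (user, item, tag) -> preference in [-1, 1]; item=None marks a global tag preference.
tag_prefs = {
    ("u1", "matrix", "keanu reeves"): 1.0,   # liked this actor in this movie
    ("u1", None,     "keanu reeves"): 0.2,   # lukewarm about him in general
    ("u1", None,     "sci-fi"):       0.8,
}
item_tags = {"matrix": ["keanu reeves", "sci-fi"],
             "john wick": ["keanu reeves", "action"]}

def score(user, item):
    vals = []
    for tag in item_tags[item]:
        if (user, item, tag) in tag_prefs:       # item-specific preference wins
            vals.append(tag_prefs[(user, item, tag)])
        elif (user, None, tag) in tag_prefs:     # fall back to the global preference
            vals.append(tag_prefs[(user, None, tag)])
    return sum(vals) / len(vals) if vals else 0.0

print(score("u1", "matrix"))     # 0.9: the item-specific affect dominates
print(score("u1", "john wick"))  # 0.2: only the global preference is known
```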

IS Journal 2007 Journal Article

Comparing Recommendation Strategies in a Commercial Context

  • Markus Zanker
  • Markus Jessenitschnig
  • Dietmar Jannach
  • Sergiu Gordea

From an industrial perspective, recommender systems constitute the base technology for providing interactivity and personalization in electronic business-to-consumer marketplaces. Robin Burke distinguishes between five different recommendation techniques: collaborative, content-based, utility-based, demographic, and knowledge-based.