Arrow Research

Author name cluster

Mark T. Keane

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

19 papers
2 author rows

Possible papers

19

AAAI Conference 2026 Conference Paper

Explanations for Sequential Decision-Making – an Overview

  • Hendrik Baier
  • Mark T. Keane
  • Sarath Sreedharan
  • Silvia Tulli
  • Abhinav Verma

In this paper, we highlight the field of explainable sequential decision-making. We discuss how the problem of explaining sequential decisions gives rise to problems and challenges that are absent from scenarios that focus on explaining single-shot decision-making. We provide a short survey of some of the more prominent subareas within explainable sequential decision-making and their unique focuses and blind spots. Here, we argue that we need to go beyond simply focusing on individual subareas like explainable planning, reinforcement learning, or robotics, and move towards studying and tackling the more general problem of explainable sequential decision-making. Such a holistic approach will not only allow us to identify previously ignored problems, but also provide us with the ability to transfer ideas and intuitions from one subarea of explainable sequential decision-making to another. We end the paper with a discussion of future directions and some of the most pressing open questions.

AAAI Conference 2024 Conference Paper

Counterfactual Explanations for Misclassified Images: How Human and Machine Explanations Differ (Abstract Reprint)

  • Eoin Delaney
  • Arjun Pakrashi
  • Derek Greene
  • Mark T. Keane

Counterfactual explanations have emerged as a popular solution for the eXplainable AI (XAI) problem of elucidating the predictions of black-box deep-learning systems because people easily understand them, they apply across different problem domains and seem to be legally compliant. Although over 100 counterfactual methods exist in the XAI literature, each claiming to generate plausible explanations akin to those preferred by people, few of these methods have actually been tested on users (∼7%). Even fewer studies adopt a user-centered perspective; for instance, by asking people for their counterfactual explanations to determine their perspective on a “good explanation”. This gap in the literature is addressed here using a novel methodology that (i) gathers human-generated counterfactual explanations for misclassified images in two user studies and then (ii) compares these human-generated explanations to computationally generated explanations for the same misclassifications. Results indicate that humans do not “minimally edit” images when generating counterfactual explanations. Instead, they make larger, “meaningful” edits that better approximate prototypes in the counterfactual class. An analysis based on “explanation goals” is proposed to account for this divergence between human and machine explanations. The implications of these proposals for future work are discussed.

IJCAI Conference 2023 Conference Paper

Advancing Post-Hoc Case-Based Explanation with Feature Highlighting

  • Eoin M. Kenny
  • Eoin Delaney
  • Mark T. Keane

Explainable AI (XAI) has been proposed as a valuable tool to assist in downstream tasks involving human-AI collaboration. Perhaps the most psychologically valid XAI techniques are case-based approaches, which display "whole" exemplars to explain the predictions of black-box AI systems. However, for such post-hoc XAI methods dealing with images, there has been no attempt to improve their scope by using multiple clear feature "parts" of the images to explain the predictions while linking back to relevant cases in the training data, thus allowing for more comprehensive explanations that are faithful to the underlying model. Here, we address this gap by proposing two general algorithms (latent and superpixel-based) which can isolate multiple clear feature parts in a test image and then connect them to the explanatory cases found in the training data, before testing their effectiveness in a carefully designed user study. Results demonstrate that the proposed approach appropriately calibrates a user's feelings of "correctness" for ambiguous classifications on real-world ImageNet data, an effect that does not occur when the explanation is shown without feature highlighting.
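
To make the superpixel variant concrete, here is a minimal, hypothetical sketch of the general idea, assuming scikit-image's SLIC segmenter and a model exposing a predict_proba interface over image arrays; the occlusion scoring below is an illustrative stand-in, not the paper's actual algorithm.

```python
# Hypothetical sketch: score superpixels by occlusion and return the
# most influential "parts" of a test image. Assumes numpy and
# scikit-image; `model.predict_proba` over image batches is an assumed
# interface, not a specific library's API.
import numpy as np
from skimage.segmentation import slic

def highlight_parts(model, image, target_class, n_segments=50, top_k=3):
    """Return boolean masks for the top_k superpixels whose occlusion
    most reduces the target-class probability."""
    segments = slic(image, n_segments=n_segments, start_label=0)
    base = model.predict_proba(image[None])[0, target_class]
    scored = []
    for seg_id in np.unique(segments):
        occluded = image.copy()
        occluded[segments == seg_id] = image.mean(axis=(0, 1))  # grey-out
        prob = model.predict_proba(occluded[None])[0, target_class]
        scored.append((base - prob, seg_id))  # probability drop when hidden
    scored.sort(reverse=True)
    return [segments == seg_id for _, seg_id in scored[:top_k]]
```

Each highlighted part could then be linked back to explanatory training cases by nearest-neighbour search, in the spirit of the latent variant the abstract mentions.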

AIJ Journal 2023 Journal Article

Counterfactual explanations for misclassified images: How human and machine explanations differ

  • Eoin Delaney
  • Arjun Pakrashi
  • Derek Greene
  • Mark T. Keane

Counterfactual explanations have emerged as a popular solution for the eXplainable AI (XAI) problem of elucidating the predictions of black-box deep-learning systems because people easily understand them, they apply across different problem domains and seem to be legally compliant. Although over 100 counterfactual methods exist in the XAI literature, each claiming to generate plausible explanations akin to those preferred by people, few of these methods have actually been tested on users (∼7%). Even fewer studies adopt a user-centered perspective; for instance, by asking people for their counterfactual explanations to determine their perspective on a “good explanation”. This gap in the literature is addressed here using a novel methodology that (i) gathers human-generated counterfactual explanations for misclassified images in two user studies and then (ii) compares these human-generated explanations to computationally generated explanations for the same misclassifications. Results indicate that humans do not “minimally edit” images when generating counterfactual explanations. Instead, they make larger, “meaningful” edits that better approximate prototypes in the counterfactual class. An analysis based on “explanation goals” is proposed to account for this divergence between human and machine explanations. The implications of these proposals for future work are discussed.
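
The "minimal edit" versus "meaningful edit" contrast can be sketched on plain feature vectors; the numpy illustration below is hypothetical and stands in for, rather than reproduces, the paper's image pipeline.

```python
# Hypothetical sketch of the two counterfactual styles the abstract
# contrasts, on feature vectors rather than images (numpy only).
import numpy as np

def minimal_edit_cf(x, X_cf):
    """Machine-style counterfactual: the nearest instance of the
    counterfactual class, i.e. the smallest edit that flips the label."""
    return X_cf[np.argmin(np.linalg.norm(X_cf - x, axis=1))]

def prototype_cf(x, X_cf, step=0.5):
    """Human-style counterfactual: a larger, 'meaningful' edit that
    moves the query toward the counterfactual-class prototype."""
    prototype = X_cf.mean(axis=0)       # centroid of the other class
    return x + step * (prototype - x)   # step toward the prototype
```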

IJCAI Conference 2023 Conference Paper

Even If Explanations: Prior Work, Desiderata & Benchmarks for Semi-Factual XAI

  • Saugat Aryal
  • Mark T. Keane

Recently, eXplainable AI (XAI) research has focused on counterfactual explanations as post-hoc justifications for AI-system decisions (e.g., a customer refused a loan might be told “if you asked for a loan with a shorter term, it would have been approved”). Counterfactuals explain what changes to the input-features of an AI system change the output-decision. However, there is a sub-type of counterfactual, semi-factuals, that have received less attention in AI (though the Cognitive Sciences have studied them more). This paper surveys semi-factual explanation, summarising historical and recent work. It defines key desiderata for semi-factual XAI, reporting benchmark tests of historical algorithms (as well as a novel, naïve method) to provide a solid basis for future developments.
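
A naïve semi-factual baseline in the spirit of the one the abstract mentions can be sketched as follows; the method shown is a generic assumption, not necessarily the paper's benchmarked algorithm.

```python
# Hypothetical naïve semi-factual: among training instances that the
# model assigns the query's own class, return the furthest one, i.e.
# "even if your features changed this much, the decision would stand".
# Assumes numpy and a model exposing a sklearn-style predict().
import numpy as np

def naive_semifactual(model, x, X_train):
    pred = model.predict(x[None])[0]                 # query's own class
    same = X_train[model.predict(X_train) == pred]   # same-class pool
    return same[np.argmax(np.linalg.norm(same - x, axis=1))]
```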

AIJ Journal 2021 Journal Article

Explaining black-box classifiers using post-hoc explanations-by-example: The effect of explanations and error-rates in XAI user studies

  • Eoin M. Kenny
  • Courtney Ford
  • Molly Quinn
  • Mark T. Keane

In this paper, we describe a post-hoc explanation-by-example approach to eXplainable AI (XAI), where a black-box, deep learning system is explained by reference to a more transparent, proxy model (in this situation a case-based reasoner), based on a feature-weighting analysis of the former that is used to find explanatory cases from the latter (as one instance of the so-called Twin Systems approach). A novel method (COLE-HP) for extracting the feature-weights from black-box models is demonstrated for a convolutional neural network (CNN) applied to the MNIST dataset, in which the extracted feature-weights are used to find explanatory nearest-neighbours for test instances. Three user studies are reported, examining people's judgements of right and wrong classifications made by this XAI twin-system in the presence/absence of explanations-by-example and at different error-rates (from 3% to 60%). The judgements gathered include item-level evaluations of both correctness and reasonableness, and system-level evaluations of trust, satisfaction, correctness, and reasonableness. Several proposals are made about the user's mental model in these tasks and how it is impacted by explanations at an item- and system-level. The wider lessons from this work for XAI and its user studies are reviewed.

IJCAI Conference 2021 Conference Paper

If Only We Had Better Counterfactual Explanations: Five Key Deficits to Rectify in the Evaluation of Counterfactual XAI Techniques

  • Mark T. Keane
  • Eoin M. Kenny
  • Eoin Delaney
  • Barry Smyth

In recent years, there has been an explosion of AI research on counterfactual explanations as a solution to the problem of eXplainable AI (XAI). These explanations seem to offer technical, psychological and legal benefits over other explanation techniques. We survey 100 distinct counterfactual explanation methods reported in the literature. This survey addresses the extent to which these methods have been adequately evaluated, both psychologically and computationally, and quantifies the shortfalls occurring. For instance, only 21% of these methods have been user tested. Five key deficits in the evaluation of these methods are detailed and a roadmap, with standardised benchmark evaluations, is proposed to resolve the issues arising: issues that, in effect, currently block scientific progress in this field.

IJCAI Conference 2020 Conference Paper

Bayesian Case-Exclusion and Personalized Explanations for Sustainable Dairy Farming (Extended Abstract)

  • Eoin M. Kenny
  • Elodie Ruelle
  • Anne Geoghegan
  • Laurence Shalloo
  • Micheál O'Leary
  • Michael O'Donovan
  • Mohammed Temraz
  • Mark T. Keane

Smart agriculture (SmartAg) has emerged as a rich domain for AI-driven decision support systems (DSS); however, it is often challenged by user-adoption issues. This paper reports a case-based reasoning (CBR) system, PBI-CBR, that predicts grass growth for dairy farmers, combining predictive accuracy with explanations to improve user adoption. PBI-CBR’s key novelty is its use of Bayesian methods for case-base maintenance in a regression domain. Experiments report the tradeoff between predictive accuracy and explanatory capability for different variants of PBI-CBR, and how updating Bayesian priors each year improves performance.
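
As a hedged illustration only (the abstract does not detail PBI-CBR's maintenance rule), a generic Bayesian case-exclusion heuristic might look like the sketch below, with the prior re-estimated each year as the abstract hints.

```python
# Hypothetical Bayesian case-exclusion: keep a Beta posterior over how
# often each case's reuse yields an acceptable prediction, and exclude
# cases whose posterior mean falls below a threshold. A generic
# stand-in for PBI-CBR's actual maintenance method.
def exclude_cases(reuse_stats, prior=(1.0, 1.0), threshold=0.5):
    """reuse_stats maps case_id -> (successes, failures) from reuse logs."""
    a0, b0 = prior
    keep, drop = [], []
    for case_id, (s, f) in reuse_stats.items():
        posterior_mean = (a0 + s) / (a0 + s + b0 + f)
        (keep if posterior_mean >= threshold else drop).append(case_id)
    return keep, drop
```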

IJCAI Conference 2019 Conference Paper

Twin-Systems to Explain Artificial Neural Networks using Case-Based Reasoning: Comparative Tests of Feature-Weighting Methods in ANN-CBR Twins for XAI

  • Eoin M. Kenny
  • Mark T. Keane

In this paper, twin-systems are described to address the eXplainable artificial intelligence (XAI) problem, where a black box model is mapped to a white box “twin” that is more interpretable, with both systems using the same dataset. The framework is instantiated by twinning an artificial neural network (ANN; black box) with a case-based reasoning system (CBR; white box), and mapping the feature weights from the former to the latter to find cases that explain the ANN’s outputs. Using a novel evaluation method, the effectiveness of this twin-system approach is demonstrated by showing that nearest neighbor cases can be found to match the ANN predictions for benchmark datasets. Several feature-weighting methods are competitively tested in two experiments, including our novel, contributions-based method (called COLE) that is found to perform best. The tests consider the “twinning” of traditional multilayer perceptron (MLP) networks and convolutional neural networks (CNN) with CBR systems. For the CNNs trained on image data, qualitative evidence shows that cases provide plausible explanations for the CNN’s classifications.
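
For intuition, the twin-system retrieval step can be sketched with a linear scorer standing in for the ANN; the real COLE method extracts contributions from network layers, so treat this numpy version as an assumption-laden simplification rather than the paper's algorithm.

```python
# Hypothetical sketch of contribution-weighted case retrieval: reuse
# per-feature contributions (input * weight, for a linear stand-in)
# as the metric weights in a nearest-neighbour (CBR) search, so the
# retrieved cases reflect what drove the prediction.
import numpy as np

def explanatory_cases(x, w, X_train, k=3):
    """Return indices of the k training cases nearest to x under a
    distance weighted by the magnitude of x's feature contributions."""
    weights = np.abs(x * w)  # |contribution| per feature
    d = np.sqrt(((X_train - x) ** 2 * weights).sum(axis=1))
    return np.argsort(d)[:k]
```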

IJCAI Conference 2011 Conference Paper

Mining the Web for the "Voice of the Herd" to Track Stock Market Bubbles

  • Aaron Gerow
  • Mark T. Keane

We show that power-law analyses of financial commentaries from newspaper web-sites can be used to identify stock market bubbles, supplementing traditional volatility analyses. Using a four-year corpus of 17,713 online, finance-related articles (10M+ words) from the Financial Times, the New York Times, and the BBC, we show that week-to-week changes in power-law distributions reflect market movements of the Dow Jones Industrial Average (DJI), the FTSE-100, and the NIKKEI-225. Notably, the statistical regularities in language track the 2007 stock market bubble, showing emerging structure in the language of commentators, as progressively greater agreement arose in their positive perceptions of the market. Furthermore, during the bubble period, a marked divergence in positive language occurs as revealed by a Kullback-Leibler analysis.
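
The two measurements the abstract relies on, a weekly power-law (Zipf) fit and a Kullback-Leibler divergence between weekly word distributions, can be sketched as follows; this numpy version is illustrative, not the authors' exact estimation procedure.

```python
# Hypothetical sketch of the abstract's two measurements over weekly
# token lists: a least-squares Zipf exponent and a smoothed KL
# divergence between word distributions.
import numpy as np
from collections import Counter

def zipf_exponent(tokens):
    """Slope of log(frequency) vs log(rank), fitted by least squares."""
    freqs = np.array(sorted(Counter(tokens).values(), reverse=True), float)
    ranks = np.arange(1, len(freqs) + 1)
    slope, _ = np.polyfit(np.log(ranks), np.log(freqs), 1)
    return slope

def kl_divergence(tokens_p, tokens_q, eps=1e-9):
    """KL(P || Q) over the joint vocabulary, with additive smoothing."""
    vocab = sorted(set(tokens_p) | set(tokens_q))
    cp, cq = Counter(tokens_p), Counter(tokens_q)
    p = np.array([cp[w] + eps for w in vocab]); p /= p.sum()
    q = np.array([cq[w] + eps for w in vocab]); q /= q.sum()
    return float(np.sum(p * np.log(p / q)))
```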

IJCAI Conference 2007 Conference Paper

  • Ronan Mac Ruairi
  • Mark T. Keane

Monitoring a diffuse event with a wireless sensor network differs from well-studied applications such as target tracking and habitat monitoring, and we therefore suggest that new approaches are needed. In this paper we propose a novel low-power technique based on a multiple-agent framework. We show how a set of simple rules can produce complex behavior that encompasses event characterization and data routing. We demonstrate the approach and examine its accuracy and scalability using a simulated gaseous plume monitoring scenario.

IJCAI Conference 2005 Conference Paper

Towards More Intelligent Mobile Search

  • Karen Church
  • Mark T. Keane
  • Barry Smyth

As the mobile Internet continues to grow there is an increasing need to provide users with effective search facilities. In this paper we argue that the standard Web search approach of providing snippet text alongside each result is not appropriate given the interface limitations of mobile devices. Instead we evaluate an alternative approach involving the use of related queries in place of snippet text for result gisting.

AIJ Journal 1998 Journal Article

Adaptation-guided retrieval: questioning the similarity assumption in reasoning

  • Barry Smyth
  • Mark T. Keane

One of the major assumptions in Artificial Intelligence is that similar experiences can guide future reasoning, problem solving and learning: what we will call the similarity assumption. The similarity assumption is used in problem solving and reasoning systems when target problems are dealt with by resorting to a previous situation with common conceptual features. In this article, we question this assumption in the context of case-based reasoning (CBR). In CBR, the similarity assumption plays a central role when new problems are solved, by retrieving similar cases and adapting their solutions. The success of any CBR system is contingent on the retrieval of a case that can be successfully reused to solve the target problem. We show that it is often unwarranted to assume that the most similar case is also the most appropriate from a reuse perspective. We argue that similarity must be augmented by deeper, adaptation knowledge about whether a case can be easily modified to fit a target problem. We implement this idea in a new technique, called adaptation-guided retrieval (AGR), which provides a direct link between retrieval similarity and adaptation needs. This technique uses specially formulated adaptation knowledge, which, during retrieval, facilitates the computation of a precise measure of a case's adaptation requirements. In closing, we assess the broader implications of AGR and argue that it is just one of a growing number of methods that seek to overcome the limitations of the traditional similarity assumption in an effort to deliver more sophisticated and scalable reasoning systems.
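
The shift from similarity-first to adaptability-first retrieval can be sketched abstractly; the adaptation-cost function below is a placeholder, since in AGR proper it derives from explicit adaptation knowledge attached to the case base.

```python
# Hypothetical sketch of adaptation-guided retrieval: prefer the case
# with the lowest estimated adaptation cost, using similarity only to
# break ties; cases with no applicable adaptation rule are skipped.
def agr_retrieve(target, case_base, similarity, adaptation_cost):
    best_key, best_case = None, None
    for case in case_base:
        cost = adaptation_cost(case, target)  # None: no rule applies
        if cost is None:
            continue
        key = (cost, -similarity(case, target))
        if best_key is None or key < best_key:
            best_key, best_case = key, case
    return best_case
```

The design point is simply that the ranking key puts adaptation cost first, so a less similar but cheaply adaptable case beats a highly similar but hard-to-adapt one.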

IJCAI Conference 1995 Conference Paper

Remembering To Forget: A Competence-Preserving Case Deletion Policy for Case-Based Reasoning Systems

  • Barry Smyth
  • Mark T. Keane

The utility problem occurs when the cost associated with searching for relevant knowledge outweighs the benefit of applying this knowledge. One common machine learning strategy for coping with this problem ensures that stored knowledge is genuinely useful, deleting any structures that do not contribute positively to performance, and essentially limiting the size of the knowledge-base. We will examine this deletion strategy in the context of case-based reasoning (CBR) systems. In CBR, the impact of the utility problem is very much dependent on the size and growth of the case-base; larger case-bases mean more expensive retrieval stages, a significant overhead in CBR systems. Traditional deletion strategies will keep performance in check (and thereby control the classical utility problem), but they may cause problems for CBR system competence. This effect is demonstrated experimentally, and in response two new deletion strategies are proposed that can take both competence and performance into consideration during deletion.
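
A competence-aware deletion heuristic in the spirit of the abstract might rank cases by coverage and reachability; the "solves" predicate below is a placeholder for the paper's actual competence model, so this is a sketch under assumptions, not the proposed strategies themselves.

```python
# Hypothetical competence-preserving deletion: estimate each case's
# coverage (other cases it can solve) and reachability (cases that can
# solve it), then delete cases that contribute little and are easily
# re-solved by the rest of the case-base.
def deletion_candidates(case_base, solves, n_delete):
    coverage = {c: sum(solves(c, o) for o in case_base if o is not c)
                for c in case_base}
    reachability = {c: sum(solves(o, c) for o in case_base if o is not c)
                    for c in case_base}
    # Low coverage and high reachability = safest to delete first.
    ranked = sorted(case_base,
                    key=lambda c: (coverage[c], -reachability[c]))
    return ranked[:n_delete]
```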

AIIM Journal 1994 Journal Article

Effective retrieval in Hospital Information Systems: The use of context in answering queries to Patient Discharge Summaries

  • Brenda Nangle
  • Mark T. Keane

The move towards the electronic storage of medical records in Hospital Information Systems (HISs) presents significant challenges for AI retrieval techniques. In this paper, we argue that adequate information retrieval in such systems will have to rely on exploiting the conceptual knowledge in those records rather than on superficial string searches. However, this course of action depends on developments in natural language processing techniques and on retrieval systems that can exploit semantic/conceptual knowledge. We present a retrieval system that attempts to realise the second of these developments. This system, called CONIR [developed in the context of the European Community project MENELAS (AIM 2023)], operates in the domain of Patient Discharge Summaries on coronary illness. CONIR uses flexible retrieval techniques that exploit conceptual context information over a database of elaborated semantic records. In the course of the paper, we outline the sorts of knowledge structures required for this type of retrieval and indicate how they are constructed.