Arrow Research search

Author name cluster

Michael Pazzani

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

7 papers
1 author row

Possible papers (7)

AAAI Conference 2022 Conference Paper

Expert-Informed, User-Centric Explanations for Machine Learning

  • Michael Pazzani
  • Severine Soltani
  • Robert Kaufman
  • Samson Qian
  • Albert Hsiao

We argue that the dominant approach to explainable AI for image classification, annotating images with heatmaps, provides little value for users unfamiliar with deep learning. Instead, explainable AI for images should produce the kind of output experts produce when communicating with one another, with apprentices, and with novices. We provide an expanded set of goals for explainable AI systems and propose a Turing Test for explainable AI.

JBHI Journal 2021 Journal Article

A Comprehensive Explanation Framework for Biomedical Time Series Classification

  • Praharsh Ivaturi
  • Matteo Gadaleta
  • Amitabh C. Pandey
  • Michael Pazzani
  • Steven R. Steinhubl
  • Giorgio Quer

In this study, we propose a post-hoc explainability framework for deep learning models applied to quasi-periodic biomedical time-series classification. As a case study, we focus on the problem of atrial fibrillation (AF) detection from electrocardiography signals, which has strong clinical relevance. Starting from a state-of-the-art pretrained model, we tackle the problem from two different perspectives: global and local explanation. With global explanation, we analyze the model behavior by looking at entire classes of data, showing which regions of the repetitive input patterns have the most influence on a specific model outcome. Our explanation results align with the expectations of clinical experts, showing that features crucial for AF detection contribute heavily to the final decision. These features include R-R interval regularity, absence of the P-wave, or presence of electrical activity in the isoelectric period. With local explanation, on the other hand, we analyze specific input signals and model outcomes. We present a comprehensive analysis of the network facing different conditions, whether the model has correctly classified the input signal or not. This enables a deeper understanding of the network's behavior, showing the most informative regions that trigger the classification decision and highlighting possible causes of misbehavior.

NeurIPS Conference 1996 Conference Paper

Combining Neural Network Regression Estimates with Regularized Linear Weights

  • Christopher Merz
  • Michael Pazzani

When combining a set of learned models to form an improved estimator, the issue of redundancy or multicollinearity in the set of models must be addressed. A progression of existing approaches and their limitations with respect to the redundancy is discussed. A new approach, PCR*, based on principal components regression is proposed to address these limitations. An evaluation of the new approach on a collection of domains reveals that: 1) PCR* was the most robust combination method as the redundancy of the learned models increased, 2) redundancy could be handled without eliminating any of the learned models, and 3) the principal components of the learned models provided a continuum of "regularized" weights from which PCR* could choose.
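The core idea, regressing the target on the principal components of the models' predictions and folding the solution back into per-model weights, can be sketched as follows. This is a minimal illustration of principal components regression for combining estimators, not the paper's PCR* implementation; the toy data, the three redundant models, and the k=2 component cutoff are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: three redundant "learned models", each predicting the
# target y plus its own noise (so their predictions are collinear).
n = 200
y = rng.normal(size=n)
preds = np.column_stack([y + 0.3 * rng.normal(size=n) for _ in range(3)])

# Principal components of the (centered) prediction matrix.
mean = preds.mean(axis=0)
X = preds - mean
_, _, Vt = np.linalg.svd(X, full_matrices=False)

# Keep the top k components, regress y on the component scores,
# then map the solution back to one weight per original model.
k = 2
scores = X @ Vt[:k].T
beta, *_ = np.linalg.lstsq(scores, y - y.mean(), rcond=None)
weights = Vt[:k].T @ beta  # weights on the learned models

combined = (preds - mean) @ weights + y.mean()
print("weights:", np.round(weights, 3))
print("combined MSE:", np.mean((combined - y) ** 2))
```

Dropping the trailing components (rather than dropping whole models) is what lets the combiner handle multicollinearity while still using every learned model, matching point 2) of the abstract.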

AAAI Conference 1993 Conference Paper

Finding Accurate Frontiers: A Knowledge-Intensive Approach to Relational Learning

  • Michael Pazzani

An approach to analytic learning is described that searches for accurate entailments of a Horn Clause domain theory. A hill-climbing search, guided by an information based evaluation function, is performed by applying a set of operators that derive frontiers from domain theories. The analytic learning system is one component of a multi-strategy relational learning system. We compare the accuracy of concepts learned with this analytic strategy to concepts learned with an analytic strategy that operationalizes the domain theory.

IJCAI Conference 1987 Conference Paper

A Comparison of Concept Identification in Human Learning and Network Learning with the Generalized Delta Rule

  • Michael Pazzani
  • Michael Dyer

The generalized delta rule (also known as error backpropagation) is a significant advance over previous procedures for network learning. In this paper, we compare network learning using the generalized delta rule to human learning on two concept identification tasks:

  • Relative ease of concept identification
  • Generalizing from incomplete data
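The generalized delta rule trains multi-layer networks by propagating an error signal backwards through the layers. A minimal sketch on a toy concept identification task (logical AND standing in for the paper's tasks; the architecture, learning rate, and epoch count are assumptions, not the authors' setup):

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Toy concept: the AND of two binary features.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
t = np.array([[0], [0], [0], [1]], dtype=float)

# One hidden layer of two sigmoid units, one sigmoid output unit.
W1 = rng.normal(scale=0.5, size=(2, 2)); b1 = np.zeros(2)
W2 = rng.normal(scale=0.5, size=(2, 1)); b2 = np.zeros(1)

lr = 1.0
for _ in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    y = sigmoid(h @ W2 + b2)
    # Generalized delta rule: output deltas, then hidden deltas
    # obtained by passing the error back through W2.
    delta_out = (y - t) * y * (1 - y)
    delta_hid = (delta_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ delta_out; b2 -= lr * delta_out.sum(axis=0)
    W1 -= lr * X.T @ delta_hid; b1 -= lr * delta_hid.sum(axis=0)

out = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)
print(np.round(out, 2).ravel())
```

The hidden-layer deltas are what the original (single-layer) delta rule lacked: each hidden unit's error is the weighted sum of the output errors it feeds, scaled by the sigmoid derivative h * (1 - h).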

IJCAI Conference 1987 Conference Paper

Using Prior Learning to Facilitate the Learning of New Causal Theories

  • Michael Pazzani
  • Michael Dyer
  • Margot Flowers

We present an approach to learning causal knowledge which lies in between two extremely different approaches to learning:

  • empirical methods (e.g., [12, 17]), which detect similarities and differences between examples to reveal regularities.
  • explanation-based methods (e.g., [13, 4]), which derive a causal explanation for a single event from existing causal knowledge. The event and the causal explanation are generalized to create a new "chunk" of causal knowledge by retaining only those features of the event which were needed to produce the explanation.

In the approach to learning presented in this paper and implemented in a program called OCCAM, prior knowledge indicating what sort of distinctions have proven useful in the past influences the search for causal hypotheses. Our approach to learning shares a goal with explanation-based learning: to allow existing knowledge to facilitate future learning so that fewer examples are required. However, it does not share one shortcoming of explanation-based learning, since it can create causal theories which are not implications of existing causal theories.

AAAI Conference 1986 Conference Paper

Refining the Knowledge Base of a Diagnostic Expert System: An Application of Failure-Driven Learning

  • Michael Pazzani

This paper discusses an application of failure-driven learning to the construction of the knowledge base of a diagnostic expert system. Diagnosis heuristics (i.e., efficient rules which encode empirical associations between atypical device behavior and device failures) are learned from information implicit in device models. This approach is desirable since less effort is required to obtain information about device functionality and connectivity to define device models than to encode and debug diagnosis heuristics from a domain expert. We give results of applying this technique in an expert system for the diagnosis of failures in the attitude control system of the DSCS-III satellite. The system is fully implemented in a combination of LISP and PROLOG on a Symbolics 3600. The results indicate that realistic applications can be built using this approach. The performance of the diagnostic expert system after learning is equivalent to and, in some cases, better than the performance of the expert system with rules supplied by a domain expert.