Arrow Research search

Author name cluster

Andrew Ross

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

Possible papers (4)

AAAI 2020 · Conference Paper

Ensembles of Locally Independent Prediction Models

  • Andrew Ross
  • Weiwei Pan
  • Leo Celi
  • Finale Doshi-Velez

Ensembles depend on diversity for improved performance. Many ensemble training methods, therefore, attempt to optimize for diversity, which they almost always define in terms of differences in training set predictions. In this paper, however, we demonstrate that diversity of predictions on the training set does not necessarily imply diversity under mild covariate shift, which can harm generalization in practical settings. To address this issue, we introduce a new diversity metric and associated method of training ensembles of models that extrapolate differently on local patches of the data manifold. Across a variety of synthetic and real-world tasks, we find that our method improves generalization and diversity in qualitatively novel ways, especially under data limits and covariate shift.
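
To make "models that extrapolate differently on local patches of the data manifold" concrete, here is a minimal PyTorch sketch of one plausible penalty: it discourages pairs of ensemble members from having aligned input gradients at the training inputs. The cosine-similarity form, the function names, and the combined objective in the closing comment are illustrative assumptions, not the paper's reference implementation.

```python
import torch
import torch.nn.functional as F

def local_independence_penalty(models, x):
    """Mean squared cosine similarity between input gradients of all model pairs."""
    x = x.clone().requires_grad_(True)
    grads = []
    for m in models:
        out = m(x).sum()  # scalar output so autograd yields d(out)/dx per example
        g, = torch.autograd.grad(out, x, create_graph=True)
        grads.append(g.flatten(start_dim=1))  # (batch, n_features)
    penalty = x.new_zeros(())
    n_pairs = 0
    for i in range(len(grads)):
        for j in range(i + 1, len(grads)):
            cos = F.cosine_similarity(grads[i], grads[j], dim=1)  # per example
            penalty = penalty + (cos ** 2).mean()
            n_pairs += 1
    return penalty / max(n_pairs, 1)

# Hypothetical combined objective, with lambda_div as a tuning knob:
# loss = sum(task_loss(m(x), y) for m in models) \
#        + lambda_div * local_independence_penalty(models, x)
```

Weighting the penalty against the per-model task losses (the lambda_div knob in the closing comment) trades fit on the training set against diversity away from it.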

NeuroImage: Clinical 2020 · Journal Article

Multi-modal normalization of resting-state using local physiology reduces changes in functional connectivity patterns observed in mTBI patients

  • Allen A. Champagne
  • Nicole S. Coverdale
  • Andrew Ross
  • Yining Chen
  • Christopher I. Murray
  • David Dubowitz
  • Douglas J. Cook

Blood oxygenation level dependent (BOLD) resting-state functional magnetic resonance imaging (rs-fMRI) may serve as a sensitive marker to identify possible changes in the architecture of large-scale networks following mild traumatic brain injury (mTBI). Differences in functional connectivity (FC) measurements derived from BOLD rs-fMRI may however be confounded by changes in local cerebrovascular physiology and neurovascular coupling mechanisms, without changes in the underlying neuronally driven connectivity of networks. In this study, multi-modal neuroimaging data including BOLD rs-fMRI, baseline cerebral blood flow (CBF0) and cerebrovascular reactivity (CVR; acquired using a hypercapnic gas breathing challenge) were collected in 23 subjects with reported mTBI (14.6±14.9 months post-injury) and 27 age-matched healthy controls. Despite no group differences in CVR within the networks of interest (P > 0.05, corrected), significantly higher CBF0 was documented in the mTBI subjects (P < 0.05, corrected), relative to the controls. A normalization method designed to account for differences in CBF0 post-mTBI was introduced to evaluate the effects of such an approach on reported group differences in network connectivity. Inclusion of regional perfusion measurements in the computation of correlation coefficients within and across large-scale networks narrowed the differences in FC between the groups, suggesting that this approach may elucidate unique changes in connectivity post-mTBI while accounting for shared variance with CBF0. Altogether, our results provide a strong paradigm supporting the need to account for changes in physiological modulators of BOLD in order to expand our understanding of the effects of brain injury on large-scale FC of cortical networks.
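
As a rough illustration of "inclusion of regional perfusion measurements in the computation of correlation coefficients", the sketch below regresses each edge's Fisher-z FC value on the mean baseline CBF of its two regions across subjects and keeps the residuals for group comparison. This is an assumed simplification of the idea of removing variance shared with CBF0, not the paper's actual normalization pipeline.

```python
import numpy as np

def cbf_adjusted_fc(fc_edge, cbf_pair_mean):
    """Remove CBF0-shared variance from one FC edge across subjects.

    fc_edge: (n_subjects,) Fisher-z FC values for one network edge.
    cbf_pair_mean: (n_subjects,) mean baseline CBF of the edge's two regions.
    """
    # Ordinary least squares of FC on an intercept plus regional CBF0.
    X = np.column_stack([np.ones_like(cbf_pair_mean), cbf_pair_mean])
    beta, *_ = np.linalg.lstsq(X, fc_edge, rcond=None)
    return fc_edge - X @ beta  # residual FC, orthogonal to CBF0
```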

NeurIPS 2018 · Conference Paper

Human-in-the-Loop Interpretability Prior

  • Isaac Lage
  • Andrew Ross
  • Samuel Gershman
  • Been Kim
  • Finale Doshi-Velez

We often desire our models to be interpretable as well as accurate. Prior work on optimizing models for interpretability has relied on easy-to-quantify proxies for interpretability, such as sparsity or the number of operations required. In this work, we optimize for interpretability by directly including humans in the optimization loop. We develop an algorithm that minimizes the number of user studies to find models that are both predictive and interpretable and demonstrate our approach on several datasets. Our human-subject results show trends towards different proxy notions of interpretability on different datasets, which suggests that different proxies are preferred on different tasks.
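
A minimal sketch of "directly including humans in the optimization loop": among similarly accurate candidate models, repeatedly run a small user study on the model whose interpretability estimate is most promising, then update that estimate. The upper-confidence-bound rule and the run_user_study callback below are hypothetical stand-ins, not the paper's algorithm.

```python
import math
import random

def select_interpretable_model(candidates, run_user_study, budget, c=1.0):
    """Pick the most human-interpretable model within a user-study budget."""
    mean_score = {m: 0.0 for m in candidates}  # running mean interpretability
    n_studies = {m: 0 for m in candidates}
    for t in range(1, budget + 1):
        untried = [m for m in candidates if n_studies[m] == 0]
        if untried:
            chosen = random.choice(untried)  # study every model at least once
        else:
            # UCB: favor models with high scores or few studies so far.
            chosen = max(candidates, key=lambda m: mean_score[m]
                         + c * math.sqrt(math.log(t) / n_studies[m]))
        score = run_user_study(chosen)  # e.g. inverse of mean human response time
        n_studies[chosen] += 1
        mean_score[chosen] += (score - mean_score[chosen]) / n_studies[chosen]
    return max(candidates, key=lambda m: mean_score[m])
```

The bandit framing is what keeps the number of user studies small: uninformative models stop being queried once their confidence intervals fall below the leader's.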

AAAI 2018 · Conference Paper

Improving the Adversarial Robustness and Interpretability of Deep Neural Networks by Regularizing Their Input Gradients

  • Andrew Ross
  • Finale Doshi-Velez

Deep neural networks have proven remarkably effective at solving many classification problems, but have been criticized recently for two major weaknesses: the reasons behind their predictions are uninterpretable, and the predictions themselves can often be fooled by small adversarial perturbations. These problems pose major obstacles for the adoption of neural networks in domains that require security or transparency. In this work, we evaluate the effectiveness of defenses that differentiably penalize the degree to which small changes in inputs can alter model predictions. Across multiple attacks, architectures, defenses, and datasets, we find that neural networks trained with this input gradient regularization exhibit robustness to transferred adversarial examples generated to fool all of the other models. We also find that adversarial examples generated to fool gradient-regularized models fool all other models equally well, and actually lead to more “legitimate,” interpretable misclassifications as rated by people (which we confirm in a human subject experiment). Finally, we demonstrate that regularizing input gradients makes them more naturally interpretable as rationales for model predictions. We conclude by discussing this relationship between interpretability and robustness in deep neural networks.
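
The defense evaluated here penalizes the degree to which small input changes can alter predictions; a minimal PyTorch sketch of one such differentiable penalty (the squared norm of the loss gradient with respect to the inputs, in the spirit of double backpropagation) could look like the following. The cross-entropy base loss and the default lambda_reg are illustrative assumptions, and details may differ from the paper's formulation.

```python
import torch
import torch.nn.functional as F

def gradient_regularized_loss(model, x, y, lambda_reg=0.1):
    """Cross-entropy plus the squared norm of its gradient w.r.t. the inputs."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    # create_graph=True keeps the gradient differentiable so the
    # penalty itself can be backpropagated through ("double backprop").
    grad_x, = torch.autograd.grad(loss, x, create_graph=True)
    # Sum over feature dimensions, average over the batch.
    penalty = grad_x.pow(2).sum(dim=tuple(range(1, grad_x.dim()))).mean()
    return loss + lambda_reg * penalty
```

Training on this combined objective flattens the loss surface around each input, so small perturbations move the prediction less, which is the mechanism behind both the robustness and the interpretability effects described above.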