Arrow Research search

Author name cluster

Lance Kaplan

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

7 papers
1 author row

Possible papers (7)

NeurIPS 2025 · Conference Paper

ADMN: A Layer-Wise Adaptive Multimodal Network for Dynamic Input Noise and Compute Resources

  • Jason Wu
  • Yuyang Yuan
  • Kang Yang
  • Lance Kaplan
  • Mani Srivastava

Multimodal deep learning systems are deployed in dynamic scenarios due to the robustness afforded by multiple sensing modalities. Nevertheless, they struggle with varying compute resource availability (due to multi-tenancy, device heterogeneity, etc.) and fluctuating quality of inputs (from sensor feed corruption, environmental noise, etc.). Statically provisioned multimodal systems cannot adapt when compute resources change over time, while existing dynamic networks struggle with strict compute budgets. Additionally, both systems often neglect the impact of variations in modality quality. Consequently, modalities suffering substantial corruption may needlessly consume resources better allocated towards other modalities. We propose ADMN, a layer-wise Adaptive Depth Multimodal Network capable of tackling both challenges: it adjusts the total number of active layers across all modalities to meet compute resource constraints, and continually reallocates layers across input modalities according to their quality. Our evaluations show that ADMN can match the accuracy of state-of-the-art networks while reducing their floating-point operations by up to 75%.
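
The layer-reallocation idea can be sketched as a simple budgeted split in which each modality's share of the depth budget grows with an externally supplied quality score. The function name, quality scores, and greedy rule below are illustrative assumptions, not ADMN's actual learned controller:

```python
def allocate_layers(quality, total_layers, min_layers=1):
    """Greedily split a total layer budget across modalities in
    proportion to a per-modality quality score. A toy illustration
    of the idea in the abstract, not ADMN's mechanism."""
    names = list(quality)
    alloc = {m: min_layers for m in names}  # every modality keeps a floor
    remaining = total_layers - min_layers * len(names)
    total_q = sum(quality.values())
    for m in names:
        alloc[m] += int(remaining * quality[m] / total_q)
    # Give any rounding leftover to the highest-quality modality.
    leftover = total_layers - sum(alloc.values())
    alloc[max(names, key=lambda m: quality[m])] += leftover
    return alloc

# A clean camera feed receives more depth than a corrupted audio feed.
alloc = allocate_layers({"camera": 0.9, "audio": 0.1}, total_layers=8)
```

A quality-aware split like this captures the abstract's point that heavily corrupted modalities should not consume layers better spent elsewhere.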

AAAI 2020 · Conference Paper

Uncertainty-Aware Deep Classifiers Using Generative Models

  • Murat Sensoy
  • Lance Kaplan
  • Federico Cerutti
  • Maryam Saleki

Deep neural networks are often ignorant about what they do not know and overconfident when they make uninformed predictions. Some recent approaches quantify classification uncertainty directly by training the model to output high uncertainty for the data samples close to class boundaries or from outside of the training distribution. These approaches use an auxiliary data set during training to represent out-of-distribution samples. However, selection or creation of such an auxiliary data set is non-trivial, especially for high-dimensional data such as images. In this work, we develop a novel neural network model that is able to express both aleatoric and epistemic uncertainty to distinguish decision boundary and out-of-distribution regions of the feature space. To this end, variational autoencoders and generative adversarial networks are incorporated to automatically generate out-of-distribution exemplars for training. Through extensive analysis, we demonstrate that the proposed approach provides better estimates of uncertainty for in- and out-of-distribution samples, and adversarial examples on well-known data sets against state-of-the-art approaches, including recent Bayesian approaches for neural networks and anomaly detection methods.

AAAI 2019 · Conference Paper

Probabilistic Logic Programming with Beta-Distributed Random Variables

  • Federico Cerutti
  • Lance Kaplan
  • Angelika Kimmig
  • Murat Şensoy

We enable aProbLog, a probabilistic logic programming approach, to reason in the presence of uncertain probabilities represented as Beta-distributed random variables. We achieve the same performance as state-of-the-art algorithms for highly specified and engineered domains, while maintaining the flexibility offered by aProbLog in handling complex relational domains. Our motivation is that faithfully capturing the distribution of probabilities is necessary to compute an expected utility for effective decision making under uncertainty; unfortunately, these probability distributions can be highly uncertain due to sparse data. To understand and accurately manipulate such probability distributions, we need the well-defined theoretical framework provided by the Beta distribution, which specifies a distribution over all the possible values of a probability when the exact value is unknown.

NeurIPS 2019 · Conference Paper

Spherical Text Embedding

  • Yu Meng
  • Jiaxin Huang
  • Guangyuan Wang
  • Chao Zhang
  • Honglei Zhuang
  • Lance Kaplan
  • Jiawei Han

Unsupervised text embedding has shown great power in a wide range of NLP tasks. While text embeddings are typically learned in Euclidean space, directional similarity is often more effective in tasks such as word similarity and document clustering, which creates a gap between the training stage and the usage stage of text embedding. To close this gap, we propose a spherical generative model, based on which unsupervised word and paragraph embeddings are jointly learned. To learn text embeddings in the spherical space, we develop an efficient optimization algorithm with a convergence guarantee based on Riemannian optimization. Our model enjoys high efficiency and achieves state-of-the-art performance on various text embedding tasks, including word similarity and document clustering.
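
The Riemannian update the abstract refers to can be illustrated with one generic gradient step on the unit sphere: project the Euclidean gradient onto the tangent space, step, then retract by renormalizing. This is a standard textbook sketch with an assumed learning rate, not the paper's exact optimizer:

```python
import numpy as np

def sphere_step(x, euclidean_grad, lr=0.1):
    """One Riemannian gradient-descent step on the unit sphere:
    project the Euclidean gradient onto the tangent space at x,
    take a step, then retract to the sphere by renormalizing."""
    tangent = euclidean_grad - np.dot(euclidean_grad, x) * x
    x_new = x - lr * tangent
    return x_new / np.linalg.norm(x_new)

# Starting point on the sphere; the update keeps it there by construction.
x = np.array([1.0, 0.0, 0.0])
x = sphere_step(x, np.array([0.0, 1.0, 0.0]))
```

Keeping every iterate exactly on the sphere is what closes the train/use gap the abstract mentions: the embedding is optimized in the same directional geometry in which it is later compared.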

NeurIPS 2018 · Conference Paper

Evidential Deep Learning to Quantify Classification Uncertainty

  • Murat Sensoy
  • Lance Kaplan
  • Melih Kandemir

Deterministic neural nets have been shown to learn effective predictors on a wide range of machine learning problems. However, as the standard approach is to train the network to minimize a prediction loss, the resultant model remains ignorant of its prediction confidence. Orthogonally to Bayesian neural nets, which indirectly infer prediction uncertainty through weight uncertainties, we propose explicit modeling of the same using the theory of subjective logic. By placing a Dirichlet distribution on the class probabilities, we treat predictions of a neural net as subjective opinions and learn the function that collects the evidence leading to these opinions by a deterministic neural net from data. The resultant predictor for a multi-class classification problem is another Dirichlet distribution whose parameters are set by the continuous output of a neural net. We provide a preliminary analysis of how the peculiarities of our new loss function drive improved uncertainty estimation. We observe that our method achieves unprecedented success on detection of out-of-distribution queries and endurance against adversarial perturbations.
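
The subjective-logic mapping the abstract describes, from collected evidence to a Dirichlet opinion with an explicit uncertainty mass, can be sketched as follows. The neural net that produces the evidence is omitted, and the evidence values are made up:

```python
def dirichlet_opinion(evidence):
    """Map non-negative per-class evidence e_k to a Dirichlet opinion:
    alpha_k = e_k + 1, belief b_k = e_k / S, uncertainty u = K / S,
    where S is the Dirichlet strength sum_k alpha_k."""
    K = len(evidence)
    alpha = [e + 1.0 for e in evidence]
    S = sum(alpha)
    beliefs = [e / S for e in evidence]
    uncertainty = K / S
    expected_probs = [a / S for a in alpha]  # mean of the Dirichlet
    return beliefs, uncertainty, expected_probs

# No evidence: the uncertainty mass is maximal (u == 1).
b0, u0, p0 = dirichlet_opinion([0.0, 0.0, 0.0])
# Strong evidence for class 0: uncertainty shrinks toward zero.
b1, u1, p1 = dirichlet_opinion([90.0, 5.0, 5.0])
```

An out-of-distribution query that elicits little evidence thus yields near-uniform expected probabilities with high uncertainty, rather than an overconfident softmax output.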

TIST 2018 · Journal Article

GeoBurst+

  • Chao Zhang
  • Dongming Lei
  • Quan Yuan
  • Honglei Zhuang
  • Lance Kaplan
  • Shaowen Wang
  • Jiawei Han

The real-time discovery of local events (e.g., protests, disasters) has been widely recognized as a fundamental socioeconomic task. Recent studies have demonstrated that the geo-tagged tweet stream serves as an unprecedentedly valuable source for local event detection. Nevertheless, how to effectively extract local events from massive geo-tagged tweet streams in real time remains challenging. To bridge the gap, we propose a method for effective and real-time local event detection from geo-tagged tweet streams. Our method, named GeoBurst+, first leverages a novel cross-modal authority measure to identify several pivots in the query window. Such pivots reveal different geo-topical activities and naturally attract similar tweets to form candidate events. GeoBurst+ further summarizes the continuous stream and compares the candidates against the historical summaries to pinpoint truly interesting local events. Better still, as the query window shifts, GeoBurst+ is capable of updating the event list with little time cost, thus achieving continuous monitoring of the stream. We used crowdsourcing to evaluate GeoBurst+ on two million-scale datasets and found it significantly more effective than existing methods while being orders of magnitude faster.

AAMAS 2016 · Conference Paper

SOBE: Source Behavior Estimation for Subjective Opinions in Multiagent Systems (Extended Abstract)

  • Murat Sensoy
  • Lance Kaplan
  • Geeth De Mel
  • Taha D. Gunes

In cooperative or hostile environments, agents communicate their subjective opinions about various phenomena. However, the sources of these opinions may not always be competent and honest; they may be erroneous or even malicious. Furthermore, malicious sources may adopt certain behaviors to mislead the decision maker in a specific way. Fortunately, the reports of such misleading sources are correlated with the ground truth. In this work, we propose to learn statistically meaningful opinion transformations that represent various behaviors of information sources. Then, we exploit these transformations while fusing opinions from unreliable sources. We show that our approach can be used to determine a set of transformations that may lead to more accurate estimation of the truth.