Arrow Research search

Author name cluster

Vasant Honavar

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

15 papers
1 author row

Possible papers

AAAI Conference 2025 Conference Paper

Checking Consistency of CP-Theory Preferences in Polynomial Time

  • Erik Rauer
  • Samik Basu
  • Vasant Honavar

We investigate the problem of checking the consistency of qualitative preferences expressed in CP-theory. This problem is PSPACE-complete even when the preferences are locally consistent or the preference variables have binary domains. We present a new sufficient condition for consistency of preferences and show that the condition can be checked in polynomial time in settings of practical relevance (locally consistent preferences or binary-domain preference variables). We further show how the resulting sufficient condition can be used to efficiently identify a subset of outcomes that are non-dominated with respect to a set of qualitative preferences.
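
A toy sketch of the underlying idea (this is not the paper's sufficient condition): a strict preference relation over outcomes is consistent only if the induced dominance graph is acyclic, since a cycle would make some outcome strictly preferred to itself. The outcomes and edges below are hypothetical.

```python
def has_cycle(edges):
    """Detect a cycle in a directed graph given as (better, worse) pairs."""
    graph = {}
    for u, v in edges:
        graph.setdefault(u, []).append(v)
        graph.setdefault(v, [])
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {node: WHITE for node in graph}

    def dfs(node):
        color[node] = GRAY
        for nxt in graph[node]:
            if color[nxt] == GRAY:
                return True          # back edge: cycle found
            if color[nxt] == WHITE and dfs(nxt):
                return True
        color[node] = BLACK
        return False

    return any(color[n] == WHITE and dfs(n) for n in list(graph))

# 'ab' > 'aB' > 'Ab' is acyclic; adding 'Ab' > 'ab' closes a cycle.
assert not has_cycle([("ab", "aB"), ("aB", "Ab")])
assert has_cycle([("ab", "aB"), ("aB", "Ab"), ("Ab", "ab")])
```

The check runs in time linear in the number of outcomes and edges; the paper's contribution is a condition that avoids materializing the (exponentially large) outcome graph in the first place.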

NeurIPS Conference 2025 Conference Paper

Simple Distillation for One-Step Diffusion Models

  • Huaisheng Zhu
  • Teng Xiao
  • Shijie Zhou
  • Zhimeng Guo
  • Hangfan Zhang
  • Siyuan Xu
  • Vasant Honavar

Diffusion models have established themselves as leading techniques for image generation. However, their reliance on an iterative denoising process results in slow sampling speeds, which limits their applicability to interactive and creative applications. One approach to overcoming this limitation is to distill multistep diffusion models into efficient one-step generators. However, existing distillation methods typically suffer performance degradation or require complex iterative training procedures that increase complexity and computational cost. In this paper, we propose Contrastive Energy Distillation (CED), a simple yet effective approach for distilling multistep diffusion models into effective one-step generators. Our key innovation is the introduction of an unnormalized joint energy-based model (EBM) that represents the generator and an auxiliary score model. CED optimizes a Noise Contrastive Estimation (NCE) objective to efficiently transfer knowledge from a multistep teacher diffusion model without additional modules or iterative training complexity. We further show that CED implicitly optimizes the KL divergence between the distributions modeled by the multistep diffusion model and the one-step generator. We present results of experiments demonstrating that CED achieves performance competitive with representative baselines for distilling multistep diffusion models while maintaining excellent memory efficiency.
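
A minimal sketch of the Noise Contrastive Estimation idea the abstract builds on (this is not the paper's CED objective): train a logistic classifier to distinguish data samples from noise samples using the difference of unnormalized log-densities. The densities and samples below are hypothetical 1-D stand-ins.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def nce_loss(data, noise, log_p_model, log_p_noise):
    """Average logistic loss for the data-vs-noise classification task."""
    loss = 0.0
    for x in data:                       # label 1: sample came from the data
        loss -= math.log(sigmoid(log_p_model(x) - log_p_noise(x)))
    for x in noise:                      # label 0: sample came from the noise
        loss -= math.log(1.0 - sigmoid(log_p_model(x) - log_p_noise(x)))
    return loss / (len(data) + len(noise))

# Unnormalized Gaussian log-density with scale s (energy = x^2 / (2 s^2)).
def make_log_p(s):
    return lambda x: -x * x / (2.0 * s * s)

data = [-1.0, -0.5, 0.0, 0.5, 1.0]       # roughly scale-1 "data"
noise = [-4.0, -2.0, 2.0, 4.0]           # broad "noise" samples
log_p_noise = make_log_p(3.0)

# The NCE loss prefers a model whose scale matches the data's spread.
loss_good = nce_loss(data, noise, make_log_p(1.0), log_p_noise)
loss_bad = nce_loss(data, noise, make_log_p(3.0), log_p_noise)
assert loss_good < loss_bad
```

Because the classifier only ever sees log-density *differences*, the model's normalizing constant never needs to be computed, which is what makes NCE attractive for unnormalized energy-based models like the one the abstract describes.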

AAAI Conference 2024 Conference Paper

Inducing Clusters Deep Kernel Gaussian Process for Longitudinal Data

  • Junjie Liang
  • Weijieying Ren
  • Hanifi Sahar
  • Vasant Honavar

We consider the problem of predictive modeling from irregularly and sparsely sampled longitudinal data with unknown, complex correlation structures and abrupt discontinuities. To address these challenges, we introduce a novel inducing clusters longitudinal deep kernel Gaussian Process (ICDKGP). ICDKGP approximates the data generating process by a zero-mean GP with a longitudinal deep kernel that models the unknown complex correlation structure in the data and a deterministic non-zero mean function to model the abrupt discontinuities. To improve the scalability and interpretability of ICDKGP, we introduce inducing clusters corresponding to centers of clusters in the training data. We formulate the training of ICDKGP as a constrained optimization problem and derive its evidence lower bound. We introduce a novel relaxation of the resulting problem which under rather mild assumptions yields a solution with error bounded relative to the original problem. We describe the results of extensive experiments demonstrating that ICDKGP substantially outperforms the state-of-the-art longitudinal methods on data with both smoothly and non-smoothly varying outcomes.
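
A minimal sketch of the inducing-point idea the abstract builds on, using a plain RBF kernel and the standard Subset-of-Regressors approximation rather than the paper's longitudinal deep kernel or inducing clusters; all data below is synthetic.

```python
import numpy as np

def rbf(a, b, lengthscale=1.0):
    """RBF kernel matrix between 1-D input arrays a and b."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return np.exp(-0.5 * d2 / lengthscale**2)

def sor_predict(x_train, y_train, z, x_test, noise_var=0.1):
    """Subset-of-Regressors GP posterior mean routed through inducing inputs z."""
    kzz = rbf(z, z)
    kzx = rbf(z, x_train)
    ksz = rbf(x_test, z)
    # Only m x m systems are solved (m = #inducing inputs), not n x n.
    a = kzx @ kzx.T + noise_var * kzz
    w = np.linalg.solve(a, kzx @ y_train)
    return ksz @ w

x = np.linspace(0.0, 6.0, 50)
y = np.sin(x)
z = np.linspace(0.0, 6.0, 8)     # 8 inducing inputs summarize 50 training points
mean = sor_predict(x, y, z, x)
```

The point of the construction, as in the abstract, is scalability: the expensive linear algebra involves only the inducing inputs, and their locations (cluster centers, in the paper) also give the model an interpretable summary of the data.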

JBHI Journal 2023 Journal Article

Forecasting User Interests Through Topic Tag Predictions in Online Health Communities

  • Amogh Subbakrishna Adishesha
  • Lily Jakielaszek
  • Fariha Azhar
  • Peixuan Zhang
  • Vasant Honavar
  • Fenglong Ma
  • Chandra Belani
  • Prasenjit Mitra

The increasing reliance on online communities for healthcare information by patients and caregivers has led to an increase in the spread of misinformation, or subjective, anecdotal, inaccurate, or non-specific recommendations, which, if acted on, could cause serious harm to the patients. Hence, there is an urgent need to connect users with accurate and tailored health information in a timely manner to prevent such harm. This article proposes an innovative approach to suggesting reliable information to participants in online communities as they move through different stages in their disease or treatment. We hypothesize that patients with similar histories of disease progression or course of treatment would have similar information needs at comparable stages. Specifically, we pose the problem of predicting topic tags or keywords that describe the future information needs of users based on their profiles, traces of their online interactions within the community (past posts, replies), and the profiles and traces of online interactions of other users with similar profiles and similar traces of past interaction with the target users. The result is a variant of the collaborative information filtering or recommendation system tailored to the needs of users of online health communities. We report results of our experiments on two unique datasets from two different social media platforms which demonstrate the superiority of the proposed approach over the state-of-the-art baselines with respect to accurate and timely prediction of topic tags (and hence information sources of interest).
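
A toy sketch of the collaborative-filtering idea the abstract describes (the paper's model is considerably richer): score candidate topic tags for a target user by weighting other users' tag histories with the cosine similarity of their profiles. The users and tags below are hypothetical.

```python
import math

def cosine(u, v):
    """Cosine similarity between two sparse count vectors (dicts)."""
    keys = set(u) | set(v)
    dot = sum(u.get(k, 0) * v.get(k, 0) for k in keys)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def recommend_tags(target, others, top_k=1):
    """Rank tags the target has not used yet by similarity-weighted counts."""
    scores = {}
    for hist in others:
        w = cosine(target, hist)
        for tag, count in hist.items():
            if tag not in target:
                scores[tag] = scores.get(tag, 0.0) + w * count
    return sorted(scores, key=scores.get, reverse=True)[:top_k]

# Tag-usage histories (tag -> count).
target = {"diagnosis": 3, "biopsy": 2}
others = [
    {"diagnosis": 2, "biopsy": 2, "chemo": 4},   # similar history: large weight
    {"nutrition": 5, "exercise": 3},             # dissimilar: near-zero weight
]
print(recommend_tags(target, others))  # -> ['chemo']
```

The hypothesis in the abstract maps directly onto this scheme: a user whose early-stage history resembles the target's contributes the tags of their own *later* stages with high weight, anticipating the target's future information needs.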

AAAI Conference 2020 Short Paper

Algorithmic Bias in Recidivism Prediction: A Causal Perspective (Student Abstract)

  • Aria Khademi
  • Vasant Honavar

ProPublica’s analysis of recidivism predictions produced by the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS) software tool has shown that the predictions were racially biased against African American defendants. We analyze the COMPAS data using a causal reformulation of the underlying algorithmic fairness problem. Specifically, we assess whether COMPAS exhibits racial bias against African American defendants using FACT, a recently introduced causality-grounded measure of algorithmic fairness. We use the Neyman-Rubin potential outcomes framework for causal inference from observational data to estimate FACT from COMPAS data. Our analysis offers strong evidence that COMPAS exhibits racial bias against African American defendants. We further show that the FACT estimates from COMPAS data are robust in the presence of unmeasured confounding.

AAAI Conference 2020 Conference Paper

LMLFM: Longitudinal Multi-Level Factorization Machine

  • Junjie Liang
  • Dongkuan Xu
  • Yiwei Sun
  • Vasant Honavar

We consider the problem of learning predictive models from longitudinal data, consisting of irregularly repeated, sparse observations from a set of individuals over time. Such data often exhibit longitudinal correlation (LC) (correlations among observations for each individual over time), cluster correlation (CC) (correlations among individuals that have similar characteristics), or both. These correlations are often accounted for using mixed effects models that include fixed effects and random effects, where the fixed effects capture the regression parameters that are shared by all individuals, whereas random effects capture those parameters that vary across individuals. However, the current state-of-the-art methods are unable to select the most predictive fixed effects and random effects from a large number of variables, while accounting for complex correlation structure in the data and non-linear interactions among the variables. We propose the Longitudinal Multi-Level Factorization Machine (LMLFM), to the best of our knowledge the first model to address these challenges in learning predictive models from longitudinal data. We establish the convergence properties, and analyze the computational complexity, of LMLFM. We present results of experiments with both simulated and real-world longitudinal data which show that LMLFM outperforms the state-of-the-art methods in terms of predictive accuracy, variable selection ability, and scalability to data with a large number of variables. The code and supplemental material are available at https://github.com/junjieliang672/LMLFM.
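
A sketch of the factorization-machine building block LMLFM extends (this is the plain FM prediction equation, not the longitudinal multi-level variant): pairwise feature interactions ⟨v_i, v_j⟩ x_i x_j are computed in O(nk) via the standard sum-of-squares identity rather than the naive O(n²k) double loop.

```python
import numpy as np

def fm_predict(w0, w, V, x):
    """FM: y = w0 + <w, x> + 0.5 * sum_f [ (Vx)_f^2 - (V^2 x^2)_f ]."""
    vx = V.T @ x                       # shape (k,): factor-wise weighted sums
    v2x2 = (V ** 2).T @ (x ** 2)       # shape (k,): removes the i == j terms
    return w0 + w @ x + 0.5 * np.sum(vx ** 2 - v2x2)

def fm_naive(w0, w, V, x):
    """Reference implementation: explicit loop over feature pairs."""
    y = w0 + w @ x
    n = len(x)
    for i in range(n):
        for j in range(i + 1, n):
            y += (V[i] @ V[j]) * x[i] * x[j]
    return y

rng = np.random.default_rng(0)
x = rng.normal(size=5)
w = rng.normal(size=5)
V = rng.normal(size=(5, 3))            # k = 3 latent factors per feature
assert np.isclose(fm_predict(0.5, w, V, x), fm_naive(0.5, w, V, x))
```

The low-rank factors V are what let an FM estimate interactions between variable pairs that rarely co-occur, which is why the abstract's variable-selection and sparsity claims are plausible at scale.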

IJCAI Conference 2019 Conference Paper

MEGAN: A Generative Adversarial Network for Multi-View Network Embedding

  • Yiwei Sun
  • Suhang Wang
  • Tsung-Yu Hsieh
  • Xianfeng Tang
  • Vasant Honavar

Data from many real-world applications can be naturally represented by multi-view networks where the different views encode different types of relationships (e.g., friendship, shared interests in music, etc.) between real-world individuals or entities. There is an urgent need for methods to obtain low-dimensional, information preserving and typically nonlinear embeddings of such multi-view networks. However, most of the work on multi-view learning focuses on data that lack a network structure, and most of the work on network embeddings has focused primarily on single-view networks. Against this background, we consider the multi-view network representation learning problem, i.e., the problem of constructing low-dimensional information preserving embeddings of multi-view networks. Specifically, we investigate a novel Generative Adversarial Network (GAN) framework for Multi-View Network Embedding, namely MEGAN, aimed at preserving the information from the individual network views, while accounting for connectivity across (and hence complementarity of and correlations between) different views. The results of our experiments on two real-world multi-view data sets show that the embeddings obtained using MEGAN outperform the state-of-the-art methods on node classification, link prediction and visualization tasks.

AAAI Conference 2016 Conference Paper

On Learning Causal Models from Relational Data

  • Sanghack Lee
  • Vasant Honavar

Many applications call for learning causal models from relational data. We investigate Relational Causal Models (RCM) under relational counterparts of adjacency-faithfulness and orientation-faithfulness, yielding a simple approach to identifying a subset of relational d-separation queries needed for determining the structure of an RCM using d-separation against an unrolled DAG representation of the RCM. We provide an original theoretical analysis that offers the basis of a sound and efficient algorithm for learning the structure of an RCM from relational data. We describe RCD-Light, a sound and efficient constraint-based algorithm that is guaranteed to yield a correct partially-directed RCM structure with at least as many edges oriented as in that produced by RCD, the only other existing algorithm for learning RCM. We show that unlike RCD, which requires exponential time and space, RCD-Light requires only polynomial time and space to orient the dependencies of a sparse RCM.

AAAI Conference 2013 Conference Paper

m-Transportability: Transportability of a Causal Effect from Multiple Environments

  • Sanghack Lee
  • Vasant Honavar

We study m-transportability, a generalization of transportability, which offers a license to use causal information elicited from experiments and observations in m ≥ 1 source environments to estimate a causal effect in a given target environment. We provide a novel characterization of m-transportability that directly exploits the completeness of do-calculus to obtain the necessary and sufficient conditions for m-transportability. We provide an algorithm for deciding m-transportability that determines whether a causal relation is m-transportable; and if it is, produces a transport formula, that is, a recipe for estimating the desired causal effect by combining experimental information from the m source environments with observational information from the target environment.

NeurIPS Conference 2013 Conference Paper

Transportability from Multiple Environments with Limited Experiments

  • Elias Bareinboim
  • Sanghack Lee
  • Vasant Honavar
  • Judea Pearl

This paper considers the problem of transferring experimental findings learned from multiple heterogeneous domains to a target environment, in which only limited experiments can be performed. We reduce questions of transportability from multiple domains and with limited scope to symbolic derivations in the do-calculus, thus extending the treatment of transportability from full experiments introduced in Pearl and Bareinboim (2011). We further provide different graphical and algorithmic conditions for computing the transport formula for this setting, that is, a way of fusing the observational and experimental information scattered throughout different domains to synthesize a consistent estimate of the desired effects.

IJCAI Conference 2011 Conference Paper

On the Utility of Curricula in Unsupervised Learning of Probabilistic Grammars

  • Kewei Tu
  • Vasant Honavar

We examine the utility of a curriculum (a means of presenting training samples in a meaningful order) in unsupervised learning of probabilistic grammars. We introduce the incremental construction hypothesis, which explains the benefits of a curriculum in learning grammars and offers some useful insights into the design of curricula as well as learning algorithms. We present results of experiments with (a) carefully crafted synthetic data that provide support for our hypothesis and (b) a natural language corpus that demonstrates the utility of curricula in unsupervised learning of probabilistic grammars.

AAAI Conference 2011 Conference Paper

Verifying Intervention Policies to Counter Infection Propagation over Networks: A Model Checking Approach

  • Ganesh Ram Santhanam
  • Yuly Suvorov
  • Samik Basu
  • Vasant Honavar

Spread of infections (diseases, ideas, etc.) in a network can be modeled as the evolution of states of nodes in a graph as a function of the states of their neighbors. Given an initial configuration of a network in which a subset of the nodes have been infected, and an infection propagation function that specifies how the states of the nodes evolve over time, we show how to use model checking to identify, verify, and evaluate the effectiveness of intervention policies for containing the propagation of infection over such networks.
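
A minimal sketch of the propagation model the abstract describes: node states evolve synchronously as a function of neighbor states. Here a node becomes infected once at least `threshold` of its neighbors are infected; the graph, threshold, and edge-removal intervention below are all hypothetical.

```python
def step(adj, infected, threshold=1):
    """One synchronous update of the infection state."""
    new = set(infected)
    for node, neighbors in adj.items():
        if node not in infected:
            if sum(1 for nb in neighbors if nb in infected) >= threshold:
                new.add(node)
    return new

def propagate(adj, infected, steps):
    """Run the propagation function for a fixed number of steps."""
    for _ in range(steps):
        infected = step(adj, infected)
    return infected

# A small path network 0-1-2-3; an "intervention" removes the 1-2 edge.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
cut = {0: [1], 1: [0], 2: [3], 3: [2]}

print(sorted(propagate(adj, {0}, 3)))   # [0, 1, 2, 3]: infection reaches all
print(sorted(propagate(cut, {0}, 3)))   # [0, 1]: containment succeeds
```

A model checker goes beyond this fixed-horizon simulation: it can verify temporal properties such as "node 3 is never infected" over *all* reachable states, which is what lets the paper certify an intervention policy rather than merely test it.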

AAAI Conference 2010 Conference Paper

Dominance Testing via Model Checking

  • Ganesh Ram Santhanam
  • Samik Basu
  • Vasant Honavar

Dominance testing, the problem of determining whether an outcome is preferred over another, is of fundamental importance in many applications. Hence, there is a need for algorithms and tools for dominance testing. CP-nets and TCP-nets are some of the widely studied languages for representing and reasoning with preferences. We reduce dominance testing in TCP-nets to reachability analysis in a graph of outcomes. We provide an encoding of TCP-nets in the form of a Kripke structure for CTL. We show how to compute dominance using NuSMV, a model checker for CTL. We present results of experiments that demonstrate the feasibility of our approach to dominance testing.
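
A sketch of the reduction the abstract describes: dominance testing becomes reachability in a graph whose edges are single "improving flips" between outcomes. The preference graph below is a hypothetical toy over two binary variables, not an actual TCP-net encoding.

```python
from collections import deque

def dominates(improving_flips, worse, better):
    """BFS: is `better` reachable from `worse` via improving flips?"""
    frontier = deque([worse])
    seen = {worse}
    while frontier:
        outcome = frontier.popleft()
        if outcome == better:
            return True
        for nxt in improving_flips.get(outcome, []):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return False

# Outcomes over two binary variables; each edge improves exactly one variable.
flips = {"ab": ["Ab", "aB"], "aB": ["AB"], "Ab": ["AB"]}

print(dominates(flips, "ab", "AB"))   # True: AB dominates ab
print(dominates(flips, "AB", "ab"))   # False: no improving path back
```

Explicit BFS requires materializing the exponentially large outcome graph; the paper's move is to hand the same reachability question, encoded as a Kripke structure and a CTL formula, to a symbolic model checker (NuSMV) that explores the graph implicitly.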

KR Conference 2010 Conference Paper

Efficient Dominance Testing for Unconditional Preferences

  • Ganesh Ram Santhanam
  • Samik Basu
  • Vasant Honavar

We study a dominance relation for comparing outcomes based on unconditional qualitative preferences and compare it with its unconditional counterparts for TCP-nets and their variants. Dominance testing based on this relation can be carried out in polynomial time by evaluating the satisfiability of a logic formula.

IJCAI Conference 1989 Conference Paper

Generation, Local Receptive Fields and Global Convergence Improve Perceptual Learning in Connectionist Networks

  • Vasant Honavar
  • Leonard Uhr

This paper presents and compares results for three types of connectionist networks on perceptual learning tasks: [A] Multi-layered converging networks of neuron-like units, with each unit connected to a small randomly chosen subset of units in the adjacent layers, that learn by re-weighting of their links; [B] Networks of neuron-like units structured into successively larger modules under brain-like topological constraints (such as layered, converging-diverging hierarchies and local receptive fields) that learn by re-weighting of their links; [C] Networks with brain-like structures that learn by generation-discovery, which involves the growth of links and recruiting of units in addition to re-weighting of links. Preliminary empirical results from simulation of these networks for perceptual recognition tasks show significant improvements in learning from using brain-like structures (e.g., local receptive fields, global convergence) over networks that lack such structure; further improvements in learning result from the use of generation in addition to re-weighting of links.