Arrow Research search

Author name cluster

Aneesh Komanduri

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

4 papers
2 author rows

Possible papers (4)

AAAI Conference 2025 Short Paper

Toward Causal Generative Modeling: From Representation to Generation

  • Aneesh Komanduri

Deep learning has given rise to the field of representation learning, which aims to automatically extract rich semantics from data. However, deep learning models still face persistent challenges in generalization. Recent works have highlighted beneficial properties of causal models that are desirable for learning models that remain robust under distribution shifts. This has spurred growing interest in causal representation learning as a route to generalizability in tasks involving reasoning and planning. The goal of my dissertation is to develop theoretical intuitions and practical algorithms that uncover the nature of causal representations and their applications. My work focuses on causal generative modeling, with an emphasis on either representation or generation. For representation learning, I investigate the disentanglement of causal representations through the lens of independent causal mechanisms. For generation, I develop algorithms for counterfactual generation under weak supervision by leveraging recent advances in generative modeling. The proposed approaches have been empirically shown to be effective at achieving disentanglement and generating counterfactuals.

ECAI Conference 2024 Conference Paper

Causal Diffusion Autoencoders: Toward Counterfactual Generation via Diffusion Probabilistic Models

  • Aneesh Komanduri
  • Chen Zhao 0010
  • Feng Chen 0001
  • Xintao Wu

Diffusion probabilistic models (DPMs) have become the state of the art in high-quality image generation. However, DPMs have an arbitrary noisy latent space with no interpretable or controllable semantics. Although significant research effort has gone into improving image sample quality, there is little work on representation-controlled generation with diffusion models; in particular, causal modeling and controllable counterfactual generation using DPMs remain underexplored. In this work, we propose CausalDiffAE, a diffusion-based causal representation learning framework that enables counterfactual generation according to a specified causal model. Our key idea is to use an encoder to extract high-level, semantically meaningful causal variables from high-dimensional data and to model stochastic variation using reverse diffusion. We propose a causal encoding mechanism that maps high-dimensional data to causally related latent factors and parameterize the causal mechanisms among latent factors using neural networks. To enforce the disentanglement of causal variables, we formulate a variational objective and leverage auxiliary label information in a prior to regularize the latent space. We propose a DDIM-based counterfactual generation procedure subject to do-interventions. Finally, to address limited label supervision, we also study the application of CausalDiffAE when part of the training data is unlabeled, which additionally enables granular control over intervention strength when generating counterfactuals at inference time. We empirically show that CausalDiffAE learns a disentangled latent space and is capable of generating high-quality counterfactual images.

TMLR Journal 2024 Journal Article

From Identifiable Causal Representations to Controllable Counterfactual Generation: A Survey on Causal Generative Modeling

  • Aneesh Komanduri
  • Xintao Wu
  • Yongkai Wu
  • Feng Chen

Deep generative models have shown tremendous capability in data density estimation and data generation from finite samples. While these models have achieved impressive performance by learning correlations among features in the data, they suffer from fundamental shortcomings: a lack of explainability, a tendency to induce spurious correlations, and poor out-of-distribution extrapolation. To remedy such challenges, recent work has proposed a shift toward causal generative models. Causal models offer deep generative models several beneficial properties, such as robustness to distribution shift, fairness, and interpretability. Structural causal models (SCMs) describe data-generating processes and model complex causal relationships and mechanisms among variables in a system; thus, SCMs can naturally be combined with deep generative models. We provide a technical survey on causal generative modeling, categorized into causal representation learning and controllable counterfactual generation methods. We focus on fundamental theory, methodology, drawbacks, datasets, and metrics. We then cover applications of causal generative models in fairness, privacy, out-of-distribution generalization, precision medicine, and the biological sciences. Lastly, we discuss open problems and fruitful research directions for future work in the field.

IJCAI Conference 2024 Conference Paper

Learning Causally Disentangled Representations via the Principle of Independent Causal Mechanisms

  • Aneesh Komanduri
  • Yongkai Wu
  • Feng Chen
  • Xintao Wu

Learning disentangled causal representations is a challenging problem that has recently gained significant attention due to its implications for extracting meaningful information for downstream tasks. In this work, we define a new notion of causal disentanglement from the perspective of independent causal mechanisms. We propose ICM-VAE, a framework for learning causally disentangled representations supervised by causally related observed labels. We model causal mechanisms using learnable nonlinear flow-based diffeomorphic functions that map noise variables to latent causal variables. Further, to promote the disentanglement of causal factors, we propose a causal disentanglement prior learned from auxiliary labels and the latent causal structure. We theoretically show the identifiability of causal factors and mechanisms up to permutation and elementwise reparameterization. We empirically demonstrate that our framework induces highly disentangled causal factors, improves interventional robustness, and is compatible with counterfactual generation.