Arrow Research search

Author name cluster

Ramesh Raskar

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

16 papers
2 author rows

Possible papers

16

AAAI Conference 2025 Conference Paper

Co-Dream: Collaborative Dream Synthesis over Decentralized Models

  • Abhishek Singh
  • Gauri Gupta
  • Yichuan Shi
  • Alex Dang
  • Ritvik Kapila
  • Sheshank Shankar
  • Mohammed Ehab
  • Ramesh Raskar

Federated Learning (FL) has pioneered the idea of "share wisdom, not raw data" to enable collaborative learning over decentralized data. FL achieves this goal by averaging model parameters instead of centralizing data. However, representing "wisdom" in the form of model parameters has its own limitations, including the requirement for uniform model architectures across clients and communication overhead proportional to model size. In this work, we introduce Co-Dream, a framework for representing "wisdom" in data space instead of model parameters. Here, clients collaboratively optimize random inputs based on their locally trained models and aggregate the gradients of those inputs. Our proposed approach overcomes the aforementioned limitations and comes with additional benefits such as adaptive optimization and an interpretable representation of knowledge. We empirically demonstrate the effectiveness of Co-Dream and compare its performance with existing techniques.
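The input-space knowledge sharing the abstract describes can be sketched in a few lines. This is a toy illustration, not Co-Dream's implementation: the linear client "models", dimensions, and learning rate are all invented here.

```python
# Toy sketch: clients share gradients w.r.t. a common input ("dream")
# instead of sharing model parameters.
import numpy as np

rng = np.random.default_rng(0)

# Each client holds a locally trained linear "model" W_k with target t_k;
# the loss ||W_k x - t_k||^2 stands in for a local knowledge objective.
clients = [(rng.normal(size=(4, 4)), rng.normal(size=4)) for _ in range(3)]

def input_gradient(W, t, x):
    # d/dx ||W x - t||^2 = 2 W^T (W x - t)
    return 2.0 * W.T @ (W @ x - t)

def avg_loss(x):
    return float(np.mean([np.sum((W @ x - t) ** 2) for W, t in clients]))

x0 = rng.normal(size=4)   # shared random input, initialized at the server
x = x0.copy()
for _ in range(500):
    # The server averages per-client INPUT gradients; model weights never
    # leave the clients, so architectures may differ across clients.
    g = np.mean([input_gradient(W, t, x) for W, t in clients], axis=0)
    x -= 0.01 * g
```

Because only gradients of the shared input travel to the server, clients can keep heterogeneous model architectures, sidestepping the parameter-averaging limitation the abstract highlights.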

ICRA Conference 2025 Conference Paper

Enhancing Autonomous Navigation by Imaging Hidden Objects Using Single-Photon LiDAR

  • Aaron Young
  • Nevindu Batagoda
  • Harry Zhang
  • Akshat Dave
  • Adithya Pediredla
  • Dan Negrut
  • Ramesh Raskar

Robust autonomous navigation in environments with limited visibility remains a critical challenge in robotics. We present a novel approach that leverages Non-Line-of-Sight (NLOS) sensing using single-photon LiDAR to improve visibility and enhance autonomous navigation. Our method enables mobile robots to “see around corners” by utilizing multi-bounce light information, effectively expanding their perceptual range without additional infrastructure. We propose a three-module pipeline: (1) Sensing, which captures multi-bounce histograms using SPAD-based LiDAR; (2) Perception, which estimates occupancy maps of hidden regions from these histograms using a convolutional neural network; and (3) Control, which allows a robot to follow safe paths based on the estimated occupancy. We evaluate our approach through simulations and real-world experiments on a mobile robot navigating an L-shaped corridor with hidden obstacles. Our work represents the first experimental demonstration of NLOS imaging for autonomous navigation, paving the way for safer and more efficient robotic systems operating in complex environments. We also contribute a novel dynamics-integrated transient rendering framework for simulating NLOS scenarios, facilitating future research in this domain.

AAMAS Conference 2025 Conference Paper

On the Limits of Agency in Agent-based Models

  • Ayush Chopra
  • Shashank Kumar
  • Nurullah Giray Kuru
  • Ramesh Raskar
  • Arnau Quera-Bofarull

Agent-based modeling (ABM) offers powerful insights into complex systems, but its practical utility has been limited by computational constraints and simplistic agent behaviors, especially when simulating large populations. Recent advancements in large language models (LLMs) could enhance ABMs with adaptive agents, but their integration into large-scale simulations remains challenging. This work introduces a novel methodology that bridges this gap by efficiently integrating LLMs into ABMs, enabling the simulation of millions of adaptive agents. We present LLM archetypes, a technique that balances behavioral complexity with computational efficiency, allowing for nuanced agent behavior in large-scale simulations. Our analysis explores the crucial trade-off between simulation scale and individual agent expressiveness, comparing different agent architectures ranging from simple heuristic-based agents to fully adaptive LLM-powered agents. We demonstrate the real-world applicability of our approach through a case study of the COVID-19 pandemic, simulating 8.4 million agents representing New York City and capturing the intricate interplay between health behaviors and economic outcomes. Our method significantly enhances ABM capabilities for predictive and counterfactual analyses, addressing limitations of historical data in policy design. By implementing these advances in an open-source framework, we facilitate the adoption of LLM archetypes across diverse ABM applications. Our results show that LLM archetypes can markedly improve the realism and utility of large-scale ABMs while maintaining computational feasibility, opening new avenues for modeling complex societal challenges and informing data-driven policy decisions.
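The archetype idea — querying an expensive model once per behavioral group rather than once per agent — can be sketched as follows. The LLM call is stubbed out with a fixed rule here, and the agent attributes are invented for illustration.

```python
# Sketch of LLM archetypes: group agents by shared attributes so the
# expensive policy query runs once per archetype, not once per agent.

def llm_policy(archetype):
    # Stand-in for an LLM call; a real system would prompt a model with the
    # archetype's demographic profile and the current simulation state.
    age_band, risk = archetype
    return "stay_home" if risk == "high" else "go_out"

# 1,000 agents but only two distinct (age_band, risk) profiles.
agents = [("young", "low")] * 600 + [("old", "high")] * 400

archetypes = set(agents)                            # 2 archetypes, not 1,000 agents
decisions = {a: llm_policy(a) for a in archetypes}  # 2 policy queries total
actions = [decisions[a] for a in agents]            # broadcast back to every agent
```

The trade-off the abstract discusses lives in how finely archetypes partition the population: more archetypes means more expressive behavior but more model queries per step.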

NeurIPS Conference 2024 Conference Paper

Data Acquisition via Experimental Design for Data Markets

  • Charles Lu
  • Baihe Huang
  • Sai Praneeth Karimireddy
  • Praneeth Vepakomma
  • Michael Jordan
  • Ramesh Raskar

The acquisition of training data is crucial for machine learning applications. Data markets can increase the supply of data, particularly in data-scarce domains such as healthcare, by incentivizing potential data providers to join the market. A major challenge for a data buyer in such a market is choosing the most valuable data points from a data seller. Unlike prior work in data valuation, which assumes centralized data access, we propose a federated approach to the data acquisition problem that is inspired by linear experimental design. Our proposed data acquisition method achieves lower prediction error without requiring labeled validation data and can be optimized in a fast and federated procedure. The key insight of our work is that a method that directly estimates the benefit of acquiring data for test set prediction is particularly compatible with a decentralized market setting.
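A toy version of experimental-design-driven data selection is shown below. This is illustrative only: the paper's procedure is federated and needs no labeled validation data, while this sketch simply minimizes a standard A-optimality criterion greedily on pooled features.

```python
# Greedy data acquisition via a classical A-optimality criterion:
# buy the seller points that most shrink trace((X^T X)^{-1}),
# i.e. the average variance of a downstream linear estimator.
import numpy as np

rng = np.random.default_rng(4)

def a_criterion(X, reg=1e-3):
    # Small regularizer keeps the information matrix invertible.
    d = X.shape[1]
    return float(np.trace(np.linalg.inv(X.T @ X + reg * np.eye(d))))

buyer_X = rng.normal(size=(5, 3))       # buyer's baseline design matrix
seller_pool = rng.normal(size=(20, 3))  # candidate points offered by the seller

chosen, X = [], buyer_X
remaining = list(range(len(seller_pool)))
for _ in range(5):  # greedily buy 5 points
    scores = [a_criterion(np.vstack([X, seller_pool[i][None]])) for i in remaining]
    best = remaining.pop(int(np.argmin(scores)))
    chosen.append(best)
    X = np.vstack([X, seller_pool[best][None]])
```

Each purchased row provably lowers the criterion, which mirrors the abstract's point: value is measured by the benefit to downstream prediction, not by a task-specific validation score.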

AAMAS Conference 2024 Conference Paper

First 100 days of Pandemic: An Interplay of Pharmaceutical, Behavioral and Digital Interventions - A Study using Agent Based Modeling

  • Gauri Gupta
  • Ritvik Kapila
  • Ayush Chopra
  • Ramesh Raskar

Pandemics, notably the recent COVID-19 outbreak, have impacted both public health and the global economy. We need a profound understanding of disease progression and efficient response strategies to prepare for potential future outbreaks. In this paper, we emphasize the potential of Agent-Based Models (ABM) in capturing complex infection dynamics and understanding the impact of interventions. We simulate realistic pharmaceutical, behavioral, and digital interventions and suggest a holistic combination of these interventions for pandemic response. We study the trends of emergent behavior on a large-scale population based on real-world socio-demographic and geo-census data from Kings County in Washington. Our analysis reveals the pivotal role of the initial 100 days in dictating a pandemic’s course, emphasizing the importance of quick decision-making and efficient policy development. Further, we highlight that investing in behavioral and digital interventions can reduce the burden on pharmaceutical interventions by reducing the total number of hospitalizations, and by delaying the pandemic’s peak. We also infer that allocating the same amount of dollars towards extensive testing with contact tracing and self-quarantine offers greater cost efficiency compared to spending the entire budget on vaccinations. Our code: https://github.com/mitmedialab/DeepABM-Pandemic/.

AAMAS Conference 2024 Conference Paper

flame: A Framework for Learning in Agent-based ModEls

  • Ayush Chopra
  • Jayakumar Subramanian
  • Balaji Krishnamurthy
  • Ramesh Raskar

Agent-based models (ABMs) are discrete simulators comprising agents that act and interact in a computational world. Despite wide applicability, infrastructure for ABMs has been fragmented and lacks a standard framework to integrate the benefits of recent computing advances, especially in machine learning and automatic differentiation (autograd). To close this gap we introduce flame: a framework to define, simulate and optimize differentiable agent-based models. First, flame introduces a domain-specific language that describes ABMs with stochastic dynamics across several domains and can be implemented using abstractions of autograd. Second, flame models can execute simulations on GPU, process millions of interactions per second and seamlessly scale from a few hundred agents to million-size populations. Third, flame provides custom utilities to implement fully differentiable ABMs which can benefit from gradient-based learning and integrate with deep neural networks (DNNs) in several ways. Specifically, ABMs can now use supervised and reinforcement learning to calibrate simulation parameters, optimize agent actions and learn expressive interaction rules. Finally, flame is easily accessible with a simple Python API. We validate flame through multiple case studies that study tissue morphogenesis over bio-electric networks, infectious disease epidemiology over physical networks and opinion dynamics over social networks. We hope flame can ignite further innovation at the intersection of AI and ABMs. Our code is here.
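A minimal sketch of the kind of gradient-based calibration such frameworks enable is given below. This is not flame's API: the compartmental update rule, the parameter values, and the central finite difference standing in for autograd are all illustrative.

```python
# Gradient-based calibration of a tiny differentiable epidemic surrogate:
# fit transmission rate beta so the simulated epidemic matches an observation.

def simulate(beta, gamma=0.1, i0=0.01, steps=50):
    i = i0
    for _ in range(steps):
        # Deterministic SIR-like update of the infected fraction.
        i = i + beta * i * (1.0 - i) - gamma * i
    return i

target = simulate(0.3)  # "observed" epidemic size generated with true beta = 0.3

def loss(beta):
    return (simulate(beta) - target) ** 2

# Autograd stand-in: a central finite difference keeps this dependency-free.
beta, lr, eps = 0.5, 0.5, 1e-5
for _ in range(200):
    grad = (loss(beta + eps) - loss(beta - eps)) / (2 * eps)
    beta -= lr * grad
```

Because the simulator is a smooth function of its parameters, calibration reduces to ordinary gradient descent — the property that lets differentiable ABMs plug into deep-learning toolchains.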

ICLR Conference 2024 Conference Paper

Incentive-Aware Federated Learning with Training-Time Model Rewards

  • Zhaoxuan Wu
  • Mohammad Mohammadi Amiri
  • Ramesh Raskar
  • Bryan Kian Hsiang Low

In federated learning (FL), incentivizing contributions of training resources (e.g., data, compute) from potentially competitive clients is crucial. Existing incentive mechanisms often distribute post-training monetary rewards, which suffer from practical challenges of timeliness and feasibility of the rewards. Rewarding the clients after the completion of training may incentivize them to abort the collaboration, and monetizing the contribution is challenging in practice. To address these problems, we propose an incentive-aware algorithm that offers differentiated training-time model rewards for each client at each FL iteration. We theoretically prove that such a $\textit{local}$ design ensures the $\textit{global}$ objective of client incentivization. Through theoretical analyses, we further identify the issue of error propagation in model rewards and thus propose a stochastic reference-model recovery strategy to ensure theoretically that all the clients eventually obtain the optimal model in the limit. We perform extensive experiments to demonstrate the superior incentivizing performance of our method compared to existing baselines.

TMLR Journal 2024 Journal Article

Privacy-Preserving Split Learning with Vision Transformers using Patch-Wise Random and Noisy CutMix

  • Seungeun Oh
  • Sihun Baek
  • Jihong Park
  • Hyelin Nam
  • Praneeth Vepakomma
  • Ramesh Raskar
  • Mehdi Bennis
  • Seong-Lyun Kim

In computer vision, the vision transformer (ViT) has increasingly superseded the convolutional neural network (CNN) for improved accuracy and robustness. However, ViT's large model sizes and high sample complexity make it difficult to train on resource-constrained edge devices. Split learning (SL) emerges as a viable solution, leveraging server-side resources to train ViTs while utilizing private data from distributed devices. However, SL requires additional information exchange for weight updates between the device and the server, which can be exposed to various attacks on private training data. To mitigate the risk of data breaches in classification tasks, inspired by the CutMix regularization, we propose a novel privacy-preserving SL framework that injects Gaussian noise into smashed data and mixes randomly chosen patches of smashed data across clients, coined DP-CutMixSL. Our analysis demonstrates that DP-CutMixSL is a differentially private (DP) mechanism that strengthens privacy protection against membership inference attacks during forward propagation. Through simulations, we show that DP-CutMixSL improves privacy protection against membership inference attacks, reconstruction attacks, and label inference attacks, while also improving accuracy compared to DP-SL and DP-MixSL.
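The core smashed-data transformation — Gaussian noise plus random patch mixing across two clients — might look like this in outline. The noise scale and mixing fraction below are arbitrary placeholders, not the calibrated DP parameters from the paper.

```python
# Sketch of patch-wise noisy mixing of "smashed data" (the activations two
# clients would send to the server at the split layer of a ViT).
import numpy as np

rng = np.random.default_rng(1)

def noisy_cutmix(smashed_a, smashed_b, sigma=0.1, mix_frac=0.5):
    # smashed_*: (num_patches, dim) ViT-style patch embeddings from two clients.
    a = smashed_a + rng.normal(scale=sigma, size=smashed_a.shape)  # Gaussian noise
    b = smashed_b + rng.normal(scale=sigma, size=smashed_b.shape)
    n = a.shape[0]
    take_b = rng.permutation(n) < int(mix_frac * n)  # random patch subset
    return np.where(take_b[:, None], b, a)           # swap the chosen patches

x1 = rng.normal(size=(16, 8))  # client 1's 16 patch embeddings
x2 = rng.normal(size=(16, 8))  # client 2's 16 patch embeddings
mixed = noisy_cutmix(x1, x2)
```

The server then trains on the mixed, noised tensor, so neither client's raw activations (and hence neither client's image) is ever exposed unperturbed.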

AAMAS Conference 2024 Conference Paper

Private Agent-Based Modeling

  • Ayush Chopra
  • Arnau Quera-Bofarull
  • Nurullah Giray-Kuru
  • Michael Wooldridge
  • Ramesh Raskar

The practical utility of agent-based models in decision-making relies on their capacity to accurately replicate populations while seamlessly integrating real-world data streams. Yet, the incorporation of such data poses significant challenges due to privacy concerns. To address this issue, we introduce a paradigm for private agent-based modeling wherein the simulation, calibration, and analysis of agent-based models can be achieved without centralizing the agents’ attributes or interactions. The key insight is to leverage techniques from secure multi-party computation to design protocols for decentralized computation in agent-based models. This ensures the confidentiality of the simulated agents without compromising on simulation accuracy. We showcase our protocols on a case study with an epidemiological simulation comprising over 150,000 agents. We believe this is a critical step towards deploying agent-based models to real-world applications.
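The secure-aggregation primitive underlying such protocols can be illustrated with toy additive secret sharing — a textbook construction, not the paper's full protocol. Each agent splits its private value into random shares, so no single server learns anything, yet the population total is exactly recoverable.

```python
# Additive secret sharing over a prime field: the building block of
# MPC-based aggregation for private simulation statistics.
import random

P = 2**61 - 1  # prime modulus

def share(secret, n=3):
    # Split `secret` into n shares that sum to it modulo P.
    parts = [random.randrange(P) for _ in range(n - 1)]
    parts.append((secret - sum(parts)) % P)
    return parts

def reconstruct(shares):
    return sum(shares) % P

infected = [1, 0, 1, 1, 0]               # each agent's private status
per_agent_shares = [share(s) for s in infected]
# Each of the 3 servers locally sums the shares it received (one per agent).
server_totals = [sum(col) % P for col in zip(*per_agent_shares)]
total_infected = reconstruct(server_totals)  # aggregate, with no server
                                             # ever seeing an individual status
```

Any single share (or server-side column of shares) is uniformly random, so only the final aggregate leaves the protocol — the confidentiality property the abstract claims for simulated agents.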

AAMAS Conference 2023 Conference Paper

Differentiable Agent-based Epidemiology

  • Ayush Chopra
  • Alexander Rodríguez
  • Jayakumar Subramanian
  • Arnau Quera-Bofarull
  • Balaji Krishnamurthy
  • B. Aditya Prakash
  • Ramesh Raskar

Mechanistic simulators are an indispensable tool for epidemiology to explore the behavior of complex, dynamic infections under varying conditions and navigate uncertain environments. Agent-based models (ABMs) are an increasingly popular simulation paradigm that can represent the heterogeneity of contact interactions with granular detail and the agency of individual behavior. However, conventional ABM frameworks are not differentiable and present challenges in scalability, making it non-trivial to connect them to auxiliary data sources. In this paper, we introduce GradABM: a scalable, differentiable design for agent-based modeling that is amenable to gradient-based learning with automatic differentiation. GradABM can quickly simulate million-size populations in a few seconds on commodity hardware, integrate with deep neural networks and ingest heterogeneous data sources. This provides an array of practical benefits for calibration, forecasting, and evaluating policy interventions. We demonstrate the efficacy of GradABM via extensive experiments with real COVID-19 and influenza datasets.

AAMAS Conference 2023 Conference Paper

Don't Simulate Twice: One-Shot Sensitivity Analyses via Automatic Differentiation

  • Arnau Quera-Bofarull
  • Ayush Chopra
  • Joseph Aylett-Bullock
  • Carolina Cuesta-Lazaro
  • Anisoara Calinescu
  • Ramesh Raskar
  • Michael Wooldridge

Agent-based models (ABMs) are a promising tool to simulate complex environments. Their rapid adoption requires scalable specification, efficient data-driven calibration, and validation through sensitivity analyses. Recent progress in tensorized and differentiable ABM design (GradABM) has enabled fast calibration of million-size populations, however, validation through sensitivity analysis is still computationally prohibitive due to the need for running the model a large number of times. Here, we present a novel methodology that uses automatic differentiation to perform a sensitivity analysis on a calibrated ABM without requiring any further simulations. The key insight is to leverage gradients of a GradABM to compute exact partial derivatives of any model output with respect to an arbitrary combination of parameters. We demonstrate the benefits of this approach on a case study of the first wave of COVID-19 in London, where we investigate the causes of variations in infections by age, socio-economic index, ethnicity, and geography. Finally, we also show that the same methodology allows for the design of optimal policy interventions. The code to reproduce the presented results is made available on GitHub.
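The "no further simulations" idea can be illustrated with a minimal forward-mode automatic-differentiation class: one simulator run with dual numbers yields an exact parameter sensitivity. The SIR-style simulator below is an invented stand-in, not the paper's London model.

```python
# One-shot sensitivity via forward-mode AD: a dual number carries a value
# and its derivative through the simulation, so a single run gives the
# exact partial derivative of the output w.r.t. a parameter.

class Dual:
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot   # value and derivative
    def _lift(self, o):
        return o if isinstance(o, Dual) else Dual(o)
    def __add__(self, o):
        o = self._lift(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __sub__(self, o):
        o = self._lift(o)
        return Dual(self.val - o.val, self.dot - o.dot)
    def __rsub__(self, o):
        return self._lift(o).__sub__(self)
    def __mul__(self, o):
        o = self._lift(o)                         # product rule
        return Dual(self.val * o.val, self.val * o.dot + self.dot * o.val)
    __rmul__ = __mul__

def simulate(beta, gamma=0.1, i0=0.01, steps=50):
    i = i0
    for _ in range(steps):
        i = i + beta * i * (1 - i) - gamma * i    # SIR-like update
    return i

out = simulate(Dual(0.3, 1.0))  # seed d(beta) = 1 on the parameter of interest
sensitivity = out.dot           # exact d(final infected)/d(beta), one run
```

A finite-difference check would need two extra simulations per parameter; the dual-number run needs none, which is exactly the saving the abstract describes.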

ICML Conference 2023 Conference Paper

Federated Conformal Predictors for Distributed Uncertainty Quantification

  • Charles Lu 0001
  • Yaodong Yu
  • Sai Praneeth Karimireddy
  • Michael I. Jordan
  • Ramesh Raskar

Conformal prediction is emerging as a popular paradigm for providing rigorous uncertainty quantification in machine learning since it can be easily applied as a post-processing step to already trained models. In this paper, we extend conformal prediction to the federated learning setting. The main challenge we face is data heterogeneity across the clients — this violates the fundamental tenet of exchangeability required for conformal prediction. We propose a weaker notion of partial exchangeability, better suited to the FL setting, and use it to develop the Federated Conformal Prediction (FCP) framework. We show FCP enjoys rigorous theoretical guarantees and excellent empirical performance on several computer vision and medical imaging datasets. Our results demonstrate a practical approach to incorporating meaningful uncertainty quantification in distributed and heterogeneous environments. We provide the code used in our experiments at https://github.com/clu5/federated-conformal.
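For reference, the centralized conformal primitive that FCP federates looks like this. The calibration scores are toy values; the paper's contribution is making the quantile step valid under client heterogeneity, which this sketch does not show.

```python
# Split conformal prediction: calibrate a quantile of nonconformity scores,
# then emit prediction sets that cover the true label with prob >= 1 - alpha.
import math

def conformal_quantile(scores, alpha=0.1):
    # Finite-sample-corrected (1 - alpha) empirical quantile.
    n = len(scores)
    k = math.ceil((n + 1) * (1 - alpha))
    return sorted(scores)[min(k, n) - 1]

# Nonconformity score: 1 - predicted probability of the TRUE label,
# computed on a held-out calibration set (toy values below).
calib_probs_true = [0.9, 0.8, 0.95, 0.7, 0.85, 0.6, 0.92, 0.88, 0.75, 0.81]
scores = [1 - p for p in calib_probs_true]
qhat = conformal_quantile(scores, alpha=0.1)

def prediction_set(class_probs):
    # Keep every label whose score clears the calibrated threshold.
    return {c for c, p in class_probs.items() if 1 - p <= qhat}

s = prediction_set({"cat": 0.6, "dog": 0.3, "bird": 0.1})
```

The quantile is the only statistic that must be computed over calibration data, which is why the federated question reduces to computing a valid quantile across non-exchangeable client datasets.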

AAAI Conference 2023 Conference Paper

Fundamentals of Task-Agnostic Data Valuation

  • Mohammad Mohammadi Amiri
  • Frederic Berdoz
  • Ramesh Raskar

We study valuing the data of a data owner/seller for a data seeker/buyer. Data valuation is often carried out for a specific task assuming a particular utility metric, such as test accuracy on a validation set, that may not exist in practice. In this work, we focus on task-agnostic data valuation without any validation requirements. The data buyer has access to a limited amount of data (which could be publicly available) and seeks more data samples from a data seller. We formulate the problem as estimating the differences in the statistical properties of the data at the seller with respect to the baseline data available at the buyer. We capture these statistical differences through the second moment, measuring the diversity and relevance of the seller’s data for the buyer, and we estimate these measures through queries to the seller without requesting the raw data. The queries are designed so that the seller remains blind to the buyer’s raw data and cannot fabricate responses to steer the diversity and relevance trade-off toward a desired outcome. We show through extensive experiments on real tabular and image datasets that the proposed estimates capture the diversity and relevance of the seller’s data for the buyer.
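A deliberately crude illustration of comparing second moments without exchanging raw rows is sketched below. This is not the paper's estimator (which works through carefully designed queries); here the seller simply reveals the aggregate statistic X^T X / n.

```python
# Crude second-moment comparison: the buyer gauges how much a seller's data
# differs from its baseline using only an aggregate statistic, never raw rows.
import numpy as np

rng = np.random.default_rng(2)

def second_moment(X):
    return X.T @ X / len(X)   # the only quantity the seller reveals

buyer = rng.normal(size=(200, 5))
relevant_seller = rng.normal(size=(200, 5))   # same distribution as the buyer
# A "diverse" seller whose first feature is stretched (variance 9 vs 1).
diverse_seller = rng.normal(size=(200, 5)) @ np.diag([3, 1, 1, 1, 1])

def divergence(Mb, Ms):
    # Frobenius distance between second-moment matrices as a rough proxy.
    return float(np.linalg.norm(Mb - Ms, ord="fro"))

Mb = second_moment(buyer)
d_rel = divergence(Mb, second_moment(relevant_seller))
d_div = divergence(Mb, second_moment(diverse_seller))
```

The distribution-shifted seller registers a much larger second-moment gap than the matched one — the raw signal that diversity/relevance measures refine.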

NeurIPS Conference 2023 Conference Paper

Posthoc privacy guarantees for collaborative inference with modified Propose-Test-Release

  • Abhishek Singh
  • Praneeth Vepakomma
  • Vivek Sharma
  • Ramesh Raskar

Cloud-based machine learning inference is an emerging paradigm where users query by sending their data through a service provider who runs an ML model on that data and returns back the answer. Due to increased concerns over data privacy, recent works have proposed Collaborative Inference (CI) to learn a privacy-preserving encoding of sensitive user data before it is shared with an untrusted service provider. Existing works so far evaluate the privacy of these encodings through empirical reconstruction attacks. In this work, we develop a new framework that provides formal privacy guarantees for an arbitrarily trained neural network by linking its local Lipschitz constant with its local sensitivity. To guarantee privacy using local sensitivity, we extend the Propose-Test-Release (PTR) framework to make it tractable for neural network queries. We verify the efficacy of our framework experimentally on real-world datasets and elucidate the role of Adversarial Representation Learning (ARL) in improving the privacy-utility trade-off.

AAAI Conference 2022 Conference Paper

PrivateMail: Supervised Manifold Learning of Deep Features with Privacy for Image Retrieval

  • Praneeth Vepakomma
  • Julia Balla
  • Ramesh Raskar

Differential Privacy offers strong guarantees such as immutable privacy under any post-processing. In this work, we propose a differentially private mechanism called PrivateMail for performing supervised manifold learning. We then apply it to the use case of private image retrieval to obtain nearest matches to a client’s target image from a server’s database. PrivateMail releases the target image as part of a differentially private manifold embedding. We give bounds on the global sensitivity of the manifold learning map in order to obfuscate and release embeddings with differential-privacy-inducing noise. We show that PrivateMail obtains a substantially better performance in terms of the privacy-utility trade-off in comparison to several baselines on various datasets. We share code for applying PrivateMail at http://tiny.cc/PrivateMail.

NeurIPS Conference 2018 Conference Paper

Maximum-Entropy Fine Grained Classification

  • Abhimanyu Dubey
  • Otkrist Gupta
  • Ramesh Raskar
  • Nikhil Naik

Fine-Grained Visual Classification (FGVC) is an important computer vision problem that involves small diversity within the different classes, and often requires expert annotators to collect data. Utilizing this notion of small visual diversity, we revisit Maximum-Entropy learning in the context of fine-grained classification, and provide a training routine that maximizes the entropy of the output probability distribution for training convolutional neural networks on FGVC tasks. We provide a theoretical as well as empirical justification of our approach, and achieve state-of-the-art performance across a variety of classification tasks in FGVC, that can potentially be extended to any fine-tuning task. Our method is robust to different hyperparameter values, amount of training data and amount of training label noise and can hence be a valuable tool in many similar problems.
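The training objective is easy to state concretely: standard cross-entropy minus a weighted entropy term, which discourages overconfident predictions on visually similar classes. The weight below is an arbitrary illustrative value, not a tuned hyperparameter from the paper.

```python
# Maximum-entropy fine-grained objective: cross-entropy minus a weighted
# entropy bonus (subtracting entropy from the loss == maximizing it).
import math

def softmax(logits):
    m = max(logits)                        # shift for numerical stability
    exps = [math.exp(z - m) for z in logits]
    s = sum(exps)
    return [e / s for e in exps]

def max_entropy_loss(logits, label, lam=0.5):
    p = softmax(logits)
    ce = -math.log(p[label])               # standard cross-entropy
    entropy = -sum(q * math.log(q) for q in p if q > 0)
    return ce - lam * entropy              # reward higher-entropy predictions

sharp = max_entropy_loss([5.0, 0.0, 0.0], 0)   # confident prediction
flat = max_entropy_loss([1.0, 0.5, 0.5], 0)    # hedged prediction
```

In a training loop this would replace the plain cross-entropy criterion; everything else about the network and optimizer stays unchanged.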