Arrow Research search

Author name cluster

Dhaval Salwala

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

4 papers
1 author row

Possible papers

4

AAAI Conference 2026 System Paper

Auto-BenchmarkCard: Automated Synthesis of Benchmark Documentation

  • Aris Hofmann
  • Inge Vejsbjerg
  • Dhaval Salwala
  • Elizabeth M. Daly

We present Auto-BenchmarkCard, a workflow for generating validated descriptions of AI benchmarks. Benchmark documentation is often incomplete or inconsistent, making it difficult to interpret and compare benchmarks across tasks or domains. Auto-BenchmarkCard addresses this gap by combining multi-agent data extraction from heterogeneous sources (e.g., Hugging Face, Unitxt, academic papers) with LLM-driven synthesis. A validation phase evaluates factual accuracy through atomic entailment scoring using the FactReasoner tool. This workflow has the potential to promote transparency, comparability, and reusability in AI benchmark reporting, enabling researchers and practitioners to better navigate and evaluate benchmark choices.
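The validation phase described above can be sketched roughly as follows. This is a hypothetical illustration, not the paper's implementation: the function names are invented, and the keyword-overlap scorer is a stand-in for the FactReasoner entailment model.

```python
# Hypothetical sketch of the validation phase: a generated benchmark card is
# split into atomic claims, and each claim is scored for entailment against
# extracted source evidence. The toy token-overlap scorer below stands in for
# the actual FactReasoner entailment model.

def atomic_claims(card_text: str) -> list[str]:
    """Split a generated card into sentence-level atomic claims (toy splitter)."""
    return [s.strip() for s in card_text.split(".") if s.strip()]

def entailment_score(claim: str, evidence: list[str]) -> float:
    """Stand-in scorer: fraction of claim tokens found in one evidence snippet.
    The real system would call an entailment/NLI model here."""
    tokens = set(claim.lower().split())
    best = 0.0
    for snippet in evidence:
        snippet_tokens = set(snippet.lower().split())
        best = max(best, len(tokens & snippet_tokens) / max(len(tokens), 1))
    return best

def validate_card(card_text: str, evidence: list[str], threshold: float = 0.5):
    """Partition claims into (supported, unsupported) by entailment score."""
    supported, unsupported = [], []
    for claim in atomic_claims(card_text):
        (supported if entailment_score(claim, evidence) >= threshold
         else unsupported).append(claim)
    return supported, unsupported
```

A card claim with no grounding in the extracted sources would land in the unsupported bucket and be flagged for revision.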

AAAI Conference 2026 System Paper

Risk Atlas Nexus: A System for Managing AI Risks

  • Inge Vejsbjerg
  • Rahul Nair
  • Elizabeth M. Daly
  • Dhaval Salwala
  • Seshu Tirupathi

We present Risk Atlas Nexus, an open-source system for governing AI risks. The system unifies several risk classification frameworks through a common ontology. Given an AI application use case (called an intent), the system estimates risks and recommends mitigations linked to those risks. The tool is designed to be incorporated into AI governance workflows, where recommendations can be translated into business controls covering risks that arise from AI use in firms.

NeurIPS Conference 2025 Conference Paper

Forging Time Series with Language: A Large Language Model Approach to Synthetic Data Generation

  • Cécile Rousseau
  • Tobia Boschi
  • Giandomenico Cornacchia
  • Dhaval Salwala
  • Alessandra Pascale
  • Juan Moreno

SDForger is a flexible and efficient framework for generating high-quality multivariate time series using LLMs. Leveraging a compact data representation, SDForger provides synthetic time series generation from a few samples and low-computation fine-tuning of any autoregressive LLM. Specifically, the framework transforms univariate and multivariate signals into tabular embeddings, which are then encoded into text and used to fine-tune the LLM. At inference, new textual embeddings are sampled and decoded into synthetic time series that retain the original data's statistical properties and temporal dynamics. Across a diverse range of datasets, SDForger outperforms existing generative models in many scenarios, both in similarity-based evaluations and downstream forecasting tasks. By enabling textual conditioning in the generation process, SDForger paves the way for multimodal modeling and the streamlined integration of time series with textual information. The model is open-sourced at https://github.com/IBM/fms-dgt/tree/main/fms_dgt/public/databuilders/time_series.
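The encode-to-text/decode-from-text round trip at the heart of the abstract can be illustrated with a minimal sketch. The segment-mean embedding and comma-separated serialization below are assumptions chosen for brevity; the paper's actual compact representation differs.

```python
# Illustrative sketch of the SDForger idea: a univariate window is compressed
# into a small tabular embedding (here, piecewise segment means), serialized
# as a text row suitable for LLM fine-tuning, and later decoded back into a
# series. Only the text round-trip is shown; the LLM fine-tuning and sampling
# steps are omitted.

def embed(series: list[float], n_segments: int = 4) -> list[float]:
    """Compact tabular embedding: mean of each equal-length segment."""
    step = len(series) // n_segments
    return [sum(series[i * step:(i + 1) * step]) / step
            for i in range(n_segments)]

def to_text(embedding: list[float]) -> str:
    """Serialize the embedding as a text row the LLM can be fine-tuned on."""
    return ",".join(f"{v:.3f}" for v in embedding)

def from_text(row: str, length: int = 16) -> list[float]:
    """Decode a (possibly LLM-sampled) text row into a step-wise series."""
    values = [float(v) for v in row.split(",")]
    step = length // len(values)
    return [v for v in values for _ in range(step)]
```

In the full framework, the fine-tuned LLM generates new text rows like the one `to_text` produces, and decoding them yields synthetic series.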

AAAI Conference 2025 System Paper

Usage Governance Advisor: From Intent to AI Governance

  • Elizabeth M. Daly
  • Seshu Tirupathi
  • Sean Rooney
  • Inge Vejsbjerg
  • Dhaval Salwala
  • Christopher Giblin
  • Frank Bagehorn
  • Luis Garces-Erice

Bringing a new AI system into a production environment involves multiple stakeholders, such as business owners, risk officers, and ethics officers, who approve the AI system for a specific usage. Governance frameworks typically include multiple manual steps, including curating the information needed to assess risks and reviewing outcomes to identify appropriate actions and governance strategies. We demo a human-in-the-loop automation system that takes a natural language description of an intended use case for an AI system in order to create semi-structured governance information, recommend the most appropriate model for that use case, prioritise the risks to be evaluated, automatically run those evaluations, and finally store the results for auditing, reporting, and future recommendations. As a result, we increase transparency to stakeholders and provide valuable information to aid decision making when assessing the risks associated with an AI solution.
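The intent-to-governance flow in the abstract can be sketched as a small pipeline. Everything below is a made-up stand-in: the dataclass, the keyword-based risk table, and the stub evaluator approximate stages that the real system implements with LLM-driven components and curated risk catalogues.

```python
# Hedged sketch of the described flow: a natural-language intent is turned
# into a semi-structured governance record, risks are prioritised from a
# (toy) catalogue, and evaluation results are stored for audit. The keyword
# rules and risk names are illustrative only.

from dataclasses import dataclass, field

RISK_KEYWORDS = {  # toy risk catalogue keyed by trigger words in the intent
    "medical": ["harmful advice", "hallucination"],
    "hiring": ["bias", "explainability"],
}

@dataclass
class GovernanceRecord:
    intent: str
    risks: list[str] = field(default_factory=list)
    eval_results: dict[str, float] = field(default_factory=dict)

def assess_intent(intent: str) -> GovernanceRecord:
    """Create a semi-structured record and prioritise risks from the intent."""
    record = GovernanceRecord(intent=intent)
    for keyword, risks in RISK_KEYWORDS.items():
        if keyword in intent.lower():
            record.risks.extend(risks)
    return record

def run_evaluations(record: GovernanceRecord) -> GovernanceRecord:
    """Stub runner: the real system dispatches automated evaluations and
    stores scored results for auditing and reporting."""
    record.eval_results = {risk: 0.0 for risk in record.risks}
    return record
```

A human reviewer would inspect the stored record before any recommendation is translated into a business control.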