Arrow Research search

Author name cluster

Inge Vejsbjerg

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

4 papers
1 author row

Possible papers


AAAI Conference 2026 System Paper

Auto-BenchmarkCard: Automated Synthesis of Benchmark Documentation

  • Aris Hofmann
  • Inge Vejsbjerg
  • Dhaval Salwala
  • Elizabeth M. Daly

We present Auto-BenchmarkCard, a workflow for generating validated descriptions of AI benchmarks. Benchmark documentation is often incomplete or inconsistent, making it difficult to interpret and compare benchmarks across tasks or domains. Auto-BenchmarkCard addresses this gap by combining multi-agent data extraction from heterogeneous sources (e.g., Hugging Face, Unitxt, academic papers) with LLM-driven synthesis. A validation phase evaluates factual accuracy through atomic entailment scoring using the FactReasoner tool. This workflow has the potential to promote transparency, comparability, and reusability in AI benchmark reporting, enabling researchers and practitioners to better navigate and evaluate benchmark choices.

AAAI Conference 2026 System Paper

Risk Atlas Nexus: A System for Managing AI Risks

  • Inge Vejsbjerg
  • Rahul Nair
  • Elizabeth M. Daly
  • Dhaval Salwala
  • Seshu Tirupathi

We present Risk Atlas Nexus, an open source system for governing AI risks. The system unifies several risk classification frameworks through a common ontology. Given an AI application use case (called an intent), the system estimates risks and surfaces mitigations linked to those risks. The tool is designed to be incorporated into AI governance workflows, where its recommendations can be translated into business controls that cover risks arising from the use of AI in firms.

AAAI Conference 2025 System Paper

Usage Governance Advisor: From Intent to AI Governance

  • Elizabeth M. Daly
  • Seshu Tirupathi
  • Sean Rooney
  • Inge Vejsbjerg
  • Dhaval Salwala
  • Christopher Giblin
  • Frank Bagehorn
  • Luis Garces-Erice

Bringing a new AI system into a production environment involves multiple stakeholders, such as business owners, risk officers, and ethics officers, who approve the AI system for a specific usage. Governance frameworks typically include multiple manual steps, including curating the information needed to assess risks and reviewing outcomes to identify appropriate actions and governance strategies. We demo a human-in-the-loop automation system that takes a natural language description of an intended use case for an AI system and uses it to create semi-structured governance information, recommend the most appropriate model for that use case, prioritise the risks to be evaluated, automatically run those evaluations, and finally store the results for auditing, reporting, and future recommendations. As a result, the system increases transparency to stakeholders and provides valuable information to aid decision making when assessing the risks associated with an AI solution.

AAAI Conference 2024 System Paper

Interactive Human-Centric Bias Mitigation

  • Inge Vejsbjerg
  • Elizabeth M. Daly
  • Rahul Nair
  • Svetoslav Nizhnichenkov

Bias mitigation algorithms differ in how they define bias and how they go about achieving that objective. These algorithms impact different cohorts differently, and enabling end users and data scientists to understand the impact of these differences in order to make informed choices is a relatively unexplored domain. This demonstration presents an interactive bias mitigation pipeline that allows users to understand which cohorts are impacted by their algorithm choice and to provide feedback, yielding a bias-mitigated pipeline that best aligns with their goals.