Arrow Research search

Author name cluster

Christopher M. Homan

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

8 papers
2 author rows

Possible papers (8)

AAAI Conference 2026 Short Paper

ProRefine: Inference-Time Prompt Refinement with Textual Feedback (Student Abstract)

  • Deepak Pandita
  • Tharindu Cyril Weerasooriya
  • Ankit Shah
  • Isabelle Diana May-Xin Ng
  • Christopher M. Homan
  • Wei Wei

Agentic workflows, where multiple AI agents collaborate to accomplish complex tasks like reasoning or planning, play a substantial role in many cutting-edge commercial applications. These workflows depend critically on the prompts that define the roles models play within them. Poorly designed prompts that fail even slightly to guide individual agents can lead to sub-optimal performance that may snowball within a system of agents, limiting their reliability and scalability. To address this problem, we introduce ProRefine, an inference-time prompt optimization method that uses an agentic loop of LLMs to generate and apply textual feedback. ProRefine dynamically refines prompts for multi-step reasoning tasks without additional training or ground-truth labels. Evaluated on five benchmark mathematical reasoning datasets, ProRefine significantly surpasses zero-shot Chain-of-Thought baselines by 3 to 37 percentage points. This approach not only boosts accuracy but also allows smaller models to approach the performance of their larger counterparts, highlighting its potential for building cost-effective, powerful hybrid AI systems and thereby democratizing access to high-performing AI.
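The refinement loop described in the abstract can be sketched schematically. This is a toy illustration only: the `solver`, `critic`, and `refiner` callables are hypothetical deterministic stubs standing in for the LLM calls, not ProRefine's actual models or prompts.

```python
# Toy sketch of an inference-time prompt-refinement loop in the spirit of
# ProRefine: a critic generates textual feedback on the current prompt, and a
# refiner applies it, with no training or ground-truth labels involved.

def refine_prompt(prompt, task_input, solver, critic, refiner, max_rounds=3):
    """Iteratively refine `prompt` using textual feedback."""
    for _ in range(max_rounds):
        answer = solver(prompt, task_input)
        feedback = critic(prompt, task_input, answer)   # textual critique
        if feedback is None:                            # critic satisfied: stop
            return prompt, answer
        prompt = refiner(prompt, feedback)              # apply the critique
    return prompt, solver(prompt, task_input)

# Deterministic stand-ins so the loop can be exercised without an LLM.
def solver(prompt, x):
    return x * 2 if "step by step" in prompt else x

def critic(prompt, x, answer):
    return "ask for step-by-step reasoning" if answer == x else None

def refiner(prompt, feedback):
    return prompt + " Think step by step."

final_prompt, result = refine_prompt("Solve:", 21, solver, critic, refiner)
```

In this stub run, the critic rejects the first answer, the refiner augments the prompt, and the second pass succeeds, mirroring the feedback loop the abstract describes.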

AAAI Conference 2025 Conference Paper

ARTICLE: Annotator Reliability Through In-Context Learning

  • Sujan Dutta
  • Deepak Pandita
  • Tharindu Cyril Weerasooriya
  • Marcos Zampieri
  • Christopher M. Homan
  • Ashiqur R. KhudaBukhsh

Ensuring annotator quality in training and evaluation data is a key piece of machine learning in NLP. Tasks such as sentiment analysis and offensive speech detection are intrinsically subjective, creating a challenging scenario for traditional quality assessment approaches because it is hard to distinguish disagreement due to poor work from that due to differences of opinions between sincere annotators. With the goal of increasing diverse perspectives in annotation while ensuring consistency, we propose ARTICLE, an in-context learning (ICL) framework to estimate annotation quality through self-consistency. We evaluate this framework on two offensive speech datasets using multiple LLMs and compare its performance with traditional methods. Our findings indicate that ARTICLE can be used as a robust method for identifying reliable annotators, hence improving data quality.

AAAI Conference 2025 Short Paper

ARTICLE: Annotator Reliability Through In-Context Learning (Student Abstract)

  • Sujan Dutta
  • Deepak Pandita
  • Tharindu Cyril Weerasooriya
  • Marcos Zampieri
  • Christopher M. Homan
  • Ashiqur R. KhudaBukhsh

Ensuring annotator quality in training and evaluation data is a key piece of machine learning in NLP. Tasks such as sentiment analysis and offensive speech detection are intrinsically subjective, creating a challenging scenario for traditional quality assessment approaches because it is hard to distinguish disagreement due to poor work from that due to differences of opinions between sincere annotators. With the goal of increasing diverse perspectives in annotation while ensuring consistency, we propose ARTICLE, an in-context learning (ICL) framework to estimate annotation quality through self-consistency. We evaluate this framework on two offensive speech datasets using multiple LLMs and compare its performance with traditional methods. Our findings indicate that ARTICLE can be used as a robust method for identifying reliable annotators, hence improving data quality.

ECAI Conference 2020 Conference Paper

Neighborhood-Based Pooling for Population-Level Label Distribution Learning

  • Tharindu Cyril Weerasooriya
  • Tong Liu 0010
  • Christopher M. Homan

Supervised machine learning often requires human-annotated data. While annotator disagreement is typically interpreted as evidence of noise, population-level label distribution learning (PLDL) treats the collection of annotations for each data item as a sample of the opinions of a population of human annotators, among whom disagreement may be proper and expected, even with no noise present. From this perspective, a typical training set may contain a large number of very small-sized samples, one for each data item, none of which, by itself, is large enough to be considered representative of the underlying population’s beliefs about that item. We propose an algorithmic framework and new statistical tests for PLDL that account for sampling size. We apply them to previously proposed methods for sharing labels across similar data items. We also propose new approaches for label sharing, which we call neighborhood-based pooling.
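The pooling idea can be sketched in a few lines: augment an item's small annotation sample with the annotations of its nearest neighbors in feature space, yielding a larger pooled estimate of the population's label distribution. The features, `k`, and squared-Euclidean distance below are illustrative choices, not the paper's exact configuration, and the sketch omits the paper's statistical tests for sample size.

```python
from collections import Counter

def pooled_distribution(item, items, k=2):
    """items: list of (feature_vector, [labels]); pool labels from k neighbors."""
    feat, labels = item
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    # k nearest other items by feature distance (illustrative metric).
    neighbors = sorted((o for o in items if o is not item),
                       key=lambda o: dist(feat, o[0]))[:k]
    pooled = list(labels)
    for _, neigh_labels in neighbors:
        pooled.extend(neigh_labels)
    # Empirical label distribution over the pooled sample.
    counts = Counter(pooled)
    total = sum(counts.values())
    return {lab: c / total for lab, c in counts.items()}

data = [
    ((0.0,), ["pos", "pos"]),   # two annotations each: tiny per-item samples
    ((0.1,), ["pos", "neg"]),
    ((5.0,), ["neg", "neg"]),
]
dist0 = pooled_distribution(data[0], data, k=1)
```

Pooling the first item with its one nearest neighbor doubles its sample size, turning two annotations into a four-annotation estimate of the population distribution.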

MFCS Conference 2006 Conference Paper

Guarantees for the Success Frequency of an Algorithm for Finding Dodgson-Election Winners

  • Christopher M. Homan
  • Lane A. Hemaspaandra

Dodgson’s election system elegantly satisfies the Condorcet criterion. However, determining the winner of a Dodgson election is known to be \(\Theta_2^p\)-complete ([1], see also [2]), which implies that unless P = NP no polynomial-time solution to this problem exists, and unless the polynomial hierarchy collapses to NP the problem is not even in NP. Nonetheless, we prove that when the number of voters is much greater than the number of candidates (although the number of voters may still be polynomial in the number of candidates), a simple greedy algorithm very frequently finds the Dodgson winners in such a way that it “knows” that it has found them, and furthermore the algorithm never incorrectly declares a nonwinner to be a winner.
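For context on the Condorcet criterion the abstract invokes, here is a minimal pairwise-majority check. This is not the paper's greedy algorithm for Dodgson scores; it only shows the easy case, since Dodgson scoring (and the hard complexity results) matter precisely when no candidate beats every rival head-to-head.

```python
def condorcet_winner(ballots):
    """ballots: list of strict preference orders (lists, best candidate first).

    Returns the candidate who beats every rival in pairwise majority
    comparisons, or None if no such candidate exists.
    """
    candidates = set(ballots[0])

    def beats(a, b):
        # Voters ranking a above b form a strict majority?
        wins = sum(v.index(a) < v.index(b) for v in ballots)
        return wins > len(ballots) / 2

    for c in candidates:
        if all(beats(c, d) for d in candidates if d != c):
            return c
    return None  # no Condorcet winner: Dodgson scores decide instead

ballots = [
    ["a", "b", "c"],
    ["a", "c", "b"],
    ["b", "a", "c"],
]
winner = condorcet_winner(ballots)
```

Here candidate `a` beats both rivals head-to-head and so wins outright; Dodgson's system, by the Condorcet criterion, must agree.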