Arrow Research search

Author name cluster

Xingjiao Wu

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

4 papers
2 author rows

Possible papers


AAAI Conference 2025 Conference Paper

An Exemplar-based Framework for Chinese Text Recognition

  • Zhao Zhou
  • Xiangcheng Du
  • Yingbin Zheng
  • Xingjiao Wu
  • Cheng Jin

This paper introduces a novel exemplar-based framework for reading Chinese texts in natural scene or document images. We present the Deep Exemplar-based Chinese Text Recognizer, which is structured to first identify candidate characters as exemplars from each text-line, and subsequently recognize them by retrieving analogous exemplars from a database. With text-line level annotations, we design the exemplar discovery network to simultaneously recognize texts and capture individual character positions in a weakly supervised manner. The exemplar retrieval module is then crafted to identify the most similar exemplar and propagate the corresponding character label. This enables us to effectively rectify the misrecognized characters and boost the performance of scene text recognition. Experiments on four scenarios of Chinese texts demonstrate the effectiveness of our proposed framework.
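The retrieval step in the abstract (matching a candidate character against a database of exemplars and propagating the matched label) can be pictured as a nearest-neighbor lookup over embeddings. The following is an illustrative sketch, not the paper's implementation; the function name, the embedding shapes, and the choice of cosine similarity are assumptions.

```python
import numpy as np

def retrieve_exemplar(query_emb, db_embs, db_labels):
    """Label a candidate character with the label of its most
    similar exemplar in the database (cosine similarity).

    query_emb: (d,) embedding of a candidate character
    db_embs:   (n, d) embeddings of exemplar characters
    db_labels: n character labels, aligned with db_embs
    """
    q = query_emb / np.linalg.norm(query_emb)
    db = db_embs / np.linalg.norm(db_embs, axis=1, keepdims=True)
    sims = db @ q  # cosine similarity to every exemplar
    return db_labels[int(np.argmax(sims))]
```

In the paper, the propagated label is used to rectify misrecognized characters; here the matched label is simply returned.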

ICRA Conference 2025 Conference Paper

Multi-Type Preference Learning: Empowering Preference-Based Reinforcement Learning with Equal Preferences

  • Ziang Liu 0019
  • Junjie Xu
  • Xingjiao Wu
  • Jing Yang 0023
  • Liang He 0001

Preference-based reinforcement learning (PBRL) learns directly from the preferences of human teachers regarding agent behaviors, without needing meticulously designed reward functions. However, existing PBRL methods often learn primarily from explicit preferences, neglecting the possibility that teachers may choose equal preferences. This neglect may hinder the agent's understanding of the teacher's perspective on the task, leading to the loss of important information. To address this issue, we introduce the Equal Preference Learning Task, which optimizes the neural network by promoting similar reward predictions when the behaviors of two agents are labeled as equal preferences. Building on this task, we propose a novel PBRL method, Multi-Type Preference Learning (MTPL), which allows simultaneous learning from equal preferences while leveraging existing methods for learning from explicit preferences. To validate our approach, we design experiments applying MTPL to four existing state-of-the-art baselines across ten locomotion and robotic manipulation tasks in the DeepMind Control Suite. The experimental results indicate that simultaneous learning from both equal and explicit preferences enables the PBRL method to more comprehensively understand the feedback from teachers, thereby enhancing feedback efficiency. Project page: https://github.com/FeiCuiLengMMbb/paper_MTPL
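The idea of learning from equal preferences can be illustrated with the standard Bradley-Terry preference loss extended to soft labels: a label of 0.5 pushes the two predicted segment returns toward each other, while labels of 0 or 1 recover ordinary explicit-preference learning. This is a hedged sketch under those assumptions, not the exact MTPL objective; the function name and interface are invented for illustration.

```python
import numpy as np

def preference_loss(r1, r2, label):
    """Soft-label Bradley-Terry preference loss (illustrative).

    r1, r2: predicted per-step rewards for two behavior segments
    label:  0.0 if segment 1 is preferred, 1.0 if segment 2 is
            preferred, 0.5 for an equal preference
    """
    ret1, ret2 = np.sum(r1), np.sum(r2)
    # Softmax over the two predicted returns (Bradley-Terry model),
    # shifted by the max for numerical stability.
    m = max(ret1, ret2)
    e1, e2 = np.exp(ret1 - m), np.exp(ret2 - m)
    p2 = e2 / (e1 + e2)
    p1 = 1.0 - p2
    eps = 1e-12  # guard against log(0)
    return -((1.0 - label) * np.log(p1 + eps) + label * np.log(p2 + eps))
```

With label 0.5 the loss is minimized when both segments receive the same predicted return, which matches the abstract's description of promoting similar reward predictions for equally preferred behaviors.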

IJCAI Conference 2025 Conference Paper

Unleashing the Semantic Adaptability of Controlled Diffusion Model for Image Colorization

  • Xiangcheng Du
  • Zhao Zhou
  • Yanlong Wang
  • Yingbin Zheng
  • Xingjiao Wu
  • Peizhu Gong
  • Cheng Jin

Recent data-driven image colorization methods have leveraged pre-trained Text-to-Image (T2I) diffusion models as a generative prior, while still suffering from unsatisfactory and inaccurate semantic-level color control. To address these issues, we propose a Semantic Adaptation method (SeAda) that enhances the prior while considering the semantic discrepancy between color and grayscale image pairs. SeAda employs a semantic adapter to produce refined semantic embeddings and a controlled T2I diffusion model to create reasonably colored images. Specifically, the semantic adapter transfers the embedding from the grayscale to the color domain, while the diffusion model utilizes the refined embedding and prior knowledge to achieve realistic and diverse results. We also design a three-stage training strategy to improve semantic comprehension and prior integration for further performance improvement. Extensive experiments on public datasets demonstrate that our method outperforms existing state-of-the-art techniques, yielding superior performance in image colorization.
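The semantic adapter described in the abstract maps a grayscale-image embedding toward the color-image embedding space before it conditions the diffusion model. A minimal sketch, assuming a residual two-layer MLP; the layer shapes, the residual connection, and all names are assumptions for illustration, not details from the paper.

```python
import numpy as np

def semantic_adapter(gray_emb, W1, b1, W2, b2):
    """Refine a grayscale-image embedding toward the color domain.

    gray_emb: (d,) semantic embedding extracted from the grayscale input
    W1, b1:   first linear layer, (d, h) and (h,)
    W2, b2:   second linear layer, (h, d) and (d,)
    """
    h = np.maximum(0.0, gray_emb @ W1 + b1)  # ReLU hidden layer
    return gray_emb + h @ W2 + b2            # residual refinement
```

The refined embedding would then replace the raw grayscale embedding as the conditioning signal for the controlled T2I diffusion model.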

ECAI Conference 2024 Conference Paper

MindScope: Exploring Cognitive Biases in Large Language Models Through Multi-Agent Systems

  • Zhentao Xie
  • Jiabao Zhao
  • Yilei Wang
  • Jinxin Shi
  • Yanhong Bai
  • Xingjiao Wu
  • Liang He 0001

Detecting cognitive biases in large language models (LLMs) is a fascinating task that aims to probe the existing cognitive biases within these models. Current methods for detecting cognitive biases in language models generally suffer from incomplete detection capabilities and a restricted range of detectable bias types. To address this issue, we introduce the ‘MindScope’ dataset, which distinctively integrates static and dynamic elements. The static component comprises 5,170 open-ended questions spanning 72 cognitive bias categories. The dynamic component leverages a rule-based, multi-agent communication framework to facilitate the generation of multi-round dialogues. This framework is flexible and readily adaptable for various psychological experiments involving LLMs. In addition, we introduce a multi-agent detection method applicable to a wide range of detection tasks, which integrates Retrieval-Augmented Generation (RAG), competitive debate, and a reinforcement learning-based decision module. This method has been shown to improve detection accuracy by as much as 35.10% compared to GPT-4. Codes and appendix are available at https://github.com/2279072142/MindScope.
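The decision module at the end of the detection pipeline has to aggregate the verdicts produced by the debating agents into a single output. A minimal sketch of one plausible aggregation rule, weighted voting; the paper uses a reinforcement learning-based decision module, so the function below and its names are only an illustration.

```python
from collections import Counter

def decide(verdicts, weights=None):
    """Aggregate per-agent bias verdicts into one final decision.

    verdicts: labels emitted by the debating agents
              (e.g. "bias" / "no_bias")
    weights:  optional per-agent confidence scores; in the paper a
              learned policy would play this role
    """
    weights = weights or [1.0] * len(verdicts)
    tally = Counter()
    for verdict, weight in zip(verdicts, weights):
        tally[verdict] += weight
    return tally.most_common(1)[0][0]  # highest-weighted verdict wins
```

With uniform weights this degenerates to majority voting over the debate outcome.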