Arrow Research search

Author name cluster

Chunxiao Li

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

9 papers
1 author row

Possible papers

9

AAAI Conference 2026 Conference Paper

Thinking Aesthetics Assessment of Image Color Temperature: Models, Datasets and Benchmarks

  • Jinguang Cheng
  • Chunxiao Li
  • Shuai He
  • Taiyu Chen
  • Anlong Ming

Color temperature, as a crucial attribute influencing image color, plays a critical role in Image Aesthetics Assessment (IAA). Yet, within the existing IAA field, little light has been shed on assessing the aesthetic quality of image color temperature. To bridge this gap, we introduce a new task: Image Color Temperature Aesthetics Assessment (ICTAA). However, this task poses the following challenges: 1) Perceptual Sensitivity: humans exhibit high sensitivity to subtle shifts in color temperature, requiring a model capable of fine-grained discrimination; 2) Spectral Continuity: the theoretical modeling of color temperature aesthetics requires continuous labels, but the just-noticeable-difference property of human perception makes continuous labeling infeasible, necessitating a well-designed labeling strategy. To address these challenges, we make the following efforts. First, we propose a multi-modal contrastive learning framework, ICTA2Net, that models color temperature differences between image pairs while strictly controlling other visual attributes. Second, leveraging color temperature transitivity, we design a weakly supervised strategy that discretely samples images based on anchor images and human perception to build contrastive relations across color temperatures, enabling learning from discrete labels. Third, we construct a color temperature aesthetics dataset, ICTAA240K, and a benchmark for validation. Additionally, we propose a new metric, Information Entropy-weighted Accuracy (IEA), which weights accuracy by the degree of annotation disagreement to reflect model performance across varying sample difficulties, complementing existing evaluation metrics. Experiments show our method outperforms existing state-of-the-art IAA methods on ICTAA240K, thereby setting an effective roadmap for ICTAA.
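The abstract defines IEA only verbally. A minimal sketch of one plausible reading, where each sample's contribution to accuracy is weighted by the normalized entropy of its human annotation distribution; the exact weighting scheme is an assumption here, not the paper's definition:

```python
import numpy as np

def iea(correct, annotation_counts):
    """Information Entropy-weighted Accuracy (one plausible reading).

    correct: boolean array, whether the model is right per sample.
    annotation_counts: (n_samples, n_labels) counts of human votes per label.
    Samples whose annotators disagree more (higher entropy) get higher weight.
    """
    p = annotation_counts / annotation_counts.sum(axis=1, keepdims=True)
    with np.errstate(divide="ignore", invalid="ignore"):
        ent = -np.nansum(np.where(p > 0, p * np.log(p), 0.0), axis=1)
    ent /= np.log(annotation_counts.shape[1])  # normalize entropy to [0, 1]
    w = ent  # assumption: weight hard (high-disagreement) samples more
    return float((w * correct).sum() / w.sum())
```

Under this reading, unanimously-annotated samples carry zero weight; the paper's actual formulation likely balances easy and hard samples differently.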

AAAI Conference 2025 Conference Paper

An Efficient Framework for Enhancing Discriminative Models via Diffusion Techniques

  • Chunxiao Li
  • Xiaoxiao Wang
  • Boming Miao
  • Chuanlong Xie
  • Zizhe Wang
  • Yao Zhu

Image classification serves as the cornerstone of computer vision, traditionally achieved through discriminative models based on deep neural networks. Recent advancements have introduced classification methods derived from generative models, which offer the advantage of zero-shot classification. However, these methods suffer from two main drawbacks: high computational overhead and inferior performance compared to discriminative models. Inspired by the coordinated cognitive processes of rapid-slow pathway interactions in the human brain during visual signal recognition, we propose the Diffusion-Based Discriminative Model Enhancement Framework (DBMEF). This framework seamlessly integrates discriminative and generative models in a training-free manner, leveraging discriminative models for initial predictions and endowing deep neural networks with rethinking capabilities via diffusion models. Consequently, DBMEF can effectively enhance the classification accuracy and generalization capability of discriminative models in a plug-and-play manner. We have conducted extensive experiments across 17 prevalent deep model architectures with different training methods, including both CNN-based models such as ResNet and Transformer-based models like ViT, to demonstrate the effectiveness of the proposed DBMEF. Specifically, the framework yields a 1.51% performance improvement for ResNet-50 on the ImageNet dataset and 3.02% on the ImageNet-A dataset. In conclusion, our research introduces a novel paradigm for image classification, demonstrating stable improvements across different datasets and neural networks.
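The rapid-slow pathway idea can be sketched as a confidence-gated re-ranking: trust the fast discriminative model when it is confident, otherwise let a diffusion model "re-think" the top candidates. The gating mechanism, top-k value, and `diffusion_loss` callable below are all illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

def dbmef_predict(logits, diffusion_loss, threshold=0.7):
    """Confidence-gated prediction (a sketch of the DBMEF idea).

    logits: (n_classes,) scores from the fast discriminative model.
    diffusion_loss: hypothetical callable, class_id -> scalar conditional
        denoising loss from a diffusion model (lower = more plausible).
    """
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    if probs.max() >= threshold:       # fast pathway: confident prediction
        return int(probs.argmax())
    topk = np.argsort(probs)[-3:]      # slow pathway: re-examine top-3
    losses = np.array([diffusion_loss(int(c)) for c in topk])
    return int(topk[losses.argmin()])
```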

JAIR Journal 2025 Journal Article

Improving and Understanding the Power of Satisfaction-Driven Clause Learning

  • Albert Oliveras
  • Chunxiao Li
  • Darryl Wu
  • Jonathan Chung
  • Vijay Ganesh

In this paper, we explain how to improve Satisfaction-Driven Clause Learning (SDCL) SAT solvers by using a MaxSAT-based technique that enables them to learn shorter, and hence better, redundant clauses. A thorough empirical evaluation of an implementation on the MapleSAT solver shows that the resulting system solves Mutilated Chess Board (MCB) problems significantly faster than CDCL solvers, without requiring any alteration to the branching heuristic used by the underlying CDCL SAT solver. Additionally, we improve the understanding of the power of these solvers by proving that, given a refutation of a formula that consists of resolution and redundant-clause addition steps, an SDCL solver is able to produce a proof whose size is polynomial with respect to the size of the original refutation.

AAAI Conference 2025 Conference Paper

Uncertainty-aware Knowledge Tracing

  • Weihua Cheng
  • Hanwen Du
  • Chunxiao Li
  • Ersheng Ni
  • Liangdi Tan
  • Tianqi Xu
  • Yongxin Ni

Knowledge Tracing (KT) is crucial in educational assessment, which focuses on depicting students' learning states and assessing students' mastery of subjects. With the rise of modern online learning platforms, particularly massive open online courses (MOOCs), an abundance of interaction data has greatly advanced the development of KT technology. Previous research commonly adopts deterministic representations to capture students' knowledge states, which neglects the uncertainty during student interactions and thus fails to model the true knowledge state in the learning process. In light of this, we propose an Uncertainty-Aware Knowledge Tracing model (UKT) which employs stochastic distribution embeddings to represent the uncertainty in student interactions, with a Wasserstein self-attention mechanism designed to capture the transition of state distributions in student learning behaviors. Additionally, we introduce an aleatory uncertainty-aware contrastive learning loss, which strengthens the model's robustness towards different types of uncertainties. Extensive experiments on six real-world datasets demonstrate that UKT not only significantly surpasses existing deep learning-based models in KT prediction, but also shows unique advantages in handling the uncertainty of student interactions.
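The Wasserstein self-attention idea has a concrete core: for diagonal-Gaussian embeddings, the squared 2-Wasserstein distance has the closed form ||mu_i - mu_j||^2 + ||sigma_i - sigma_j||^2, which can drive attention weights. A minimal numpy sketch of that mechanism, not the paper's exact formulation:

```python
import numpy as np

def wasserstein_attention(mu, sigma):
    """Self-attention scores from 2-Wasserstein distances between
    diagonal-Gaussian interaction embeddings (an illustrative sketch).

    mu, sigma: (seq_len, dim) means and std-devs of each interaction.
    """
    d_mu = ((mu[:, None, :] - mu[None, :, :]) ** 2).sum(-1)
    d_sig = ((sigma[:, None, :] - sigma[None, :, :]) ** 2).sum(-1)
    w2 = d_mu + d_sig                              # squared W2 distance
    scores = -w2                                   # closer => attend more
    scores -= scores.max(axis=-1, keepdims=True)   # numerical stability
    attn = np.exp(scores)
    return attn / attn.sum(axis=-1, keepdims=True)
```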

IJCAI Conference 2024 Conference Paper

Boosting Single Positive Multi-label Classification with Generalized Robust Loss

  • Yanxi Chen
  • Chunxiao Li
  • Xinyang Dai
  • Jinhuan Li
  • Weiyu Sun
  • Yiming Wang
  • Renyuan Zhang
  • Tinghe Zhang

Multi-label learning (MLL) requires comprehensive multi-semantic annotations that are hard to obtain in full, often resulting in missing-label scenarios. In this paper, we investigate Single Positive Multi-label Learning (SPML), where each image is associated with merely one positive label. Existing SPML methods focus only on designing losses using mechanisms such as hard pseudo-labeling and robust losses, mostly leading to unacceptable false negatives. To address this issue, we first propose a generalized loss framework based on expected risk minimization to provide soft pseudo labels, and show that existing losses can be seamlessly converted into our framework. In particular, we design a novel robust loss based on our framework, which enjoys flexible coordination between false positives and false negatives, and can additionally deal with the imbalance between positive and negative samples. Extensive experiments show that our approach can significantly improve SPML performance and outperform the vast majority of state-of-the-art methods on all four benchmarks. Our code is available at https://github.com/yan4xi1/GRLoss.

AAAI Conference 2023 Conference Paper

SWBNet: A Stable White Balance Network for sRGB Images

  • Chunxiao Li
  • Xuejing Kang
  • Zhifeng Zhang
  • Anlong Ming

The white balance methods for sRGB images (sRGB-WB) aim to directly remove their color temperature shifts. Despite achieving promising white balance (WB) performance, the existing methods suffer from WB instability, i.e., their results are inconsistent for images with different color temperatures. We propose a stable white balance network (SWBNet) to alleviate this problem. It learns color temperature-insensitive features to generate white-balanced images, resulting in consistent WB results. Specifically, the color temperature-insensitive features are learned by implicitly suppressing low-frequency information sensitive to color temperatures. Then, a color temperature contrastive loss is introduced to maximize the information shared among features of the same scene at different color temperatures. This way, features from the same scene are more insensitive to color temperatures regardless of the inputs. We also present a color temperature sensitivity-oriented transformer that globally perceives multiple color temperature shifts within an image and corrects them by different weights. It helps to improve the accuracy of the stabilized SWBNet, especially for multi-illumination sRGB images. Experiments indicate that our SWBNet achieves stable and remarkable WB performance.
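The color temperature contrastive loss described above resembles an InfoNCE objective in which same-scene features at different color temperatures act as positives. A sketch under that assumption; the similarity measure and temperature parameter `tau` are illustrative, not the paper's choices:

```python
import numpy as np

def ct_contrastive_loss(feats, scene_ids, tau=0.1):
    """InfoNCE-style sketch of a color-temperature contrastive loss:
    features of the same scene (rendered at different color temperatures)
    are pulled together, different scenes pushed apart.

    feats: (n, dim) features; scene_ids: (n,) scene labels.
    """
    feats = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sim = feats @ feats.T / tau
    np.fill_diagonal(sim, -np.inf)  # exclude self-similarity
    logprob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    pos = (scene_ids[:, None] == scene_ids[None, :]) & ~np.eye(len(feats), dtype=bool)
    return float(-(logprob[pos]).mean())
```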

IJCAI Conference 2023 Conference Paper

WBFlow: Few-shot White Balance for sRGB Images via Reversible Neural Flows

  • Chunxiao Li
  • Xuejing Kang
  • Anlong Ming

The sRGB white balance methods aim to correct the nonlinear color cast of sRGB images without accessing raw values. Although existing methods have achieved increasingly better results, their generalization to sRGB images from multiple cameras is still underexplored. In this paper, we propose a network, WBFlow, that not only performs superior white balance for sRGB images but also generalizes well to multiple cameras. Specifically, we take advantage of neural flow to ensure the reversibility of WBFlow, which enables lossless rendering of color-cast sRGB images back to pseudo-raw features for linear white balancing and thus achieves superior performance. Furthermore, inspired by camera transformation approaches, we have designed a camera transformation (CT) in pseudo-raw feature space to generalize WBFlow to different cameras via few-shot learning. By utilizing a few sRGB images from an untrained camera, our WBFlow can perform well on this camera by learning the camera-specific parameters of CT. Extensive experiments show that WBFlow achieves superior camera generalization and accuracy on three public datasets as well as our rendered multiple-camera sRGB dataset. Our code is available at https://github.com/ChunxiaoLe/WBFlow.
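The lossless rendering back to pseudo-raw features relies on the invertibility of neural flows. An affine coupling layer, the standard invertible building block in such flows, can be sketched as follows; this is an illustrative toy layer, not WBFlow's architecture:

```python
import numpy as np

def coupling_forward(x, w, b):
    """One affine coupling layer: split channels in half; the untouched
    half parameterizes an invertible affine map of the other half."""
    x1, x2 = np.split(x, 2, axis=-1)
    s = np.tanh(x1 @ w + b)           # log-scale from the untouched half
    y2 = x2 * np.exp(s) + (x1 @ w)    # toy affine transform
    return np.concatenate([x1, y2], axis=-1)

def coupling_inverse(y, w, b):
    """Exact inverse: recover x from y losslessly."""
    y1, y2 = np.split(y, 2, axis=-1)
    s = np.tanh(y1 @ w + b)           # recomputable: y1 == x1
    x2 = (y2 - (y1 @ w)) * np.exp(-s)
    return np.concatenate([y1, x2], axis=-1)
```

Because the first half passes through unchanged, the scale and shift can be recomputed exactly at inversion time, which is what makes the rendering lossless.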

JMLR Journal 2022 Journal Article

Rethinking Nonlinear Instrumental Variable Models through Prediction Validity

  • Chunxiao Li
  • Cynthia Rudin
  • Tyler H. McCormick

Instrumental variables (IV) are widely used in the social and health sciences in situations where a researcher would like to measure a causal effect but cannot perform an experiment. For valid causal inference in an IV model, there must be external (exogenous) variation that (i) has a sufficiently large impact on the variable of interest (called the relevance assumption) and where (ii) the only pathway through which the external variation impacts the outcome is via the variable of interest (called the exclusion restriction). For statistical inference, researchers must also make assumptions about the functional form of the relationship between the three variables. Current practice assumes (i) and (ii) are met, then postulates a functional form with limited input from the data. In this paper, we describe a framework that leverages machine learning to validate these typically unchecked but consequential assumptions in the IV framework, providing the researcher empirical evidence about the quality of the instrument given the data at hand. Central to the proposed approach is the idea of prediction validity. Prediction validity checks that error terms, which should be independent of the instrument, cannot be modeled with machine learning any better than a model that is identically zero. We use prediction validity to develop both one-stage and two-stage approaches for IV, and demonstrate their performance on an example relevant to climate change policy.
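The prediction-validity check can be sketched concretely: fit a flexible regressor of the residuals on the instrument and compare its held-out error against the identically-zero model. The k-NN regressor and the simple gap statistic below are illustrative assumptions, not the paper's estimator or test:

```python
import numpy as np

def prediction_validity_gap(z, resid, k=10, seed=0):
    """Can the instrument z predict the residuals better than the
    identically-zero model? A clearly positive gap suggests the
    exclusion restriction may be violated (illustrative sketch).

    z, resid: 1-D arrays of instrument values and model residuals.
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(z))
    half = len(z) // 2
    tr, te = idx[:half], idx[half:]
    # k-NN regression of residuals on the instrument, held-out split
    d = np.abs(z[te, None] - z[None, tr])
    nn = np.argsort(d, axis=1)[:, :k]
    pred = resid[tr][nn].mean(axis=1)
    mse_model = ((resid[te] - pred) ** 2).mean()
    mse_zero = (resid[te] ** 2).mean()   # the identically-zero predictor
    return float(mse_zero - mse_model)   # > 0: instrument predicts residuals
```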

AAAI Conference 2022 Conference Paper

Transfer Learning for Color Constancy via Statistic Perspective

  • Yuxiang Tang
  • Xuejing Kang
  • Chunxiao Li
  • Zhaowen Lin
  • Anlong Ming

Color Constancy aims to correct image color casts caused by scene illumination. Recently, although deep learning approaches have improved remarkably on single-camera data, these models still suffer from a seriously insufficient data problem, resulting in shallow model capacity and degradation in multi-camera settings. In this paper, to alleviate this problem, we present a Transfer Learning Color Constancy (TLCC) method that leverages cross-camera RAW data and massive unlabeled sRGB data to support training. Specifically, TLCC consists of the Statistic Estimation Scheme (SE-Scheme) and Color-Guided Adaption Branch (CGA-Branch). SE-Scheme builds a statistic perspective to map the camera-related illumination labels into camera-agnostic form and produce pseudo labels for sRGB data, which greatly expands data for joint training. CGA-Branch further promotes efficient transfer learning from sRGB to RAW data by extracting color information to regularize the backbone's features adaptively. Experimental results show that TLCC overcomes the data limitation and model degradation, outperforming state-of-the-art methods on popular benchmarks. Moreover, the experiments also prove TLCC is capable of learning new scene information from sRGB data to improve accuracy on RAW images with similar scenes.