Arrow Research search

Author name cluster

Joo Hwee Lim

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

3 papers
2 author rows

Possible papers

3

AAAI 2026 Conference Paper

Your AI-Generated Image Detector Can Secretly Achieve SOTA Accuracy, If Calibrated

  • Muli Yang
  • Gabriel James Goenawan
  • Henan Wang
  • Huaiyuan Qin
  • Chenghao Xu
  • Yanhua Yang
  • Fen Fang
  • Ying Sun

Despite being trained on balanced datasets, existing AI-generated image detectors often exhibit systematic bias at test time, frequently misclassifying fake images as real. We hypothesize that this behavior stems from distributional shift in fake samples and implicit priors learned during training. Specifically, models tend to overfit to superficial artifacts that do not generalize well across different generation methods, leading to a misaligned decision threshold when faced with test-time distribution shift. To address this, we propose a theoretically grounded post-hoc calibration framework based on Bayesian decision theory. In particular, we introduce a learnable scalar correction to the model’s logits, optimized on a small validation set from the target distribution while keeping the backbone frozen. This parametric adjustment compensates for distributional shift in model output, realigning the decision boundary even without requiring ground-truth labels. Experiments on challenging benchmarks show that our approach significantly improves robustness without retraining, offering a lightweight and principled solution for reliable and adaptive AI-generated image detection in the open world.
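The scalar logit correction described in the abstract can be illustrated with a minimal sketch. It assumes a supervised binary cross-entropy objective on a small labeled validation set (the abstract does not spell out the exact calibration objective, so that choice is an assumption); the detector backbone stays frozen and only one scalar is fit:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def calibrate_bias(val_logits, val_labels, lr=0.5, steps=200):
    """Fit a single scalar correction b added to frozen-detector logits.

    Minimizes binary cross-entropy on a small validation set by gradient
    descent; the gradient of BCE w.r.t. b is mean(sigmoid(z + b) - y).
    Illustrative only -- not the paper's implementation.
    """
    b = 0.0
    for _ in range(steps):
        grad = np.mean(sigmoid(val_logits + b) - val_labels)
        b -= lr * grad
    return b

# Toy detector whose logits are systematically shifted toward "real"
# (all numbers below are made up for the demo).
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=500)                             # 1 = fake, 0 = real
logits = 2.0 * (2 * labels - 1) - 1.5 + rng.normal(0, 0.5, 500)   # bias of -1.5

b = calibrate_bias(logits, labels)
acc_before = np.mean((logits > 0) == labels)
acc_after = np.mean((logits + b > 0) == labels)
```

Because only one parameter is learned, a handful of validation samples suffices and the detector's ranking of images is unchanged; only the decision threshold moves.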

ICRA 2021 Conference Paper

Towards Efficient Multiview Object Detection with Adaptive Action Prediction

  • Qianli Xu
  • Fen Fang
  • Nicolas Gauthier
  • Wenyu Liang
  • Yan Wu 0002
  • Liyuan Li
  • Joo Hwee Lim

Active vision is a desirable perceptual capability for robots. Existing approaches usually make strong assumptions about the task and environment, and are thus less robust and efficient. This study proposes an adaptive view planning approach to boost the efficiency and robustness of active object detection. We formulate the multi-object detection task as an active multiview object detection problem given the initial locations of the objects. Next, we propose a novel adaptive action prediction (A2P) method built on a deep Q-learning network with a dueling architecture. The A2P method performs view planning based on visual information about multiple objects and adjusts action ranges according to the task status. Evaluated on the AVD dataset, A2P yields a 21.9% increase in detection accuracy in unfamiliar environments while improving efficiency by 22.7%. On the T-LESS dataset, multi-object detection boosts efficiency by more than 30% while achieving equivalent detection accuracy.
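The dueling architecture mentioned in the abstract splits the Q-function into a state value and per-action advantages. A minimal sketch of the aggregation step, with random weights standing in for the learned network (the feature size and action count are illustrative, not from the paper):

```python
import numpy as np

def dueling_q(features, w_value, w_adv):
    """Combine value and advantage streams as in a dueling Q-network head.

    Q(s, a) = V(s) + A(s, a) - mean_a A(s, a); subtracting the mean
    advantage keeps the value/advantage decomposition identifiable.
    """
    v = features @ w_value          # scalar state value V(s)
    a = features @ w_adv            # per-action advantages A(s, a)
    return v + (a - a.mean())

# Toy state features and weights (hypothetical sizes for the demo).
rng = np.random.default_rng(1)
feat = rng.normal(size=8)
w_v = rng.normal(size=8)
w_a = rng.normal(size=(8, 5))       # e.g. 5 candidate view-planning actions
q = dueling_q(feat, w_v, w_a)
```

Picking `q.argmax()` then selects the next view; in A2P this head would sit on top of a deep feature extractor rather than raw random weights.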

IROS 2018 Conference Paper

Egocentric Spatial Memory

  • Mengmi Zhang
  • Keng Teck Ma
  • Shih-Cheng Yen
  • Joo Hwee Lim
  • Qi Zhao 0001
  • Jiashi Feng

Egocentric spatial memory (ESM) defines a memory system for encoding, storing, recognizing, and recalling spatial information about the environment from an egocentric perspective. We introduce an integrated deep neural network architecture, the ESM network (ESMN), for modeling ESM. It learns to estimate the occupancy state of the world and progressively constructs top-down 2D global maps from egocentric views in a spatially extended environment. During exploration, the model updates its belief about the global map from local observations using a recurrent neural network. It also augments the local mapping with a novel external memory that encodes and stores latent representations of visited places over long-term exploration in large environments, enabling agents to perform place recognition and hence loop closure. Our ESM network contributes the following: (1) without feature engineering, the model predicts free space from egocentric views efficiently in an end-to-end manner; (2) unlike other deep learning-based mapping systems, ESMN handles continuous actions and states, which is vitally important for robotic control in real applications. In experiments, we demonstrate its accurate and robust global mapping capabilities in 3D virtual mazes and realistic indoor environments by comparing with several competitive baselines.
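The belief update over the global map can be illustrated with a classical log-odds occupancy fusion step. This is a hand-coded analogue of what the ESM network learns with its recurrent architecture, not the paper's method, and the grid size and evidence values are illustrative:

```python
import numpy as np

def update_global_map(log_odds, local_obs, mask):
    """Fuse a local egocentric observation into the global occupancy belief.

    Standard Bayesian log-odds update: add the observation's log-odds
    evidence to every cell the current view covers (the mask).
    """
    log_odds = log_odds.copy()
    log_odds[mask] += local_obs[mask]
    return log_odds

def occupancy_prob(log_odds):
    """Convert log-odds back to occupancy probabilities."""
    return 1.0 / (1.0 + np.exp(-log_odds))

grid = np.zeros((4, 4))             # prior belief: p = 0.5 everywhere
obs = np.full((4, 4), 2.0)          # toy evidence for "occupied"
mask = np.zeros((4, 4), dtype=bool)
mask[:2, :2] = True                 # cells visible from the current pose

grid = update_global_map(grid, obs, mask)
probs = occupancy_prob(grid)
```

Observed cells move toward "occupied" while unvisited cells stay at the 0.5 prior; ESMN additionally stores latent place codes in its external memory to recognize revisited cells for loop closure.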