Arrow Research search

Author name cluster

Ali Haider

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

3 papers
1 author row

Possible papers (3)

EAAI Journal 2026 Journal Article

Assessment of camouflage in heterogeneous environments through deep learning: Analyzing object patterns and effectiveness

  • Ali Haider
  • Rana Hammad Raza

Camouflage is an attempt to hide an object’s segmentation and texture by blending it into the surrounding environment or mimicking the background’s texture. The core objective of camouflaged object detection (COD) is to identify objects that are fully integrated into their surroundings, as the high similarity between the background and target object significantly complicates detection. Despite substantial research in this domain, achieving robust detection across diverse environments remains a critical challenge. In this paper, we explore the intricate domain of COD and evaluate camouflage techniques across heterogeneous environments, such as urban landscapes, wildlife habitats, and military scenarios. The primary contribution lies in creating the Adaptive Camouflage Dataset (ACD1K), which contains 1078 meticulously annotated images of human-based camouflaged subjects embedded within their environments. Each image includes detailed object-level annotations and bounding boxes, enabling advancements in computer vision tasks such as detection, classification, and segmentation. We also examine the effectiveness of different camouflage patterns and concealment strategies in diverse environments. Furthermore, we benchmark the ACD1K dataset using state-of-the-art (SOTA) COD frameworks, leading to insightful results and highlighting future research directions in this field.

AAAI Conference 2026 Conference Paper

I-INR: Iterative Implicit Neural Representations

  • Ali Haider
  • Muhammad Salman Ali
  • Maryam Qamar
  • Tahir Khalil
  • Soo Ye Kim
  • Jihyong Oh
  • Enzo Tartaglione
  • Sung-Ho Bae

Implicit Neural Representations (INRs) have revolutionized signal processing and computer vision by modeling signals as continuous, differentiable functions parameterized by neural networks. However, INRs are prone to the spectral bias problem, limiting their ability to retain high-frequency information, and often struggle with noise robustness. Motivated by recent trends in iterative refinement processes, we propose Iterative Implicit Neural Representations (I-INRs). This novel plug-and-play framework iteratively refines signal reconstructions to restore high-frequency details, improve noise robustness, and enhance generalization, ultimately delivering superior reconstruction quality. I-INRs integrate seamlessly into existing INR architectures with only a 0.5–2% increase in parameters. During reconstruction, the iterative refinement adds just 0.8–1.6% additional FLOPs over the baseline while delivering a substantial performance boost of up to +2.0 PSNR. Extensive experiments demonstrate that I-INRs consistently outperform WIRE, SIREN, and Gauss across various computer vision tasks, including image fitting, image denoising, and object occupancy prediction.

AAAI Conference 2024 Conference Paper

Descanning: From Scanned to the Original Images with a Color Correction Diffusion Model

  • Junghun Cha
  • Ali Haider
  • Seoyun Yang
  • Hoeyeong Jin
  • Subin Yang
  • A. F. M. Shahab Uddin
  • Jaehyoung Kim
  • Soo Ye Kim

A significant volume of analog information, i.e., documents and images, has been digitized in the form of scanned copies for storing, sharing, and/or analyzing in the digital world. However, the quality of such content is severely degraded by various distortions caused by printing, storing, and scanning processes in the physical world. Although restoring high-quality content from scanned copies has become an indispensable task for many products, it has not been systematically explored, and to the best of our knowledge, no public datasets are available. In this paper, we define this problem as Descanning and introduce a new high-quality and large-scale dataset named DESCAN-18K. It contains 18K pairs of original and scanned images collected in the wild containing multiple complex degradations. In order to eliminate such complex degradations, we propose a new image restoration model called DescanDiffusion, consisting of a color encoder that corrects the global color degradation and a conditional denoising diffusion probabilistic model (DDPM) that removes local degradations. To further improve the generalization ability of DescanDiffusion, we also design a synthetic data generation scheme by reproducing prominent degradations in scanned images. We demonstrate that our DescanDiffusion outperforms other baselines, including commercial restoration products, objectively and subjectively, via comprehensive experiments and analyses.