Arrow Research search

Author name cluster

Azad Singh

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

3 papers
2 author rows

Possible papers

3

ECAI 2025 · Conference Paper

Scale-Aware Adaptive Feature Quantization for Robust Medical Image Representation Learning

  • Azad Singh
  • Deepak Mishra 0003

CNNs have become the standard for medical image interpretation, but concerns persist about their reliability in real-world applications. CNNs can be sensitive to small variations in image quality and vulnerable to adversarial attacks, potentially leading to inaccurate diagnoses. To address these issues, we introduce a novel scale-aware adaptive feature quantization approach. This enhances the robustness and reliability of CNNs by adaptively combining quantized representations from multiple scales, improving performance on low-quality or perturbed images. Our approach uses soft codes and dynamic weighting to adaptively combine features from different scales, creating a more informative final quantized representation. Experimental results on diverse medical datasets, including chest X-rays and dermatoscopic images, demonstrate the effectiveness of our approach. Our method significantly outperforms both standard CNNs and state-of-the-art approaches, with substantial gains across all metrics (AUC, F1 score). These improvements range from 2.6% to 11%, demonstrating our method’s superior performance and reliability for medical diagnosis in challenging real-world scenarios.
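The abstract describes two mechanisms: soft-code quantization (features mapped to convex combinations of codewords) and dynamic per-scale weighting. A minimal sketch of that general idea, not the paper's implementation — `soft_quantize`, `combine_scales`, and all parameter names here are hypothetical:

```python
import numpy as np

def soft_quantize(features, codebook, temperature=1.0):
    """Soft assignment of feature vectors to codewords (a sketch, not the paper's code).

    features: (n, d) array; codebook: (k, d) array of codewords.
    Returns (n, d): each row is a softmax-weighted convex combination of codewords.
    """
    # Euclidean distance from every feature to every codeword -> (n, k)
    d = np.linalg.norm(features[:, None, :] - codebook[None, :, :], axis=-1)
    logits = -d / temperature
    # Numerically stable softmax over the codeword axis
    w = np.exp(logits - logits.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    return w @ codebook

def combine_scales(quantized_per_scale, scale_logits):
    """Dynamic weighting: softmax over per-scale scores, then a weighted sum."""
    e = np.exp(scale_logits - np.max(scale_logits))
    alpha = e / e.sum()
    return sum(a * q for a, q in zip(alpha, quantized_per_scale))
```

In the paper these weights would presumably be predicted by the network; here they are fixed inputs purely to illustrate the combination step.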

JBHI 2024 · Journal Article

MLVICX: Multi-Level Variance-Covariance Exploration for Chest X-Ray Self-Supervised Representation Learning

  • Azad Singh
  • Vandan Gorade
  • Deepak Mishra

Self-supervised learning (SSL) reduces the need for manual annotation in deep learning models for medical image analysis. By learning representations from unlabelled data, self-supervised models perform well on tasks that require little to no fine-tuning. However, for medical images such as chest X-rays, characterised by complex anatomical structures and diverse clinical conditions, a need arises for representation learning techniques that encode fine-grained details while preserving the broader contextual information. In this context, we introduce MLVICX (Multi-Level Variance-Covariance Exploration for Chest X-ray Self-Supervised Representation Learning), an approach to capture rich representations in the form of embeddings from chest X-ray images. Central to our approach is a novel multi-level variance and covariance exploration strategy that effectively enables the model to detect diagnostically meaningful patterns while reducing redundancy. MLVICX promotes the retention of critical medical insights by adapting global and local contextual details and enhancing the variance and covariance of the learned embeddings. We demonstrate the performance of MLVICX in advancing self-supervised chest X-ray representation learning through comprehensive experiments. The performance enhancements we observe across various downstream tasks highlight the significance of the proposed approach in enhancing the utility of chest X-ray embeddings for precision medical diagnosis and comprehensive image analysis. For pretraining, we used the NIH-Chest X-ray dataset. Downstream tasks utilized the NIH-Chest X-ray, Vinbig-CXR, RSNA pneumonia, and SIIM-ACR Pneumothorax datasets. Overall, we observe up to 3% performance gain over SOTA SSL approaches in various downstream tasks. Additionally, to demonstrate the generalizability of our method, we conducted additional experiments on fundus images and observed superior performance on multiple datasets. Code is available on GitHub.
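Variance-covariance regularization of embeddings is a known family of SSL objectives (e.g. VICReg-style losses): a hinge on per-dimension standard deviation prevents collapse, and penalizing off-diagonal covariance reduces redundancy across dimensions. A minimal sketch of those two generic terms, assuming nothing about MLVICX's actual multi-level formulation:

```python
import numpy as np

def variance_covariance_terms(z, eps=1e-4):
    """Generic variance/covariance regularizers on a batch of embeddings (a sketch).

    z: (n, d) array of n embeddings of dimension d.
    """
    z = z - z.mean(axis=0)                       # center per dimension
    std = np.sqrt(z.var(axis=0) + eps)
    # Hinge: push each dimension's std toward at least 1 (anti-collapse)
    variance_term = np.mean(np.maximum(0.0, 1.0 - std))
    n, d = z.shape
    cov = (z.T @ z) / (n - 1)
    off_diag = cov - np.diag(np.diag(cov))
    # Penalize cross-dimension correlation -> less redundant embeddings
    covariance_term = (off_diag ** 2).sum() / d
    return variance_term, covariance_term
```

In a training loop these terms would be added (with weights) to an invariance loss between two augmented views; the weights and the multi-level application are design choices the abstract does not specify.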

AAAI 2022 · Short Paper

MBGRLp: Multiscale Bootstrap Graph Representation Learning on Pointcloud (Student Abstract)

  • Vandan Gorade
  • Azad Singh
  • Deepak Mishra

Point clouds have gained a lot of attention with the availability of large amounts of point cloud data and growing applications such as city planning and self-driving cars. However, current methods often rely on labeled information and costly processing, such as converting point clouds to voxels. We propose a self-supervised learning approach that tackles these problems, avoiding both labelling effort and the additional memory cost. Our proposed method achieves results comparable to supervised and unsupervised baselines on widely used benchmark datasets for self-supervised point cloud classification, such as ShapeNet and ModelNet10/40.
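"Bootstrap" in the title suggests a BYOL-style online/target encoder pair, where the target network's weights track the online network's by an exponential moving average rather than by gradient descent. A minimal sketch of that generic update rule only — the function name and parameters are hypothetical, and this is not the authors' code:

```python
import numpy as np

def ema_update(online_params, target_params, tau=0.99):
    """BYOL-style target update (a sketch): target <- tau * target + (1 - tau) * online.

    Both arguments are lists of parameter arrays in matching order.
    """
    return [tau * t + (1 - tau) * o for o, t in zip(online_params, target_params)]
```

With tau close to 1, the target changes slowly, giving the online encoder a stable regression target; the multiscale graph construction the paper adds on top is not sketched here.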