Arrow Research search

Author name cluster

Karianto Leman

Papers possibly associated with this exact author name in Arrow. This page groups case-insensitive exact-name matches and is not a full identity-disambiguation profile.

3 papers
2 author rows

Possible papers (3)

AAAI Conference 2026 · Conference Paper

Condensed Data Expansion Using Model Inversion for Knowledge Distillation

  • Kuluhan Binici
  • Shivam Aggarwal
  • Cihan Acar
  • Nam Trung Pham
  • Karianto Leman
  • Gim Hee Lee
  • Tulika Mitra

Condensed datasets offer a compact representation of larger datasets, but training models directly on them or using them to enhance model performance through knowledge distillation (KD) can result in suboptimal outcomes due to limited information. To address this, we propose a method that expands condensed datasets using model inversion, a technique for generating synthetic data based on the impressions of a pre-trained model on its training data. This approach is particularly well-suited for KD scenarios, as the teacher model is already pre-trained and retains knowledge of the original training data. By creating synthetic data that complements the condensed samples, we enrich the training set and better approximate the underlying data distribution, leading to improvements in student model accuracy during knowledge distillation. Our method demonstrates significant gains in KD accuracy compared to using condensed datasets alone and outperforms standard model inversion-based KD methods by up to 11.4% across various datasets and model architectures. Importantly, it remains effective even when using as few as one condensed sample per class, and can also enhance performance in few-shot scenarios where only limited real data samples are available.
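The core of model inversion, as the abstract describes it, is generating synthetic inputs from a pre-trained teacher's "impressions" of its training data, typically by optimizing an input until the teacher is confident it belongs to a target class. The sketch below illustrates that idea only in miniature: the "teacher" is a hypothetical fixed linear-softmax classifier (not the paper's network), and all names, sizes, and hyperparameters are illustrative assumptions, not the authors' method.

```python
import numpy as np

# Hypothetical pre-trained "teacher": a fixed linear-softmax classifier
# standing in for a real network. W, b, and the sizes are illustrative.
rng = np.random.default_rng(0)
n_features, n_classes = 8, 3
W = rng.normal(size=(n_features, n_classes))
b = rng.normal(size=n_classes)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def invert(target_class, steps=300, lr=0.5):
    """Gradient-descend a synthetic input toward low teacher cross-entropy
    for `target_class` -- the basic mechanism behind model inversion."""
    x = rng.normal(size=n_features) * 0.01
    onehot = np.eye(n_classes)[target_class]
    for _ in range(steps):
        p = softmax(x @ W + b)
        # Analytic input gradient of cross-entropy for this linear model.
        grad_x = W @ (p - onehot)
        x -= lr * grad_x
    return x, softmax(x @ W + b)[target_class]

# One synthetic sample the teacher confidently assigns to class 1.
x_syn, conf = invert(target_class=1)
```

In the paper's setting such inverted samples would be mixed with the condensed samples to enrich the distillation set; real implementations also add regularizers (e.g. matching batch-norm statistics) that this toy model has no analogue for.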

AAAI Conference 2022 · Conference Paper

Robust and Resource-Efficient Data-Free Knowledge Distillation by Generative Pseudo Replay

  • Kuluhan Binici
  • Shivam Aggarwal
  • Nam Trung Pham
  • Karianto Leman
  • Tulika Mitra

Data-Free Knowledge Distillation (KD) allows knowledge transfer from a trained neural network (teacher) to a more compact one (student) in the absence of original training data. Existing works use a validation set to monitor the accuracy of the student over real data and report the highest performance throughout the entire process. However, validation data may not be available at distillation time either, making it infeasible to record the student snapshot that achieved the peak accuracy. Therefore, a practical data-free KD method should be robust and ideally provide monotonically increasing student accuracy during distillation. This is challenging because the student experiences knowledge degradation due to the distribution shift of the synthetic data. A straightforward approach to overcome this issue is to store and rehearse the generated samples periodically, which increases the memory footprint and creates privacy concerns. We propose to model the distribution of the previously observed synthetic samples with a generative network. In particular, we design a Variational Autoencoder (VAE) with a training objective that is customized to learn the synthetic data representations optimally. The student is rehearsed by the generative pseudo replay technique, with samples produced by the VAE. Hence knowledge degradation can be prevented without storing any samples. Experiments on image classification benchmarks show that our method optimizes the expected value of the distilled model accuracy while eliminating the large memory overhead incurred by the sample-storing methods.
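The replay idea above can be caricatured without a VAE: instead of storing past synthetic samples, keep only running per-class statistics and sample "replayed" batches from the fitted distribution. The sketch below substitutes a per-class Gaussian memory (via Welford's running-variance update) for the paper's customized VAE; the class, method names, and dimensions are assumptions for illustration, and a Gaussian is a deliberate simplification of what the VAE actually learns.

```python
import numpy as np

rng = np.random.default_rng(1)

class GaussianReplayMemory:
    """Toy stand-in for the paper's VAE: model previously observed
    synthetic samples with running per-class mean/variance and replay
    from that fit, so no raw samples are ever stored."""

    def __init__(self, n_features):
        self.n_features = n_features
        self.n = {}     # per-class sample counts
        self.mean = {}  # per-class running means
        self.m2 = {}    # per-class sums of squared deviations (Welford)

    def observe(self, x, label):
        if label not in self.n:
            self.n[label] = 0
            self.mean[label] = np.zeros(self.n_features)
            self.m2[label] = np.zeros(self.n_features)
        self.n[label] += 1
        delta = x - self.mean[label]
        self.mean[label] += delta / self.n[label]
        self.m2[label] += delta * (x - self.mean[label])

    def replay(self, label, k):
        """Sample k pseudo-replay vectors for a class from the fitted Gaussian."""
        var = self.m2[label] / max(self.n[label] - 1, 1)
        return rng.normal(self.mean[label], np.sqrt(var + 1e-8),
                          size=(k, self.n_features))

# Feed the memory a stream of (synthetic) samples, then replay a batch.
mem = GaussianReplayMemory(n_features=4)
for _ in range(200):
    mem.observe(rng.normal(loc=2.0, scale=0.5, size=4), label=0)
batch = mem.replay(label=0, k=16)
```

In the paper's pipeline, batches like `batch` would be interleaved with freshly generated samples during distillation so the student keeps rehearsing older synthetic distributions and avoids the knowledge degradation caused by distribution shift.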

IROS Conference 2013 · Conference Paper

Sling bag and backpack detection for human appearance semantic in vision system

  • Teck Wee Chua
  • Karianto Leman
  • Hee Lin Wang
  • Nam Trung Pham
  • Richard Chang 0002
  • Dinh Duy Nguyen
  • Jie Zhang 0079

In many intelligent surveillance systems there is a requirement to search for people of interest through archived semantic labels. Beyond typical appearance attributes such as clothing color and body height, information such as whether a person carries a bag is valuable for more relevant targeted search. We propose two novel and fast algorithms for sling bag and backpack detection based on the geometrical properties of bags. The advantage of the proposed algorithms is that they do not require shape information from human silhouettes, so they can work under crowded conditions. In addition, the absence of background subtraction makes the algorithms suitable for mobile platforms such as robots. The system was tested on a low-resolution surveillance video dataset. Experimental results demonstrate that our method is promising.