Arrow Research search

Author name cluster

Cihan Acar

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

5 papers
2 author rows

Possible papers

5

AAAI Conference 2026 Conference Paper

Condensed Data Expansion Using Model Inversion for Knowledge Distillation

  • Kuluhan Binici
  • Shivam Aggarwal
  • Cihan Acar
  • Nam Trung Pham
  • Karianto Leman
  • Gim Hee Lee
  • Tulika Mitra

Condensed datasets offer a compact representation of larger datasets, but training models directly on them or using them to enhance model performance through knowledge distillation (KD) can result in suboptimal outcomes due to limited information. To address this, we propose a method that expands condensed datasets using model inversion, a technique for generating synthetic data based on the impressions of a pre-trained model on its training data. This approach is particularly well-suited for KD scenarios, as the teacher model is already pre-trained and retains knowledge of the original training data. By creating synthetic data that complements the condensed samples, we enrich the training set and better approximate the underlying data distribution, leading to improvements in student model accuracy during knowledge distillation. Our method demonstrates significant gains in KD accuracy compared to using condensed datasets alone and outperforms standard model inversion-based KD methods by up to 11.4% across various datasets and model architectures. Importantly, it remains effective even when using as few as one condensed sample per class, and can also enhance performance in few-shot scenarios where only limited real data samples are available.
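The core loop of model inversion, as the abstract describes it, is to optimize synthetic inputs until the pre-trained teacher confidently assigns them a chosen class. A minimal numpy sketch of that loop is below; the linear-softmax "teacher" and all sizes are illustrative stand-ins (the paper inverts a trained deep network), not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in "teacher": a fixed linear-softmax classifier over
# 8-dimensional inputs and 3 classes. The real method inverts a trained
# deep network; this toy only illustrates the optimization loop.
W = rng.normal(size=(3, 8))

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def invert(target_class, steps=500, lr=0.1):
    """Gradient-descend a random input until the teacher predicts target_class."""
    x = rng.normal(size=8)
    y = np.eye(3)[target_class]
    for _ in range(steps):
        p = softmax(W @ x)
        # d(cross-entropy)/dx for a linear-softmax model: W^T (p - y)
        x -= lr * (W.T @ (p - y))
    return x

x_syn = invert(1)  # a synthetic sample the teacher associates with class 1
```

Synthetic samples produced this way would then be pooled with the condensed samples to form the enlarged distillation set.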

ICRA Conference 2021 Conference Paper

Approximating Constraint Manifolds Using Generative Models for Sampling-Based Constrained Motion Planning

  • Cihan Acar
  • Keng Peng Tee

Sampling-based motion planning under task constraints is challenging because the measure-zero constraint manifold in the configuration space makes rejection sampling extremely inefficient, if not impossible. This paper presents a learning-based sampling strategy for constrained motion planning problems. We investigate the use of two well-known deep generative models, the Conditional Variational Autoencoder (CVAE) and the Conditional Generative Adversarial Net (CGAN), to generate constraint-satisfying sample configurations. Instead of precomputed graphs, we use generative models conditioned on constraint parameters to approximate the constraint manifold. This approach allows constraint-satisfying samples to be drawn efficiently online without modifying existing sampling-based motion planning algorithms. We evaluate the efficiency of the two generative models in terms of their sampling accuracy and the coverage of their sampling distributions. Simulations and experiments are also conducted for different constraint tasks on two robotic platforms.
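The motivation in the abstract can be made concrete with a toy example: on a measure-zero manifold, rejection sampling accepts almost nothing, while a conditional generator that maps latents onto the manifold yields only valid samples. In this sketch the "manifold" is a circle of radius r and the generator is a closed-form stand-in for the trained CVAE/CGAN decoder; everything here is illustrative, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-DOF "configuration space": the constraint manifold is a circle of
# radius r (a measure-zero set), conditioned on the parameter r.
r = 0.7
tol = 1e-3  # tolerance band around the manifold

# Rejection sampling: draw uniform configurations, keep near-manifold ones.
q = rng.uniform(-1.0, 1.0, size=(100_000, 2))
accepted = np.abs(np.linalg.norm(q, axis=1) - r) < tol
rate = accepted.mean()  # acceptance rate collapses as tol -> 0

# Learned-sampler stand-in: a generator conditioned on r that maps a latent
# variable directly onto the manifold (in the paper this mapping is a trained
# CVAE/CGAN decoder, not a closed form).
def generate(r, n):
    theta = rng.uniform(0.0, 2.0 * np.pi, size=n)
    return np.stack([r * np.cos(theta), r * np.sin(theta)], axis=1)

samples = generate(r, 1000)  # every sample satisfies the constraint exactly
```

Because the generator is conditioned on the constraint parameter, the same model can serve different task instances without recomputing anything offline.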

IROS Conference 2021 Conference Paper

GloCAL: Glocalized Curriculum-Aided Learning of Multiple Tasks with Application to Robotic Grasping

  • Anil Kurkcu
  • Cihan Acar
  • Domenico Campolo
  • Keng Peng Tee

Applying deep reinforcement learning to robotics is challenging due to the need for large amounts of data and the need to ensure safety during learning. Curriculum learning has shown good performance in terms of sample-efficient deep learning. In this paper, we propose an algorithm (named GloCAL) that creates a curriculum for an agent to learn multiple discrete tasks by clustering them according to their evaluation scores. From the highest-performing cluster, a global task representative of the cluster is identified for learning a global policy that transfers to subsequently formed new clusters, while the remaining tasks in the cluster are learnt as local policies. The efficacy and efficiency of our GloCAL algorithm are compared with other approaches in the domain of grasp learning for 49 objects of varied complexity and grasp difficulty from the EGAD! dataset. The results show that GloCAL learns to grasp 100% of the objects, whereas other approaches achieve at most 86% despite being given 1.5× longer training time.
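The cluster-then-pick-a-representative step described above can be sketched in a few lines: cluster tasks by their evaluation scores, take the highest-performing cluster, and choose its best-scoring member as the global task. The scores and the simple 1-D k-means below are hypothetical illustrations, not GloCAL's actual clustering procedure.

```python
import numpy as np

# Hypothetical task evaluation scores (e.g., per-object grasp success
# estimates); GloCAL clusters tasks by such scores, picks a representative
# of the best cluster for the global policy, and learns the rest locally.
scores = np.array([0.92, 0.88, 0.90, 0.45, 0.40, 0.12, 0.15])

def cluster_1d(x, k=3, iters=50):
    """Minimal 1-D k-means over scores (a stand-in for GloCAL's clustering)."""
    centers = np.linspace(x.min(), x.max(), k)
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean()
    return labels, centers

labels, centers = cluster_1d(scores)
best = np.argmax(centers)                          # highest-performing cluster
members = np.where(labels == best)[0]
global_task = members[np.argmax(scores[members])]  # cluster representative
```

The policy learned on `global_task` would then be transferred to newly formed clusters, while the other members of the cluster are refined as local policies.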

ICRA Conference 2021 Conference Paper

State Estimation for Hybrid Wheeled-Legged Robots Performing Mobile Manipulation Tasks

  • Yangwei You
  • Samuel Cheong
  • Tai Pang Chen
  • Yuda Chen
  • Kun Zhang
  • Cihan Acar
  • Fon Lin Lai
  • Albertus H. Adiwahono

This paper introduces a general state estimation framework that fuses multiple sources of sensor information for hybrid wheeled-legged robots performing mobile manipulation tasks. At the core of the state estimator is a novel unified odometry for hybrid locomotion that seamlessly maintains tracking with no need to switch between stepping and rolling modes. To the best of our knowledge, the proposed odometry is the first work in this area. It is computed from the robot kinematics and the instantaneous contact points of the wheels, with sensor inputs from an IMU, joint encoders, and joint torque sensors estimating wheel contact status, as well as an RGB-D camera detecting geometric features of the terrain (e.g., elevation and surface normal vector). Subsequently, the odometry output is used as the motion model of a 3D-Lidar map-based Monte Carlo Localization module for drift-free state estimation. As part of the framework, visual localization is integrated to provide high-precision guidance for robot movement relative to an object of interest. The proposed approach was thoroughly verified in two experiments conducted on the Pholus robot, with OptiTrack measurements as ground truth.
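The role the odometry plays as a motion model inside Monte Carlo Localization can be illustrated with a minimal 1-D particle filter: particles are propagated by the odometry increment plus noise, reweighted against a map-based measurement, and resampled. This is a generic MCL sketch in a toy setting (one landmark standing in for the 3D-Lidar map); all numbers are illustrative, not the paper's system.

```python
import numpy as np

rng = np.random.default_rng(1)

# Minimal 1-D Monte Carlo Localization sketch: particles are propagated with
# a noisy odometry motion model and reweighted against range measurements to
# a known landmark (a toy analogue of matching against a 3D-Lidar map).
landmark = 10.0
true_pose = 2.0
particles = rng.uniform(0.0, 10.0, size=500)

for _ in range(20):
    odom = 0.3                                   # odometry increment this step
    true_pose += odom
    particles += odom + rng.normal(0.0, 0.05, particles.size)  # motion model
    z = landmark - true_pose                     # simulated range measurement
    w = np.exp(-0.5 * ((landmark - particles) - z) ** 2 / 0.1**2)
    w /= w.sum()
    idx = rng.choice(particles.size, particles.size, p=w)      # resample
    particles = particles[idx]

estimate = particles.mean()  # converges near the true pose (2.0 + 20 * 0.3)
```

Because the measurement update is anchored to the map, the particle cloud does not accumulate drift the way raw odometry integration would.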

IROS Conference 2021 Conference Paper

Supervised Autonomy for Remote Teleoperation of Hybrid Wheel-Legged Mobile Manipulator Robots

  • Samuel Cheong
  • Tai Pang Chen
  • Cihan Acar
  • Yangwei You
  • Yuda Chen
  • Wan Leong Sim
  • Keng Peng Tee

This paper proposes an improved supervised autonomy framework for remote teleoperation of a quadrupedal bimanual mobile manipulator in an unknown environment, using advanced perception technology and allowing the operator to easily assist the robot with on-the-fly decision making during task execution. First, the perception system runs a lightweight deep neural network, a Single Shot Detector (SSD) with a MobileNet backbone, on RGB images to detect objects and highlight them to the human operator via an intuitive interactive visualization interface. After the operator selects an object and an action, segmentation of the object point cloud and 3D surfaces is performed using random sample consensus, followed by object pose localization using keypoint extraction. Based on the localized object, a mobile manipulation motion to perform the operator-selected action is planned and executed with the help of a state estimator for the hybrid wheel-legged robot. Thanks to the robot's autonomy in perception and manipulation, the complexity of teleoperating it is reduced to specifying the essential task objectives. Experimental results on the real robot, with full system integration, for two task scenarios, namely passage clearing and object retrieval, demonstrate a high average success rate of 92.2% over a total of 90 trials.