
IROS 2023

Transparent Object Tracking with Enhanced Fusion Module

Conference Paper Accepted Paper Artificial Intelligence · Robotics

Abstract

Accurate tracking of transparent objects, such as glassware, plays a critical role in many robotic tasks such as robot-assisted living. Due to the adaptive and often reflective texture of such objects, traditional tracking algorithms that rely on general-purpose learned features suffer from reduced performance. Recent research has proposed to instill transparency awareness into existing general object trackers by fusing purpose-built features. However, with the existing fusion techniques, adding new features alters the latent space, making it impossible to incorporate transparency awareness into trackers with fixed latent spaces. For example, many current transformer-based trackers are fully pre-trained and are sensitive to any latent space perturbations. In this paper, we present a new feature fusion technique that integrates transparency information into a fixed feature space, enabling its use in a broader range of trackers. Our proposed fusion module, composed of a transformer encoder and an MLP module, leverages key-query-based transformations to embed the transparency information into the tracking pipeline. We also present a new two-step training strategy for our fusion module to effectively merge transparency features. We propose a new tracker architecture that uses our fusion technique to achieve superior results for transparent object tracking. Our proposed method achieves competitive results with state-of-the-art trackers on TOTB, the largest transparent object tracking benchmark released to date. Our results and code will be made publicly available at https://github.com/kalyan0510/TOTEM.
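To illustrate the core idea of key-query-based fusion into a fixed feature space, the sketch below shows a minimal single-head cross-attention step followed by a small feed-forward (MLP) refinement, with residual connections so the output stays in the tracker's original feature space. This is an illustrative reconstruction from the abstract, not the authors' exact TOTEM architecture; the weight matrices (`Wq`, `Wk`, `Wv`, `mlp_W1`, `mlp_W2`) and shapes are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def fuse(track_feats, transp_feats, Wq, Wk, Wv, mlp_W1, mlp_W2):
    """Cross-attention fusion sketch (hypothetical parameterization):
    queries come from the tracker's features, keys/values from the
    transparency features. Residual additions keep the output the same
    shape as the input, so the tracker's latent space is untouched."""
    Q = track_feats @ Wq          # (N, d) queries from tracker features
    K = transp_feats @ Wk         # (M, d) keys from transparency features
    V = transp_feats @ Wv         # (M, d) values from transparency features
    attn = softmax(Q @ K.T / np.sqrt(Q.shape[-1]))  # (N, M) attention map
    fused = track_feats + attn @ V                  # residual: shape preserved
    hidden = np.maximum(0.0, fused @ mlp_W1)        # ReLU feed-forward layer
    return fused + hidden @ mlp_W2                  # second residual, same shape
```

Because every update is residual, the module can in principle be trained while the surrounding pre-trained tracker stays frozen, which is the constraint the fixed-latent-space setting imposes.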

Authors

Keywords

  • Training
  • Perturbation methods
  • Pipelines
  • Object segmentation
  • Glass
  • Transformers
  • Object tracking
  • Transparent Objects
  • Feature Space
  • Training Strategy
  • Latent Space
  • Feature Fusion
  • Information Transparency
  • Fusion Techniques
  • Current Day
  • Transformer Encoder
  • Transparency Features
  • Prediction Model
  • Ablation
  • Training Dataset
  • Bounding Box
  • Original Features
  • Feed-forward Network
  • Design Choices
  • Two-step Approach
  • Fusion Method
  • Metrics Of Success
  • Feature Fusion Module
  • Tracking Problem
  • Video Sequences
  • Backbone Network
  • Feature Branch
  • Pre-trained Network
  • Transformer Model

Context

Venue
IEEE/RSJ International Conference on Intelligent Robots and Systems
Archive span
1988-2025
Indexed papers
26578
Paper id
1144282947437239302