Arrow Research search

Author name cluster

Jinhee Park

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

2 papers
1 author row

Possible papers


AAAI Conference 2026 · Conference Paper

Neural Collapse-Informed Initialization with Perturbation Injection in Classification-based Metric Learning

  • Jinhee Park
  • Hee bin Yoo
  • MinJun Kim
  • Byoung-Tak Zhang
  • Junseok Kwon

Recent studies have revealed Neural Collapse (NC) in deep classifiers, where last-layer weights and features align into an equiangular tight frame (ETF), concentrating class information along specific embedding directions. However, conventional fine-tuning typically disregards this structure, initializing task-specific classifier heads randomly. To explicitly leverage this phenomenon, we propose a simple yet effective method for metric learning: (1) initializing the classifier head along each class's NC direction from a pretrained model to preserve the emergent structure, and (2) injecting small isotropic Gaussian noise during fine-tuning to boost generalization. In addition, we provide a theoretical bound proving that our method explicitly reduces cumulative weight drift from the NC initialization, compared to standard fine-tuning. This suggests that our method better preserves the pretrained model's class-specific structure. Empirically, this structural preservation yields Recall@K gains: reduced weight drift correlates with better performance. Concurrent decreases in the Neural Collapse 1 (NC1) measure confirm that stronger intra-class cohesion underlies these improvements. Furthermore, we validate the effectiveness of our method on class-imbalanced benchmarks.
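The abstract's two-step recipe (NC-direction initialization, then small isotropic noise injection during fine-tuning) can be sketched roughly as follows. This is an illustrative reconstruction, not the authors' code: the function names `nc_init` and `perturb`, the noise scale `sigma`, and the use of per-class feature means as a stand-in for the pretrained model's NC directions are all assumptions.

```python
import numpy as np

def nc_init(features, labels, num_classes):
    # Initialize each classifier-head row along that class's (assumed)
    # Neural Collapse direction: the normalized mean of the class's
    # penultimate-layer features from the pretrained model.
    dim = features.shape[1]
    W = np.zeros((num_classes, dim))
    for c in range(num_classes):
        mu = features[labels == c].mean(axis=0)
        W[c] = mu / (np.linalg.norm(mu) + 1e-12)
    return W

def perturb(W, sigma=0.01, rng=None):
    # Inject small isotropic Gaussian noise into the weights; in the
    # paper's description this happens during fine-tuning steps.
    rng = rng if rng is not None else np.random.default_rng(0)
    return W + sigma * rng.normal(size=W.shape)
```

In an actual fine-tuning loop, `perturb` would be applied to the head between (or within) gradient updates; the schedule and magnitude here are placeholders.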

AAAI Conference 2025 · Conference Paper

Deep Disentangled Metric Learning

  • Jinhee Park
  • Jisoo Park
  • Dagyeong Na
  • Junseok Kwon

Proxy-based metric learning has enhanced semantic similarity with class representatives and exhibited noteworthy performance in deep metric learning (DML) tasks. While these methods alleviate computational demands by learning instance-to-class relationships rather than instance-to-instance relationships, they often constrain features to be class-specific, thereby degrading generalization performance on unseen classes. In this paper, we introduce a novel perspective called Disentangled Deep Metric Learning (DDML), grounded in the framework of the information bottleneck, which applies class-agnostic regularization to existing DML methods. Unlike conventional NormSoftmax methods, which primarily emphasize distinct class-specific features, our DDML enables diverse feature representations by smoothly combining class-specific features with class-agnostic ones. It smooths decision boundaries, allowing unseen classes to have stable semantic representations in the embedding space. To achieve this, we learn disentangled representations of both class-specific and class-agnostic features in the context of DML. Empirical results demonstrate that our method addresses the limitations of conventional approaches. Our method easily integrates into existing proxy-based algorithms, consistently delivering improved performance.
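A minimal sketch of the kind of objective the abstract describes: a proxy-based NormSoftmax loss on a class-specific part of the embedding, plus a regularizer on a class-agnostic part. The even split of the embedding, the squared-norm penalty as an information-bottleneck surrogate, the weight `beta`, and all function names are assumptions for illustration, not the paper's actual formulation.

```python
import numpy as np

def norm_softmax_loss(emb, proxies, labels, temp=0.05):
    # Proxy-based NormSoftmax: cosine similarities between L2-normalized
    # embeddings and class proxies serve as softmax logits.
    e = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    p = proxies / np.linalg.norm(proxies, axis=1, keepdims=True)
    logits = e @ p.T / temp
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

def ddml_style_loss(emb, proxies, labels, beta=0.1):
    # Hypothetical DDML-style objective: treat the first half of each
    # embedding as class-specific (fed to the proxy loss) and the second
    # half as class-agnostic, penalized by a crude IB-style norm term.
    d = emb.shape[1] // 2
    specific, agnostic = emb[:, :d], emb[:, d:]
    reg = (agnostic ** 2).mean()
    return norm_softmax_loss(specific, proxies, labels) + beta * reg
```

The point of the sketch is only the structure of the objective: a standard proxy loss augmented with a class-agnostic regularizer that drop-in wraps existing proxy-based methods.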