Arrow Research

Author name cluster

Yihang Lou

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

5 papers
1 author row

Possible papers (5)

AAAI Conference 2023 Conference Paper

Decorate the Newcomers: Visual Domain Prompt for Continual Test-Time Adaptation

  • Yulu Gan
  • Yan Bai
  • Yihang Lou
  • Xianzheng Ma
  • Renrui Zhang
  • Nian Shi
  • Lin Luo

Continual Test-Time Adaptation (CTTA) aims to adapt the source model to continually changing unlabeled target domains without access to the source data. Existing methods mainly focus on model-based adaptation in a self-training manner, such as predicting pseudo labels for new domain datasets. Since pseudo labels are noisy and unreliable, these methods suffer from catastrophic forgetting and error accumulation when dealing with dynamic data distributions. Motivated by prompt learning in NLP, in this paper, we propose to learn an image-level visual domain prompt for target domains while keeping the source model parameters frozen. During testing, the changing target data can be adapted to the source model by reformulating the input with the learned visual prompts. Specifically, we devise two types of prompts, i.e., domain-specific prompts and domain-agnostic prompts, to extract current domain knowledge and maintain the domain-shared knowledge in the continual adaptation. Furthermore, we design a homeostasis-based adaptation strategy to suppress domain-sensitive parameters in domain-invariant prompts so as to learn domain-shared knowledge more effectively. This transition from the model-dependent paradigm to the model-free one enables us to bypass the catastrophic forgetting and error accumulation problems. Experiments show that our proposed method achieves significant performance gains over state-of-the-art methods on four widely used benchmarks: CIFAR-10C, CIFAR-100C, ImageNet-C, and VLCS.
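To make the prompt mechanism concrete, below is a minimal PyTorch sketch of an image-level visual domain prompt attached to a frozen source model. The additive pixel-space prompts, the 3×32×32 input shape, and the entropy-minimization objective are illustrative assumptions; the paper's actual losses and homeostasis-based strategy differ.

```python
# Sketch only: learnable image-level prompts on top of a frozen classifier.
import torch
import torch.nn as nn

class VisualDomainPrompt(nn.Module):
    """Additive pixel-space prompts: `specific` is re-learned for the
    current domain, `agnostic` carries domain-shared knowledge."""
    def __init__(self, channels=3, size=32):
        super().__init__()
        self.specific = nn.Parameter(torch.zeros(1, channels, size, size))
        self.agnostic = nn.Parameter(torch.zeros(1, channels, size, size))

    def forward(self, x):
        # Reformulate the input instead of touching the model weights.
        return x + self.specific + self.agnostic

def adapt_step(prompt, frozen_model, batch, optimizer):
    frozen_model.eval()
    for p in frozen_model.parameters():
        p.requires_grad_(False)              # source parameters stay frozen
    logits = frozen_model(prompt(batch))
    probs = logits.softmax(dim=1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()
    optimizer.zero_grad()
    entropy.backward()                       # gradients reach only the prompts
    optimizer.step()
    return logits.detach()
```

Only the prompt parameters go into the optimizer, e.g. `torch.optim.SGD(prompt.parameters(), lr=1e-3)`, which is what makes the adaptation model-free.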

AAAI Conference 2022 Conference Paper

Evidential Neighborhood Contrastive Learning for Universal Domain Adaptation

  • Liang Chen
  • Yihang Lou
  • Jianzhong He
  • Tao Bai
  • Minghua Deng

Universal domain adaptation (UniDA) aims to transfer the knowledge learned from a labeled source domain to an unlabeled target domain without any constraints on the label sets. However, domain shift and category shift make UniDA extremely challenging, mainly due to the requirement of identifying both shared “known” samples and private “unknown” samples. Previous methods barely exploit the intrinsic manifold structure relationship between the two domains for feature alignment, and they rely on softmax-based scores, which have a class-competition nature, to detect underlying “unknown” samples. Therefore, in this paper, we propose a novel evidenTial Neighborhood conTrastive learning framework called TNT to address these issues. Specifically, TNT first proposes a new domain alignment principle: semantically consistent samples should be geometrically adjacent to each other, whether within or across domains. From this criterion, a cross-domain multi-sample contrastive loss based on mutual nearest neighbors is designed to achieve common category matching and private category separation. Second, toward accurate “unknown” sample detection, TNT introduces a class-competition-free uncertainty score from the perspective of evidential deep learning. Instead of setting a single threshold, TNT learns a category-aware heterogeneous threshold vector to reject diverse “unknown” samples. Extensive experiments on three benchmarks demonstrate that TNT significantly outperforms previous state-of-the-art UniDA methods.
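The two ingredients above can be sketched briefly: (a) mutual nearest-neighbor pairing across domains, and (b) the class-competition-free uncertainty mass u = K / S from evidential deep learning, where evidence comes from a softplus and S is the Dirichlet strength. The L2 normalization, k, and the function names below are illustrative assumptions, not the authors' implementation.

```python
# Sketch only: cross-domain mutual nearest neighbors + evidential uncertainty.
import torch.nn.functional as F

def mutual_nearest_neighbors(src_feats, tgt_feats, k=5):
    """Return (i, j) pairs where source i and target j appear in each
    other's cross-domain top-k; such pairs anchor the contrastive loss."""
    sim = F.normalize(src_feats, dim=1) @ F.normalize(tgt_feats, dim=1).T
    src_topk = sim.topk(k, dim=1).indices      # top-k targets per source
    tgt_topk = sim.topk(k, dim=0).indices.T    # top-k sources per target
    pairs = []
    for i in range(sim.size(0)):
        for j in src_topk[i].tolist():
            if i in tgt_topk[j].tolist():
                pairs.append((i, j))           # mutual agreement
    return pairs

def evidential_uncertainty(logits):
    """Evidence e = softplus(logits); Dirichlet strength S = sum(e) + K;
    uncertainty mass u = K / S is high when total evidence is low,
    with no competition between classes."""
    evidence = F.softplus(logits)
    num_classes = logits.size(1)
    strength = evidence.sum(dim=1) + num_classes
    return num_classes / strength
```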

AAAI Conference 2022 Conference Paper

Mutual Nearest Neighbor Contrast and Hybrid Prototype Self-Training for Universal Domain Adaptation

  • Liang Chen
  • Qianjin Du
  • Yihang Lou
  • Jianzhong He
  • Tao Bai
  • Minghua Deng

Universal domain adaptation (UniDA) aims to transfer knowledge learned from a labeled source domain to an unlabeled target domain under domain shift and category shift. Without prior category overlap information, it is challenging to simultaneously align the common categories between two domains and separate their respective private categories. Additionally, previous studies utilize the source classifier’s prediction to obtain various known labels and one generic “unknown” label for target samples. However, overreliance on learned classifier knowledge is inevitably biased toward the source data, ignoring the intrinsic structure of the target domain. Therefore, in this paper, we propose a novel two-stage UniDA framework called MATHS based on the principle of Mutual neArest neighbor conTrast and Hybrid prototype diScrimination. In the first stage, we design an efficient mutual nearest neighbor contrastive learning scheme to achieve feature alignment, which exploits the instance-level affinity relationship to uncover the intrinsic structure of the two domains. We introduce a bimodality hypothesis for the maximum discriminative probability distribution to detect possible target private samples, and present a data-based statistical approach to separate the common and private categories. In the second stage, to obtain more reliable label predictions, we propose an incremental pseudo-classifier for target data only, which is driven by the hybrid representative prototypes. A confidence-guided prototype contrastive loss is designed to optimize the category allocation uncertainty via a self-training mechanism. Extensive experiments on three benchmarks demonstrate that MATHS outperforms previous state-of-the-art methods on most UniDA settings.
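As a rough illustration of the second stage, the sketch below assigns target samples to their nearest prototype and applies a confidence-weighted prototype contrastive loss. The cosine similarity, temperature, confidence threshold, and the construction of the prototypes themselves (the paper's "hybrid" prototypes) are placeholder assumptions.

```python
# Sketch only: prototype-driven pseudo-labels with a confidence-guided loss.
import torch
import torch.nn.functional as F

def prototype_pseudo_labels(feats, prototypes, tau=0.1, conf_threshold=0.8):
    """Assign each target sample to its nearest prototype; keep only
    confident assignments for self-training."""
    sim = F.normalize(feats, dim=1) @ F.normalize(prototypes, dim=1).T
    probs = (sim / tau).softmax(dim=1)
    conf, labels = probs.max(dim=1)
    mask = conf >= conf_threshold
    return labels, conf, mask

def prototype_contrastive_loss(feats, prototypes, labels, conf, mask, tau=0.1):
    """Pull each confident sample toward its assigned prototype,
    weighting the per-sample loss by its confidence."""
    sim = F.normalize(feats, dim=1) @ F.normalize(prototypes, dim=1).T
    log_probs = F.log_softmax(sim / tau, dim=1)
    per_sample = -log_probs[torch.arange(feats.size(0)), labels]
    if mask.sum() == 0:
        return sim.new_zeros(())               # no confident samples yet
    return (conf[mask] * per_sample[mask]).mean()
```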

AAAI Conference 2022 Conference Paper

Neighborhood Consensus Contrastive Learning for Backward-Compatible Representation

  • Shengsen Wu
  • Liang Chen
  • Yihang Lou
  • Yan Bai
  • Tao Bai
  • Minghua Deng
  • Ling-Yu Duan

In object re-identification (ReID), the development of deep learning techniques often involves model updates and deployment. It is unbearable to re-embed and re-index with the system suspended when deploying new models. Therefore, backward-compatible representation is proposed to enable “new” features to be compared with “old” features directly, which means that the database remains active while it contains both “new” and “old” features. Thus we can scroll-refresh the database, or even leave it untouched, during the update. Existing backward-compatible methods either require a strong overlap between old and new training data or simply impose constraints at the instance level. Thus they have difficulty handling complicated cluster structures and are limited in eliminating the impact of outliers in old embeddings, which risks damaging the discriminative capability of new features. In this work, we propose a Neighborhood Consensus Contrastive Learning (NCCL) method. With no assumptions about the new training data, we estimate the sub-cluster structures of old embeddings. A new embedding is constrained with multiple old embeddings in both the embedding space and the discrimination space at the sub-class level. The effect of outliers is diminished, as the multiple samples serve as “mean teachers”. Besides, we propose a scheme to filter out old embeddings with low credibility, further improving compatibility robustness. Our method ensures compatibility without impairing the accuracy of the new model; it can even improve the new model’s accuracy in most scenarios.
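A hedged sketch of the core constraint: each new embedding is pulled toward the mean of its old-model sub-cluster, so multiple old samples act as a “mean teacher”, after low-credibility old embeddings are filtered out. Here the sub-cluster assignments and credibility scores are taken as given inputs; in the paper they are estimated from the old embeddings, and the loss form is an assumption.

```python
# Sketch only: sub-cluster "mean teacher" compatibility constraint.
import torch.nn.functional as F

def compatibility_loss(new_feats, old_feats, subcluster_ids, credibility,
                       cred_threshold=0.5):
    keep = credibility >= cred_threshold       # drop unreliable old points
    loss = new_feats.new_zeros(())
    count = 0
    for cid in subcluster_ids.unique():
        members = (subcluster_ids == cid) & keep
        if members.sum() < 2:
            continue                           # a lone outlier gets no say
        # Multiple old embeddings average into one "mean teacher".
        teacher = F.normalize(old_feats[members].mean(dim=0), dim=0)
        student = F.normalize(new_feats[members], dim=1)
        loss = loss + (1 - student @ teacher).mean()  # cosine pull
        count += 1
    return loss / max(count, 1)
```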

IJCAI Conference 2020 Conference Paper

Disentangled Feature Learning Network for Vehicle Re-Identification

  • Yan Bai
  • Yihang Lou
  • Yongxing Dai
  • Jun Liu
  • Ziqian Chen
  • Ling-Yu Duan

Vehicle Re-Identification (ReID) has attracted considerable research effort due to its great significance to public security. In vehicle ReID, we aim to learn features that are powerful in discriminating the subtle differences between visually similar vehicles, and also robust against different orientations of the same vehicle. However, these two characteristics are hard to encapsulate into a single feature representation simultaneously with unified supervision. Here we propose a Disentangled Feature Learning Network (DFLNet) to learn orientation-specific and common features concurrently, which are discriminative at the detail level and invariant to orientations, respectively. Moreover, to effectively use these two types of features for ReID, we further design a feature metric alignment scheme to ensure the consistency of the metric scales. Experiments show the effectiveness of our method, which achieves state-of-the-art performance on three challenging datasets.
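As an illustrative sketch only, the snippet below pairs a two-branch head (orientation-specific vs. common features) with a simple metric-scale alignment term that keeps the two embeddings' pairwise-distance scales comparable so they can be fused at retrieval time. The dimensions, linear projections, and alignment formula are assumptions, not the DFLNet architecture.

```python
# Sketch only: disentangled two-branch head + metric-scale alignment.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DisentangledHead(nn.Module):
    def __init__(self, in_dim=2048, feat_dim=256):
        super().__init__()
        self.orientation_specific = nn.Linear(in_dim, feat_dim)  # detail cues
        self.orientation_common = nn.Linear(in_dim, feat_dim)    # invariant cues

    def forward(self, backbone_feats):
        f_spec = F.normalize(self.orientation_specific(backbone_feats), dim=1)
        f_comm = F.normalize(self.orientation_common(backbone_feats), dim=1)
        return f_spec, f_comm

def metric_alignment_loss(f_spec, f_comm):
    """Penalize a mismatch between the average pairwise distances of the
    two embeddings, so their distance scales stay consistent."""
    d_spec = torch.cdist(f_spec, f_spec)
    d_comm = torch.cdist(f_comm, f_comm)
    return (d_spec.mean() - d_comm.mean()).abs()
```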