Arrow Research search

Author name cluster

Riwei Lai

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

2 papers · 1 author row

Possible papers


TIST · 2026 · Journal Article

Matryoshka Representation Learning for Recommendation with Layer- and Hardness-Adaptive Negative Sampling

  • Riwei Lai
  • Li Chen
  • Weixin Chen
  • Rui Chen

Representation learning is essential for deep-neural-network-based recommender systems to capture user preferences and item features within fixed-dimensional user and item vectors. Unlike existing representation learning methods that either treat each user preference and item feature uniformly or categorize them into discrete clusters, we argue that in the real world, user preferences and item features are naturally expressed and organized in a hierarchical manner, suggesting a new direction for representation learning. In this article, we introduce a novel matryoshka representation learning method for recommendation (MRL4Rec), which restructures user and item vectors into matryoshka representations with nested vector spaces to explicitly represent user preferences and item features at different hierarchical layers. We theoretically establish that training with the same triplets for each sliced vector cannot guarantee representation learning with hierarchical structures. We therefore propose the layer- and hardness-adaptive negative sampling (LHANS) mechanism to construct training triplets, which further ensures the soundness of the learned matryoshka representations in capturing hierarchical user preferences and item features. Experiments demonstrate that MRL4Rec consistently and substantially outperforms a number of state-of-the-art competitors on several real-life datasets. Our code is publicly available at https://github.com/Riwei-HEU/MRL.
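The core idea of nested matryoshka representations, as described in the abstract, can be illustrated with a minimal sketch: each "layer" scores a user-item pair using only a prefix of the embedding, so coarse preferences live in the smallest prefix and finer ones in the larger prefixes. The layer dimensions below are illustrative, not taken from the paper.

```python
import numpy as np

def matryoshka_scores(user_vec, item_vec, layer_dims=(8, 16, 32)):
    """Score a user-item pair at each nested prefix of the embedding.

    Layer l uses only the first layer_dims[l] dimensions, so the vector
    spaces are nested: each layer's representation is a prefix of the next.
    layer_dims is a hypothetical choice, not the paper's configuration.
    """
    return [float(user_vec[:d] @ item_vec[:d]) for d in layer_dims]

rng = np.random.default_rng(0)
u, v = rng.standard_normal(32), rng.standard_normal(32)
scores = matryoshka_scores(u, v)  # one score per hierarchical layer
```

Because the largest prefix is the full vector, the last layer's score coincides with the ordinary full-dimensional dot product; the smaller prefixes act as coarser views of the same embedding rather than separate vectors.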

AAAI · 2024 · Conference Paper

Adaptive Hardness Negative Sampling for Collaborative Filtering

  • Riwei Lai
  • Rui Chen
  • Qilong Han
  • Chi Zhang
  • Li Chen

Negative sampling is essential for implicit collaborative filtering to provide proper negative training signals and achieve desirable performance. We experimentally unveil a common limitation of existing negative sampling methods: they can only select negative samples of a fixed hardness level, leading to the false positive problem (FPP) and the false negative problem (FNP). We then propose a new paradigm called adaptive hardness negative sampling (AHNS) and discuss its three key criteria. By adaptively selecting negative samples with appropriate hardnesses during the training process, AHNS can effectively mitigate the impact of FPP and FNP. Next, we present a concrete instantiation of AHNS called AHNS_{p<0}, and theoretically demonstrate that AHNS_{p<0} satisfies the three criteria of AHNS and achieves a larger lower bound of normalized discounted cumulative gain. In addition, we note that existing negative sampling methods can be regarded as more relaxed cases of AHNS. Finally, we conduct comprehensive experiments, and the results show that AHNS_{p<0} can consistently and substantially outperform several state-of-the-art competitors on multiple datasets.
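The adaptive-hardness idea in the abstract can be sketched in a few lines: instead of always picking negatives at one fixed hardness, tie the target hardness to the model's current score for the positive item. The target rule below (`beta` times the positive score) is a hypothetical stand-in for illustration, not the paper's AHNS_{p<0} instantiation.

```python
import numpy as np

def adaptive_negative(user_vec, pos_item, neg_candidates, beta=0.5):
    """Pick the negative whose score best matches an adaptive target.

    Toy illustration of adaptive hardness negative sampling: harder
    negatives (higher-scoring candidates) are selected only when the
    model already scores the positive highly. The linear target rule
    is an assumption for this sketch, not the paper's formula.
    """
    pos_score = user_vec @ pos_item
    neg_scores = neg_candidates @ user_vec   # score each candidate
    target = beta * pos_score                # adaptive hardness target
    idx = int(np.argmin(np.abs(neg_scores - target)))
    return idx, float(neg_scores[idx])

rng = np.random.default_rng(1)
u = rng.standard_normal(16)
pos = u + 0.1 * rng.standard_normal(16)      # well-aligned positive
negs = rng.standard_normal((20, 16))         # candidate negatives
idx, score = adaptive_negative(u, pos, negs)
```

A fixed-hardness sampler would instead always take, say, the highest-scoring candidate; coupling the target to the positive's score is what lets the selected hardness adapt as training progresses.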