Arrow Research

Author name cluster

Lizhe Chen

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

2 papers
1 author row

Possible papers (2)

IJCAI 2025 Conference Paper

CorrDetail: Visual Detail Enhanced Self-Correction for Face Forgery Detection

  • Binjia Zhou
  • Hengrui Lou
  • Lizhe Chen
  • Haoyuan Li
  • Dawei Luo
  • Shuai Chen
  • Jie Lei
  • Zunlei Feng

With the swift progression of image generation technology, the widespread emergence of facial deepfakes poses significant challenges to the field of security, thus amplifying the urgent need for effective deepfake detection. Existing techniques for face forgery detection can broadly be categorized into two primary groups: visual-based methods and multimodal approaches. The former often lacks clear explanations for forgery details, while the latter, which merges visual and linguistic modalities, is more prone to the issue of hallucinations. To address these shortcomings, we introduce a visual detail enhanced self-correction framework, designated CorrDetail, for interpretable face forgery detection. CorrDetail is meticulously designed to rectify authentic forgery details when provided with error-guided questioning, with the aim of fostering the ability to uncover forgery details rather than yielding hallucinated responses. Additionally, to bolster the reliability of its findings, a visual fine-grained detail enhancement module is incorporated, supplying CorrDetail with more precise visual forgery details. Ultimately, a fusion decision strategy is devised to further augment the model's discriminative capacity in handling extreme samples, through the integration of visual information compensation and model bias reduction. Experimental results demonstrate that CorrDetail not only achieves state-of-the-art performance compared to the latest methodologies but also excels in accurately identifying forged details, all while exhibiting robust generalization capabilities.
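The abstract describes the approach only at a high level. Below is a minimal, hypothetical sketch of what an error-guided self-correction step combined with a fusion decision could look like; every function name, threshold, and placeholder detail string here is an assumption for illustration, not the paper's actual implementation.

```python
# Sketch of a self-correction loop in the spirit of the abstract.
# All names (initial_assessment, enhanced_details, self_correct, fuse_decision)
# are hypothetical placeholders, not CorrDetail's real API.
from dataclasses import dataclass

@dataclass
class Assessment:
    label: str          # "real" or "fake"
    details: list[str]  # claimed forgery details
    confidence: float

def initial_assessment(image_features: list[float]) -> Assessment:
    """First-pass multimodal guess; may contain hallucinated details."""
    score = sum(image_features) / max(len(image_features), 1)
    return Assessment("fake" if score > 0.5 else "real", ["blurred eye region"], score)

def enhanced_details(image_features: list[float]) -> list[str]:
    """Stand-in for the fine-grained visual detail enhancement module."""
    return ["inconsistent specular highlights"] if max(image_features) > 0.8 else []

def self_correct(first: Assessment, visual_details: list[str]) -> Assessment:
    """Re-answer an error-guided question: keep only details backed by visual evidence."""
    verified = [d for d in first.details if d in visual_details] or visual_details
    label = first.label if verified else "real"
    return Assessment(label, verified, first.confidence)

def fuse_decision(visual_score: float, corrected: Assessment, w: float = 0.5) -> str:
    """Blend the pure visual score with the corrected verdict to temper model bias."""
    lm_score = corrected.confidence if corrected.label == "fake" else 1 - corrected.confidence
    return "fake" if w * visual_score + (1 - w) * lm_score > 0.5 else "real"

features = [0.2, 0.9, 0.7]
first = initial_assessment(features)
corrected = self_correct(first, enhanced_details(features))
print(fuse_decision(visual_score=0.9, corrected=corrected))
```

The point of the sketch is only the control flow: an initial answer is re-examined against enhanced visual evidence before a weighted fusion produces the final label.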

AAAI 2022 Conference Paper

Self-Supervised Spatiotemporal Representation Learning by Exploiting Video Continuity

  • Hanwen Liang
  • Niamul Quader
  • Zhixiang Chi
  • Lizhe Chen
  • Peng Dai
  • Juwei Lu
  • Yang Wang

Recent self-supervised video representation learning methods have found significant success by exploring essential properties of videos, e.g., speed, temporal order, etc. This work exploits an essential yet under-explored property of videos, video continuity, to obtain supervision signals for self-supervised representation learning. Specifically, we formulate three novel continuity-related pretext tasks, i.e., continuity justification, discontinuity localization, and missing section approximation, that jointly supervise a shared backbone for video representation learning. This self-supervision approach, termed Continuity Perception Network (CPNet), solves the three tasks altogether and encourages the backbone network to learn local and long-range motion and context representations. It outperforms prior art on multiple downstream tasks, such as action recognition, video retrieval, and action localization. Additionally, video continuity can be complementary to other coarse-grained video properties for representation learning, and integrating the proposed pretext tasks into prior methods can yield substantial performance gains.
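As a rough illustration of the three pretext tasks named in the abstract, the sketch below builds training labels from a toy "video" of per-frame features: either the clip is left intact (continuity justification) or a middle section is removed, yielding a break position (discontinuity localization) and the removed frames (missing section approximation). The function name, clip lengths, and sampling scheme are assumptions, not CPNet's actual pipeline.

```python
# Hypothetical construction of continuity-based pretext labels.
import random

def make_continuity_sample(frames, clip_len=8, gap_len=4):
    """Return (clip, is_continuous, gap_position, missing_frames).

    With probability 0.5 the clip is kept intact (continuity label = 1);
    otherwise a gap_len-frame section is cut out, so a model can be asked
    where the break is and what content was removed."""
    start = random.randint(0, len(frames) - clip_len - gap_len)
    if random.random() < 0.5:
        clip = frames[start:start + clip_len]
        return clip, 1, None, None
    cut = random.randint(1, clip_len - 1)  # break point inside the clip
    before = frames[start:start + cut]
    missing = frames[start + cut:start + cut + gap_len]
    after = frames[start + cut + gap_len:start + clip_len + gap_len]
    return before + after, 0, cut, missing

video = [[float(t)] for t in range(64)]  # toy "video": 64 one-dimensional frames
clip, is_cont, gap_pos, missing = make_continuity_sample(video)
print(len(clip), is_cont, gap_pos)
```

In this reading, a shared backbone would consume the (possibly discontinuous) clip and three heads would predict the continuity label, the break position, and an approximation of the missing section.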