Arrow Research search

Author name cluster

Jidong Ge

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

3 papers
1 author row

Possible papers

3

AAAI Conference 2026 Conference Paper

CHASE: Contextual History for Adaptive and Simple Exploitation in Large Language Model Jailbreaking

  • Zhiqiang Hao
  • Chuanyi Li
  • Ye Fan
  • Jun Cai
  • Xiao Fu
  • Shangqi Wang
  • Hao Shen
  • Jiao Yin

We propose Contextual History for Adaptive and Simple Exploitation (CHASE), a novel multi-turn method for jailbreaking Large Language Models (LLMs). Rather than directly attacking an LLM that may be difficult to jailbreak, CHASE first collects jailbroken histories from an easy-to-jailbreak LLM and then transfers them to the target LLM. Through this history transfer, CHASE misleads the target LLM into believing that it produced the jailbroken histories, and it increases the chances of a successful jailbreak by prompting the target to continue the conversation. Extensive evaluations on mainstream LLMs show that CHASE consistently achieves higher attack success rates while demanding fewer computational resources than existing methods.

NeurIPS Conference 2025 Conference Paper

LawShift: Benchmarking Legal Judgment Prediction Under Statute Shifts

  • Zhuo Han
  • Yi Yang
  • Yi Feng
  • Wanhong Huang
  • Ding Xuxing
  • Chuanyi Li
  • Jidong Ge
  • Vincent Ng

Legal Judgment Prediction (LJP) seeks to predict case outcomes from available case information, offering practical value to both legal professionals and laypersons. However, existing LJP models adapt poorly to statutory revisions: current state-of-the-art (SOTA) models are neither designed nor evaluated with such revisions in mind. To bridge this gap, we introduce LawShift, a benchmark dataset for evaluating LJP under statutory revisions. Covering 31 fine-grained change types, LawShift enables systematic assessment of SOTA models' ability to handle legal changes. We evaluate five representative SOTA models on LawShift, uncovering significant limitations in how they respond to legal updates. Our findings show that model architecture plays a critical role in adaptability, offering actionable insights that can guide future research on LJP in dynamic legal contexts.

AAAI Conference 2021 Conference Paper

Delving into Variance Transmission and Normalization: Shift of Average Gradient Makes the Network Collapse

  • Yuxiang Liu
  • Jidong Ge
  • Chuanyi Li
  • Jie Gui

Normalization operations are essential to state-of-the-art neural networks, enabling training from scratch with a large learning rate (LR). We attempt to explain the real effect of Batch Normalization (BN) from the perspective of variance transmission by investigating the relationship between BN and Weight Normalization (WN). In this work, we demonstrate that the shift of the average gradient amplifies the variance of every convolutional (conv) layer. To address this shift, we propose Parametric Weights Standardization (PWS), a fast module for conv filters that is robust to mini-batch size. PWS provides the speed-up of BN while requiring less computation and not changing the output of a conv layer. PWS enables the network to converge quickly without normalizing the outputs. This result strengthens the case for the shift of the average gradient and explains why BN works from the perspective of variance transmission. The code and appendix will be made available at https://github.com/lyxzzz/PWSConv.
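The abstract does not give the exact PWS formulation, but its core idea, standardizing filter weights rather than activations, can be illustrated with a minimal sketch. The function name, parameters, and learnable gain below are hypothetical stand-ins, not the paper's actual implementation:

```python
import math

def pws_filter(weights, gain=1.0, eps=1e-5):
    """Standardize one conv filter's weights to zero mean and unit
    variance, then scale by a learnable per-filter gain.

    Illustrative sketch only: the exact PWS formulation is in the
    paper, not this abstract; names and parameters are hypothetical.
    """
    n = len(weights)
    mean = sum(weights) / n
    var = sum((w - mean) ** 2 for w in weights) / n
    # eps guards against division by zero for (near-)constant filters
    return [gain * (w - mean) / math.sqrt(var + eps) for w in weights]

# A toy flattened 1x4 filter: after standardization the weights have
# zero mean and approximately unit variance.
out = pws_filter([1.0, 2.0, 3.0, 4.0])
```

Because the standardization acts on the filter weights rather than on activations, it involves no batch statistics at all, which is consistent with the abstract's claim of robustness to mini-batch size.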