Arrow Research search

Author name cluster

Lihu Pan

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

3 papers
1 author row

Possible papers

3

AAAI 2026 · Conference Paper

Mind the Gap: Predicting, Explaining and Reducing Time-to-First-Comment (Reply Gap) in Online Mental-Health Communities

  • Guangrui Fan
  • Dandan Liu
  • Lihu Pan

Online peer-support communities are vital for mental health, but their therapeutic benefit hinges on receiving a timely and helpful first reply. Posts that languish unanswered can exacerbate feelings of distress and abandonment. This paper develops and validates an integrated framework to predict, explain, and reduce this "reply gap" on Reddit. First, using survival analysis on over 91,000 posts (2018–2025), we show that a deep learning model (DySurv) can accurately predict reply times (C-Index = 0.742), with a post's lexico-semantic content being a far stronger predictor than author history. Second, moving from correlation to causation, we use a causal inference framework on 48,612 posts to estimate the effect of different support types. We find that initial replies providing emotional support are most effective, increasing the odds of a positive user response by 49% (OR=1.49), an effect most pronounced for high-risk users. Third, we operationalize these insights in RiskMatch, a recommender system that routes at-risk posts to historically effective helpers. Rigorous counterfactual evaluation using inverse propensity scoring (IPS)—a method that corrects for biases in historical data—demonstrates that our system reduces the median wait time by 26 minutes for the highest-risk quintile. This work provides a validated, data-driven methodology to build more responsive and effective peer-support ecosystems, offering a concrete pathway to ensure fewer calls for help go unanswered.
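The abstract's counterfactual evaluation rests on inverse propensity scoring. A minimal sketch of the basic IPS off-policy estimator may help readers unfamiliar with the idea; the function name `ips_estimate`, the `target_policy` callable, and the toy log format `(context, action, propensity, reward)` are illustrative assumptions, not the paper's actual pipeline or data:

```python
def ips_estimate(logs, target_policy):
    """Off-policy value estimate via inverse propensity scoring (IPS).

    Each log entry is (context, logged_action, propensity, reward), where
    `propensity` is the probability the historical (logging) policy gave
    to the action it took. Rewards for actions the target policy would
    not have taken are dropped; matching entries are reweighted by
    1/propensity, which corrects for the logging policy's selection bias.
    """
    total = 0.0
    for context, action, propensity, reward in logs:
        if target_policy(context) == action:
            total += reward / propensity
    return total / len(logs)


# Toy logged data: the logging policy rarely tried action "a" in some
# contexts, so its successes get up-weighted under the new policy.
logs = [
    (0, "a", 0.5, 1.0),
    (1, "b", 0.5, 0.0),
    (0, "a", 0.25, 1.0),
]
print(ips_estimate(logs, lambda ctx: "a"))  # → 2.0
```

Note that the naive average reward in these logs is about 0.67; the IPS reweighting yields 2.0 because the rarely-logged matching actions count for more, which is exactly the bias correction the abstract alludes to.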

IJCAI 2025 · Conference Paper

Creative Momentum Transfer: How Timing and Labeling of AI Suggestions Shape Iterative Human Ideation

  • Guangrui Fan
  • Dandan Liu
  • Lihu Pan
  • Yishan Huang

Human–AI collaboration is increasingly integral to a variety of domains where creative ideation unfolds in iterative cycles, yet most existing studies evaluate AI-generated concepts in a single step. This paper addresses the gap by investigating “Creative Momentum Transfer”—how the timing (early vs. late) and labeling (AI-labeled vs. unlabeled) of AI prompts shape multi-round human ideation. In a between-subjects experiment (N = 247), participants proposed solutions for plastic pollution over two rounds, with AI suggestions introduced either at the outset or mid-process and labeled explicitly or not. Results reveal that early AI prompts increase overall creativity but induce stronger anchoring, whereas late AI prompts trigger a mid-round pivot that fosters more divergent thinking yet still boosts final outcomes compared to a no-AI control. Labeling amplifies both subjective and objective adoption of AI ideas, although most participants could detect AI sources even when unlabeled. Furthermore, qualitative interviews highlight nuanced perspectives on perceived ownership, authenticity, and the ways in which labeling triggers deeper scrutiny of the AI’s style. By demonstrating that baseline creativity moderates these effects more robustly than trust in AI, this study advances our theoretical understanding of multi-round human–AI synergy while offering design guidelines for next-generation creativity support systems. We discuss how user-centered design can balance rapid convergence (via early AI) with strategic pivot opportunities (via late AI) and weigh transparent labeling against ethical considerations of authorship and user autonomy.