Arrow Research search

Author name cluster

Junhao Chen

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

5 papers
2 author rows

Possible papers

5

AAAI Conference 2026 Conference Paper

Interpreting Fedspeak with Confidence: A LLM-Based Uncertainty-Aware Framework Guided by Monetary Policy Transmission Paths

  • Rui Yao
  • Qi Chai
  • Jinhai Yao
  • Siyuan Li
  • Junhao Chen
  • Qi Zhang
  • Hao Wang

"Fedspeak", the stylized and often nuanced language used by the U.S. Federal Reserve, encodes implicit policy signals and strategic stances. The Federal Open Market Committee strategically employs Fedspeak as a communication tool to shape market expectations and influence both domestic and global economic conditions. As such, automatically parsing and interpreting Fedspeak presents a high-impact challenge, with significant implications for financial forecasting, algorithmic trading, and data-driven policy analysis. Technically, to enrich the semantic and contextual representation of Fedspeak texts, we incorporate domain-specific reasoning grounded in the monetary policy transmission mechanism. We further introduce a dynamic uncertainty decoding module to assess the confidence of model predictions, thereby enhancing both classification accuracy and model reliability. Experimental results demonstrate that our framework achieves state-of-the-art performance on the policy stance analysis task. Moreover, statistical analysis reveals a significant positive correlation between perceptual uncertainty and model error rates, validating the effectiveness of perceptual uncertainty as a diagnostic signal.

JBHI Journal 2026 Journal Article

T2Net: Tongue Image-Based T2DM Detection via Simulated Clinical Diagnostic Reasoning

  • Yang Liu
  • Peiyu Liu
  • Yanyi Huang
  • Liyun Li
  • Xiaojie Feng
  • Miao Xie
  • Junhao Chen
  • Jiayu Ye

Clinical studies indicate that the progression of Type 2 Diabetes Mellitus (T2DM) is associated with characteristic alterations in tongue features, which may facilitate non-invasive early detection. However, current deep learning–based tongue imaging approaches for diabetes diagnosis remain constrained by limited datasets, subtle feature variations, dependence on clinical expertise, and the lack of quantitative evaluation. To address these issues, we developed an open-source dataset for T2DM tongue diagnosis (DMT) and benchmarked it using multiple baseline models. Building on DMT, we propose T2Net, a tongue image recognition model for T2DM that simulates the clinical diagnostic process. T2Net comprises four core components: local inspection, pathological clue integration, syndrome identification, and diagnostic confidence estimation. First, T2Net automatically extracts key ROIs by combining large-kernel decomposition with multi-scale learning. Then, a multi-order feature interaction module enables effective fusion of tongue image features across scales to capture pathological clues. Meanwhile, we design a context-aware dynamic aggregation convolution to model long-range dependencies, and propose a flexible focal loss to mimic the diagnostic reasoning process of clinicians, enabling brain-inspired inference. Finally, we propose a clustering-based confidence estimation approach to quantitatively evaluate the reliability of model predictions. Experimental results demonstrate that T2Net achieves highly competitive performance on the DMT dataset, outperforming the second-best baseline by 2.7% in accuracy and 2.0% in F1 score. Moreover, the quantitative evaluation scores are largely consistent with clinical assessments by physicians.
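The general idea behind clustering-based confidence estimation can be sketched without reference to the paper's actual method: score a sample by how decisively it falls near one class centroid rather than another. Everything here (centroid features, the softmin weighting) is a hypothetical stand-in:

```python
import math

def centroid(points):
    """Mean of a list of equal-length feature vectors."""
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

def confidence(x, centroids):
    """Confidence as a softmin over distances to class centroids:
    a sample close to exactly one centroid scores near 1.0, while a
    sample equidistant from two centroids scores 0.5 (ambiguous)."""
    dists = [math.dist(x, c) for c in centroids]
    weights = [math.exp(-d) for d in dists]
    total = sum(weights)
    return max(w / total for w in weights)

# Two toy clusters of 2-D features and their centroids.
cents = [centroid([[0.0, 0.0], [0.0, 2.0]]),
         centroid([[5.0, 5.0], [5.0, 7.0]])]
print(confidence([0.0, 1.0], cents))   # near the first centroid: high confidence
print(confidence([2.5, 3.5], cents))   # equidistant from both: low confidence
```

Low-confidence predictions under such a scheme can then be deferred to a clinician, which is the reliability behavior the abstract describes.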

IROS Conference 2025 Conference Paper

LITE: A Learning-Integrated Topological Explorer for Multi-Floor Indoor Environments

  • Junhao Chen
  • Zhen Zhang
  • Chengrui Zhu
  • Xiaojun Hou
  • Tianyang Hu
  • Huifeng Wu
  • Yong Liu

This work focuses on multi-floor indoor exploration, which remains an open area of research. Compared to traditional methods, recent learning-based explorers have demonstrated significant potential due to their robust environmental learning and modeling capabilities, but most are restricted to 2D environments. In this paper, we propose a learning-integrated topological explorer, LITE, for multi-floor indoor environments. LITE decomposes the environment into a floor-stair topology, enabling seamless integration of learning- or non-learning-based 2D exploration methods for 3D exploration. As we incrementally build the floor-stair topology during exploration using a YOLO11-based instance segmentation model, the agent can transition between floors through a finite state machine. Additionally, we implement an attention-based 2D exploration policy that utilizes an attention mechanism to capture spatial dependencies between different regions, thereby determining the next global goal for more efficient exploration. Extensive comparison and ablation studies conducted on the HM3D and MP3D datasets demonstrate that our proposed 2D exploration policy significantly outperforms all baseline explorers in terms of exploration efficiency. Furthermore, experiments in several 3D multi-floor environments indicate that our framework is compatible with various 2D exploration methods, facilitating effective multi-floor indoor exploration. Finally, we validate our method in the real world with a quadruped robot, highlighting its strong generalization capabilities.
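The floor-transition finite state machine mentioned in the abstract can be sketched in outline. The states and transition conditions below are hypothetical, not LITE's actual design:

```python
from enum import Enum, auto

class Mode(Enum):
    EXPLORE_FLOOR = auto()  # run a 2D exploration policy on the current floor
    CLIMB_STAIRS = auto()   # traverse a stair instance found by segmentation
    DONE = auto()           # every reachable floor has been covered

def step(mode, floor_done, stairs_found, all_floors_done):
    """One transition of a hypothetical floor/stair state machine:
    explore the current floor until it is done, take discovered stairs
    to the next floor, and stop once all floors are covered."""
    if mode is Mode.EXPLORE_FLOOR and floor_done:
        if all_floors_done:
            return Mode.DONE
        return Mode.CLIMB_STAIRS if stairs_found else Mode.EXPLORE_FLOOR
    if mode is Mode.CLIMB_STAIRS:
        return Mode.EXPLORE_FLOOR  # arrived on a new floor; resume 2D exploration
    return mode
```

The appeal of this decomposition is that the 2D policy inside `EXPLORE_FLOOR` is interchangeable, which matches the abstract's claim of compatibility with various 2D exploration methods.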

AAAI Conference 2025 Conference Paper

Putting People in LLMs’ Shoes: Generating Better Answers via Question Rewriter

  • Junhao Chen
  • Bowen Wang
  • Zhouqiang Jiang
  • Yuta Nakashima

Large Language Models (LLMs) have demonstrated significant capabilities, particularly in the domain of question answering (QA). However, their effectiveness in QA is often undermined by the vagueness of user questions. To address this issue, we introduce single-round instance-level prompt optimization, referred to as the question rewriter. By enhancing the intelligibility of human questions for black-box LLMs, our question rewriter improves the quality of generated answers. The rewriter is optimized using direct preference optimization based on feedback collected from automatic criteria for evaluating generated answers; therefore, its training does not require costly human annotations. Experiments across multiple black-box LLMs and long-form question answering (LFQA) datasets demonstrate the efficacy of our method. This paper provides a practical framework for training question rewriters and sets a precedent for future explorations in prompt optimization within LFQA tasks.
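The direct preference optimization (DPO) objective used to train the rewriter has a standard closed form: maximize the sigmoid of the policy's log-probability margin between the preferred and rejected rewrite, measured relative to a frozen reference model. A minimal sketch of that per-pair loss (the inputs here are hypothetical log-probabilities, not values from the paper):

```python
import math

def dpo_loss(pi_w, pi_l, ref_w, ref_l, beta=0.1):
    """DPO loss for one (chosen, rejected) pair.
    pi_w, pi_l: log-probs of the chosen/rejected rewrite under the policy.
    ref_w, ref_l: the same log-probs under the frozen reference model.
    beta: temperature controlling deviation from the reference."""
    margin = beta * ((pi_w - ref_w) - (pi_l - ref_l))
    return -math.log(1.0 / (1.0 + math.exp(-margin)))  # -log sigmoid(margin)

# When the policy favors the chosen rewrite more than the reference does,
# the margin is positive and the loss drops below log(2) (the neutral value).
print(dpo_loss(pi_w=-1.0, pi_l=-3.0, ref_w=-2.0, ref_l=-2.0))
```

Because the preference pairs come from automatic answer-quality criteria, minimizing this loss needs no human annotation, which is the point the abstract makes.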

NeurIPS Conference 2024 Conference Paper

DiReCT: Diagnostic Reasoning for Clinical Notes via Large Language Models

  • Bowen Wang
  • Jiuyang Chang
  • Yiming Qian
  • Guoxin Chen
  • Junhao Chen
  • Zhouqiang Jiang
  • Jiahao Zhang
  • Yuta Nakashima

Large language models (LLMs) have recently showcased remarkable capabilities, spanning a wide range of tasks and applications, including those in the medical domain. Models like GPT-4 excel in medical question answering but often lack interpretability when handling complex tasks in real clinical settings. We thus introduce the diagnostic reasoning dataset for clinical notes (DiReCT), aiming at evaluating the reasoning ability and interpretability of LLMs compared to human doctors. It contains 511 clinical notes, each meticulously annotated by physicians, detailing the diagnostic reasoning process from observations in a clinical note to the final diagnosis. Additionally, a diagnostic knowledge graph is provided to offer essential knowledge for reasoning, which may not be covered in the training data of existing LLMs. Evaluations of leading LLMs on DiReCT reveal a significant gap between their reasoning ability and that of human doctors, highlighting the critical need for models that can reason effectively in real-world clinical scenarios.