Arrow Research search

Author name cluster

Xiyue Wang

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

3 papers
1 author row

Possible papers (3)

JBHI · 2025 · Journal Article

Counterfactual Bidirectional Co-Attention Transformer for Integrative Histology-Genomic Cancer Risk Stratification

  • Zheyi Ji
  • Yongxin Ge
  • Chijioke Chukwudi
  • Kaicheng U
  • Sophia Meixuan Zhang
  • Yulong Peng
  • Junyou Zhu
  • Hossam Zaki

Applying deep learning to predict patient prognostic survival outcomes using histological whole-slide images (WSIs) and genomic data is challenging due to the morphological and transcriptomic heterogeneity present in the tumor microenvironment. Existing deep learning-enabled methods often exhibit learning biases, primarily because the genomic knowledge used to guide directional feature extraction from WSIs may be irrelevant or incomplete. This results in a suboptimal and sometimes myopic understanding of the overall pathological landscape, potentially overlooking crucial histological insights. To tackle these challenges, we propose the CounterFactual Bidirectional Co-Attention Transformer framework. By integrating a bidirectional co-attention layer, our framework fosters effective feature interactions between the genomic and histology modalities and ensures consistent identification of prognostic features from WSIs. Using counterfactual reasoning, our model utilizes causality to model unimodal and multimodal knowledge for cancer risk stratification. This approach directly addresses and reduces bias, enables the exploration of 'what-if' scenarios, and offers a deeper understanding of how different features influence survival outcomes. Our framework, validated across eight diverse cancer benchmark datasets from The Cancer Genome Atlas (TCGA), represents a major improvement over current histology-genomic model learning methods. It shows an average 2.5% improvement in c-index performance over 18 state-of-the-art models in predicting patient prognoses across eight cancer types.
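The bidirectional co-attention idea the abstract describes — histology features attending to genomic context and vice versa — can be sketched with standard attention primitives. This is an illustrative stand-in, not the paper's implementation: the class name, dimensions, and number of heads are all placeholders.

```python
import torch
from torch import nn

class BidirectionalCoAttention(nn.Module):
    """Hypothetical sketch of a bidirectional co-attention layer
    between histology (WSI patch) and genomic embeddings.
    Names and sizes are illustrative, not the paper's."""
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.hist_to_gen = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.gen_to_hist = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, hist, gen):
        # Histology patches query genomic context, and vice versa.
        hist_ctx, _ = self.hist_to_gen(hist, gen, gen)
        gen_ctx, _ = self.gen_to_hist(gen, hist, hist)
        return hist_ctx, gen_ctx

hist = torch.randn(1, 500, 256)  # 500 WSI patch embeddings (illustrative)
gen = torch.randn(1, 6, 256)     # 6 genomic group embeddings (illustrative)
h, g = BidirectionalCoAttention(256)(hist, gen)
print(h.shape, g.shape)  # torch.Size([1, 500, 256]) torch.Size([1, 6, 256])
```

Each modality keeps its own token count but is re-expressed in terms of the other, which is what lets downstream aggregation identify prognostic WSI regions consistent with the genomic signal.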

NeurIPS · 2022 · Conference Paper

SCL-WC: Cross-Slide Contrastive Learning for Weakly-Supervised Whole-Slide Image Classification

  • Xiyue Wang
  • Jinxi Xiang
  • Jun Zhang
  • Sen Yang
  • Zhongyi Yang
  • Ming-Hui Wang
  • Jing Zhang
  • Wei Yang

Weakly-supervised whole-slide image (WSI) classification (WSWC) is a challenging task where a large number of unlabeled patches (instances) exist within each WSI (bag) while only a slide label is given. Despite recent progress in multiple instance learning (MIL)-based WSI analysis, the major limitation is that it usually focuses on the easy-to-distinguish diagnosis-positive regions while ignoring positives that occupy a small ratio of the entire WSI. To obtain more discriminative features, we propose a novel weakly-supervised classification method based on cross-slide contrastive learning (called SCL-WC), which depends on task-agnostic self-supervised feature pre-extraction and task-specific weakly-supervised feature refinement and aggregation for WSI-level prediction. To enable both intra-WSI and inter-WSI information interaction, we propose a positive-negative-aware module (PNM) and a weakly-supervised cross-slide contrastive learning (WSCL) module, respectively. The WSCL aims to pull WSIs with the same disease types closer and push different WSIs away. The PNM aims to facilitate the separation of tumor-like patches and normal ones within each WSI. Extensive experiments demonstrate state-of-the-art performance of our method in three different classification tasks (e.g., over 2% AUC in Camelyon16, 5% F1 score in BRACS, and 3% AUC in DiagSet). Our method also shows superior flexibility and scalability in weakly-supervised localization and semi-supervised classification experiments (e.g., first place in the BRIGHT challenge). Our code will be available at https://github.com/Xiyue-Wang/SCL-WC.

YNICL · 2021 · Journal Article

A deep learning algorithm for automatic detection and classification of acute intracranial hemorrhages in head CT scans

  • Xiyue Wang
  • Tao Shen
  • Sen Yang
  • Jun Lan
  • Yanming Xu
  • Minghui Wang
  • Jing Zhang
  • Xiao Han

Acute intracranial hemorrhage (ICH) is a life-threatening condition that requires emergency medical attention and is routinely diagnosed using non-contrast head CT imaging. The diagnostic accuracy of acute ICH on CT varies greatly among radiologists due to the difficulty of interpreting subtle findings and the time pressure associated with an ever-increasing workload. Artificial intelligence technology may help automate the process and assist radiologists in making more prompt and better-informed decisions. In this work, we design a deep learning approach that mimics the interpretation process of radiologists, combining a 2D CNN model and two sequence models to achieve accurate acute ICH detection and subtype classification. Developed using the extensive 2019-RSNA Brain CT Hemorrhage Challenge dataset with over 25,000 CT scans, our deep learning algorithm accurately classifies acute ICH and its five subtypes with AUCs of 0.988 (ICH), 0.984 (EDH), 0.992 (IPH), 0.996 (IVH), 0.985 (SAH), and 0.983 (SDH), respectively, reaching the accuracy level of expert radiologists. Our method won 1st place among 1,345 teams from 75 countries in the RSNA challenge. We further evaluated our algorithm on two independent external validation datasets with 75 and 491 CT scans, respectively, where it maintained high AUCs of 0.964 and 0.949 for acute ICH detection. These results demonstrate the high performance and robust generalization ability of the proposed method, making it a useful second-read or triage tool that can facilitate routine clinical applications.
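The two-stage design the abstract describes — a 2D CNN encoding each CT slice, then sequence models aggregating across slices — can be sketched as follows. This is a minimal stand-in, not the winning model: the tiny backbone, GRU aggregator, and layer sizes are all placeholders.

```python
import torch
from torch import nn

class SliceSequenceClassifier(nn.Module):
    """Illustrative 2D-CNN-plus-sequence-model pipeline for head CT:
    a per-slice encoder followed by a bidirectional GRU over the slice
    axis, emitting logits for ICH and its five subtypes. Architecture
    and sizes are assumptions, not the paper's winning model."""
    def __init__(self, feat_dim: int = 64, n_classes: int = 6):
        super().__init__()
        self.cnn = nn.Sequential(            # tiny stand-in for a 2D backbone
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(16, feat_dim),
        )
        self.rnn = nn.GRU(feat_dim, feat_dim, batch_first=True,
                          bidirectional=True)
        self.head = nn.Linear(2 * feat_dim, n_classes)

    def forward(self, scan):                 # scan: (B, n_slices, 1, H, W)
        b, s = scan.shape[:2]
        feats = self.cnn(scan.flatten(0, 1)).view(b, s, -1)  # encode slices
        seq, _ = self.rnn(feats)             # exchange context across slices
        return self.head(seq)                # per-slice logits (ICH + 5 subtypes)

scan = torch.randn(2, 24, 1, 64, 64)         # 2 scans of 24 slices (illustrative)
logits = SliceSequenceClassifier()(scan)
print(logits.shape)  # torch.Size([2, 24, 6])
```

The sequence stage is what lets a slice's prediction draw on neighboring slices, mimicking how a radiologist scrolls through a CT volume rather than reading each slice in isolation.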