Arrow Research search

Author name cluster

Qiegen Liu

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

4 papers
1 author record

Possible papers


JBHI · 2026 · Journal Article

TC-KANRecon: High-Quality and Accelerated MRI Reconstruction via Adaptive KAN Mechanisms and Intelligent Feature Scaling

  • Ruiquan Ge
  • Xiao Yu
  • Yifei Chen
  • Fan Jia
  • Shenghao Zhu
  • Dong Zeng
  • Changmiao Wang
  • Qiegen Liu

MRI has become essential in clinical diagnosis due to its high resolution and multiple contrast mechanisms. However, its relatively long acquisition time limits broader application. To address this issue, this study presents a conditional guided diffusion model, named TC-KANRecon, which incorporates the Multi-Free U-KAN (MF-UKAN) module and a dynamic clipping strategy. The TC-KANRecon model aims to accelerate MRI reconstruction through deep learning while maintaining reconstruction quality. The MF-UKAN module effectively balances the tradeoff between image denoising and structure preservation. Specifically, it introduces multi-head attention mechanisms and scalar modulation factors, which significantly enhance the model’s robustness and structure-preservation capabilities in complex noise environments. Moreover, the dynamic clipping strategy in TC-KANRecon adjusts the clipping interval according to the sampling step, thereby mitigating image detail loss while preserving the visual features of the images. Furthermore, the Conditional Guidance Model incorporates fully sampled k-space information, realizing efficient fusion of conditional information, enhancing the model’s ability to process complex data, and improving the realism and detail richness of reconstructed images. Experimental results demonstrate that the proposed method outperforms other MRI reconstruction methods in both qualitative and quantitative evaluations. Notably, the TC-KANRecon method exhibits excellent reconstruction results when processing high-noise, low-sampling-rate MRI data.
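To make the dynamic clipping strategy concrete, here is a minimal sketch of a step-dependent clipping function for a diffusion sampler; the linear schedule and the tight/loose bounds are illustrative assumptions, not values from the paper.

```python
import torch

def dynamic_clip(x0_pred, step, total_steps, tight=1.0, loose=1.5):
    """Hypothetical dynamic clipping: interpolate the clipping bound across
    sampling steps so that late, low-noise steps keep finer detail.
    `tight` and `loose` are illustrative, not taken from the paper."""
    t = step / max(total_steps - 1, 1)       # 0.0 at the first step, 1.0 at the last
    bound = tight + (loose - tight) * t      # widen the interval as sampling proceeds
    return x0_pred.clamp(-bound, bound)
```

A sampler would call this on the predicted clean image at every reverse step, in place of a fixed clamp to [-1, 1].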

JBHI · 2025 · Journal Article

Virtual-mask Informed Prior for Sparse-view Dual-Energy CT Reconstruction

  • Zini Chen
  • Yao Xiao
  • Junyan Zhang
  • Mohan Li
  • Cunfeng Wei
  • Shaoyu Wang
  • Liu Shi
  • Qiegen Liu

Sparse-view sampling in dual-energy computed tomography (DECT) significantly reduces radiation dose and increases imaging speed, yet is highly prone to artifacts. Although diffusion models have demonstrated potential in handling incomplete data, most existing methods in this field focus on the image domain and lack global constraints, which leads to insufficient reconstruction quality. In this study, we propose a dual-domain virtual-mask informed diffusion model (VIP-DECT) for sparse-view reconstruction that leverages the high inter-channel correlation in DECT. Specifically, we design a virtual mask and apply it to the high-energy and low-energy data as a perturbation, thus constructing high-dimensional tensors that serve as the prior information of the diffusion model. In addition, a dual-domain collaboration strategy integrates the randomly selected high-frequency components in the wavelet domain with information in the projection domain, in order to optimize global structures and local details. Experimental results show that the method exhibits excellent performance on multiple datasets. Under 30-view sparse sampling conditions, VIP-DECT improves PSNR by at least 1.02 dB and enhances SSIM by 1.91%.
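As a rough illustration of the virtual-mask idea, the sketch below perturbs the high- and low-energy channels with complementary random masks and stacks the results into a higher-dimensional prior tensor; the function name, the complementary masking, and `drop_prob` are assumptions made for illustration, not the paper's exact construction.

```python
import torch

def virtual_mask_prior(high_kev, low_kev, drop_prob=0.2):
    """Illustrative virtual-mask perturbation: mask each energy channel
    with a complementary random pattern, then stack the originals and the
    perturbed copies as a 4-channel prior for the diffusion model.
    Names and the masking scheme are assumptions, not from the paper."""
    mask = (torch.rand_like(high_kev) > drop_prob).float()
    perturbed_high = high_kev * mask
    perturbed_low = low_kev * (1.0 - mask)   # complementary view of the shared anatomy
    return torch.stack([high_kev, low_kev, perturbed_high, perturbed_low], dim=0)
```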

JBHI · 2023 · Journal Article

Variable Augmented Network for Invertible Modality Synthesis and Fusion

  • Yuhao Wang
  • Ruirui Liu
  • Zihao Li
  • Shanshan Wang
  • Cailian Yang
  • Qiegen Liu

As an effective way to integrate the information contained in multiple medical images under different modalities, medical image synthesis and fusion have emerged in various clinical applications such as disease diagnosis and treatment planning. In this paper, an invertible and variable augmented network (iVAN) is proposed for medical image synthesis and fusion. In iVAN, variable augmentation technology keeps the channel number of the network input and output the same, and data relevance is enhanced, which is conducive to generating characterization information. Meanwhile, the invertible network is used to achieve bidirectional inference. Empowered by the invertible and variable augmentation schemes, iVAN can be applied not only to multi-input to one-output and multi-input to multi-output mappings, but also to the one-input to multi-output case. Experimental results demonstrate the superior performance and task flexibility of the proposed method compared with existing synthesis and fusion methods.
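A minimal sketch of the variable augmentation idea, assuming the auxiliary channels are simple derived copies (the paper's exact augmentation scheme may differ): the input is padded with extra channels so that an invertible network can map between modalities with unequal channel counts.

```python
import torch

def variable_augment(x, target_channels):
    """Pad a (B, C, H, W) tensor with auxiliary channels so the input and
    output of an invertible network share the same channel count. Using
    channel-mean copies here is an illustrative choice, not the paper's."""
    b, c, h, w = x.shape
    if c >= target_channels:
        return x
    aux = x.mean(dim=1, keepdim=True).repeat(1, target_channels - c, 1, 1)
    return torch.cat([x, aux], dim=1)
```

Because the augmented input and the output then have matching shapes, the same network can be run in reverse to realize the bidirectional inference mentioned above.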

AAAI · 2019 · Conference Paper

CISI-net: Explicit Latent Content Inference and Imitated Style Rendering for Image Inpainting

  • Jing Xiao
  • Liang Liao
  • Qiegen Liu
  • Ruimin Hu

Convolutional neural networks (CNNs) have demonstrated their potential in filling large missing areas with plausible content. To address the blurriness commonly seen in CNN-based inpainting, a typical approach is to conduct texture refinement on the initially completed image by replacing each neural patch in the predicted region with the closest one in the known region. However, such processing can introduce undesired content changes in the predicted region, especially when the desired content does not exist in the known region. To avoid generating such incorrect content, we propose a content inference and style imitation network (CISI-net), which explicitly separates the image data into a content code and a style code. Content inference is realized by performing inference in the latent space, inferring a content code for the corrupted image that is similar to the one from the original image. It can produce more detailed content than a comparable inference procedure in the pixel domain, because the content distribution has a lower dimensionality than that of the entire image. The style code, in turn, represents the rendering of content, which is consistent over the entire image. The style code is then integrated with the inferred content code to generate the complete image. Experiments on multiple datasets including structural and natural images demonstrate that our approach outperforms existing ones in terms of content accuracy as well as texture details.
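To illustrate the content/style separation described above, here is a toy autoencoder that encodes an image into a spatial content code and a global style vector, then decodes their combination; all layer sizes and names are placeholders, not CISI-net's actual architecture.

```python
import torch
import torch.nn as nn

class TinyContentStyleAE(nn.Module):
    """Toy content/style factorization: a spatial content code plus a
    global style vector, rendered back together by the decoder.
    Placeholder architecture for illustration only."""
    def __init__(self, ch=32, style_dim=8):
        super().__init__()
        self.content_enc = nn.Sequential(
            nn.Conv2d(3, ch, 3, 2, 1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, 2, 1))
        self.style_enc = nn.Sequential(
            nn.Conv2d(3, ch, 3, 2, 1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(ch, style_dim))
        self.decode = nn.Sequential(
            nn.ConvTranspose2d(ch + style_dim, ch, 4, 2, 1), nn.ReLU(),
            nn.ConvTranspose2d(ch, 3, 4, 2, 1))

    def forward(self, img):
        content = self.content_enc(img)   # spatial content code
        style = self.style_enc(img)       # global style vector
        style_map = style[:, :, None, None].expand(-1, -1, *content.shape[2:])
        return self.decode(torch.cat([content, style_map], dim=1))
```

Pairing the style vector of one image with the content code of another would imitate the first image's rendering, which is the intuition behind the "imitated style rendering" in the title.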