Arrow Research search

Author name cluster

Hongying Liu

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

15 papers
1 author row

Possible papers

15

AAAI Conference 2026 Conference Paper

FedAdamW: A Communication-Efficient Optimizer with Convergence and Generalization Guarantees for Federated Large Models

  • Junkang Liu
  • Fanhua Shang
  • Hongying Liu
  • Yuxuan Tian
  • Yuanyuan Liu
  • Jin Liu
  • Kewen Zhu
  • Zhouchen Lin

AdamW has become one of the most effective optimizers for training large-scale models. We have also observed its effectiveness in the context of federated learning (FL). However, directly applying AdamW in federated learning settings poses significant challenges: (1) due to data heterogeneity, AdamW often yields high variance in the second-moment estimate v; (2) the local overfitting of AdamW may cause client drift; and (3) reinitializing the moment estimates (v, m) at each round slows down convergence. To address these challenges, we propose the first Federated AdamW algorithm, called FedAdamW, for training and fine-tuning various large models. FedAdamW aligns local updates with the global update using both a local correction mechanism and decoupled weight decay to mitigate local overfitting. FedAdamW efficiently aggregates the mean of the second-moment estimates to reduce their variance and reinitializes them. Theoretically, we prove that FedAdamW achieves a linear speedup convergence rate of $\mathcal{O}(\sqrt{(L\Delta\sigma_l^2)/(SKR\epsilon^2)} + (L\Delta)/R)$ without a heterogeneity assumption, where $S$ is the number of participating clients per round, $K$ is the number of local iterations, and $R$ is the total number of communication rounds. We also employ PAC-Bayesian generalization analysis to explain the effectiveness of decoupled weight decay in local training. Empirically, we validate the effectiveness of FedAdamW on language and vision Transformer models. Compared to several baselines, FedAdamW significantly reduces communication rounds and improves test accuracy.
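The decoupled weight decay that the abstract emphasizes can be illustrated with a plain, non-federated AdamW step. This is a generic sketch of the standard update on a scalar weight, not the authors' FedAdamW algorithm; all hyperparameter defaults are illustrative:

```python
import math

def adamw_step(w, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999,
               eps=1e-8, weight_decay=0.01):
    """One scalar AdamW update. The weight decay is 'decoupled': it is
    applied directly to the weight, not folded into the gradient as in
    Adam with L2 regularization."""
    m = beta1 * m + (1 - beta1) * grad          # first-moment estimate
    v = beta2 * v + (1 - beta2) * grad * grad   # second-moment estimate
    m_hat = m / (1 - beta1 ** t)                # bias correction
    v_hat = v / (1 - beta2 ** t)
    w = w - lr * (m_hat / (math.sqrt(v_hat) + eps) + weight_decay * w)
    return w, m, v
```

Because the decay term `weight_decay * w` never enters the moment estimates, it shrinks weights at a rate independent of the adaptive scaling — the property the paper's PAC-Bayesian analysis of local training builds on.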

JBHI Journal 2026 Journal Article

PathFusion-Net: A Rough Path Theory-Based Deep Learning Model for ECG Arrhythmia Classification

  • Tianlong Feng
  • Qingchen Li
  • Yuanyuan Zhang
  • Yongzhi Liao
  • Di Lu
  • Liping Wang
  • Jianqin Zhao
  • Lei Jiang

This study introduces a novel electrocardiogram (ECG) arrhythmia classification model, PathFusion-Net, which integrates Rough Path Theory with deep learning technologies. The model combines Convolutional Neural Networks (CNN), Long Short-Term Memory (LSTM), Path Signatures, and Path Development to extract spatial morphological features from ECG images and multi-order temporal representations from ECG signals. By adopting an inter-patient split paradigm, our approach more closely reflects real-world clinical diagnostic settings compared to intra-patient methods. The model demonstrates state-of-the-art overall classification performance on both the MIT-BIH Arrhythmia Database and a private clinical dataset, achieving 94.7% and 95.1% accuracy, respectively, under the AAMI four-class standard with an inter-patient split paradigm. On the MIT-BIH dataset, the proposed method attains competitive precision and recall across multiple arrhythmia types, including 95.2%/87.9% for ventricular ectopic beats (V) and 75.7%/92.3% for supraventricular ectopic beats (S), indicating balanced performance across clinically diverse categories. This research highlights the potential of Rough Path Theory in time-series analysis and offers a novel deep learning framework for automated early detection and monitoring of ECG arrhythmias. The code used in this study is available at: https://github.com/Rand2AI/PathFusion-Net.
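The path signatures the abstract mentions can be computed exactly for piecewise-linear paths. The sketch below is a generic illustration of the level-1 and level-2 signature terms (not the authors' PathFusion-Net code):

```python
import numpy as np

def signature_level2(path):
    """Level-1 and level-2 terms of the path signature of a piecewise-
    linear path (n points in d dimensions). Level 1 is the total
    increment; level 2 collects the iterated integrals
    S^{ij} = integral of (x_i - x_i(0)) dx_j, exact for linear pieces."""
    dx = np.diff(path, axis=0)                 # per-step increments
    level1 = dx.sum(axis=0)                    # total displacement
    # running displacement at the *start* of each step
    pre = np.vstack([np.zeros(path.shape[1]), np.cumsum(dx, axis=0)[:-1]])
    # exact iterated integral on each linear piece:
    # pre_i * dx_j plus the within-step term 0.5 * dx_i * dx_j
    level2 = pre.T @ dx + 0.5 * (dx.T @ dx)
    return level1, level2
```

A useful sanity check is the shuffle identity: the symmetric part of the level-2 matrix equals the outer product of the level-1 increment with itself.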

JBHI Journal 2025 Journal Article

Completed Feature Disentanglement Learning for Multimodal MRIs Analysis

  • Tianling Liu
  • Hongying Liu
  • Fanhua Shang
  • Lequan Yu
  • Tong Han
  • Liang Wan

Multimodal MRIs play a crucial role in clinical diagnosis and treatment. Feature disentanglement (FD)-based methods, aiming at learning superior feature representations for multimodal data analysis, have achieved significant success in multimodal learning (MML). Typically, existing FD-based methods separate multimodal data into modality-shared and modality-specific features, and employ concatenation or attention mechanisms to integrate these features. However, our preliminary experiments indicate that these methods could lead to a loss of shared information among subsets of modalities when the inputs contain more than two modalities, and such information is critical for prediction accuracy. Furthermore, these methods do not adequately interpret the relationships between the decoupled features at the fusion stage. To address these limitations, we propose a novel Complete Feature Disentanglement (CFD) strategy that recovers the information lost during feature decoupling. Specifically, the CFD strategy not only identifies modality-shared and modality-specific features, but also decouples shared features among subsets of multimodal inputs, termed modality-partial-shared features. We further introduce a new Dynamic Mixture-of-Experts Fusion (DMF) module that dynamically integrates these decoupled features by explicitly learning the local-global relationships among them. The effectiveness of our approach is validated through classification tasks on three multimodal MRI datasets. Extensive experimental results demonstrate that our approach outperforms other state-of-the-art MML methods by clear margins.

NeurIPS Conference 2025 Conference Paper

QBasicVSR: Temporal Awareness Adaptation Quantization for Video Super-Resolution

  • Zhenwei Zhang
  • Fanhua Shang
  • Hongying Liu
  • Liang Wan
  • Wei Feng
  • Yanming Hui

While model quantization has become pivotal for deploying super-resolution (SR) networks on mobile devices, existing works focus on quantization methods only for image super-resolution. Unlike in image SR quantization, temporal error propagation, shared temporal parameterization, and temporal metric mismatch significantly degrade the quantization performance of a video SR model. To address these issues, we propose the first quantization method, QBasicVSR, for video super-resolution. We present a novel temporal awareness adaptation post-training quantization (PTQ) framework for video super-resolution with flow-gradient video bit adaptation and temporal shared layer bit adaptation. Moreover, we put forward a novel fine-tuning method for VSR with the supervision of the full-precision model. Our method achieves outstanding performance compared with state-of-the-art efficient VSR approaches, delivering up to 200$\times$ faster processing speed while utilizing only 1/8 of the GPU resources. Additionally, extensive experiments demonstrate that the proposed method significantly outperforms existing PTQ algorithms on various datasets. For instance, it attains a 2.53 dB increase on the UDM10 benchmark when quantizing BasicVSR to 4-bit with 100 unlabeled video clips. The code and models will be released on GitHub.

NeurIPS Conference 2025 Conference Paper

Tight High-Probability Bounds for Nonconvex Heavy-Tailed Scenario under Weaker Assumptions

  • Weixin An
  • Yuanyuan Liu
  • Fanhua Shang
  • Han Yu
  • Junkang Liu
  • Hongying Liu

Gradient clipping is increasingly important in centralized learning (CL) and federated learning (FL). Many works focus on its optimization properties under strong assumptions involving Gaussian noise and standard smoothness. However, practical machine learning tasks often only satisfy weaker conditions, such as heavy-tailed noise and $(L_0, L_1)$-smoothness. To bridge this gap, we propose a high-probability analysis for clipped Stochastic Gradient Descent (SGD) under these weaker assumptions. Our findings show that a better convergence rate than existing ones can be achieved, and our high-probability analysis does not rely on the bounded gradient assumption. Moreover, we extend our analysis to FL, where a gap remains between expected and high-probability convergence, which the naive clipped SGD cannot bridge. Thus, we design a new Federated Clipped Batched Gradient (FedCBG) algorithm, and prove its convergence and generalization bounds with high probability for the first time. Our analysis reveals the trade-offs between optimization and generalization performance. Extensive experiments demonstrate that FedCBG can generalize better to unseen client distributions than state-of-the-art baselines.
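The norm clipping the analysis studies is the standard rule: rescale the stochastic gradient whenever its Euclidean norm exceeds a threshold. A generic sketch (of clipped SGD's clipping operator, not the paper's FedCBG algorithm):

```python
import numpy as np

def clip_gradient(grad, threshold):
    """Norm clipping as used in clipped SGD: rescale the gradient so
    its Euclidean norm never exceeds `threshold`; gradients already
    inside the ball are left unchanged."""
    norm = np.linalg.norm(grad)
    scale = min(1.0, threshold / norm) if norm > 0 else 1.0
    return scale * grad
```

Under heavy-tailed noise this bounds the influence of any single stochastic gradient, which is what makes high-probability guarantees possible without a bounded-gradient assumption.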

AAAI Conference 2025 Conference Paper

Unsupervised Degradation Representation Aware Transform for Real-World Blind Image Super-Resolution

  • Sen Chen
  • Hongying Liu
  • Chaowei Fang
  • Fanhua Shang
  • Yuanyuan Liu
  • Liang Wan
  • Dongmei Jiang
  • Yaowei Wang

Blind image super-resolution (blind SR) aims to restore a high-resolution (HR) image from a low-resolution (LR) image with unknown degradation. Many existing methods explicitly estimate degradation information from various LR images. However, in most cases, image degradations are independent of image content, so these estimates may be influenced by the image content and become inaccurate. Unlike existing works, we design a dual-encoder for degradation representation (DEDR) to preclude the influence of image content in LR images, which helps extract the intrinsic degradation representation more accurately. To the best of our knowledge, this paper is the first work that estimates the degradation representation by filtering out image content. Based on the degradation representation extracted by DEDR, we present a novel framework, named degradation representation aware transform network (DRAT), for blind SR. We propose global degradation aware (GDA) blocks to propagate degradation information across spatial and channel dimensions, in which a degradation representation transform module (DRT) is introduced to render features degradation-aware, thereby enhancing the restoration of LR images. Extensive experiments are conducted on three benchmark datasets (including Gaussian 8, DIV2KRK, and real-world datasets) under large scaling factors with complex degradations. The experimental results demonstrate that DRAT surpasses state-of-the-art supervised kernel estimation and unsupervised degradation representation methods.

NeurIPS Conference 2024 Conference Paper

Robust and Faster Zeroth-Order Minimax Optimization: Complexity and Applications

  • Weixin An
  • Yuanyuan Liu
  • Fanhua Shang
  • Hongying Liu

Many zeroth-order (ZO) optimization algorithms have been developed to solve nonconvex minimax problems in machine learning and computer vision. However, existing ZO minimax algorithms have high complexity and rely on strict restrictive conditions for ZO estimations. To address these issues, we design a new unified ZO gradient descent extragradient ascent (ZO-GDEGA) algorithm, which reduces the overall complexity to $\mathcal{O}(d\epsilon^{-6})$ to find an $\epsilon$-stationary point of the function $\psi$ for nonconvex-concave (NC-C) problems, where $d$ is the variable dimension. To the best of our knowledge, ZO-GDEGA is the first ZO algorithm with complexity guarantees to solve stochastic NC-C problems. Moreover, ZO-GDEGA requires weaker conditions on the ZO estimations and achieves more robust theoretical results. As a by-product, ZO-GDEGA has advantages in terms of the condition number for the NC-strongly concave case. Experimentally, ZO-GDEGA can generate more effective poisoning attack data with an average accuracy reduction of 5\%. The improved AUC performance also verifies the robustness of the gradient estimations.
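Zeroth-order methods such as ZO-GDEGA replace analytic gradients with function-value probes. A generic two-point central-difference estimator, shown here to illustrate the general ZO idea rather than the paper's exact oracle:

```python
import numpy as np

def zo_gradient(f, x, mu=1e-4, rng=None):
    """Two-point zeroth-order gradient estimate: probe f along a random
    Gaussian direction u and scale u by the central finite difference.
    Only function values are used, never analytic derivatives."""
    if rng is None:
        rng = np.random.default_rng(0)
    u = rng.standard_normal(x.shape)
    return (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u
```

A single probe is a noisy but unbiased estimate of the (smoothed) gradient; averaging over many random directions recovers the true gradient, which is why the dimension $d$ appears in ZO complexity bounds.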

AAAI Conference 2024 Conference Paper

SAVSR: Arbitrary-Scale Video Super-Resolution via a Learned Scale-Adaptive Network

  • Zekun Li
  • Hongying Liu
  • Fanhua Shang
  • Yuanyuan Liu
  • Liang Wan
  • Wei Feng

Deep learning-based video super-resolution (VSR) networks have achieved significant performance improvements in recent years. However, existing VSR networks can only support a fixed integer-scale super-resolution task, and performing VSR at multiple scales requires training several models. This inevitably increases the consumption of computational and storage resources, which limits the application scenarios of VSR techniques. In this paper, we propose a novel Scale-adaptive Arbitrary-scale Video Super-Resolution network (SAVSR), which is the first work focusing on spatial VSR at arbitrary scales, including both non-integer and asymmetric scales. We also present an omni-dimensional scale-attention convolution, which dynamically adapts to the scale of the input to extract inter-frame features with stronger representational power. Moreover, the proposed spatio-temporal adaptive arbitrary-scale upsampling performs VSR tasks using both temporal features and scale information, and we design an iterative bi-directional architecture for implicit feature alignment. Experiments at various scales on the benchmark datasets show that the proposed SAVSR outperforms state-of-the-art (SOTA) methods at non-integer and asymmetric scales. The source code is available at https://github.com/Weepingchestnut/SAVSR.

NeurIPS Conference 2023 Conference Paper

A Single-Loop Accelerated Extra-Gradient Difference Algorithm with Improved Complexity Bounds for Constrained Minimax Optimization

  • Yuanyuan Liu
  • Fanhua Shang
  • Weixin An
  • Junhao Liu
  • Hongying Liu
  • Zhouchen Lin

In this paper, we propose a novel extra-gradient difference acceleration algorithm for solving constrained nonconvex-nonconcave (NC-NC) minimax problems. In particular, we design a new extra-gradient difference step to obtain an important quasi-cocoercivity property, which plays a key role in significantly improving the convergence rate in the constrained NC-NC setting without additional structural assumptions. Momentum acceleration is also introduced into our dual accelerating update step. Moreover, we prove that, to find an $\epsilon$-stationary point of the function $f$, our algorithm attains the complexity $\mathcal{O}(\epsilon^{-2})$ in the constrained NC-NC setting, while the best-known complexity bound is $\widetilde{\mathcal{O}}(\epsilon^{-4})$, where $\widetilde{\mathcal{O}}(\cdot)$ hides logarithmic factors compared to $\mathcal{O}(\cdot)$. As special cases of the constrained NC-NC setting, our algorithm also obtains the same complexity $\mathcal{O}(\epsilon^{-2})$ for both the nonconvex-concave (NC-C) and convex-nonconcave (C-NC) cases, while the best-known complexity bounds are $\widetilde{\mathcal{O}}(\epsilon^{-2.5})$ for the NC-C case and $\widetilde{\mathcal{O}}(\epsilon^{-4})$ for the C-NC case. For fair comparison with existing algorithms, we also analyze the complexity bound to find an $\epsilon$-stationary point of the primal function $\phi$ for the constrained NC-C problem, which shows that our algorithm can improve the complexity bound from $\widetilde{\mathcal{O}}(\epsilon^{-3})$ to $\mathcal{O}(\epsilon^{-2})$. To the best of our knowledge, this is the first time that the best-known complexity bounds have been improved from $\mathcal{O}(\epsilon^{-4})$ and $\widetilde{\mathcal{O}}(\epsilon^{-3})$ to $\mathcal{O}(\epsilon^{-2})$ in both the NC-NC and NC-C settings.
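The classical extra-gradient step that the proposed difference scheme builds on can be demonstrated on the bilinear saddle problem min_x max_y xy, where plain gradient descent-ascent spirals outward. This is the textbook two-step method, not the authors' accelerated variant:

```python
def extragradient(x0, y0, eta=0.1, steps=2000):
    """Extra-gradient iteration for min_x max_y f(x, y) = x * y.
    Evaluating the gradients at a look-ahead point makes the iterates
    contract toward the saddle point (0, 0), whereas simultaneous
    gradient descent-ascent diverges on this problem."""
    x, y = x0, y0
    for _ in range(steps):
        x_half = x - eta * y       # look-ahead step (grad_x f = y)
        y_half = y + eta * x       # look-ahead step (grad_y f = x)
        # full step using the gradients at the look-ahead point
        x, y = x - eta * y_half, y + eta * x_half
    return x, y
```

For step size eta in (0, 1) the linearized iteration has spectral radius below one, so the iterates converge geometrically to the saddle point.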

NeurIPS Conference 2023 Conference Paper

Boosting Adversarial Transferability by Achieving Flat Local Maxima

  • Zhijin Ge
  • Hongying Liu
  • Wang Xiaosen
  • Fanhua Shang
  • Yuanyuan Liu

Transfer-based attacks adopt the adversarial examples generated on a surrogate model to attack various models, making them applicable in the physical world and attracting increasing interest. Recently, various adversarial attacks have emerged to boost adversarial transferability from different perspectives. In this work, inspired by the observation that flat local minima are correlated with good generalization, we assume and empirically validate that adversarial examples at a flat local region tend to have good transferability, by introducing a penalized gradient norm into the original loss function. Since directly optimizing the gradient regularization norm is computationally expensive and intractable for generating adversarial examples, we propose an approximation optimization method to simplify the gradient update of the objective function. Specifically, we randomly sample an example and adopt a first-order procedure to approximate the curvature of the second-order Hessian matrix, which makes the computation more efficient by interpolating two Jacobian matrices. Meanwhile, to obtain a more stable gradient direction, we randomly sample multiple examples and average their gradients to reduce the variance due to random sampling during the iterative process. Extensive experimental results on the ImageNet-compatible dataset show that the proposed method can generate adversarial examples at flat local regions, and significantly improve adversarial transferability on both normally trained and adversarially trained models compared with state-of-the-art attacks. Our codes are available at: https://github.com/Trustworthy-AI-Group/PGN.

AAAI Conference 2022 Conference Paper

HNO: High-Order Numerical Architecture for ODE-Inspired Deep Unfolding Networks

  • Lin Kong
  • Wei Sun
  • Fanhua Shang
  • Yuanyuan Liu
  • Hongying Liu

Recently, deep unfolding networks (DUNs) based on optimization algorithms have received increasing attention, and their high efficiency has been confirmed by many experimental and theoretical results. Since these networks build on traditional model-based optimization algorithms, they have high interpretability. In addition, ordinary differential equations (ODEs) are often used to explain deep neural networks and provide inspiration for designing innovative network models. In this paper, we transform DUNs into first-order ODE forms and propose a high-order numerical architecture for ODE-inspired deep unfolding networks. To the best of our knowledge, this is the first work to establish the relationship between DUNs and ODEs. Moreover, we take two representative DUNs as examples, apply our architecture to them, and design novel DUNs. In theory, we prove the existence and uniqueness of the solution and the convergence of the proposed network, and also prove that our network attains a fast linear convergence rate. Extensive experiments verify the effectiveness and advantages of our architecture.

IJCAI Conference 2021 Conference Paper

Behavior Mimics Distribution: Combining Individual and Group Behaviors for Federated Learning

  • Hua Huang
  • Fanhua Shang
  • Yuanyuan Liu
  • Hongying Liu

Federated Learning (FL) has become an active and promising distributed machine learning paradigm. Recent studies clearly show that, as a result of statistical heterogeneity, the performance of popular FL methods (e.g., FedAvg) deteriorates dramatically due to the client drift caused by local updates. This paper proposes a novel Federated Learning algorithm (called IGFL), which leverages both Individual and Group behaviors to mimic the distribution, thereby improving the ability to deal with heterogeneity. Unlike existing FL methods, our IGFL can be applied to both client and server optimization. As a by-product, we propose a new attention-based federated learning scheme in the server optimization of IGFL. To the best of our knowledge, this is the first work to incorporate attention mechanisms into federated optimization. We conduct extensive experiments and show that IGFL can significantly improve the performance of existing federated learning methods. Especially when the distributions of data among individuals are diverse, IGFL can improve the classification accuracy by about 13% compared with prior baselines.
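One way to picture attention-based server aggregation is a softmax weighting of client models. This is a hypothetical sketch of the general idea only — the abstract does not specify IGFL's actual attention rule, and the distance-based scoring here is an assumption for illustration:

```python
import numpy as np

def attention_aggregate(global_w, client_ws, temperature=1.0):
    """Softmax-attention aggregation on the server: client models that
    lie closer to the current global model receive larger weights.
    A hypothetical illustration of attention-weighted averaging, not
    the exact IGFL rule."""
    dists = np.array([np.linalg.norm(w - global_w) for w in client_ws])
    logits = -dists / temperature
    weights = np.exp(logits - logits.max())   # numerically stable softmax
    weights /= weights.sum()
    return sum(a * w for a, w in zip(weights, client_ws))
```

Compared with the uniform average of FedAvg, such weighting down-weights outlying client models, which is one intuition for why attention can help under heterogeneity.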

AAAI Conference 2021 Conference Paper

Large Motion Video Super-Resolution with Dual Subnet and Multi-Stage Communicated Upsampling

  • Hongying Liu
  • Peng Zhao
  • Zhubo Ruan
  • Fanhua Shang
  • Yuanyuan Liu

Video super-resolution (VSR) aims at restoring a video in low-resolution (LR) and improving it to higher-resolution (HR). Due to the characteristics of video tasks, it is very important that motion information among frames should be well concerned, summarized and utilized for guidance in a VSR algorithm. Especially, when a video contains large motion, conventional methods easily bring incoherent results or artifacts. In this paper, we propose a novel deep neural network with Dual Subnet and Multi-stage Communicated Upsampling (DSMC) for super-resolution of videos with large motion. We design a new module named U-shaped residual dense network with 3D convolution (U3D-RDN) for fine implicit motion estimation and motion compensation (MEMC) as well as coarse spatial feature extraction. And we present a new Multi-Stage Communicated Upsampling (MSCU) module to make full use of the intermediate results of upsampling for guiding the VSR. Moreover, a novel dual subnet is devised to aid the training of our DSMC, whose dual loss helps to reduce the solution space as well as enhance the generalization ability. Our experimental results confirm that our method achieves superior performance on videos with large motion compared to state-of-the-art methods.

AAAI Conference 2021 Conference Paper

Learned Extragradient ISTA with Interpretable Residual Structures for Sparse Coding

  • Yangyang Li
  • Lin Kong
  • Fanhua Shang
  • Yuanyuan Liu
  • Hongying Liu
  • Zhouchen Lin

Recently, the study of the learned iterative shrinkage thresholding algorithm (LISTA) has attracted increasing attention. A large number of experiments as well as some theories have proved the high efficiency of LISTA for solving sparse coding problems. However, existing LISTA methods all use serial connections. To address this issue, we propose a novel extragradient-based LISTA (ELISTA), which has a residual structure and theoretical guarantees. Moreover, most LISTA methods use the soft-thresholding function, which has been found to cause a large estimation bias. Therefore, we propose a new thresholding function for ELISTA instead of soft thresholding. From a theoretical perspective, we prove that our method attains linear convergence. Through ablation experiments, the improvements of our method on the network structure and the thresholding function are verified in practice. Extensive empirical results verify the advantages of our method.
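The soft-thresholding nonlinearity the abstract says causes estimation bias, and the plain ISTA iteration that LISTA-style networks unfold, can be sketched as follows (generic textbook versions, not ELISTA itself):

```python
import numpy as np

def soft_threshold(z, theta):
    """Proximal operator of the l1 norm: shrink every entry toward zero
    by theta. The constant shrinkage of large coefficients is the
    estimation bias the abstract refers to."""
    return np.sign(z) * np.maximum(np.abs(z) - theta, 0.0)

def ista(A, b, lam, steps=500):
    """Plain ISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1, the baseline
    iteration that LISTA-style networks unfold into learned layers."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        x = soft_threshold(x - A.T @ (A @ x - b) / L, lam / L)
    return x
```

LISTA replaces the fixed matrices and thresholds with learned per-layer parameters; ELISTA additionally changes the connection structure and the thresholding function.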