Arrow Research search

Author name cluster

Qing Tao

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

5 papers
1 author row

Possible papers

EAAI Journal 2026 Journal Article

Identifying non-small cell lung cancer subtypes by a hybrid representative causal network with computed tomography images

  • Li Liu
  • Xueying Wang
  • Shanshan Huang
  • Zhengqiao Deng
  • Shu Wang
  • Guang Wu
  • Donglai Yang
  • Sixi Zha

Identifying representative causal features from computed tomography (CT) images remains a significant challenge for the subtype classification of non-small cell lung cancer (NSCLC). Existing methods, whether based on radiomics or deep neural networks, often overlook the intricate causal relationships among features, thereby yielding suboptimal or even detrimental diagnostic outcomes. To bridge this gap, we propose a Hybrid Representative Causal Network (HRCL) for NSCLC subtype identification, which explicitly captures, from a holistic perspective, the local causal relationships inherent in the interaction between radiomics features and deep learning-based features. Specifically, a causal network structure is learned to delineate the unique causal configuration of distinct NSCLC subtypes through a variable number of nodes and links. The resultant network adheres to the causal Markov property, thereby ensuring global consistency of all local cause–effect dependencies. Moreover, a hybrid representative feature selector is designed to identify the most salient causal features from the causal network for precise NSCLC subtype classification. Our method achieves an accuracy of 83.7% on the publicly available P-NSCLC dataset and 90.3% on the privately collected I-NSCLC dataset. The empirical evaluations demonstrate that our model significantly outperforms state-of-the-art methods.
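The abstract's core idea, keeping only features whose association with the label survives conditioning on other features, can be very loosely sketched with a partial-correlation filter. This is an illustrative toy in plain NumPy, not the paper's HRCL; the function names, the 0.2 threshold, and the synthetic data are all invented for the sketch:

```python
import numpy as np

def partial_corr(x, y, z):
    """Correlation of x and y after regressing out z
    (a crude conditional-independence proxy)."""
    z = np.column_stack([z, np.ones(len(z))])
    rx = x - z @ np.linalg.lstsq(z, x, rcond=None)[0]
    ry = y - z @ np.linalg.lstsq(z, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

def select_causal_features(X, y, threshold=0.2):
    """Keep features whose link to y survives conditioning on every
    other single feature (a toy local cause-effect filter)."""
    n, d = X.shape
    keep = []
    for j in range(d):
        pcs = [abs(partial_corr(X[:, j], y, X[:, [k]]))
               for k in range(d) if k != j]
        if min(pcs) > threshold:
            keep.append(j)
    return keep

# Toy data: feature 0 drives y; feature 1 is a noisy copy of feature 0,
# so its link to y vanishes once feature 0 is conditioned on; feature 2
# is pure noise.
rng = np.random.default_rng(0)
f0 = rng.normal(size=500)
X = np.column_stack([f0,
                     f0 + rng.normal(scale=2.0, size=500),
                     rng.normal(size=500)])
y = f0 + 0.1 * rng.normal(size=500)
selected = select_causal_features(X, y)
```

Only the genuinely causal feature should survive the filter; the spurious copy is screened out exactly because its effect disappears under conditioning.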

AAAI Conference 2023 Conference Paper

Structured BFGS Method for Optimal Doubly Stochastic Matrix Approximation

  • Dejun Chu
  • Changshui Zhang
  • Shiliang Sun
  • Qing Tao

The doubly stochastic matrix plays an essential role in several areas, such as statistics and machine learning. In this paper we consider the optimal approximation of a square matrix in the set of doubly stochastic matrices. A structured BFGS method is proposed to solve the dual of the primal problem. The resulting algorithm builds curvature information into the diagonal components of the true Hessian, so that it takes only linear additional cost to obtain the descent direction from the gradient information, without having to explicitly store the inverse Hessian approximation. This cost is substantially lower than the quadratic complexity of the classical BFGS algorithm. Meanwhile, a Newton-based line search method is presented for finding a suitable step size, which in practice exploits existing knowledge and typically takes only one iteration. The global convergence of our algorithm is established. We verify the advantages of our approach on both synthetic and real data sets. The experimental results demonstrate that our algorithm outperforms state-of-the-art solvers and enjoys outstanding scalability.
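As a point of reference for the problem setting (not the paper's structured BFGS solver), the classical Sinkhorn–Knopp iteration produces a doubly stochastic matrix from a positive matrix by alternating row and column normalization. Note the caveat: this yields a diagonal scaling of the input, not the Frobenius-nearest doubly stochastic matrix that the paper optimizes:

```python
import numpy as np

def sinkhorn_knopp(A, n_iter=1000, tol=1e-10):
    """Alternately normalize rows and columns of a positive matrix
    until it is (numerically) doubly stochastic."""
    M = np.asarray(A, dtype=float).copy()
    for _ in range(n_iter):
        M /= M.sum(axis=1, keepdims=True)   # rows sum to 1
        M /= M.sum(axis=0, keepdims=True)   # columns sum to 1
        if (np.abs(M.sum(axis=1) - 1).max() < tol and
                np.abs(M.sum(axis=0) - 1).max() < tol):
            break
    return M

rng = np.random.default_rng(1)
A = rng.uniform(0.1, 1.0, size=(5, 5))
D = sinkhorn_knopp(A)
```

Each full pass costs O(n^2), which hints at why specialized methods with near-linear per-iteration overhead, as the abstract claims for the structured BFGS direction, are attractive at scale.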

AAAI Conference 2021 Conference Paper

Gradient Descent Averaging and Primal-dual Averaging for Strongly Convex Optimization

  • Wei Tao
  • Wei Li
  • Zhisong Pan
  • Qing Tao

Averaging schemes have attracted extensive attention in deep learning as well as in traditional machine learning. They achieve theoretically optimal convergence and also improve empirical model performance. However, there is still a lack of sufficient convergence analysis for strongly convex optimization. Typically, the convergence of the last iterate of gradient descent methods, referred to as individual convergence, fails to attain optimality due to the existence of a logarithmic factor. To remove this factor, we first develop gradient descent averaging (GDA), a general projection-based dual averaging algorithm for the strongly convex setting. We further present primal-dual averaging for strongly convex cases (SC-PDA), where primal and dual averaging schemes are utilized simultaneously. We prove that GDA yields the optimal convergence rate in terms of output averaging, while SC-PDA achieves the optimal individual convergence. Several experiments on SVMs and deep learning models validate the correctness of the theoretical analysis and the effectiveness of the algorithms.
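The general averaging idea can be illustrated with a minimal sketch: gradient descent on a strongly convex quadratic with a decaying step size, maintaining a weighted running average of the iterates. This is a generic textbook illustration of iterate averaging, not the paper's GDA or SC-PDA algorithms; the step-size rule and the linear weights are common choices assumed for the sketch:

```python
import numpy as np

def gd_with_averaging(grad, x0, mu, n_iter=2000):
    """Gradient descent with step size ~ 1/(mu*t) and an incrementally
    maintained weighted average of the iterates (weight t+1)."""
    x = x0.copy()
    avg = np.zeros_like(x0)
    wsum = 0.0
    for t in range(n_iter):
        eta = 1.0 / (mu * (t + 2))       # decaying strongly convex step size
        x = x - eta * grad(x)
        w = t + 1.0
        wsum += w
        avg += (w / wsum) * (x - avg)    # running weighted average
    return x, avg

# Toy strongly convex objective: f(x) = 0.5 * x^T A x - b^T x
A = np.diag([1.0, 3.0])
b = np.array([1.0, 2.0])
x_star = np.linalg.solve(A, b)           # exact minimizer
grad = lambda x: A @ x - b
last, avg = gd_with_averaging(grad, np.zeros(2), mu=1.0)
```

Both the last iterate and the weighted average approach the minimizer; the paper's contribution concerns exactly which of these two sequences attains the optimal rate, and at what cost.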

NeurIPS Conference 2010 Conference Paper

Avoiding False Positive in Multi-Instance Learning

  • Yanjun Han
  • Qing Tao
  • Jue Wang

In multi-instance learning, there are two kinds of prediction failure, i.e., false negatives and false positives. Current research mainly focuses on avoiding the former. We attempt to utilize the geometric distribution of instances inside positive bags to avoid both. Based on kernel principal component analysis, we define a projection constraint for each positive bag that classifies its constituent instances far away from the separating hyperplane while placing positive and negative instances on opposite sides. We apply the Constrained Concave-Convex Procedure to solve the resulting problem. Empirical results demonstrate that our approach offers improved generalization performance.
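The per-bag constraint builds on kernel PCA. A minimal sketch of that ingredient (illustrative only; the RBF kernel, the gamma value, and the synthetic bag are assumptions, and this is not the paper's CCCP formulation) projects one bag's instances onto the bag's leading kernel principal component, along which well-separated instance groups fall on opposite sides:

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.5):
    """Gaussian RBF kernel matrix between row sets X and Y."""
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kpca_project(X, n_components=1, gamma=0.5):
    """Project the instances of one bag onto the bag's leading
    kernel principal components."""
    K = rbf_kernel(X, X, gamma)
    n = len(X)
    J = np.eye(n) - np.full((n, n), 1.0 / n)
    Kc = J @ K @ J                       # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(Kc)
    idx = np.argsort(vals)[::-1][:n_components]
    alphas = vecs[:, idx] / np.sqrt(np.maximum(vals[idx], 1e-12))
    return Kc @ alphas                   # instance projections

# Toy positive bag with two instance clusters; the leading component
# should separate them, mimicking the role of the projection constraint.
rng = np.random.default_rng(2)
bag = np.vstack([rng.normal(0.0, 0.3, (10, 2)),
                 rng.normal(3.0, 0.3, (10, 2))])
proj = kpca_project(bag)[:, 0]
```

In the paper's setting this geometric spread inside a positive bag is what the constraint exploits to keep likely-positive and likely-negative instances on opposite sides of the hyperplane.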

IS Journal 2008 Journal Article

Machine Learning: The State of the Art

  • Jue Wang
  • Qing Tao

The two fundamental problems in machine learning (ML) are statistical analysis and algorithm design. The former tells us the principles of the mathematical models that we establish from observed data. The latter defines the conditions on which the implementation of data models and data sets relies. A newly discovered challenge to ML is the Rashomon effect, which means that data are possibly generated from a mixture of heterogeneous sources. A simple classification standard can shed light on emerging forms of ML. This article is part of a special issue on AI in China.