Arrow Research search

Author name cluster

Haiping Lu

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

7 papers
2 author rows

Possible papers (7)

AAAI Conference 2020 · Conference Paper

Side Information Dependence as a Regularizer for Analyzing Human Brain Conditions across Cognitive Experiments

  • Shuo Zhou
  • Wenwen Li
  • Christopher Cox
  • Haiping Lu

The increasing availability of public neuroimaging datasets opens a door to analyzing homogeneous human brain conditions across datasets by transfer learning (TL). However, neuroimaging data are high-dimensional and noisy, with small sample sizes. It is challenging to learn a robust model for data across different cognitive experiments and subjects. A recent TL approach minimizes domain dependence to learn common cross-domain features, via the Hilbert-Schmidt Independence Criterion (HSIC). Inspired by this approach and the multi-source TL theory, we propose a Side Information Dependence Regularization (SIDeR) learning framework for TL in brain condition decoding. Specifically, SIDeR simultaneously minimizes the empirical risk and the statistical dependence on the domain side information, to reduce the theoretical generalization error bound. We construct 17 brain decoding TL tasks using public neuroimaging data for evaluation. Comprehensive experiments validate the superiority of SIDeR over ten competing methods, in particular an average improvement of 15.6% on the TL tasks with multi-source experiments.
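
To make the regularization idea concrete, the sketch below evaluates a toy objective with the same shape as the one described in the abstract: an empirical risk plus an HSIC term measuring dependence between the model output and one-hot domain side information. It is only an illustration on made-up data with hypothetical names (hsic, objective, lam, mu); the actual SIDeR formulation, kernels, and generalization bound are given in the paper.

import numpy as np

def hsic(K, L):
    # Empirical (biased) HSIC estimate between two kernel matrices.
    n = K.shape[0]
    H = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 20))                    # toy features
y = rng.normal(size=60)                          # toy targets
d = np.repeat(np.eye(3), 20, axis=0)             # one-hot side information: 3 domains

def objective(w, lam=1.0, mu=0.1):
    # Empirical risk + dependence on domain side information + ridge term.
    f = X @ w
    risk = np.mean((f - y) ** 2)
    dep = hsic(np.outer(f, f), d @ d.T)          # linear kernels on outputs and domains
    return risk + lam * dep + mu * w @ w

print(objective(np.zeros(20)), objective(rng.normal(size=20)))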

AAAI Conference 2017 · Conference Paper

Bilinear Probabilistic Canonical Correlation Analysis via Hybrid Concatenations

  • Yang Zhou
  • Haiping Lu
  • Yiu-ming Cheung

Canonical Correlation Analysis (CCA) is a classical technique for two-view correlation analysis, while Probabilistic CCA (PCCA) provides a generative and more general viewpoint for this task. Recently, PCCA has been extended to bilinear cases for dealing with two-view matrices in order to preserve and exploit the matrix structures in PCCA. However, existing bilinear PCCAs impose restrictive model assumptions for matrix structure preservation, sacrificing generative correctness or model flexibility. To overcome these drawbacks, we propose BPCCA, a new bilinear extension of PCCA, by introducing a hybrid joint model. Our new model preserves matrix structures indirectly via hybrid vector-based and matrix-based concatenations. This enables BPCCA to gain more model flexibility in capturing two-view correlations and to obtain closed-form solutions in parameter estimation. Experimental results on two real-world applications demonstrate the superior performance of BPCCA over competing methods.
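
For context, the snippet below runs plain (non-probabilistic) CCA on two views of matrix data after vectorizing them, which is exactly the structure-discarding step that bilinear models such as BPCCA aim to avoid. It is a baseline sketch on synthetic data using scikit-learn's CCA, not an implementation of BPCCA or its hybrid concatenations.

import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 8, 6))                  # view 1: 50 paired 8x6 matrices
B = rng.normal(size=(50, 8, 5))                  # view 2: 50 paired 8x5 matrices

# Vectorizing each matrix observation discards its row/column structure.
X = A.reshape(50, -1)
Y = B.reshape(50, -1)

cca = CCA(n_components=3)
cca.fit(X, Y)
Xc, Yc = cca.transform(X, Y)
print([round(np.corrcoef(Xc[:, k], Yc[:, k])[0, 1], 3) for k in range(3)])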

AAAI Conference 2017 · Conference Paper

Multilinear Regression for Embedded Feature Selection with Application to fMRI Analysis

  • Xiaonan Song
  • Haiping Lu

Embedded feature selection is effective when both prediction and interpretation are needed. The Lasso and its extensions are standard methods for selecting a subset of features while optimizing a prediction function. In this paper, we are interested in embedded feature selection for multidimensional data, wherein (1) there is no need to reshape the multidimensional data into vectors and (2) structural information from multiple dimensions is taken into account. Our main contribution is a new method called Regularized multilinear regression and selection (Remurs) for automatically selecting a subset of features while optimizing prediction for multidimensional data. Both the nuclear norm and the 1-norm are carefully incorporated to derive a multi-block optimization algorithm with proven convergence. In particular, Remurs is motivated by fMRI analysis, where the data are multidimensional and it is important to find the connections of raw brain voxels with functional activities. Experiments on synthetic and real data show the advantages of Remurs compared to Lasso, Elastic Net, and their multilinear extensions.
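
The objective described above combines a squared loss with a nuclear-norm and a 1-norm penalty on the coefficients. As a rough illustration, the sketch below writes down that objective for the matrix (2-D) case together with the two proximal operators such multi-block algorithms are typically built from; the function and variable names are hypothetical, and the paper's actual tensor formulation and convergence analysis are not reproduced.

import numpy as np

def soft_threshold(W, t):
    # Proximal operator of the 1-norm: elementwise shrinkage.
    return np.sign(W) * np.maximum(np.abs(W) - t, 0.0)

def svt(W, t):
    # Singular value thresholding: proximal operator of the nuclear norm.
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return U @ np.diag(np.maximum(s - t, 0.0)) @ Vt

def remurs_objective(W, X, y, alpha, beta):
    # Squared loss + nuclear norm + 1-norm for matrix coefficients W,
    # with predictors X of shape (n, p, q) and responses y of shape (n,).
    preds = np.einsum('npq,pq->n', X, W)
    return (0.5 * np.mean((preds - y) ** 2)
            + alpha * np.linalg.norm(W, ord='nuc')
            + beta * np.abs(W).sum())

rng = np.random.default_rng(0)
X, y, W = rng.normal(size=(30, 5, 4)), rng.normal(size=30), rng.normal(size=(5, 4))
print(remurs_objective(W, X, y, alpha=0.1, beta=0.01))
print(remurs_objective(svt(soft_threshold(W, 0.5), 0.5), X, y, alpha=0.1, beta=0.01))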

IJCAI Conference 2016 · Conference Paper

Probabilistic Rank-One Matrix Analysis with Concurrent Regularization

  • Yang Zhou
  • Haiping Lu

As a classical subspace learning method, Probabilistic PCA (PPCA) has been extended to several bilinear variants for dealing with matrix observations. However, they are all based on the Tucker model, leading to a restricted subspace representation and the problem of rotational ambiguity. To address these problems, this paper proposes a bilinear PPCA method named Probabilistic Rank-One Matrix Analysis (PROMA). PROMA is based on the CP model, which leads to a more flexible subspace representation and does not suffer from rotational ambiguity. For better generalization, concurrent regularization is introduced to regularize the whole matrix subspace, rather than column and row factors separately. Experiments on both synthetic and real-world data demonstrate the superiority of PROMA in subspace estimation and classification as well as the effectiveness of concurrent regularization in regularizing bilinear PPCAs.
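
As a very rough picture of what a shared rank-one (CP-style) structure across matrix samples looks like, the snippet below generates matrices X_n = z_n * u v^T + noise and recovers the factors u and v from the row and column second-moment matrices. This moment-based sketch is only illustrative; PROMA's probabilistic model, its estimation procedure, and concurrent regularization are not shown.

import numpy as np

rng = np.random.default_rng(0)
p, q, n = 10, 8, 500
u = rng.normal(size=p); u /= np.linalg.norm(u)
v = rng.normal(size=q); v /= np.linalg.norm(v)
z = rng.normal(size=n)                           # scalar latent variable per sample
X = z[:, None, None] * np.outer(u, v) + 0.1 * rng.normal(size=(n, p, q))

# Row and column second moments concentrate around u u^T and v v^T.
C_row = np.einsum('npq,nrq->pr', X, X) / n
C_col = np.einsum('npq,npr->qr', X, X) / n
u_hat = np.linalg.eigh(C_row)[1][:, -1]
v_hat = np.linalg.eigh(C_col)[1][:, -1]
print(abs(u @ u_hat), abs(v @ v_hat))            # both close to 1: factors recovered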

IJCAI Conference 2015 · Conference Paper

Semi-Orthogonal Multilinear PCA with Relaxed Start

  • Qiquan Shi
  • Haiping Lu

Principal component analysis (PCA) is an unsupervised method for learning low-dimensional features with orthogonal projections. Multilinear PCA methods extend PCA to deal with multidimensional data (tensors) directly via tensor-to-tensor projection or tensor-to-vector projection (TVP). However, under the TVP setting, it is difficult to develop an effective multilinear PCA method with the orthogonality constraint. This paper tackles this problem by proposing a novel Semi-Orthogonal Multilinear PCA (SO-MPCA) approach. SO-MPCA learns low-dimensional features directly from tensors via TVP by imposing the orthogonality constraint in only one mode. This formulation results in more captured variance and more learned features than full orthogonality. For better generalization, we further introduce a relaxed start (RS) strategy to get SO-MPCA-RS by fixing the starting projection vectors, which increases the bias and reduces the variance of the learning model. Experiments on both face (2D) and gait (3D) data demonstrate that SO-MPCA-RS outperforms other competing algorithms on the whole, and the relaxed start strategy is also effective for other TVP-based PCA methods.
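
To illustrate the tensor-to-vector projection (TVP) setting with orthogonality in only one mode, the sketch below greedily extracts a few features u_k^T X_n v_k from matrix (2-D) samples, constraining only the u_k to be mutually orthogonal. It is a simplified, hypothetical variant for matrices; SO-MPCA handles higher-order tensors, and the relaxed-start initialization is not included.

import numpy as np

def semi_orthogonal_tvp(X, n_feats=3, n_iter=20):
    # X: (n, p, q) matrix samples; features are u_k^T X_n v_k, with
    # orthogonality imposed only on the mode-1 vectors u_k.
    n, p, q = X.shape
    Xc = X - X.mean(axis=0)                      # center the samples
    U, V = [], []
    for k in range(n_feats):
        u = np.ones(p) / np.sqrt(p)
        v = np.ones(q) / np.sqrt(q)
        for _ in range(n_iter):
            # Fix u, pick the unit v maximizing the variance of u^T X v.
            A = np.einsum('npq,p->nq', Xc, u)
            v = np.linalg.eigh(A.T @ A)[1][:, -1]
            # Fix v, pick u within the orthogonal complement of earlier u's.
            B = np.einsum('npq,q->np', Xc, v)
            P = np.eye(p)
            for u_prev in U:
                P -= np.outer(u_prev, u_prev)
            C = P @ (B.T @ B) @ P
            u = np.linalg.eigh((C + C.T) / 2)[1][:, -1]
        U.append(u); V.append(v)
    feats = np.stack([np.einsum('npq,p,q->n', Xc, u, v) for u, v in zip(U, V)], axis=1)
    return feats, U, V

rng = np.random.default_rng(0)
feats, U, V = semi_orthogonal_tvp(rng.normal(size=(100, 12, 9)))
print(feats.shape, round(U[0] @ U[1], 6))        # (100, 3) and ~0: the u's are orthogonal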

IJCAI Conference 2013 · Conference Paper

Learning Canonical Correlations of Paired Tensor Sets Via Tensor-to-Vector Projection

  • Haiping Lu

Canonical correlation analysis (CCA) is a useful technique for measuring the relationship between two sets of vector data. For paired tensor data sets, we propose a multilinear CCA (MCCA) method. Unlike existing multilinear variations of CCA, MCCA extracts uncorrelated features under two architectures while maximizing paired correlations. Through a pair of tensor-to-vector projections, one architecture enforces zero correlation within each set while the other enforces zero correlation between different pairs of the two sets. We take a successive and iterative approach to solve the problem. Experiments on matching faces of different poses show that MCCA outperforms CCA and 2D-CCA while using far fewer features. In addition, the fusion of the two architectures leads to performance improvement, indicating complementary information.
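
As a small illustration of the tensor-to-vector projection idea behind MCCA, the sketch below alternates between row and column directions to find one pair of projections u_x^T X_n v_x and u_y^T Y_n v_y with maximal correlation on synthetic matrix data. The helper first_cca_pair and the alternating loop are hypothetical simplifications; MCCA's two zero-correlation architectures and its successive extraction of uncorrelated features are not reproduced here.

import numpy as np

def first_cca_pair(A, B, reg=1e-6):
    # First canonical directions for centered data A (n, da) and B (n, db).
    Caa = A.T @ A / len(A) + reg * np.eye(A.shape[1])
    Cbb = B.T @ B / len(B) + reg * np.eye(B.shape[1])
    Cab = A.T @ B / len(A)
    Wa = np.linalg.inv(np.linalg.cholesky(Caa)).T    # whitening transforms
    Wb = np.linalg.inv(np.linalg.cholesky(Cbb)).T
    U, s, Vt = np.linalg.svd(Wa.T @ Cab @ Wb)
    return Wa @ U[:, 0], Wb @ Vt[0], s[0]

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 9, 7)); X -= X.mean(axis=0)
Y = rng.normal(size=(200, 6, 5)); Y -= Y.mean(axis=0)

ux, vx = np.ones(9) / 3.0, np.ones(7) / np.sqrt(7)
uy, vy = np.ones(6) / np.sqrt(6), np.ones(5) / np.sqrt(5)
for _ in range(10):
    # Fix the column directions, solve a vector CCA for the row directions.
    ux, uy, rho = first_cca_pair(np.einsum('npq,q->np', X, vx),
                                 np.einsum('npq,q->np', Y, vy))
    # Fix the row directions, solve a vector CCA for the column directions.
    vx, vy, rho = first_cca_pair(np.einsum('npq,p->nq', X, ux),
                                 np.einsum('npq,p->nq', Y, uy))
print(round(rho, 3))                                 # paired correlation of the projected feature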