Arrow Research search

Author name cluster

Hong Tao

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

4 papers
1 author row

Possible papers

4

JBHI Journal 2025 Journal Article

Cognitive Load Prediction From Multimodal Physiological Signals Using Multiview Learning

  • Yingxin Liu
  • Yang Yu
  • Hong Tao
  • Zeqi Ye
  • Si Wang
  • Hao Li
  • Dewen Hu
  • Zongtan Zhou

Predicting cognitive load is a crucial issue in the emerging field of human-computer interaction and holds significant practical value, particularly in flight scenarios. Although previous studies have realized efficient cognitive load classification, new research is still needed to adapt the current state-of-the-art multimodal fusion methods. Here, we propose a feature selection framework based on multiview learning to address the challenge of information redundancy and reveal the common physiological mechanisms underlying cognitive load. Specifically, the multimodal signal features [electroencephalogram (EEG), electrodermal activity (EDA), electrocardiogram (ECG), electrooculogram (EOG), and eye movements] at three cognitive load levels were estimated during multiattribute task battery (MATB) tasks performed by 22 healthy participants and fed into a feature selection-multiview classification with cohesion and diversity (FS-MCCD) framework. The optimized feature set was extracted from the original feature set by integrating the weight of each view with the feature weights to formulate the ranking criterion. The cognitive load prediction model, evaluated using real-time classification results, achieved an average accuracy of 81.08% and an average F1-score of 80.94% for three-class classification among 22 participants. Furthermore, the weights of the physiological signal features revealed the physiological mechanisms related to cognitive load. Specifically, heightened cognitive load was linked to amplified δ and θ power in the frontal lobe, reduced α power in the parietal lobe, and an increase in pupil diameter. Thus, the proposed multimodal feature fusion framework emphasizes the effectiveness and efficiency of using these features to predict cognitive load.
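The ranking criterion described above, combining a per-view weight with per-feature weights, can be sketched in a minimal form. This is a hypothetical simplification for illustration, not the paper's actual FS-MCCD objective; the view names, weights, and the product-of-weights score are all assumptions.

```python
import numpy as np

def rank_multiview_features(view_weights, feature_weights):
    """Combine a scalar weight per view with per-feature weights into one
    global ranking (a simplified stand-in for the FS-MCCD criterion).

    view_weights:    dict mapping view name (e.g. 'EEG') -> scalar weight
    feature_weights: dict mapping view name -> 1-D array of feature weights
    Returns a list of (view, feature_index, score), highest score first.
    """
    scored = []
    for view, w_view in view_weights.items():
        for j, w_feat in enumerate(feature_weights[view]):
            # score each feature by its own weight scaled by its view's weight
            scored.append((view, j, w_view * abs(w_feat)))
    return sorted(scored, key=lambda t: t[2], reverse=True)

# toy example with made-up weights for two of the five modalities
ranking = rank_multiview_features(
    {"EEG": 0.5, "EDA": 0.3},
    {"EEG": np.array([0.2, 0.9]), "EDA": np.array([0.7])},
)
```

Selecting the top-k entries of `ranking` would yield the reduced feature set fed to the classifier.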

JMLR Journal 2023 Journal Article

Label Distribution Changing Learning with Sample Space Expanding

  • Chao Xu
  • Hong Tao
  • Jing Zhang
  • Dewen Hu
  • Chenping Hou

With the evolution of data collection methods, label ambiguity has arisen in various applications. How to reduce its uncertainty and leverage its effectiveness is still a challenging task. As two representative types of label ambiguity, Label Distribution Learning (LDL), which annotates each instance with a label distribution, and Emerging New Class (ENC), which focuses on model reuse with new classes, have attracted extensive attention. Nevertheless, in many applications, such as emotion distribution recognition and facial age estimation, we may face a more complicated label ambiguity scenario, i.e., the label distribution changing as the sample space expands owing to a new class. To solve this crucial but rarely studied problem, we propose a new framework named Label Distribution Changing Learning (LDCL) in this paper, together with its theoretical guarantee in the form of a generalization error bound. Our approach expands the sample space by re-scaling the previous distribution and then estimates the emerging label value via a scaling constraint factor. For demonstration, we present two special cases within the framework, together with their optimizations and convergence analyses. Besides evaluating LDCL on most of the 13 existing data sets, we also apply it to emotion distribution recognition. Experimental results demonstrate the effectiveness of our approach in both tackling the label ambiguity problem and estimating facial emotion.
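The core expansion step described above, re-scaling the previous distribution so that room is made for the emerging class, can be sketched as follows. This is an illustrative guess at the mechanism from the abstract alone; the function name and the simple `(1 - v)` scaling rule are assumptions, not the paper's actual scaling constraint factor.

```python
import numpy as np

def expand_label_distribution(dist, new_label_value):
    """Expand a label distribution over k classes to k+1 classes.

    dist:            1-D array of description degrees summing to 1.
    new_label_value: estimated degree of the emerging class, in [0, 1).
    The previous degrees are rescaled by (1 - new_label_value) so the
    expanded distribution still sums to 1.
    """
    scale = 1.0 - new_label_value
    return np.append(dist * scale, new_label_value)

# toy example: two old classes, emerging class estimated at degree 0.2
expanded = expand_label_distribution(np.array([0.6, 0.4]), 0.2)
```

The rescaling preserves the relative proportions among the original labels while keeping the result a valid distribution.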

IJCAI Conference 2019 Conference Paper

Simultaneous Representation Learning and Clustering for Incomplete Multi-view Data

  • Wenzhang Zhuge
  • Chenping Hou
  • Xinwang Liu
  • Hong Tao
  • Dongyun Yi

Incomplete multi-view clustering has attracted extensive attention from diverse fields. Most existing methods factorize data to learn a unified representation linearly. Their performance may degrade when the relations between the unified representation and the data of different views are nonlinear. Moreover, they need post-processing on the unified representations to extract the clustering indicators, which separates the consensus learning from the subsequent clustering. To address these issues, in this paper, we propose a Simultaneous Representation Learning and Clustering (SRLC) method. Concretely, SRLC constructs similarity matrices to measure the relations between pairs of instances, and simultaneously learns low-dimensional representations of the instances present in each view and a common probability label matrix. Thus, the nonlinear information can be reflected by these representations and the clustering results can be obtained from the label matrix directly. An efficient iterative algorithm with guaranteed convergence is presented for optimization. Experiments on several datasets demonstrate the advantages of the proposed approach.
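The per-view similarity matrices SRLC starts from could be built in many ways; a common choice for capturing nonlinear pairwise relations is a Gaussian kernel. The sketch below is only a plausible stand-in, with a hypothetical bandwidth parameter `sigma`, not the construction used in the paper.

```python
import numpy as np

def gaussian_similarity(X, sigma=1.0):
    """Pairwise Gaussian-kernel similarity for one view's present instances.

    X: (n, d) array of the instances observed in this view.
    Returns an (n, n) symmetric matrix with unit diagonal.
    """
    sq = np.sum(X**2, axis=1)
    # squared Euclidean distances via the expansion ||a-b||^2 = ||a||^2 + ||b||^2 - 2 a.b
    d2 = sq[:, None] + sq[None, :] - 2.0 * (X @ X.T)
    return np.exp(-np.maximum(d2, 0.0) / (2.0 * sigma**2))

# toy example: two 1-D instances at distance 1
S = gaussian_similarity(np.array([[0.0], [1.0]]))
```

One such matrix per view would then constrain the low-dimensional representations that SRLC learns jointly with the label matrix.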

AAAI Conference 2018 Conference Paper

Reliable Multi-View Clustering

  • Hong Tao
  • Chenping Hou
  • Xinwang Liu
  • Tongliang Liu
  • Dongyun Yi
  • Jubo Zhu

With the advent of multi-view data, multi-view learning (MVL) has become an important research direction in machine learning. It is usually expected that multi-view algorithms can obtain better performance than merely using a single view. However, previous research has pointed out that sometimes the utilization of multiple views may even deteriorate the performance. This is a stumbling block for the practical use of MVL in real applications, especially for tasks requiring high dependability. Thus, there is a pressing need to design reliable multi-view approaches whose performance is never degraded by exploiting multiple views. This issue is vital but rarely studied. In this paper, we focus on clustering and propose the Reliable Multi-View Clustering (RMVC) method. Based on several candidate multi-view clusterings, RMVC maximizes the worst-case performance gain over the best single-view clustering, which is equivalently expressed as having no label information available. Specifically, employing the squared χ2 distance for clustering comparison makes the formulation of RMVC easy to solve, and an efficient strategy is proposed for optimization. Theoretically, it can be proved that the performance of RMVC will never be significantly decreased under certain assumptions. Experimental results on a number of data sets demonstrate that the proposed method can effectively improve the reliability of multi-view clustering.
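The max-min decision rule at the heart of the abstract, picking the candidate clustering whose worst-case gain over the best single-view baseline is largest, can be sketched in a few lines. The candidate names, the gain numbers, and the enumeration over ground-truth hypotheses are all illustrative assumptions; the actual RMVC formulation optimizes this via the squared χ2 distance rather than by enumeration.

```python
def reliable_choice(candidate_gains):
    """Pick the candidate clustering maximizing the worst-case gain.

    candidate_gains: dict mapping candidate name -> list of performance
    gains over the best single-view clustering, one per plausible
    ground-truth hypothesis (hypothetical numbers).
    Returns the candidate whose minimum (worst-case) gain is largest.
    """
    return max(candidate_gains, key=lambda c: min(candidate_gains[c]))

# candidate A wins big in one scenario but loses badly in another;
# candidate B improves slightly in every scenario, so it is the reliable pick
best = reliable_choice({"A": [0.10, -0.20], "B": [0.05, 0.02]})
```

This captures why such a criterion guards reliability: a candidate that can lose to the single-view baseline under some hypothesis is penalized by exactly that worst case.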