Arrow Research search

Author name cluster

Guowen Yuan

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

5 papers
1 author row

Possible papers

5

AAAI Conference 2023 Conference Paper

CEMA – Cost-Efficient Machine-Assisted Document Annotations

  • Guowen Yuan
  • Ben Kao
  • Tien-Hsuan Wu

We study the problem of semantically annotating textual documents that are complex in the sense that they are long, feature rich, and domain specific. Due to their complexity, such annotation tasks require trained human workers, who are expensive in both time and money. We propose CEMA, a method for deploying machine learning to assist humans in complex document annotation. CEMA estimates the human cost of annotating each document and selects the set of documents to be annotated that strikes the best balance between model accuracy and human cost. We conduct experiments on complex annotation tasks in which we compare CEMA against other document selection and annotation strategies. Our results show that CEMA is the most cost-efficient solution for those tasks.
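The selection step described above can be sketched as a greedy benefit-per-cost rule. This is a simplification: in the paper both the per-document annotation cost and the benefit to the model are estimated by learned components, whereas here they are simply given as inputs.

```python
def select_documents(costs, benefits, budget):
    """Greedy cost-aware selection sketch: pick the documents with the
    best estimated benefit per unit of annotation cost until the budget
    runs out. A hedged illustration, not CEMA's actual estimator."""
    order = sorted(range(len(costs)),
                   key=lambda i: benefits[i] / costs[i], reverse=True)
    chosen, spent = [], 0.0
    for i in order:
        if spent + costs[i] <= budget:
            chosen.append(i)
            spent += costs[i]
    return chosen
```

For example, with costs `[1, 2, 3]`, equal-looking benefits `[3, 2, 3]`, and a budget of 3, the rule spends the budget on the two cheapest high-ratio documents rather than the single expensive one.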

AAAI Conference 2019 Short Paper

Semi-Supervised Feature Selection with Adaptive Discriminant Analysis

  • Weichan Zhong
  • Xiaojun Chen
  • Guowen Yuan
  • Yiqin Li
  • Feiping Nie

In this paper, we propose a novel Adaptive Discriminant Analysis for semi-supervised feature selection, named SADA. Instead of computing fixed similarities before performing feature selection, SADA simultaneously learns an adaptive similarity matrix S and a projection matrix W with an iterative method. In each iteration, S is computed from the distances in the projection given by the learned W, and W is computed with the learned S. SADA can therefore learn a better projection matrix W by weakening the effect of noisy features through the adaptive similarity matrix. Experimental results on 4 data sets show the superiority of SADA over 5 semi-supervised feature selection methods.
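The alternation between S and W can be sketched as follows. This is only an illustration of the alternating scheme: the similarity here is a plain Gaussian kernel on projected distances, and the projection step is a standard Laplacian eigen-problem, not the paper's exact objective.

```python
import numpy as np

def adaptive_similarity_projection(X, k=2, n_iter=5, sigma=1.0):
    """Sketch of SADA-style alternation: S is recomputed from distances
    in the current projection, then W is refit to the current S."""
    n, d = X.shape
    rng = np.random.default_rng(0)
    W = np.linalg.qr(rng.normal(size=(d, k)))[0]   # random orthonormal start
    for _ in range(n_iter):
        # 1) similarity from pairwise distances in the projected space
        Z = X @ W
        D2 = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
        S = np.exp(-D2 / (2.0 * sigma ** 2))
        np.fill_diagonal(S, 0.0)
        # 2) projection that keeps similar points close: smallest
        #    eigenvectors of X^T L X with graph Laplacian L
        L = np.diag(S.sum(axis=1)) - S
        _, V = np.linalg.eigh(X.T @ L @ X)
        W = V[:, :k]
    return S, W
```

Because S is rebuilt from the projected (rather than raw) distances each round, noisy features that W has already down-weighted contribute less to the similarities in the next round.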

AAAI Conference 2018 Short Paper

A Stratified Feature Ranking Method for Supervised Feature Selection

  • Renjie Chen
  • Xiaojun Chen
  • Guowen Yuan
  • Wenya Sun
  • Qingyao Wu

Most feature selection methods select the highest-ranked features, which may be highly correlated with each other. In this paper, we propose a Stratified Feature Ranking (SFR) method for supervised feature selection. In the new method, Subspace Feature Clustering (SFC) is proposed to identify feature clusters, and a stratified ranking scheme is proposed to rank the features such that the highly ranked features have low correlation with one another. Experimental results show the superiority of SFR.
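The stratified idea can be sketched as: cluster correlated features, rank within each cluster, then interleave the per-cluster rankings so adjacent picks come from different groups. Both the clustering (nearest "center" feature by correlation) and the score (absolute correlation with the label) below are crude stand-ins for the paper's subspace feature clustering and ranking.

```python
import numpy as np

def stratified_feature_ranking(X, y, n_clusters=2, seed=0):
    """Sketch of stratified ranking: top-ranked features are drawn
    round-robin from different feature clusters, lowering correlation
    among them."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # crude clustering: assign each feature to its most correlated center
    C = np.abs(np.corrcoef(X.T))
    centers = rng.choice(d, n_clusters, replace=False)
    labels = np.argmax(C[:, centers], axis=1)
    score = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(d)])
    per_cluster = [sorted(np.flatnonzero(labels == c), key=lambda j: -score[j])
                   for c in range(n_clusters)]
    # round-robin: i-th best of each cluster before any (i+1)-th best
    max_len = max(len(g) for g in per_cluster)
    return [g[i] for i in range(max_len) for g in per_cluster if i < len(g)]
```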

AAAI Conference 2018 Short Paper

Discriminative Semi-Supervised Feature Selection via Rescaled Least Squares Regression-Supplement

  • Guowen Yuan
  • Xiaojun Chen
  • Chen Wang
  • Feiping Nie
  • Liping Jing

In this paper, we propose a Discriminative Semi-Supervised Feature Selection (DSSFS) method. In this method, a dragging technique is introduced into Rescaled Least Squares Regression in order to enlarge the distances between different classes. An iterative method is proposed to simultaneously learn the regression coefficients and the ε-draggings matrix while predicting the unknown class labels. Experimental results show the superiority of DSSFS.
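The dragging technique (ε-dragging) can be illustrated as follows: one-hot targets are relaxed so the true-class target may move above 1 and wrong-class targets below 0, which enlarges between-class distances. In the paper the nonnegative magnitudes M are learned jointly with the regression; in this sketch they are derived from given prediction scores, which is an assumption for illustration.

```python
import numpy as np

def epsilon_dragging_targets(Y, scores):
    """Sketch of ε-dragging: relax one-hot targets Y along a fixed drag
    direction B (+1 for the true class, -1 otherwise) by nonnegative
    magnitudes M, producing margin-enlarging regression targets."""
    B = np.where(Y > 0, 1.0, -1.0)           # drag direction per entry
    M = np.maximum(B * (scores - Y), 0.0)    # nonnegative drag magnitudes
    return Y + B * M                          # relaxed regression targets
```

For a correctly over-shot true class (score 1.5) the target becomes 1.5; a wrong-class score of -0.3 becomes a target of -0.3; scores that would shrink the margin leave the one-hot target unchanged.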

IJCAI Conference 2017 Conference Paper

Semi-supervised Feature Selection via Rescaled Linear Regression

  • Xiaojun Chen
  • Guowen Yuan
  • Feiping Nie
  • Joshua Zhexue Huang

With the rapid increase of complex and high-dimensional sparse data, demands for new methods to select features by exploiting both labeled and unlabeled data have increased. Least regression based feature selection methods usually learn a projection matrix and evaluate the importances of features using the projection matrix, which is lack of theoretical explanation. Moreover, these methods cannot find both global and sparse solution of the projection matrix. In this paper, we propose a novel semi-supervised feature selection method which can learn both global and sparse solution of the projection matrix. The new method extends the least square regression model by rescaling the regression coefficients in the least square regression with a set of scale factors, which are used for ranking the features. It has shown that the new model can learn global and sparse solution. Moreover, the introduction of scale factors provides a theoretical explanation for why we can use the projection matrix to rank the features. A simple yet effective algorithm with proved convergence is proposed to optimize the new model. Experimental results on eight real-life data sets show the superiority of the method.