Arrow Research search

Author name cluster

Qiang Gao

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

22 papers
1 author row

Possible papers

22

AAAI Conference 2026 Conference Paper

Beyond Graph Priors: A Co-Evolving Framework Under Uncertainty for Enterprise Resilience Assessment

  • Yanzhe Xie
  • Li Huang
  • Qiang Gao
  • Xueqin Chen
  • Fan Zhou
  • Kunpeng Zhang

Assessing enterprise resilience under uncertainty necessitates capturing both intrinsic attributes and evolving inter-enterprise dependencies. However, real-world enterprise systems pose substantial structural challenges: redundant or loosely correlated links can trigger spurious relational inferences, while missing or latent dependencies often hinder the propagation of informative signals. Moreover, most existing approaches adopt static graph priors or decouple structural refinement from semantic learning, lacking a co-evolutionary paradigm that allows structure and representation to inform one another. We propose CFU, a novel Co-evolving Framework under Uncertainty, which reconceptualizes graph structure as a dynamic and learnable component evolving alongside node semantics. Specifically, CFU begins with a structure-aware contrastive pretraining phase to distill latent relational semantics without supervision. It then performs bidirectional structural refinement, filtering structurally redundant edges through semantic agreement scoring, and uncovering temporally contingent, task-relevant dependencies via similarity-guided inference. These operations are integrated through a dynamic fusion procedure that continuously aligns the evolving topology with the resilience objective. By embedding structural adaptation within the learning loop, CFU enables context-aware resilience assessment across incomplete, ambiguous, and structurally volatile enterprise environments. Ultimately, extensive experiments conducted on real-world datasets demonstrate its superior performance across diverse evaluation scenarios.

AAAI Conference 2026 Conference Paper

Shedding the Facades, Connecting the Domains: Detecting Shifting Multimodal Hate Video with Test-Time Adaptation

  • Jiao Li
  • Jian Lang
  • Xikai Tang
  • Wenzheng Shu
  • Ting Zhong
  • Qiang Gao
  • Yong Wang
  • Leiting Chen

Hate Video Detection (HVD) is crucial for online ecosystems. Existing methods assume identical distributions between training (source) and inference (target) data. However, hateful content often evolves into irregular and ambiguous forms to evade censorship, resulting in substantial semantic drift and rendering previously trained models ineffective. Test-Time Adaptation (TTA) offers a solution by adapting models during inference to narrow the cross-domain gap, but conventional TTA methods target mild distribution shifts and struggle with the severe semantic drift in HVD. To tackle these challenges, we propose SCANNER, the first TTA framework tailored for HVD. Motivated by the insight that, despite the evolving nature of hateful manifestations, their underlying cores remain largely invariant (i.e., targeting is still based on characteristics like gender, race, etc.), we leverage these stable cores as a bridge to connect the source and target domains. Specifically, SCANNER initially reveals the stable cores from the ambiguous layout in evolving hateful content via a principled centroid-guided alignment mechanism. To alleviate the impact of outlier-like samples that are weakly correlated with centroids during the alignment process, SCANNER enhances the prior by incorporating a sample-level adaptive centroid alignment strategy, promoting more stable adaptation. Furthermore, to mitigate semantic collapse from overly uniform outputs within clusters, SCANNER introduces an intra-cluster diversity regularization that encourages cluster-wise semantic richness. Experiments show that SCANNER outperforms all baselines, with an average gain of 4.69% in Macro-F1 over the best.

AAAI Conference 2025 Conference Paper

Adversity-aware Few-shot Named Entity Recognition via Augmentation Learning

  • Li Huang
  • Haowen Liu
  • Qiang Gao
  • Jiajing Yu
  • Guisong Liu
  • Xueqin Chen

Few-shot Named Entity Recognition (NER) targets the tagging of novel entity types in data-limited or lower-resource settings. Advances with Pre-trained Language Models (PLMs), including BERT, GPT, and their variants, have driven numerous strategies that leverage context-dependent representations and exploit predefined relational cues, yielding significant gains in recognizing unseen entities. Nevertheless, a fundamental issue in prior efforts is their susceptibility to adversarial attacks in intricate semantic environments. This vulnerability undermines the robustness of semantic representations, exacerbating the challenge of accurate entity identification, especially when transitioning across domains. To this end, we propose an Adversity-aware Augment Learning (AAL) solution for the few-shot NER task, dedicated to retrieving and reinforcing entity prototypes resilient to adversarial inference, thereby enhancing cross-domain semantic coherence. In particular, AAL employs a two-stage paradigm consisting of training and fine-tuning. The process begins with augmentation learning that leverages two kinds of prompt learning schemes, then identifies prototypes in a variational manner. Furthermore, we devise a domain-oriented prototype refinement to optimize prototype learning under conditions of uncertainty attack, facilitating the effective transfer of common knowledge from source to target domains. The experimental results, encompassing few-shot NER datasets under both certainty and uncertainty conditions, affirm the superiority of the proposed AAL over several representative baselines, particularly its capability against adversarial attacks.

TIST Journal 2025 Journal Article

MGRL4RE: A Multi-Graph Representation Learning Approach for Urban Region Embedding

  • Meng Chen
  • Zechen Li
  • Hongwei Jia
  • Xin Shao
  • Jun Zhao
  • Qiang Gao
  • Min Yang
  • Yilong Yin

Using multi-modal data to learn region representations has gained popularity for its ability to reveal diverse socioeconomic features in cities. However, many studies focus solely on semantic features from points-of-interest (POIs), neglecting the issue of spatial imbalance. This article introduces a Multi-Graph Representation Learning framework for Region Embedding (MGRL4RE), which leverages both inter-region and intra-region correlations through two main components: multi-graph construction based on various region correlations and multi-graph representation learning. The construction module creates a multi-graph reflecting various correlations among regions, utilizing geo-tagged POIs, region data, and human mobility data. Specifically, we assess a region’s importance relative to its spatial context (neighborhood) and develop spatially invariant semantic features to address spatial imbalance. Furthermore, the representation learning module generates comprehensive and effective region representations via multi-view embedding fusion. Our extensive experiments across various downstream tasks, including land use clustering, region popularity prediction, and crime prediction, confirm that our model significantly outperforms existing state-of-the-art region embedding methods.

AAAI Conference 2025 Conference Paper

Responsive Dynamic Graph Disentanglement for Metro Flow Forecasting

  • Qiang Gao
  • Zizheng Wang
  • Li Huang
  • Goce Trajcevski
  • Guisong Liu
  • Xueqin Chen

The metro flow in Urban Rail Transit Systems (URTS) differs from other urban traffic flows because it is characterized by: (1) highly predetermined scheduling; and (2) interactively dynamic dependencies over the fixed physical infrastructure that vary with spatiotemporal and environmental factors. Notwithstanding the advances in graph neural networks, existing efforts fail to fully capture the characteristics and complex spatiotemporal dynamics specific to metro flow, as the innate graph-aware interactions underlying a metro flow are frequently affected by an amalgamation of: intrinsic connectivity, environmental associations, and flow-activated correlation, which usually dynamically evolve over time while containing redundant signals. We propose ReDyNet, a novel Responsive Dynamic Graph Neural Network to accurately understand the spatiotemporal dynamics of metro flow and external factors. Specifically, it employs a responsive mechanism that adapts to variations in metro flow and external influences, ensuring the construction of an appropriate dynamic graph. In addition, ReDyNet follows the merits of information bottleneck (IB) theory with redundancy disentanglement to enhance the clarity and precision of contextual spatial signals. Our experiments conducted on three real-world metro passenger flow datasets demonstrate that the proposed ReDyNet outperforms several representative baselines.

AAAI Conference 2024 Short Paper

Disentanglement-Guided Spatial-Temporal Graph Neural Network for Metro Flow Forecasting (Student Abstract)

  • Jinyu Hong
  • Ping Kuang
  • Qiang Gao
  • Fan Zhou

In recent intelligent transportation applications, metro flow forecasting has received much attention from researchers. Most prior efforts endeavor to explore spatial or temporal dependencies while ignoring the key characteristic patterns underlying historical flows, e.g., trend and periodicity. Although multi-granularity distillation or spatial dependency correlation can promote flow estimation, the potential noise and spatial dynamics remain under-explored. To this end, we propose a novel Disentanglement-Guided Spatial-Temporal Graph Neural Network (DGST) to address the above concerns. It contains a Disentanglement Pre-training procedure for characteristic pattern disentanglement learning, a Characteristic Pattern Prediction module for exploring different future characteristics, and a Spatial-Temporal Correlation module for spatial-temporal dynamic learning. Experiments on a real-world dataset demonstrate the superiority of DGST.

IJCAI Conference 2024 Conference Paper

Enhancing Fine-Grained Urban Flow Inference via Incremental Neural Operator

  • Qiang Gao
  • Xiaolong Song
  • Li Huang
  • Goce Trajcevski
  • Fan Zhou
  • Xueqin Chen

Fine-grained urban flow inference (FUFI), which involves inferring fine-grained flow maps from their coarse-grained counterparts, is of tremendous interest in the realm of sustainable urban traffic services. To address FUFI, existing solutions mainly concentrate on investigating spatial dependencies, introducing external factors, reducing excessive memory costs, etc., while rarely considering the catastrophic forgetting (CF) problem. Motivated by recent operator learning, we present an Urban Neural Operator solution with Incremental learning (UNOI), primarily seeking to learn grained-invariant solutions for FUFI in addition to addressing CF. Specifically, we devise an urban neural operator (UNO) in UNOI that learns mappings between approximation spaces by treating the different-grained flows as continuous functions, allowing a more flexible capture of spatial correlations. Furthermore, the phenomenon of CF behind time-related flows could hinder the capture of flow dynamics. Thus, UNOI mitigates CF concerns as well as privacy issues by placing UNO blocks in two incremental settings, i.e., flow-related and task-related. Experimental results on large-scale real-world datasets demonstrate the superiority of our proposed solution against the baselines.

TIST Journal 2024 Journal Article

Inferring Real Mobility in Presence of Fake Check-ins Data

  • Qiang Gao
  • Hongzhu Fu
  • Kunpeng Zhang
  • Goce Trajcevski
  • Xu Teng
  • Fan Zhou

Understanding human mobility has become an important aspect of location-based services in tasks such as personalized recommendation and individual moving pattern recognition, enabled by the large volumes of data from geo-tagged social media (GTSM). Prior studies mainly focus on analyzing human historical footprints collected by GTSM and assuming the veracity of the data, which need not hold when some users are not willing to share their real footprints due to privacy concerns—thereby affecting reliability/authenticity. In this study, we address the problem of Inferring Real Mobility (IRMo) of users, from their unreliable historical traces. Tackling IRMo is a non-trivial task due to the: (1) sparsity of check-in data; (2) suspicious counterfeit check-in behaviors; and (3) unobserved dependencies in human trajectories. To address these issues, we develop a novel Graph-enhanced Attention model called IRMoGA, which attempts to capture underlying mobility patterns and check-in correlations by exploiting the unreliable spatio-temporal data. Specifically, we incorporate the attention mechanism (rather than solely relying on traditional recursive models) to understand the regularity of human mobility, while employing a graph neural network to understand the mutual interactions from human historical check-ins and leveraging prior knowledge to alleviate the inferring bias. Our experiments conducted on four real-world datasets demonstrate the superior performance of IRMoGA over several state-of-the-art baselines, e.g., up to 39.16% improvement regarding the Recall score on Foursquare.

AAAI Conference 2024 Short Paper

Spatial-Temporal Augmentation for Crime Prediction (Student Abstract)

  • Hongzhu Fu
  • Fan Zhou
  • Qing Guo
  • Qiang Gao

Crime prediction stands as a pivotal concern within the realm of urban management due to its potential threats to public safety. While prior research has predominantly focused on unraveling the intricate dependencies among urban regions and temporal dynamics, the challenges posed by the scarcity and uncertainty of historical crime data have not been thoroughly investigated. This study introduces an innovative spatial-temporal augmented learning framework for crime prediction, namely STAug. In STAug, we devise a CrimeMix to improve generalization ability. Furthermore, we harness spatial-temporal aggregation to capture and incorporate multiple correlations covering the temporal, spatial, and crime-type aspects. Experiments on two real-world datasets underscore the superiority of STAug over several baselines.

JBHI Journal 2023 Journal Article

A Novel Domain Adversarial Networks Based on 3D-LSTM and Local Domain Discriminator for Hearing-Impaired Emotion Recognition

  • Zekun Tian
  • Dahua Li
  • Yi Yang
  • Fazheng Hou
  • Zhiyi Yang
  • Yu Song
  • Qiang Gao

Recent research on emotion recognition suggests that deep network-based adversarial learning has the ability to solve the cross-subject problem of emotion recognition. This study constructed a hearing-impaired electroencephalography (EEG) emotion dataset containing three emotions (positive, neutral, and negative) from 15 subjects. An emotional domain adversarial neural network (EDANN) was employed to identify hearing-impaired subjects' emotions by learning hidden emotion information between labeled and unlabeled data. For the input data, we propose a spatial filter matrix to reduce overfitting on the training data. A feature extraction network, 3DLSTM-ConvNET, was used to extract comprehensive emotional information from the time, frequency, and spatial dimensions. Moreover, an emotion local domain discriminator and an emotion film-group local domain discriminator were added to reduce the distribution distance between the same kinds of emotions and different film groups, respectively. According to the experimental results, the average subject-dependent accuracy is 0.984 (STD: 0.011), and the average subject-independent accuracy is 0.679 (STD: 0.140). In addition, by analyzing the discrimination characteristics, we found that the brain regions associated with emotion recognition in the hearing-impaired are distributed across wider areas of the parietal and occipital lobes, which may be caused by visual processing.

AAAI Conference 2023 Short Paper

Cross-Regional Fraud Detection via Continual Learning (Student Abstract)

  • Yujie Li
  • Yuxuan Yang
  • Qiang Gao
  • Xin Yang

Detecting fraud is an urgent task for avoiding transaction risks. Especially when expanding a business to new cities or new countries, developing a totally new model incurs extra cost and results in forgetting previous knowledge. This study proposes a novel solution based on heterogeneous trade graphs, namely HTG-CFD, to prevent knowledge forgetting in cross-regional fraud detection. Specifically, a novel heterogeneous trade graph is meticulously constructed from original transactions to explore the complex semantics among different types of entities and relationships. Motivated by continual learning, we present a practical and task-oriented forgetting prevention method to alleviate knowledge forgetting in the context of cross-regional detection. Extensive experiments demonstrate that HTG-CFD promotes performance in both cross-regional and single-regional scenarios.

NeurIPS Conference 2023 Conference Paper

Enhancing Knowledge Transfer for Task Incremental Learning with Data-free Subnetwork

  • Qiang Gao
  • Xiaojun Shan
  • Yuchen Zhang
  • Fan Zhou

As competitive subnetworks exist within a dense network, in line with the Lottery Ticket Hypothesis, we introduce a novel neuron-wise task incremental learning method, namely Data-free Subnetworks (DSN), which attempts to enhance elastic knowledge transfer across sequentially arriving tasks. Specifically, DSN primarily seeks to transfer knowledge to the newly arriving task from the learned tasks by selecting the affiliated weights of a small set of neurons to be activated, including neurons reused from prior tasks via neuron-wise masks. It also transfers possibly valuable knowledge back to earlier tasks via data-free replay. Notably, DSN inherently relieves catastrophic forgetting as well as the unavailability of past data and possible privacy concerns. Comprehensive experiments conducted on four benchmark datasets demonstrate the effectiveness of the proposed DSN in the context of task-incremental learning by comparing it to several state-of-the-art baselines. In particular, DSN enables knowledge transfer to earlier tasks, which is often overlooked by prior efforts.

AAAI Conference 2023 Short Paper

Mobility Prediction via Sequential Trajectory Disentanglement (Student Abstract)

  • Jinyu Hong
  • Fan Zhou
  • Qiang Gao
  • Ping Kuang
  • Kunpeng Zhang

Accurately predicting human mobility is a critical task in location-based recommendation. Most prior approaches focus on fusing multiple semantic trajectories to forecast the future movement of people, and fail to consider the distinct relations in the underlying context of human mobility, resulting in a narrow perspective on human motions. Inspired by recent advances in disentanglement learning, we propose a novel self-supervised method called SelfMove for next POI prediction. SelfMove seeks to disentangle the potential time-invariant and time-varying factors from massive trajectories, which provides an interpretable view to understand the complex semantics underlying human mobility representations. To address the data sparsity issue, we present two realistic trajectory augmentation approaches to help understand the intrinsic periodicity and constantly changing intents of humans. In addition, a POI-centric graph structure is proposed to explore both homogeneous and heterogeneous collaborative signals behind historical trajectories. Experiments on two real-world datasets demonstrate the superiority of SelfMove compared to the state-of-the-art baselines.

IJCAI Conference 2023 Conference Paper

Open Anomalous Trajectory Recognition via Probabilistic Metric Learning

  • Qiang Gao
  • Xiaohan Wang
  • Chaoran Liu
  • Goce Trajcevski
  • Li Huang
  • Fan Zhou

Typically, trajectories considered anomalous are the ones deviating from usual (e.g., traffic-dictated) driving patterns. However, this closed-set context fails to recognize unknown anomalous trajectories, resulting in an insufficient self-motivated learning paradigm. In this study, we investigate the novel Anomalous Trajectory Recognition problem in an Open-world scenario (ATRO) and introduce a novel probabilistic Metric learning model, namely ATROM, to address it. Specifically, ATROM can detect the presence of unknown anomalous behavior in addition to identifying known behavior. It has a Mutual Interaction Distillation that uses contrastive metric learning to explore the interactive semantics regarding diverse behavioral intents and a Probabilistic Trajectory Embedding that forces trajectories with distinct behaviors to follow different Gaussian priors. More importantly, ATROM offers a probabilistic metric rule to discriminate between known and unknown behavioral patterns by taking advantage of the approximation of multiple priors. Experimental results on two large-scale trajectory datasets demonstrate the superiority of ATROM in addressing both known and unknown anomalous patterns.

JBHI Journal 2023 Journal Article

SECT: A Method of Shifted EEG Channel Transformer for Emotion Recognition

  • Zhongli Bai
  • Fazheng Hou
  • Kaixuan Sun
  • Qingzhou Wu
  • Mu Zhu
  • Zemin Mao
  • Yu Song
  • Qiang Gao

Recently, electroencephalographic (EEG) emotion recognition has attracted attention in the field of human-computer interaction (HCI). However, most existing EEG emotion datasets primarily consist of data from normal human subjects. To enhance diversity, this study collected EEG signals from 30 hearing-impaired subjects while they watched video clips displaying six different emotions (happiness, inspiration, neutral, anger, fear, and sadness). The frequency-domain feature matrix of the EEG signals, which comprises power spectral density (PSD) and differential entropy (DE), was up-sampled using cubic spline interpolation to capture the correlation among different channels. To select emotion representation information from both global and localized brain regions, a novel method called Shifted EEG Channel Transformer (SECT) was proposed. The SECT method consists of two layers: the first layer utilizes the traditional channel Transformer (CT) structure to process information from global brain regions, while the second layer acquires localized information from centrally symmetrical and reorganized brain regions via a shifted channel Transformer (S-CT). We conducted a subject-dependent experiment, and the accuracy of the PSD and DE features reached 82.51% and 84.76%, respectively, for six-class emotion classification. Moreover, subject-independent experiments were conducted on public datasets, yielding accuracies of 85.43% (3-class, SEED), 66.83% (2-class on Valence, DEAP), and 65.31% (2-class on Arousal, DEAP).

AAAI Conference 2022 Conference Paper

Dynamic Manifold Learning for Land Deformation Forecasting

  • Fan Zhou
  • Rongfan Li
  • Qiang Gao
  • Goce Trajcevski
  • Kunpeng Zhang
  • Ting Zhong

Landslides refer to occurrences of massive ground movements due to geological (and meteorological) factors, and can have a disastrous impact on property and the economy, and even lead to loss of life. The advances of remote sensing provide accurate and continuous terrain monitoring, enabling the study and analysis of land deformation which, in turn, can be used for possible landslide forecasting. Prior studies either rely on independent observations for displacement prediction or model static land characteristics without considering the subtle interactions between different locations and the dynamic changes of the surface conditions. We present DyLand – Dynamic Manifold Learning with Normalizing Flows for Land deformation prediction – a novel framework for learning dynamic structures of the terrain surface and improving the performance of land deformation prediction. DyLand models the spatial connections of InSAR measurements and estimates conditional distributions of deformations on the terrain manifold with a novel normalizing flow-based method. Instead of modeling only stable terrains, it incorporates surface permutations and captures the innate dynamics of the land surface while allowing for tractable likelihood estimates on the manifold. Our extensive evaluations on curated InSAR datasets from continuous monitoring of slopes prone to landslides show that DyLand outperforms existing benchmarking models.

JBHI Journal 2022 Journal Article

Investigating of Deaf Emotion Cognition Pattern By EEG and Facial Expression Combination

  • Yi Yang
  • Qiang Gao
  • Yu Song
  • Xiaolin Song
  • Zemin Mao
  • Junjie Liu

With the development of sensor technology and learning algorithms, multimodal emotion recognition has attracted widespread attention. Many existing studies on emotion recognition have focused mainly on normal people. Besides, due to hearing loss, deaf people cannot express emotions in words and so may have a greater need for emotion recognition. In this paper, a deep belief network (DBN) was utilized to classify three categories of emotion through electroencephalograph (EEG) signals and facial expressions. Signals from 15 deaf subjects were recorded while they watched emotional movie clips. Our system uses a 1-s window without overlap to segment the EEG signals in five frequency bands, and then the differential entropy (DE) feature is extracted. The DE features of EEG and facial expression images serve as multimodal input for subject-dependent emotion recognition. To avoid feature redundancy, the top 12 major EEG electrode channels (FP2, FP1, FT7, FPZ, F7, T8, F8, CB2, CB1, FT8, T7, TP8) in the gamma band and 30 facial expression features (the areas around the eyes and eyebrows) were selected by the largest weight values. The results show that the classification accuracy is 99.92% with feature selection in deaf emotion recognition. Moreover, investigations of brain activity reveal that deaf brain activity changes mainly in the beta and gamma bands, and that the brain regions affected by emotions are mainly distributed in the prefrontal and outer temporal lobes.

IJCAI Conference 2019 Conference Paper

InteractionNN: A Neural Network for Learning Hidden Features in Sparse Prediction

  • Xiaowang Zhang
  • Qiang Gao
  • Zhiyong Feng

In this paper, we present a neural network (InteractionNN) for sparse predictive analysis, where hidden features of sparse data can be learned by multilevel feature interaction. To characterize the multilevel interaction of features, InteractionNN consists of three modules, namely, nonlinear interaction pooling, layer-lossing, and embedding. Nonlinear interaction pooling (NI pooling) is a hierarchical structure that, by shortcut connection, constructs low-level feature interactions from basic dense features to elementary features. Layer-lossing is a feed-forward neural network where high-level feature interactions can be learned from low-level feature interactions via the correlation of all layers with the target. Moreover, the embedding module extracts basic dense features from the sparse features of data, which helps reduce the computational complexity of our proposed model. Finally, we evaluate InteractionNN on two benchmark datasets, and the experimental results show that it performs better than most state-of-the-art models in sparse regression.

IJCAI Conference 2018 Conference Paper

Trajectory-User Linking via Variational AutoEncoder

  • Fan Zhou
  • Qiang Gao
  • Goce Trajcevski
  • Kunpeng Zhang
  • Ting Zhong
  • Fengli Zhang

Trajectory-User Linking (TUL) is an essential task in Geo-tagged social media (GTSM) applications, enabling personalized Point of Interest (POI) recommendation and activity identification. Existing works on mining mobility patterns often model trajectories using Markov Chains (MC) or recurrent neural networks (RNN) -- either assuming independence between non-adjacent locations or following a shallow generation process. However, most of them ignore the fact that human trajectories are often sparse, high-dimensional and may contain embedded hierarchical structures. We tackle the TUL problem with a semi-supervised learning framework, called TULVAE (TUL via Variational AutoEncoder), which learns the human mobility in a neural generative architecture with stochastic latent variables that span hidden states in RNN. TULVAE alleviates the data sparsity problem by leveraging large-scale unlabeled data and represents the hierarchical and structural semantics of trajectories with high-dimensional latent variables. Our experiments demonstrate that TULVAE improves efficiency and linking performance in real GTSM datasets, in comparison to existing methods.

IJCAI Conference 2017 Conference Paper

Identifying Human Mobility via Trajectory Embeddings

  • Qiang Gao
  • Fan Zhou
  • Kunpeng Zhang
  • Goce Trajcevski
  • Xucheng Luo
  • Fengli Zhang

Understanding human trajectory patterns is an important task in many location-based social networks (LBSNs) applications, such as personalized recommendation and preference-based route planning. Most of the existing methods classify a trajectory (or its segments), based on spatio-temporal values and activities, into some predefined categories, e.g., walking or jogging. We tackle a novel trajectory classification problem: we identify and link trajectories to the users who generate them in LBSNs, a problem called Trajectory-User Linking (TUL). Solving the TUL problem is not a trivial task because: (1) the number of classes (i.e., users) is much larger than the number of motion patterns in common trajectory classification problems; and (2) the location-based trajectory data, especially check-ins, are often extremely sparse. To address these challenges, a Recurrent Neural Networks (RNN) based semi-supervised learning model, called TULER (TUL via Embedding and RNN), is proposed, which exploits the spatio-temporal data to capture the underlying semantics of user mobility patterns. Experiments conducted on real-world datasets demonstrate that TULER achieves better accuracy than existing methods.