Arrow Research search

Author name cluster

Yiqiang Chen

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

22 papers
1 author row

Possible papers

22

AAAI Conference 2026 Conference Paper

State Mamba: Spatiotemporal EEG State-Space Model with Dynamic Brain Alignment for Cross-Subject Representation

  • Weining Weng
  • Yang Gu
  • Yuan Ma
  • Yuchen Liu
  • Yingwei Zhang
  • Yiqiang Chen

Cross-subject EEG decoding remains a fundamental challenge due to substantial inter-subject variability in brain activity, which hinders the development of subject-independent EEG models. Despite progress in extracting cross-subject invariant features, existing studies neglect the shared neural responses that arise under similar cognitive or emotional states across individuals, limiting their ability to learn generalized and consistent EEG representations. To address these challenges, we propose State Mamba, a novel spatiotemporal EEG state-space model that explicitly models and aligns neural responses and their spatiotemporal state transitions to learn consistent and generalizable representations across subjects. State Mamba theoretically formulates a multi-channel Mamba architecture that jointly models spatial and temporal brain state transitions, supporting principled analysis of neural responses. To enhance spatiotemporal feature coupling, we introduce the LGANN module, which adopts global-local attention to integrate long- and short-term brain activity into a compact EEG representation. Furthermore, we design two self-supervised pretext tasks to extract consistent neural patterns across subjects: (1) representation alignment to align EEG representations, and (2) pattern alignment to align their transition rules under identical conditions, jointly promoting subject-invariant EEG representations. Extensive experiments on three benchmark datasets, FACED, DEAP, and ISRUC, demonstrate the superior performance of State Mamba in cross-subject emotion and sleep recognition tasks, validating its robust generalization capability.

AAAI Conference 2025 Conference Paper

Mitigating Pervasive Modality Absence Through Multimodal Generalization and Refinement

  • Wuliang Huang
  • Yiqiang Chen
  • Xinlong Jiang
  • Chenlong Gao
  • Teng Zhang
  • Qian Chen
  • Yifan Wang

The performance of multimodal models often deteriorates when modality absence occurs. The absence disrupts the learned inter-modal correlations, resulting in biased multimodal representations. This challenge is especially pronounced when the absence is pervasive, affecting both the training and inference phases. Recent studies have attempted to reconstruct the missing information; however, most of them require complete supervision, which is seldom available in scenarios of pervasive absence. The quality of reconstruction remains a critical issue. Alternatively, others aim to learn robust representations from the available modalities but the substantial variations and biases are not fully addressed. This paper introduces the Multimodal Generalization and Refinement (MGR) framework to mitigate the issue of pervasive modality absence. MGR begins by acquiring generalized multimodal representations and iteratively refines them to recognize and calibrate the biased representations. Initially, multimodal samples with absence are embedded through foundation models, and MGR integrates independent unimodal features to further enhance generalization. Additionally, a novel mixed-context prompt is adopted to identify biases in both features and correlations. A redistribution operation can then refine these biases through graph pooling, culminating in robust and calibrated multimodal representations, which are suitable for downstream tasks. Comprehensive experiments on four benchmark datasets demonstrate that the proposed MGR framework outperforms state-of-the-art methods, effectively mitigating the impact of pervasive modality absence.

JBHI Journal 2025 Journal Article

PhysCL: Knowledge-Aware Contrastive Learning of Physiological Signal Models for Cuff-Less Blood Pressure Estimation

  • Renju Liu
  • Jianfei Shen
  • Yang Gu
  • Yiqiang Chen
  • Jiling Zhang
  • Qingyu Wu
  • Chenyang Xu
  • Feiyi Fan

Training deep learning models for photoplethysmography (PPG)-based cuff-less blood pressure estimation often requires a substantial amount of labeled data collected through sophisticated medical instruments, posing significant challenges in practical applications. To address this issue, we propose Physiological Knowledge-Aware Contrastive Learning (PhysCL), a novel approach designed to reduce the dependence on labeled PPG data while improving blood pressure estimation accuracy. Specifically, PhysCL tackles the semantic consistency problem in contrastive learning by introducing a knowledge-aware augmentation bank, which generates positive physiological signal pairs using knowledge-based constraints during contrastive pair generation. Additionally, we propose a contrastive feature reconstruction method to enhance feature diversity and prevent model collapse through feature re-sampling and re-weighting. We evaluate PhysCL on data from 106 subjects across the MIMIC III, MIMIC IV, and UQVS datasets under cross-dataset validation settings, comparing it against state-of-the-art contrastive learning methods and blood pressure estimation models. PhysCL achieves an average mean absolute error of 9.5/5.9 mmHg (systolic/diastolic) across the three datasets, using only 2% labeled data combined with 98% unlabeled data for pre-training and 5 samples for personalization, which represents a 6.2%/4.3% improvement, respectively, over the current best supervised methods. The ablation study provides further convincing evidence that unlabeled data can be utilized to improve existing cuff-less blood pressure estimation models and sheds light on unsupervised contrastive learning for physiological signals.
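To make the augmentation-bank idea concrete, here is a toy positive-pair generator for a PPG beat. The specific constraints (amplitude within ±10%, a shift of at most a few samples) and the function name are illustrative assumptions, not the augmentations PhysCL actually uses:

```python
import random

def knowledge_aware_positive(ppg, max_scale=0.1, max_shift=3, seed=None):
    """Generate a positive view of a PPG beat under physiology-inspired
    constraints: amplitude changes stay within +/-10% and the waveform is
    shifted by at most a few samples, so the pulse morphology that encodes
    blood pressure information is preserved (illustrative constraints)."""
    rng = random.Random(seed)
    scale = 1.0 + rng.uniform(-max_scale, max_scale)  # bounded amplitude change
    shift = rng.randint(-max_shift, max_shift)        # small circular time shift
    n = len(ppg)
    return [scale * ppg[(i - shift) % n] for i in range(n)]
```

The anchor signal and its constrained view then form a positive pair for the contrastive objective, while views of other subjects' beats serve as negatives.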

TIST Journal 2025 Journal Article

Survey on Knowledge Distillation for Large Language Models: Methods, Evaluation, and Application

  • Chuanpeng Yang
  • Yao Zhu
  • Wang Lu
  • Yidong Wang
  • Qian Chen
  • Chenlong Gao
  • Bingjie Yan
  • Yiqiang Chen

Large Language Models (LLMs) have showcased exceptional capabilities in various domains, attracting significant interest from both academia and industry. Despite their impressive performance, the substantial size and computational demands of LLMs pose considerable challenges for practical deployment, particularly in environments with limited resources. The endeavor to compress language models while maintaining their accuracy has become a focal point of research. Among the various methods, knowledge distillation has emerged as an effective technique to enhance inference speed without greatly compromising performance. This article presents a thorough survey from three aspects: method, evaluation, and application, exploring knowledge distillation techniques tailored specifically for LLMs. Specifically, we divide the methods into white-box KD and black-box KD to better illustrate their differences. Furthermore, we explore the evaluation tasks and distillation effects of different distillation methods and propose directions for future research. Through an in-depth understanding of the latest advancements and practical applications, this survey provides valuable resources for researchers, paving the way for sustained progress in this field.
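For readers new to the area: the white-box setting assumes access to the teacher's logits, and the classic logit-matching objective — a KL divergence between temperature-softened teacher and student distributions — is a minimal sketch of what such methods optimize:

```python
import math

def softened(logits, temperature):
    """Temperature-softened softmax over a list of logits."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def white_box_kd_loss(student_logits, teacher_logits, temperature=2.0):
    """KL(teacher || student) on softened distributions, scaled by T^2
    so gradient magnitudes stay comparable as the temperature varies."""
    p = softened(teacher_logits, temperature)  # teacher soft targets
    q = softened(student_logits, temperature)  # student predictions
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
    return temperature ** 2 * kl
```

Black-box KD, by contrast, sees only the teacher's generated outputs, so it typically trains the student on teacher-produced text rather than on logits.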

AAAI Conference 2025 Conference Paper

Ultra-High Resolution Segmentation via Boundary-Enhanced Patch-Merging Transformer

  • Haopeng Sun
  • Yingwei Zhang
  • Lumin Xu
  • Sheng Jin
  • Yiqiang Chen

Segmentation of ultra-high resolution (UHR) images is a critical task with numerous applications, yet it poses significant challenges due to high spatial resolution and rich fine details. Recent approaches adopt a dual-branch architecture, where a global branch learns long-range contextual information and a local branch captures fine details. However, they struggle to handle the conflict between global and local information while adding significant extra computational cost. Inspired by the human visual system's ability to rapidly orient attention to important areas with fine details and filter out irrelevant information, we propose a novel UHR segmentation method called Boundary-enhanced Patch-merging Transformer (BPT). BPT consists of two key components: (1) Patch-Merging Transformer (PMT) for dynamically allocating tokens to informative regions to acquire global and local representations, and (2) Boundary-Enhanced Module (BEM) that leverages boundary information to enrich fine details. Extensive experiments on multiple UHR image segmentation benchmarks demonstrate that our BPT outperforms previous state-of-the-art methods without introducing extra computational overhead.

AAAI Conference 2025 Conference Paper

VersaFusion: A Versatile Diffusion-Based Framework for Fine-Grained Image Editing and Enhancement

  • Haocun Ye
  • Xinlong Jiang
  • Chenlong Gao
  • Bingyu Wang
  • Wuliang Huang
  • Yiqiang Chen

Text-to-image (T2I) diffusion models have achieved remarkable progress in generating realistic images from textual descriptions. However, ensuring consistent high-quality image generation with complete backgrounds, object appearance, and optimal texture rendering remains challenging. This paper presents a novel fine-grained pixel-level image editing method based on pre-trained diffusion models. The proposed dual-branch architecture, consisting of Guidance and Generation branches, employs U-Net Denoisers and Self-Attention mechanisms. An improved DDIM-like inversion method obtains the latent representation, followed by multiple denoising steps. Cross-branch interactions, such as KV Replacement, Classifier Guidance, and Feature Correspondence, enable precise control while preserving image fidelity. The iterative refinement and reconstruction process facilitates fine-grained editing control, supporting attribute modification, image outpainting, style transfer, and face synthesis with Click-and-Drag style editing using masks. Experimental results demonstrate the effectiveness of the proposed approach in enhancing the quality and controllability of T2I-generated images, surpassing existing methods while maintaining attractive computational complexity for practical real-world applications.

TIST Journal 2024 Journal Article

Exploring Structure Incentive Domain Adversarial Learning for Generalizable Sleep Stage Classification

  • Shuo Ma
  • Yingwei Zhang
  • Yiqiang Chen
  • Tao Xie
  • Shuchao Song
  • Ziyu Jia

Sleep stage classification is crucial for sleep state monitoring and health interventions. In accordance with the standards prescribed by the American Academy of Sleep Medicine, a sleep episode follows a specific structure comprising five distinctive sleep stages that collectively form a sleep cycle. Typically, this cycle repeats about five times, providing an insightful portrayal of the subject's physiological attributes. The progress of deep learning and advanced domain generalization methods allows automatic and even adaptive sleep stage classification. However, applying models trained on seen subjects to unseen subjects remains challenging due to significant individual differences among subjects. Motivated by the periodic category-complete structure of sleep stage classification, we propose a Structure Incentive Domain Adversarial learning (SIDA) method that combines sleep stage classification with domain generalization to enable cross-subject sleep stage classification. SIDA includes an individual domain discriminator for each sleep stage category to decouple subject-dependent differences among categories and enable fine-grained learning of domain-invariant features. Furthermore, SIDA directly connects the label classifier and domain discriminators to promote the training process. Experiments on three benchmark sleep stage classification datasets demonstrate that the proposed SIDA method outperforms other state-of-the-art sleep stage classification and domain generalization methods and achieves the best cross-subject sleep stage classification results.

IJCAI Conference 2024 Conference Paper

FedES: Federated Early-Stopping for Hindering Memorizing Heterogeneous Label Noise

  • Bixiao Zeng
  • Xiaodong Yang
  • Yiqiang Chen
  • Zhiqi Shen
  • Hanchao Yu
  • Yingwei Zhang

Federated learning (FL) facilitates collaborative model training across distributed clients while maintaining privacy. Federated noisy label learning (FNLL) is more of a challenge due to data inaccessibility and noise heterogeneity. Existing works primarily assume clients are either noisy or clean, which may lack the flexibility to adapt to diverse label noise across different clients, especially when entirely clean or noisy clients are not the majority. To address this, we propose a general noise-robust federated learning framework called Federated Early-Stopping (FedES), which adaptively updates critical parameters of each local model based on their noise rates, thereby avoiding overfitting to noisy labels. FedES is composed of two stages: federated noise estimation and parameter-adaptive local updating and global aggregation. We introduce a signed distance based on local and global gradients during a federated round to estimate clients' noise rates without requiring additional information. Based on this measure, we employ varying degrees of early-stopping during local updating on the clients, and further, a noise-aware global aggregation is employed to achieve noise-robust learning. Extensive experiments conducted on varying synthetic and real-world label noise demonstrate the superior performance of FedES over the state-of-the-art methods.
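The gradient-based noise estimate can be pictured with a simplified stand-in: score each client by how well its local gradient aligns with the global one, and let noisier clients update fewer parameters. The cosine-based score and the layer-count rule below are assumptions for illustration, not FedES's exact signed distance:

```python
import math

def cosine(u, v):
    """Cosine similarity between two flat gradient vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def estimate_noise_rates(client_grads, global_grad):
    """Map each client's gradient alignment with the global gradient to a
    rough noise score in [0, 1]: perfectly aligned -> 0, opposed -> 1."""
    return [(1.0 - cosine(g, global_grad)) / 2.0 for g in client_grads]

def early_stop_layers(noise_rate, num_layers):
    """Parameter-adaptive early stopping, schematically: noisier clients
    are allowed to update fewer layers during local training."""
    return max(1, round(num_layers * (1.0 - noise_rate)))
```

A clean client, whose update points roughly where the aggregate does, receives a low score and trains most of its model; a client whose update opposes the aggregate is treated as noisy and updates only a few layers.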

AAAI Conference 2022 Short Paper

Class-Wise Adaptive Self Distillation for Federated Learning on Non-IID Data (Student Abstract)

  • Yuting He
  • Yiqiang Chen
  • Xiaodong Yang
  • Yingwei Zhang
  • Bixiao Zeng

Federated learning (FL) enables multiple clients to collaboratively train a globally generalized model while keeping local data decentralized. A key challenge in FL is handling the heterogeneity of data distributions among clients. The local model shifts the global features when fitting local data, which results in forgetting the global knowledge. Following the idea of knowledge distillation, the global model's predictions can be utilized to help local models preserve the global knowledge in FL. However, when the global model has not fully converged, its predictions tend to be less reliable on certain classes, which may result in the distillation misleading local models. In this paper, we propose a class-wise adaptive self-distillation (FedCAD) mechanism to ameliorate this problem. We design class-wise adaptive terms to soften the influence of the distillation loss according to the global model's performance on each class and thereby avoid the misleading. Experiments show that our method outperforms other state-of-the-art FL algorithms on benchmark datasets.
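A toy version of the class-wise gating: weight each sample's distillation term by how much the global model can be trusted on the class it predicts. The weighting scheme here is an illustrative assumption, not FedCAD's exact adaptive term:

```python
import math

def adaptive_distill_loss(global_probs, local_probs, class_weights):
    """Cross-entropy of local predictions against global soft targets,
    with each sample gated by the reliability weight of the class the
    global model favors (weight 0 disables distillation for that class)."""
    loss = 0.0
    for g, l in zip(global_probs, local_probs):
        c = max(range(len(g)), key=g.__getitem__)  # global model's predicted class
        ce = -sum(gi * math.log(max(li, 1e-12)) for gi, li in zip(g, l))
        loss += class_weights[c] * ce
    return loss / len(global_probs)
```

Setting a class weight near zero means the local model ignores the global model's (unreliable) guidance on that class, which is the "softening" the abstract describes.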

TIST Journal 2022 Journal Article

CLC: A Consensus-based Label Correction Approach in Federated Learning

  • Bixiao Zeng
  • Xiaodong Yang
  • Yiqiang Chen
  • Hanchao Yu
  • Yingwei Zhang

Federated learning (FL) is a novel distributed learning framework where multiple participants collaboratively train a global model without sharing any raw data, thereby preserving privacy. However, data quality may vary among the participants, with label noise being the most typical issue. Incorrect labels significantly damage the performance of the global model. In FL, the inaccessibility of raw data makes this issue more challenging. Previously published studies are limited to using a task-specific benchmark-trained model to evaluate the relevance between the benchmark dataset on the server and the local ones on the participants' side. However, such approaches fail to exploit the cooperative nature of FL itself and are not practical. This paper proposes a Consensus-based Label Correction approach (CLC) in FL, which corrects noisy labels using a consensus method developed among the FL participants. The consensus-defined class-wise information is used to identify the noisy labels and correct them with pseudo-labels. Extensive experiments are conducted on several public datasets in various settings. The experimental results demonstrate its advantage over state-of-the-art methods. The link to the source code is https://github.com/bixiao-zeng/CLC.git.
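The consensus mechanism can be sketched as a vote among the participants' current models: when a strong majority agrees on a class that contradicts a stored label, the label is replaced with the consensus pseudo-label. The simple threshold and voting rule below are simplifications of CLC's class-wise consensus:

```python
from collections import Counter

def consensus_correct(labels, participant_preds, agreement=0.8):
    """participant_preds[p][i] is participant p's predicted class for sample i.
    If a large majority agrees on a class different from the stored label,
    replace the label with the consensus pseudo-label; otherwise keep it."""
    n_participants = len(participant_preds)
    corrected = []
    for i, y in enumerate(labels):
        votes = Counter(preds[i] for preds in participant_preds)
        cls, count = votes.most_common(1)[0]
        if cls != y and count / n_participants >= agreement:
            corrected.append(cls)  # confident consensus overrides a noisy label
        else:
            corrected.append(y)
    return corrected
```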

TIST Journal 2022 Journal Article

Domain Generalization for Activity Recognition via Adaptive Feature Fusion

  • Xin Qin
  • Jindong Wang
  • Yiqiang Chen
  • Wang Lu
  • Xinlong Jiang

Human activity recognition requires building a generalizable model from training datasets in the hope of achieving good performance on test datasets. However, in real applications, the training and testing datasets may have totally different distributions due to various reasons such as different body shapes, acting styles, and habits, damaging the model's generalization performance. While such a distribution gap can be reduced by existing domain adaptation approaches, they typically assume that the test data can be accessed in the training stage, which is not realistic. In this article, we consider a more practical and challenging scenario: domain-generalized activity recognition (DGAR), where the test dataset cannot be accessed during training. To this end, we propose Adaptive Feature Fusion for Activity Recognition (AFFAR), a domain generalization approach that learns to fuse domain-invariant and domain-specific representations to improve the model's generalization performance. AFFAR takes the best of both worlds: domain-invariant representations enhance transferability across domains, while domain-specific representations leverage the model's discrimination power from each domain. Extensive experiments on three public HAR datasets show its effectiveness. Furthermore, we apply AFFAR to a real application, i.e., the diagnosis of Children's Attention Deficit Hyperactivity Disorder (ADHD), which also demonstrates the superiority of our approach.

TMLR Journal 2022 Journal Article

Domain-invariant Feature Exploration for Domain Generalization

  • Wang Lu
  • Jindong Wang
  • Haoliang Li
  • Yiqiang Chen
  • Xing Xie

Deep learning has achieved great success in the past few years. However, deep learning performance tends to degrade in non-IID situations. Domain generalization (DG) enables a model to generalize to an unseen test distribution, i.e., to learn domain-invariant representations. In this paper, we argue that domain-invariant features should originate from both internal and mutual sides. Internal invariance means that the features can be learned within a single domain and capture the intrinsic semantics of the data, i.e., properties within a domain that are agnostic to other domains. Mutual invariance means that the features can be learned across multiple domains (cross-domain) and contain common information, i.e., transferable features w.r.t. other domains. We then propose DIFEX for Domain-Invariant Feature EXploration. DIFEX employs a knowledge distillation framework to capture the high-level Fourier phase as the internally-invariant features and learns cross-domain correlation alignment as the mutually-invariant features. We further design an exploration loss to increase feature diversity for better generalization. Extensive experiments on both time-series and visual benchmarks demonstrate that the proposed DIFEX achieves state-of-the-art performance.
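The internally-invariant half of DIFEX distills the Fourier phase, which captures structure while discarding amplitude. A naive DFT makes the key property — invariance to positive amplitude scaling — easy to verify:

```python
import cmath

def fourier_phase(signal):
    """Phase spectrum of a 1-D signal via a naive DFT. The phase encodes
    where structure occurs in the signal, while the (discarded) magnitude
    encodes how strong it is -- scaling the signal by a positive constant
    leaves every phase unchanged."""
    n = len(signal)
    phases = []
    for k in range(n):
        coeff = sum(x * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t, x in enumerate(signal))
        phases.append(cmath.phase(coeff))
    return phases
```

In DIFEX itself the phase is not used directly as the feature; rather, a teacher trained to predict it guides the student network toward internally-invariant representations.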

IJCAI Conference 2020 Conference Paper

Bridging Cross-Tasks Gap for Cognitive Assessment via Fine-Grained Domain Adaptation

  • Yingwei Zhang
  • Yiqiang Chen
  • Hanchao Yu
  • Zeping Lv
  • Qing Li
  • Xiaodong Yang

Discriminating pathologic cognitive decline from the expected decline of normal aging is an important research topic for elderly care and health monitoring. However, most cognitive assessment methods only work when the data distributions of the training set and testing set are consistent. Enabling existing cognitive assessment models to adapt to the data in new cognitive assessment tasks is a significant challenge. In this paper, we propose a novel domain adaptation method, namely the Fine-Grained Adaptation Random Forest (FAT), to bridge the cognitive assessment gap when the data distribution changes. FAT is composed of two essential parts: 1) an information gain based model evaluation strategy (IGME) and 2) a domain adaptation tree growing mechanism (DATG). IGME is used to evaluate every individual tree, and DATG is used to transfer the source model to the target domain. To evaluate the performance of FAT, we conduct experiments in real clinical environments. Experimental results demonstrate that FAT is significantly more accurate and efficient compared with other state-of-the-art methods.

IS Journal 2020 Journal Article

FedHealth: A Federated Transfer Learning Framework for Wearable Healthcare

  • Yiqiang Chen
  • Xin Qin
  • Jindong Wang
  • Chaohui Yu
  • Wen Gao

With the rapid development of computing technology, wearable devices make it easy to access people's health information. Smart healthcare achieves great success by training machine learning models on a large quantity of user personal data. However, there are two critical challenges. First, user data often exist in the form of isolated islands, making it difficult to perform aggregation without compromising privacy. Second, models trained on the cloud fail at personalization. In this article, we propose FedHealth, the first federated transfer learning framework for wearable healthcare, to tackle these challenges. FedHealth performs data aggregation through federated learning and then builds relatively personalized models by transfer learning. Wearable activity recognition experiments and a real Parkinson's disease auxiliary diagnosis application demonstrate that FedHealth achieves accurate and personalized healthcare without compromising privacy and security. FedHealth is general and extensible to many healthcare applications.
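FedHealth's two stages — federated aggregation, then local transfer — can be sketched with flat parameter vectors. The size-weighted mean mirrors FedAvg-style aggregation; the fine-tuning step is schematic (a fixed stand-in gradient rather than a real local loss):

```python
def federated_average(client_weights, client_sizes):
    """Weighted mean of client model parameters (flat lists of floats),
    weighted by each client's local dataset size, as in FedAvg-style
    aggregation."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [sum(w[j] * s for w, s in zip(client_weights, client_sizes)) / total
            for j in range(dim)]

def personalize(global_w, local_grad, lr=0.1, steps=5):
    """Transfer step, schematically: fine-tune the aggregated model with a
    few gradient steps on the user's own data (local_grad stands in for
    the gradient of the local loss)."""
    w = list(global_w)
    for _ in range(steps):
        w = [wi - lr * g for wi, g in zip(w, local_grad)]
    return w
```

The first function runs on the server each round; the second runs on-device, which is how the framework reconciles aggregation across isolated data islands with per-user personalization.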

AAAI Conference 2020 Conference Paper

Instance-Wise Dynamic Sensor Selection for Human Activity Recognition

  • Xiaodong Yang
  • Yiqiang Chen
  • Hanchao Yu
  • Yingwei Zhang
  • Wang Lu
  • Ruizhe Sun

Human Activity Recognition (HAR) is an important application of smart wearable/mobile systems for many human-centric problems such as healthcare. The multi-sensor synchronous measurement has shown better performance for HAR than a single sensor. However, the multi-sensor setting increases the costs of data transmission, computation and energy. Therefore, the efficient sensor selection to balance recognition accuracy and sensor cost is the critical challenge. In this paper, we propose an Instance-wise Dynamic Sensor Selection (IDSS) method for HAR. Firstly, we formalize this problem as minimizing both activity classification loss and sensor number by dynamically selecting a sparse subset for each instance. Then, IDSS solves the above minimization problem via Markov Decision Process whose policy for sensor selection is learned by exploiting the instance-wise states using Imitation Learning. In order to optimize the parameters of the activity classification model and the sensor selection policy, an algorithm named Mutual DAgger is proposed to alternatively enhance their learning process. To evaluate the performance of IDSS, we conduct experiments on three real-world HAR datasets. The experimental results show that IDSS can effectively reduce the overall sensor number without losing accuracy and outperforms the state-of-the-art methods regarding the combined measurement of accuracy and sensor number.

TIST Journal 2020 Journal Article

Transfer Learning with Dynamic Distribution Adaptation

  • Jindong Wang
  • Yiqiang Chen
  • Wenjie Feng
  • Han Yu
  • Meiyu Huang
  • Qiang Yang

Transfer learning aims to learn robust classifiers for the target domain by leveraging knowledge from a source domain. Since the source and the target domains are usually from different distributions, existing methods mainly focus on adapting the cross-domain marginal or conditional distributions. However, in real applications, the marginal and conditional distributions usually have different contributions to the domain discrepancy. Existing methods fail to quantitatively evaluate the different importance of these two distributions, which will result in unsatisfactory transfer performance. In this article, we propose a novel concept called Dynamic Distribution Adaptation (DDA), which is capable of quantitatively evaluating the relative importance of each distribution. DDA can be easily incorporated into the framework of structural risk minimization to solve transfer learning problems. On the basis of DDA, we propose two novel learning algorithms: (1) Manifold Dynamic Distribution Adaptation (MDDA) for traditional transfer learning, and (2) Dynamic Distribution Adaptation Network (DDAN) for deep transfer learning. Extensive experiments demonstrate that MDDA and DDAN significantly improve the transfer learning performance and set up a strong baseline over the latest deep and adversarial methods on digits recognition, sentiment analysis, and image classification. More importantly, it is shown that marginal and conditional distributions have different contributions to the domain divergence, and our DDA is able to provide good quantitative evaluation of their relative importance, which leads to better performance. We believe this observation can be helpful for future research in transfer learning.
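The heart of DDA is the weighted discrepancy D = (1 − μ)·d_marginal + μ·d_conditional. The sketch below uses a linear mean-embedding distance in place of the MMD/adversarial measures used in MDDA/DDAN, and estimates μ from the two terms' relative magnitudes — a simplification of the paper's actual estimation procedure:

```python
def mean_embedding_dist(xs, ys):
    """Squared distance between sample means -- a linear-kernel stand-in
    for the MMD used to compare two distributions."""
    dim = len(xs[0])
    mx = [sum(x[j] for x in xs) / len(xs) for j in range(dim)]
    my = [sum(y[j] for y in ys) / len(ys) for j in range(dim)]
    return sum((a - b) ** 2 for a, b in zip(mx, my))

def dynamic_distribution_distance(src, tgt, src_y, tgt_y, classes):
    """D = (1 - mu) * d_marginal + mu * d_conditional, where d_conditional
    averages per-class distances and mu reflects which discrepancy
    dominates (illustrative; the paper estimates mu more carefully)."""
    d_m = mean_embedding_dist(src, tgt)
    per_class = []
    for c in classes:
        sc = [x for x, y in zip(src, src_y) if y == c]
        tc = [x for x, y in zip(tgt, tgt_y) if y == c]
        if sc and tc:
            per_class.append(mean_embedding_dist(sc, tc))
    d_c = sum(per_class) / len(per_class) if per_class else 0.0
    mu = d_c / (d_m + d_c) if d_m + d_c > 0 else 0.5
    return (1 - mu) * d_m + mu * d_c, mu
```

In the example tested below, the marginal means coincide while the class-conditional means are swapped, so μ goes to 1 and the conditional discrepancy dominates — exactly the situation where a fixed 50/50 weighting would understate the domain gap.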

IJCAI Conference 2019 Conference Paper

Agent-based Decision Support for Pain Management in Primary Care Settings

  • Xu Guo
  • Han Yu
  • Chunyan Miao
  • Yiqiang Chen

The lack of systematic pain management training and support among primary care physicians (PCPs) limits their ability to provide quality care for patients with pain. Here, we demonstrate an Agent-based Clinical Decision Support System to empower PCPs to leverage knowledge from pain specialists. The system learns a general-purpose representation space on patients, automatically diagnoses pain, recommends therapy and medicine, and suggests a referral program to PCPs in their decision-making tasks.

AAAI Conference 2016 Conference Paper

Multi-Agent System Development MADE Easy

  • Zhiqi Shen
  • Han Yu
  • Chunyan Miao
  • Siyao Li
  • Yiqiang Chen

Agent-Oriented Software Engineering (AOSE) is an emerging software engineering paradigm that advocates the application of best practices in the development of Multi-Agent Systems (MAS) through the use of agents and organizations of agents. This paper outlines the MADE system, which provides an interactive platform for people who are not well-versed in AOSE to contribute to the rapid prototyping of MASs with ease.

TIST Journal 2015 Journal Article

Accurate and Robust Moving-Object Segmentation for Telepresence Systems

  • Meiyu Huang
  • Yiqiang Chen
  • Wen Ji
  • Chunyan Miao

Moving-object segmentation is the key issue of Telepresence systems. With monocular camera-based segmentation methods, desirable segmentation results are hard to obtain in challenging scenes with ambiguous color, illumination changes, and shadows. Approaches based on depth sensors often cause holes inside the object and missegmentations on the object boundary due to inaccurate and unstable estimation of depth data. This work proposes an adaptive multi-cue decision fusion method based on Kinect (which integrates a depth sensor with an RGB camera). First, the algorithm obtains an initial foreground mask based on the depth cue. Second, the algorithm introduces a postprocessing framework to refine the segmentation results, which consists of two main steps: (1) automatically adjusting the weight of two weak decisions to identify foreground holes based on the color and contrast cue separately; and (2) refining the object boundary by integrating the motion probability weighted temporal prior, color likelihood, and smoothness constraint. The extensive experiments we conducted demonstrate that our method can segment moving objects accurately and robustly in various situations in real time.

AAAI Conference 2015 Conference Paper

Efficient Task Sub-Delegation for Crowdsourcing

  • Han Yu
  • Chunyan Miao
  • Zhiqi Shen
  • Cyril Leung
  • Yiqiang Chen
  • Qiang Yang

Reputation-based approaches allow a crowdsourcing system to identify reliable workers to whom tasks can be delegated. In crowdsourcing systems that can be modeled as multi-agent trust networks consisting of resource-constrained trustee agents (i.e., workers), workers may need to further sub-delegate tasks to others if they determine that they cannot complete all pending tasks before the stipulated deadlines. Existing reputation-based decision-making models cannot help workers decide when and to whom to sub-delegate tasks. In this paper, we propose a reputation-aware task sub-delegation (RTS) approach to bridge this gap. By jointly considering a worker's reputation, workload, the price of its effort, and its trust relationships with others, RTS can be implemented as an intelligent agent that helps workers make sub-delegation decisions in a distributed manner. The resulting task allocation maximizes social welfare through efficient utilization of the collective capacity of a crowd and provides provable performance guarantees. Experimental comparisons with state-of-the-art approaches based on the Epinions trust network demonstrate significant advantages of RTS under high workload conditions.

IS Journal 2013 Journal Article

Extreme Learning Machines [Trends & Controversies]

  • Erik Cambria
  • Guang-Bin Huang
  • Liyanaarachchi Lekamalage Chamara Kasun
  • Hongming Zhou
  • Chi Man Vong
  • Jiarun Lin
  • Jianping Yin
  • Zhiping Cai

This special issue includes eight original works that detail the further developments of ELMs in theories, applications, and hardware implementation. In "Representational Learning with ELMs for Big Data," Liyanaarachchi Lekamalage Chamara Kasun, Hongming Zhou, Guang-Bin Huang, and Chi Man Vong propose using the ELM as an auto-encoder for learning feature representations using singular values. In "A Secure and Practical Mechanism for Outsourcing ELMs in Cloud Computing," Jiarun Lin, Jianping Yin, Zhiping Cai, Qiang Liu, Kuan Li, and Victor C. M. Leung propose a method for handling large data applications by outsourcing to the cloud that would dramatically reduce ELM training time. In "ELM-Guided Memetic Computation for Vehicle Routing," Liang Feng, Yew-Soon Ong, and Meng-Hiot Lim consider the ELM as an engine for automating the encapsulation of knowledge memes from past problem-solving experiences. In "ELMVIS: A Nonlinear Visualization Technique Using Random Permutations and ELMs," Anton Akusok, Amaury Lendasse, Rui Nian, and Yoan Miche propose an ELM method for data visualization based on random permutations to map original data and their corresponding visualization points. In "Combining ELMs with Random Projections," Paolo Gastaldo, Rodolfo Zunino, Erik Cambria, and Sergio Decherchi analyze the relationships between ELM feature-mapping schemas and the paradigm of random projections. In "Reduced ELMs for Causal Relation Extraction from Unstructured Text," Xuefeng Yang and Kezhi Mao propose combining ELMs with neuron selection to optimize the neural network architecture and improve the ELM ensemble's computational efficiency. In "A System for Signature Verification Based on Horizontal and Vertical Components in Hand Gestures," Beom-Seok Oh, Jehyoung Jeon, Kar-Ann Toh, Andrew Beng Jin Teoh, and Jaihie Kim propose a novel paradigm for hand signature biometry for touchless applications without the need for handheld devices. Finally, in "An Adaptive and Iterative Online Sequential ELM-Based Multi-Degree-of-Freedom Gesture Recognition System," Hanchao Yu, Yiqiang Chen, Junfa Liu, and Guang-Bin Huang propose an online sequential ELM-based efficient gesture recognition algorithm for touchless human-machine interaction.

IJCAI Conference 2011 Conference Paper

Cross-People Mobile-Phone Based Activity Recognition

  • Zhongtang Zhao
  • Yiqiang Chen
  • Junfa Liu
  • Zhiqi Shen
  • Mingjie Liu

Activity recognition using mobile phones has great potential in many applications including mobile healthcare. In order to let a person easily know whether he is in strict compliance with the doctor's exercise prescription and adjust his exercise amount accordingly, we can use a smart-phone based activity reporting system to accurately recognize a range of daily activities and report the duration of each activity. A triaxial accelerometer embedded in the smart phone is used for the classification of several activities, such as staying still, walking, running, and going upstairs and downstairs. The model learnt from a specific person often cannot yield accurate results when used on a different person. To solve the cross-people activity recognition problem, we propose an algorithm known as TransEMDT (Transfer learning EMbedded Decision Tree) that integrates a decision tree and the k-means clustering algorithm for personalized activity-recognition model adaptation. Tested on a real-world data set, the results show that our algorithm outperforms several traditional baseline algorithms.