Arrow Research search

Author name cluster

Wenji Mao

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity-disambiguation profile.

16 papers
1 author row

Possible papers

16

IS Journal 2026 Journal Article

A Dynamic Framework to Integrate Deep Reinforcement Learning with Hierarchical Symbolic Plans

  • Xuelong Liu
  • Nuo Chen
  • Wenji Mao
  • Daniel Zeng

The neuro-symbolic framework has become one of the mainstream paradigms in intelligent system design. For intelligent decision-making, Reinforcement Learning (RL) and automated planning are the representative neural and symbolic techniques, respectively, and the two can facilitate each other. Despite the rapid development and wide application of deep RL, its drawbacks in sample efficiency and convergence in sparse-reward environments have become major obstacles hindering its advancement. To address these issues, in this paper we propose a neuro-symbolic framework that integrates deep RL with hierarchical plans. Specifically, we develop a selective Monte-Carlo Tree Search algorithm in which hierarchical plans are dynamically constructed during the learning process. The constructed plans, in turn, provide high-level guidance for RL by constraining the subtasks that lead to goal attainment, thus reducing useless/redundant exploration in RL. Experiments on five challenging scenarios show that our framework achieves better sample efficiency and faster convergence than state-of-the-art approaches.

IS Journal 2024 Journal Article

Efficient Spiking Variational Graph Autoencoders for Unsupervised Graph Representation Learning Tasks

  • Hanxuan Yang
  • Qingchao Kong
  • Ruike Zhang
  • Wenji Mao

Variational graph autoencoders (VGAEs) are popular artificial neural network (ANN)-based models for unsupervised graph representation learning tasks, including link prediction and graph generation, which are critical in many real-world applications. Despite the promising results of VGAEs on these tasks, existing VGAEs typically suffer from extremely high energy costs. Recently, spiking neural networks (SNNs) have emerged as energy-efficient alternatives for applications on graph-structured data, but they are typically trained in supervised settings using label information. To leverage the energy efficiency of SNNs for unsupervised graph learning tasks, in this article we propose an SNN-based spiking VGAE (S-VGAE) that efficiently learns spiking node representations from graph structural information. We conduct extensive experiments on two typical unsupervised graph learning tasks using benchmark datasets. The results demonstrate that our method significantly reduces energy consumption with little or no loss in performance compared to both ANN- and SNN-based baselines.

IS Journal 2022 Journal Article

An Orthogonal Subspace Decomposition Method for Cross-Modal Retrieval

  • Zhixiong Zeng
  • Nan Xu
  • Wenji Mao
  • Daniel Zeng

As a general characteristic observed in real-world datasets, multimodal data are usually only partially associated: they comprise information commonly shared across modalities (i.e., modality-shared information) and specific information that exists only in a single modality (i.e., modality-specific information). Cross-modal retrieval methods typically treat this information in multimodal data as a whole and project it into a common representation space to compute similarity measures. In fact, only modality-shared information can be well aligned in the learning of common representations, whereas modality-specific information usually introduces interference terms and decreases the performance of cross-modal retrieval. Explicitly distinguishing and utilizing these two kinds of multimodal information is important for cross-modal retrieval but has rarely been studied in previous research. In this article, we explicitly distinguish and utilize modality-shared and modality-specific features to learn better common representations, and propose an orthogonal subspace decomposition method for cross-modal retrieval. Specifically, we introduce a structure preservation loss to ensure that modality-shared information is well preserved, and optimize an intramodal discrimination loss and an intermodal invariance loss to learn semantically discriminative features for cross-modal retrieval. We conduct comprehensive experiments on four widely used benchmark datasets, and the experimental results demonstrate the effectiveness of our proposed method.

IS Journal 2021 Journal Article

MDA: Multimodal Data Augmentation Framework for Boosting Performance on Sentiment/Emotion Classification Tasks

  • Nan Xu
  • Wenji Mao
  • Penghui Wei
  • Daniel Zeng

Multimodal data analysis has drawn increasing attention with the explosive growth of multimedia data. Although traditional unimodal analysis tasks have accumulated abundant labeled datasets, labeled multimodal datasets remain scarce due to the difficulty and complexity of multimodal data annotation, nor is it easy to directly transfer unimodal knowledge to multimodal data. Unfortunately, there is little data augmentation work in the multimodal domain, especially for image–text data. In this article, to address the scarcity of labeled multimodal data, we propose a Multimodal Data Augmentation (MDA) framework for boosting performance on multimodal image–text classification tasks. Our framework learns a cross-modality matching network to select image–text pairs from existing unimodal datasets as a synthetic multimodal dataset, and uses this dataset to enhance the performance of classifiers. We take multimodal sentiment analysis and multimodal emotion analysis as the experimental tasks, and the experimental results show the effectiveness of our framework for boosting performance on multimodal classification tasks.

AAAI Conference 2019 Conference Paper

A Topic-Aware Reinforced Model for Weakly Supervised Stance Detection

  • Penghui Wei
  • Wenji Mao
  • Guandan Chen

Analyzing public attitudes plays an important role in opinion mining systems. Stance detection aims to determine from a text whether its author is in favor of, against, or neutral towards a given target. One challenge of this task is that a text may not explicitly express an attitude towards the target, yet existing approaches build models from target content alone. Moreover, although weakly supervised approaches have been proposed to ease the burden of manually annotating large-scale training data, such approaches are confronted with the noisy-labeling problem. To address these two issues, in this paper we propose a Topic-Aware Reinforced Model (TARM) for weakly supervised stance detection. Our model consists of two complementary components: (1) a detection network that incorporates target-related topic information into representation learning for effective stance identification; (2) a policy network that learns to eliminate noisy instances from auto-labeled data via off-policy reinforcement learning. The two networks are alternately optimized to improve each other's performance. Experimental results demonstrate that our proposed model TARM outperforms state-of-the-art approaches.

AAAI Conference 2019 Conference Paper

Multi-Interactive Memory Network for Aspect Based Multimodal Sentiment Analysis

  • Nan Xu
  • Wenji Mao
  • Guandan Chen

As a fundamental task of sentiment analysis, aspect-level sentiment analysis aims to identify the sentiment polarity of a specific aspect in its context. Previous work on aspect-level sentiment analysis is text-based. With the prevalence of multimodal user-generated content (e.g., text and images) on the Internet, multimodal sentiment analysis has attracted increasing research attention in recent years. In the context of aspect-level sentiment analysis, multimodal data are often more informative than text-only data and exhibit various correlations, including the impacts an aspect brings to the text and image as well as the interactions between text and image. However, no related work has so far been carried out at the intersection of aspect-level and multimodal sentiment analysis. To fill this gap, we are among the first to put forward the new task of aspect-based multimodal sentiment analysis, and we propose a novel Multi-Interactive Memory Network (MIMN) model for this task. Our model includes two interactive memory networks to supervise the textual and visual information with the given aspect, and learns not only the interactive influences between cross-modality data but also the self-influences within single-modality data. We provide a new publicly available multimodal aspect-level sentiment dataset to evaluate our model, and the experimental results demonstrate the effectiveness of our proposed model on this new task.

IS Journal 2011 Journal Article

Cyber-Physical-Social Systems for Command and Control

  • Zhong Liu
  • Dong-sheng Yang
  • Ding Wen
  • Wei-ming Zhang
  • Wenji Mao

Cyber-physical-social systems (CPSS) provide an ideal paradigm for the design and construction of command-and-control organizations. This article presents the concept of CPSS for command and control and discusses the operational process and self-synchronization mechanism within such systems.

IS Journal 2011 Journal Article

Social and Economic Computing

  • Wenji Mao
  • Alexander Tuzhilin
  • Jonathan Gratch

Social and economic computing is a cross-disciplinary field focused on the development of computing technologies that take social and economic contexts into account. Social computing and economic computing not only share a number of computing technologies, they also benefit from and cross-fertilize each other in computational theories, models, and design. This special issue presents representative research in social and economic computing from several perspectives.

IS Journal 2010 Journal Article

Social Learning

  • Qiang Yang
  • Zhi-Hua Zhou
  • Wenji Mao
  • Wei Li
  • Nathan Nan Liu

In recent years, social behavioral data have been expanding exponentially due to the tremendous success of various outlets on the social Web (aka Web 2.0), such as Facebook, Digg, Twitter, Wikipedia, and Delicious. As a result, there is a need for social learning to support the discovery, analysis, and modeling of human social behavioral data. The goal is to discover social intelligence, which encompasses a spectrum of knowledge that characterizes human interaction, communication, and collaboration. The social Web has thus become a fertile ground for machine learning and data mining research. This special issue gathers state-of-the-art research in social learning and is devoted to exhibiting some of the best representative work in this area.

IS Journal 2007 Journal Article

Social Computing: From Social Informatics to Social Intelligence

  • Fei-Yue Wang
  • Kathleen M. Carley
  • Daniel Zeng
  • Wenji Mao

Social computing represents a new computing paradigm and an interdisciplinary research and application field. Undoubtedly, it will strongly influence system and software development in the years to come. We expect social computing's scope to continue to expand and its applications to multiply. From both theoretical and technological perspectives, social computing technologies are moving beyond social information processing towards an emphasis on social intelligence. As we have discussed, the move from social informatics to social intelligence is achieved by modeling and analyzing social behavior, by capturing human social dynamics, and by creating artificial social agents and generating and managing actionable social knowledge.