Arrow Research search

Author name cluster

Chongjun Wang

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

28 papers
2 author rows

Possible papers

28

AAAI Conference 2025 Conference Paper

Normalize Then Propagate: Efficient Homophilous Regularization for Few-Shot Semi-Supervised Node Classification

  • Baoming Zhang
  • MingCai Chen
  • Jianqing Song
  • Shuangjie Li
  • Jie Zhang
  • Chongjun Wang

Graph Neural Networks (GNNs) have demonstrated remarkable ability in semi-supervised node classification. However, most existing GNNs rely heavily on a large amount of labeled data for training, which is labor-intensive and requires extensive domain knowledge. In this paper, we first analyze the restrictions on GNN generalization from the perspective of supervision signals in the context of few-shot semi-supervised node classification. To address these challenges, we propose a novel algorithm named NormProp, which utilizes the homophily assumption of unlabeled nodes to generate additional supervision signals, thereby enhancing the generalization against label scarcity. The key idea is to efficiently capture both the class information and the consistency of aggregation during message passing by decoupling the direction and Euclidean norm of node representations. Moreover, we conduct a theoretical analysis to determine the upper bound of the Euclidean norm, and then propose homophilous regularization to constrain the consistency of unlabeled nodes. Extensive experiments demonstrate that NormProp achieves state-of-the-art performance under low label-rate scenarios with low computational complexity.
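The direction/norm decoupling the abstract describes can be illustrated with a toy sketch (an illustration only, not the paper's implementation: the mean aggregation and the two-node examples are assumptions made here for clarity):

```python
import math

def decouple(vec):
    # Split a representation into its direction (unit vector) and Euclidean norm.
    norm = math.sqrt(sum(x * x for x in vec))
    return ([x / norm for x in vec], norm) if norm > 0 else (vec, 0.0)

def mean_aggregate(directions):
    # Toy mean aggregation over neighbor directions: when neighbors agree
    # (homophily), the aggregate keeps a norm near 1; when they conflict,
    # the norm shrinks, signalling inconsistent aggregation.
    dim = len(directions[0])
    n = len(directions)
    return [sum(d[i] for d in directions) / n for i in range(dim)]

# Agreeing neighbors: the aggregate's norm stays at 1.
_, norm_agree = decouple(mean_aggregate([[1.0, 0.0], [1.0, 0.0]]))
# Conflicting neighbors: the aggregate's norm collapses toward 0.
_, norm_conflict = decouple(mean_aggregate([[1.0, 0.0], [-1.0, 0.0]]))
```

The norm thus acts as a cheap consistency signal on unlabeled nodes, which is the quantity the paper's homophilous regularization constrains.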

AAAI Conference 2025 Conference Paper

Robust Logit Adjustment for Learning with Long-Tailed Noisy Data

  • MingCai Chen
  • Yuntao Du
  • Wenyu Jiang
  • Baoming Zhang
  • Shuai Feng
  • Yi Xin
  • Chongjun Wang

Learning with noisy labels (LNL) methods have enabled the deployment of machine learning systems with imperfectly labeled data. However, these methods often struggle to identify noise in the presence of long-tailed (LT) class distributions, where the memorization effect becomes class-dependent. Conversely, LT methods are suboptimal under label noise, as it hinders access to accurate label frequency statistics. This study aims to address long-tailed noisy data by bridging the methodological gap between LNL and LT approaches. We propose a direct solution, termed Robust Logit Adjustment, which estimates ground-truth labels through label refurbishment, thereby mitigating the impact of label noise. Simultaneously, our method incorporates the distribution of training-time corrected target labels into the logit adjustment LT method, providing class-rebalanced supervision. Extensive experiments on both synthetic and real-world long-tailed noisy datasets demonstrate the superior performance of our method.
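Generic logit adjustment, the LT technique the method builds on, is easy to sketch (the class prior here is simply given; the paper's contribution lies in estimating it robustly from refurbished labels, which this toy omits):

```python
import math

def logit_adjust(logits, class_prior, tau=1.0):
    # Subtract a scaled log-prior from each logit so that head classes,
    # which the skewed prior inflates, no longer dominate tail classes.
    return [z - tau * math.log(p) for z, p in zip(logits, class_prior)]

# A head class (90% of labels) and a tail class (10%) with equal raw logits:
raw = [2.0, 2.0]
adjusted = logit_adjust(raw, [0.9, 0.1])
# After adjustment the tail class receives the larger score.
```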

AAAI Conference 2024 Conference Paper

CASE: Exploiting Intra-class Compactness and Inter-class Separability of Feature Embeddings for Out-of-Distribution Detection

  • Shuai Feng
  • Pengsheng Jin
  • Chongjun Wang

Detecting out-of-distribution (OOD) inputs is critical for reliable machine learning, but deep neural networks often make overconfident predictions, even for OOD inputs that deviate from the distribution of training data. Prior methods relied on the widely used softmax cross-entropy (CE) loss that is adequate for classifying in-distribution (ID) samples but not optimally designed for OOD detection. To address this issue, we propose CASE, a simple and effective OOD detection method by explicitly improving intra-class Compactness And inter-class Separability of feature Embeddings. To enhance the separation between ID and OOD samples, CASE uses a dual-loss framework, which includes a separability loss that maximizes the inter-class Euclidean distance to promote separability among different class centers, along with a compactness loss that minimizes the intra-class Euclidean distance to encourage samples to be close to their class centers. In particular, the class centers are defined as a free optimization parameter of the model and updated by gradient descent, which is simple and further enhances the OOD detection performance. Extensive experiments demonstrate the superiority of CASE, which reduces the average FPR95 by 37.11% and improves the average AUROC by 15.89% compared to the baseline method using a softmax confidence score on the more challenging CIFAR-100 model.
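The dual-loss idea can be sketched in isolation (an illustrative toy, not the paper's implementation: the gradient-updated learnable centers and any loss weighting are omitted, and plain Python lists stand in for feature embeddings):

```python
def compactness_loss(feats, labels, centers):
    # Mean squared Euclidean distance from each sample to its class center:
    # minimizing this pulls samples toward their centers (intra-class compactness).
    total = 0.0
    for f, y in zip(feats, labels):
        total += sum((a - b) ** 2 for a, b in zip(f, centers[y]))
    return total / len(feats)

def separability_loss(centers):
    # Negative mean pairwise squared distance between class centers:
    # minimizing this pushes the centers apart (inter-class separability).
    pairs, total = 0, 0.0
    for i in range(len(centers)):
        for j in range(i + 1, len(centers)):
            total += sum((a - b) ** 2 for a, b in zip(centers[i], centers[j]))
            pairs += 1
    return -total / pairs
```

In the paper the centers are free model parameters updated by gradient descent; here they are fixed inputs so the two objectives can be inspected directly.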

ICLR Conference 2024 Conference Paper

DOS: Diverse Outlier Sampling for Out-of-Distribution Detection

  • Wenyu Jiang
  • Hao Cheng 0014
  • Mingcai Chen
  • Chongjun Wang
  • Hongxin Wei

Modern neural networks are known to give overconfident predictions for out-of-distribution inputs when deployed in the open world. It is common practice to leverage a surrogate outlier dataset to regularize the model during training, and recent studies emphasize the role of uncertainty in designing the sampling strategy for outlier datasets. However, the OOD samples selected solely based on predictive uncertainty can be biased towards certain types, which may fail to capture the full outlier distribution. In this work, we empirically show that diversity is critical in sampling outliers for OOD detection performance. Motivated by the observation, we propose a straightforward and novel sampling strategy named DOS (Diverse Outlier Sampling) to select diverse and informative outliers. Specifically, we cluster the normalized features at each iteration, and the most informative outlier from each cluster is selected for model training with absent category loss. With DOS, the sampled outliers efficiently shape a globally compact decision boundary between ID and OOD data. Extensive experiments demonstrate the superiority of DOS, reducing the average FPR95 by up to 25.79% on CIFAR-100 with TI-300K.
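The per-cluster selection step can be sketched as follows (a simplified illustration: the clustering itself is assumed done elsewhere, and `scores` is a stand-in for whatever informativeness measure is used, not the paper's absent-category loss):

```python
import math

def l2_normalize(v):
    # DOS clusters L2-normalized features, so direction rather than
    # magnitude drives the diversity of the sampled outliers.
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v] if n > 0 else v

def select_per_cluster(scores, cluster_ids):
    # Keep the single most informative (highest-score) outlier per cluster,
    # so the selected set covers every cluster instead of piling onto one.
    best = {}
    for idx, (s, c) in enumerate(zip(scores, cluster_ids)):
        if c not in best or s > scores[best[c]]:
            best[c] = idx
    return sorted(best.values())
```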

AAAI Conference 2024 Conference Paper

FedCompetitors: Harmonious Collaboration in Federated Learning with Competing Participants

  • Shanli Tan
  • Hao Cheng
  • Xiaohu Wu
  • Han Yu
  • Tiantian He
  • Yew Soon Ong
  • Chongjun Wang
  • Xiaofeng Tao

Federated learning (FL) provides a privacy-preserving approach for collaborative training of machine learning models. Given the potential data heterogeneity, it is crucial to select appropriate collaborators for each FL participant (FL-PT) based on data complementarity. Recent studies have addressed this challenge. It is equally imperative to consider the inter-individual relationships among FL-PTs, where some FL-PTs engage in competition. Although the FL literature has acknowledged the significance of this scenario, practical methods for establishing FL ecosystems remain largely unexplored. In this paper, we extend a principle from balance theory, namely “the friend of my enemy is my enemy”, to ensure the absence of conflicting interests within an FL ecosystem. The extended principle and the resulting problem are formulated via graph theory and integer linear programming. A polynomial-time algorithm is proposed to determine the collaborators of each FL-PT. The solution guarantees high scalability, allowing even competing FL-PTs to smoothly join the ecosystem without conflict of interest. The proposed framework jointly considers competition and data heterogeneity. Extensive experiments on real-world and synthetic data demonstrate its efficacy compared to five alternative approaches, and its ability to establish efficient collaboration networks among FL-PTs.
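The extended balance-theory principle can be sketched as a closure over friendship/competition relations (a toy illustration; the paper formulates the full collaborator-selection problem via integer linear programming, which this does not attempt):

```python
from collections import deque

def extended_enemies(friends, enemies):
    # "The friend of my enemy is my enemy": everyone reachable from a direct
    # enemy through friendship edges is also treated as an enemy, ruling out
    # collaborations that would create conflicts of interest.
    def friend_closure(start):
        seen, queue = {start}, deque([start])
        while queue:
            u = queue.popleft()
            for v in friends.get(u, []):
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        return seen

    out = {}
    for u, direct in enemies.items():
        closure = set()
        for e in direct:
            closure |= friend_closure(e)
        out[u] = closure
    return out
```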

NeurIPS Conference 2024 Conference Paper

Similarity-Navigated Conformal Prediction for Graph Neural Networks

  • Jianqing Song
  • Jianguo Huang
  • Wenyu Jiang
  • Baoming Zhang
  • Shuangjie Li
  • Chongjun Wang

Graph Neural Networks have achieved remarkable accuracy in semi-supervised node classification tasks. However, these results lack reliable uncertainty estimates. Conformal prediction methods provide a theoretical guarantee for node classification tasks, ensuring that the conformal prediction set contains the ground-truth label with a desired probability (e.g., 95%). In this paper, we empirically show that for each node, aggregating the non-conformity scores of nodes with the same label can improve the efficiency of conformal prediction sets while maintaining valid marginal coverage. This observation motivates us to propose a novel algorithm named Similarity-Navigated Adaptive Prediction Sets (SNAPS), which aggregates the non-conformity scores based on feature similarity and structural neighborhood. The key idea behind SNAPS is that nodes with high feature similarity or direct connections tend to have the same label. By incorporating information from adaptively selected similar nodes, SNAPS can generate compact prediction sets and increase the singleton hit ratio (correct prediction sets of size one). Moreover, we theoretically provide a finite-sample coverage guarantee for SNAPS. Extensive experiments demonstrate the superiority of SNAPS, improving the efficiency of prediction sets and the singleton hit ratio while maintaining valid coverage.
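The split-conformal machinery the abstract builds on can be sketched generically (this omits SNAPS' similarity-based score aggregation entirely and shows only standard conformal prediction sets over softmax probabilities):

```python
import math

def conformal_threshold(cal_scores, alpha):
    # Split conformal: take the ceil((n+1)(1-alpha))-th smallest calibration
    # non-conformity score as the inclusion threshold q-hat.
    n = len(cal_scores)
    k = math.ceil((n + 1) * (1 - alpha))
    return sorted(cal_scores)[min(k, n) - 1]

def prediction_set(softmax_probs, qhat):
    # Include every class whose non-conformity (1 - probability) is within q-hat;
    # lower alpha (higher coverage) yields larger, less efficient sets.
    return [c for c, p in enumerate(softmax_probs) if 1.0 - p <= qhat]
```

SNAPS' contribution is replacing each node's raw non-conformity score with an aggregate over feature-similar and structurally adjacent nodes before this thresholding step.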

ECAI Conference 2023 Conference Paper

A Flexible Debiasing Framework for Fair Heterogeneous Information Network Embedding

  • Meng Cao 0004
  • Mingcai Chen
  • Jianqing Song
  • Chen-Xuan Fang
  • Chongjun Wang

Heterogeneous Information Networks (HINs) are prevalent in real-world systems. Recent advances in network embedding provide an effective way of encoding HINs into low-dimensional vectors. However, there is a growing concern that existing HIN embedding algorithms may suffer from the problem of generating biased representations, resulting in discrimination against certain demographic groups. In this paper, we propose a flexible debiasing framework for fair HIN embedding to address this issue. Specifically, we first formalize measurements and the definition of fairness in HIN embedding. Then, we propose a debiasing framework named FairHGNN, including a novel meta-path sampling method that focuses on mitigating the bias in random walks, and a fairness constraint with Wasserstein distance to alleviate the algorithmic bias in Graph Neural Networks (GNNs). Experimental results on real-world datasets validate the efficacy of FairHGNN in promoting fairness and maintaining good utility.

TMLR Journal 2023 Journal Article

Bidirectional View based Consistency Regularization for Semi-Supervised Domain Adaptation

  • Yuntao Du
  • Juan Jiang
  • Hongtao Luo
  • Haiyang Yang
  • MingCai Chen
  • Chongjun Wang

Distinguished from unsupervised domain adaptation (UDA), semi-supervised domain adaptation (SSDA) can additionally access a few labeled target samples during learning. Although remarkable progress has been achieved, target supervised information is easily overwhelmed by massive source supervised information, as there are many more labeled source samples than labeled target samples. In this work, we propose a novel method, BVCR, that better utilizes the supervised information via three schemes, i.e., modeling, exploration, and interaction. In the modeling scheme, BVCR models the source supervision and target supervision separately, to avoid the target supervised information being overwhelmed by the source supervised information and to better utilize the target supervision. Besides, as both kinds of supervised information naturally offer distinct views of the target domain, the exploration scheme performs intra-domain consistency regularization to better explore target information with bidirectional views. Moreover, as both views are complementary to each other, the interaction scheme introduces inter-domain consistency regularization to activate information interaction bidirectionally. Thus, the proposed method is elegantly symmetrical by design and easy to implement. Extensive experiments are conducted, and the results show the effectiveness of the proposed method.

IJCAI Conference 2023 Conference Paper

Exploring Leximin Principle for Fair Core-Selecting Combinatorial Auctions: Payment Rule Design and Implementation

  • Hao Cheng
  • Shufeng Kong
  • Yanchen Deng
  • Caihua Liu
  • Xiaohu Wu
  • Bo An
  • Chongjun Wang

Core-selecting combinatorial auctions (CAs) restrict the auction result in the core such that no coalitions could improve their utilities by engaging in collusion. The minimum-revenue-core (MRC) rule is a widely used core-selecting payment rule to maximize the total utilities of all bidders. However, the MRC rule can suffer from severe unfairness since it ignores individuals' utilities. To address this limitation, we propose to explore the leximin principle to achieve fairness in core-selecting CAs since the leximin principle prefers to maximize the utility of the worst-off; the resulting bidder-leximin-optimal (BLO) payment rule is then theoretically analyzed and an effective algorithm is further provided to compute the BLO outcome. Moreover, we conduct extensive experiments to show that our algorithm returns fairer utility distributions and is faster than existing algorithms of core-selecting payment rules.

AAAI Conference 2023 Conference Paper

READ: Aggregating Reconstruction Error into Out-of-Distribution Detection

  • Wenyu Jiang
  • Yuxin Ge
  • Hao Cheng
  • MingCai Chen
  • Shuai Feng
  • Chongjun Wang

Detecting out-of-distribution (OOD) samples is crucial to the safe deployment of a classifier in the real world. However, deep neural networks are known to be overconfident on abnormal data. Existing works design score functions by mining, from the classifier, inconsistencies between in-distribution (ID) and OOD data. In this paper, we further complement this inconsistency with reconstruction error, based on the assumption that an autoencoder trained on ID data cannot reconstruct OOD data as well as ID data. We propose a novel method, READ (Reconstruction Error Aggregated Detector), to unify the inconsistencies from the classifier and the autoencoder. Specifically, the reconstruction error of raw pixels is transformed into the latent space of the classifier. We show that the transformed reconstruction error bridges the semantic gap and inherits detection performance from the original. Moreover, we propose an adjustment strategy to alleviate the overconfidence problem of the autoencoder according to a fine-grained characterization of OOD data. Under the two scenarios of pre-training and retraining, we present two variants of our method: READ-MD (Mahalanobis Distance), based only on the pre-trained classifier, and READ-ED (Euclidean Distance), which retrains the classifier. Our methods do not require access to test-time OOD data for fine-tuning hyperparameters. Finally, we demonstrate the effectiveness of the proposed methods through extensive comparisons with state-of-the-art OOD detection algorithms. On a CIFAR-10 pre-trained WideResNet, our method reduces the average FPR@95TPR by up to 9.8% compared with previous state-of-the-art methods.
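The aggregation idea, combining classifier confidence with autoencoder reconstruction error into one detection score, can be sketched minimally (an assumption-laden toy: the weighting and the raw-pixel-to-latent transform of the actual method are not reproduced):

```python
def read_style_score(softmax_probs, recon_error, weight=1.0):
    # Higher score = more ID-like: a confident classifier prediction and a
    # low autoencoder reconstruction error both push the score up, so the
    # two inconsistency signals reinforce each other.
    return max(softmax_probs) - weight * recon_error

# A confident, well-reconstructed (ID-like) input vs. an uncertain,
# poorly reconstructed (OOD-like) one:
id_score = read_style_score([0.9, 0.1], 0.05)
ood_score = read_style_score([0.5, 0.5], 0.4)
```

Thresholding this score then separates ID from OOD; inputs scoring below the threshold are flagged as OOD.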

AAAI Conference 2023 Conference Paper

Two Wrongs Don’t Make a Right: Combating Confirmation Bias in Learning with Label Noise

  • MingCai Chen
  • Hao Cheng
  • Yuntao Du
  • Ming Xu
  • Wenyu Jiang
  • Chongjun Wang

Noisy labels damage the performance of deep networks. For robust learning, a prominent two-stage pipeline alternates between eliminating possible incorrect labels and semi-supervised training. However, discarding part of noisy labels could result in a loss of information, especially when the corruption has a dependency on data, e.g., class-dependent or instance-dependent. Moreover, from the training dynamics of a representative two-stage method DivideMix, we identify the domination of confirmation bias: pseudo-labels fail to correct a considerable amount of noisy labels, and consequently, the errors accumulate. To sufficiently exploit information from noisy labels and mitigate wrong corrections, we propose Robust Label Refurbishment (Robust LR)—a new hybrid method that integrates pseudo-labeling and confidence estimation techniques to refurbish noisy labels. We show that our method successfully alleviates the damage of both label noise and confirmation bias. As a result, it achieves state-of-the-art performance across datasets and noise types, namely CIFAR under different levels of synthetic noise and mini-WebVision and ANIMAL-10N with real-world noise.
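The label-refurbishment template the abstract refers to can be sketched as a convex combination (a generic illustration: the paper's confidence estimation is replaced here by a supplied scalar `clean_confidence`, an assumption of this sketch):

```python
def refurbish(one_hot_label, model_probs, clean_confidence):
    # Blend the (possibly noisy) given label with the model's prediction,
    # weighted by the estimated probability that the label is clean.
    # Confidently-clean labels are kept; suspect ones drift toward the
    # model's belief instead of being discarded outright.
    return [clean_confidence * y + (1 - clean_confidence) * p
            for y, p in zip(one_hot_label, model_probs)]

# A suspect label (low clean-confidence) is largely overridden by the model:
target = refurbish([1.0, 0.0], [0.2, 0.8], 0.25)
```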

JAAMAS Journal 2022 Journal Article

A Bayesian optimal social law synthesizing mechanism for strategical agents

  • Jun Wu
  • Jie Cao
  • Chongjun Wang

One of the effective and well-studied approaches for coordinating multiagent systems is to synthesize social laws which restrict the behavior of individual agents. We show that when rational behavior of the agents and private information are considered, the optimal social law synthesizing problem naturally evolves into a setting which can be handled by the framework of algorithmic mechanism design. We focus on the Bayesian case in this paper, that is, the probability distribution of each agent’s private cost is known by the public. In this case, our problem closely relates to path/spanning-tree auctions and Myerson’s optimal auction mechanism, but the optimization objective is new: we focus on profit maximization instead of payment maximization in a reverse auction, and as far as we know no existing mechanism can be directly applied in this setting. By studying this problem: we further extend the logic-based framework of the social law optimization problem to the strategic case, and show that it becomes a new problem of algorithmic mechanism design; we find a mechanism that is incentive compatible, individually rational, and maximizes the expected profit for all input cost profiles; however, we show that this mechanism is computationally intractable (FP^NP-complete); so we specify Computation Tree Logic semantics as a set of linear-integer constraints, design an Integer Linear Programming (ILP) based algorithm for computing the proposed mechanism, enabling it to be handled by current ILP solvers, and finally obtain a tractable 2-approximation mechanism.

AAAI Conference 2022 Conference Paper

Semi-supervised Learning with Multi-Head Co-Training

  • MingCai Chen
  • Yuntao Du
  • Yi Zhang
  • Shuwei Qian
  • Chongjun Wang

Co-training, extended from self-training, is one of the frameworks for semi-supervised learning. Without a natural split of features, single-view co-training works at the cost of training extra classifiers, and the algorithm must be delicately designed to prevent the individual classifiers from collapsing into each other. To remove these obstacles, which deter the adoption of single-view co-training, we present a simple and efficient algorithm, Multi-Head Co-Training. By integrating the base learners into a multi-head structure, the model adds only a minimal number of extra parameters. Every classification head in the unified model interacts with its peers through a “Weak and Strong Augmentation” strategy, in which diversity is naturally brought about by the strong data augmentation. Therefore, the proposed method facilitates single-view co-training by (1) promoting diversity implicitly and (2) requiring only a small extra computational overhead. The effectiveness of Multi-Head Co-Training is demonstrated in an empirical study on standard semi-supervised learning benchmarks.

AAAI Conference 2022 Conference Paper

Tailor Versatile Multi-Modal Learning for Multi-Label Emotion Recognition

  • Yi Zhang
  • Mingyuan Chen
  • Jundong Shen
  • Chongjun Wang

Multi-modal Multi-label Emotion Recognition (MMER) aims to identify various human emotions from heterogeneous visual, audio and text modalities. Previous methods mainly focus on projecting multiple modalities into a common latent space and learning an identical representation for all labels, which neglects the diversity of each modality and fails to capture richer semantic information for each label from different perspectives. Besides, associated relationships of modalities and labels have not been fully exploited. In this paper, we propose versaTile multi-modAl learning for multI-labeL emOtion Recognition (TAILOR), aiming to refine multi-modal representations and enhance the discriminative capacity of each label. Specifically, we design an adversarial multi-modal refinement module to sufficiently explore the commonality among different modalities and strengthen the diversity of each modality. To further exploit label-modal dependence, we devise a BERT-like cross-modal encoder to gradually fuse private and common modality representations in a granularity descent way, as well as a label-guided decoder to adaptively generate a tailored representation for each label with the guidance of label semantics. In addition, we conduct experiments on the benchmark MMER dataset CMU-MOSEI in both aligned and unaligned settings, which demonstrate the superiority of TAILOR over state-of-the-art methods.

NeurIPS Conference 2022 Conference Paper

Unsupervised Point Cloud Completion and Segmentation by Generative Adversarial Autoencoding Network

  • Changfeng Ma
  • Yang Yang
  • Jie Guo
  • Fei Pan
  • Chongjun Wang
  • Yanwen Guo

Most existing point cloud completion methods assume the input partial point cloud is clean, which is not the case in practice, and are generally based on supervised learning. In this paper, we present an unsupervised generative adversarial autoencoding network, named UGAAN, which completes partial point clouds contaminated by surroundings from real scenes and cuts out the object simultaneously, using only artificial CAD models as assistance. The generator of UGAAN learns to predict complete point clouds on real data from both the discriminator and the autoencoding process of artificial data. The latent codes from the generator are also fed to the discriminator, which makes the encoder extract only object features rather than noise. We also devise a refiner for generating a better complete cloud, with a segmentation module to separate the object from the background. We train UGAAN on one real scene dataset and evaluate it on two others. Extensive experiments and visualizations demonstrate the superiority, generalization ability and robustness of our method. Comparisons against the previous method show that our method achieves state-of-the-art performance on unsupervised point cloud completion and segmentation on real data.

AAAI Conference 2020 Conference Paper

A Multi-Unit Profit Competitive Mechanism for Cellular Traffic Offloading

  • Jun Wu
  • Yu Qiao
  • Lei Zhang
  • Chongjun Wang
  • Meilin Liu

Cellular traffic offloading is nowadays an important problem in mobile networking. We model it as a procurement problem where each agent sells multi-units of a homogeneous item with privately known capacity and unit cost, and the auctioneer’s demand valuation function is symmetric submodular. Based on the framework of random sampling and profit extraction, we aim to design a prior-free mechanism which guarantees a profit competitive to the omniscient single-price auction. However, the symmetric submodular demand valuation function and 2-parameter setting present new challenges. By adopting the highest feasible clear price, we successfully design a truthful profit extractor, and then we propose a mechanism which is proved to be truthful, individually rational and constant-factor competitive in a fixed market.

ECAI Conference 2020 Conference Paper

Common and Discriminative Semantic Pursuit for Multi-Modal Multi-Label Learning

  • Yi Zhang 0073
  • Jundong Shen
  • Zhecheng Zhang
  • Chongjun Wang

Multi-modal multi-label (MMML) learning provides an important framework to learn complex objects with diverse representations and annotations. Most existing multi-modal multi-label learning approaches focus on exploiting shared information of all modalities, but neglect specific information of each modality. Besides, how to effectively utilize relationship among modalities is also a challenging issue. In this paper, we propose a novel MMML learning approach called Common and Discriminative Semantic Pursuit (CoDiSP), which learns low-dimensional common representation with all modalities, and extracts discriminative information of each modality by enforcing orthogonal constraint. Meanwhile, the common representation is used as a new modality and added to the specific modal sequence. Furthermore, CoDiSP learns deep models with adaptive depth and exploits label correlations simultaneously based on the extracted modal sequence. Finally, extensive experiments on several benchmark MMML datasets show superior performance of CoDiSP compared with other state-of-the-art approaches.

JAAMAS Journal 2020 Journal Article

Fast core pricing algorithms for path auction

  • Hao Cheng
  • Wentao Zhang
  • Chongjun Wang

Path auctions are held in a graph, where each edge stands for a commodity and the weight of the edge represents the prime cost. Bidders own some edges and make bids for their edges. The auctioneer needs to purchase a sequence of edges to form a path between two specific vertices. Path auctions can be considered a kind of combinatorial reverse auction. Core-selecting mechanisms are prevalent for combinatorial auctions. However, pricing in core-selecting combinatorial auctions is computationally expensive, one important reason being the exponential number of core constraints; the same is true of path auctions. To solve this computational problem, we simplify the constraint set and obtain an optimal set with only polynomially many constraints. Based on our constraint set, we put forward two fast core pricing algorithms for computing the bidder-Pareto-optimal core outcome. Among all the algorithms, our new algorithms have remarkable runtime performance. Finally, we validate our algorithms on real-world datasets and obtain excellent results.

ECAI Conference 2020 Conference Paper

Homogeneous Online Transfer Learning with Online Distribution Discrepancy Minimization

  • Yuntao Du 0001
  • Zhiwen Tan
  • Qian Chen
  • Yi Zhang 0073
  • Chongjun Wang

Transfer learning has been demonstrated to be successful and essential in diverse applications, transferring knowledge from related but different source domains to the target domain. Online transfer learning (OTL) is a more challenging problem where the target data arrive in an online manner. Most OTL methods combine a source classifier and a target classifier directly by assigning a weight to each classifier, and adjust the weights constantly. However, these methods pay little attention to reducing the distribution discrepancy between domains. In this paper, we propose a novel online transfer learning method which seeks to find a new feature representation, so that the marginal and conditional distribution discrepancies can be reduced online simultaneously. We focus on online transfer learning with multiple source domains and use the Hedge strategy to leverage knowledge from the source domains. We analyze the theoretical properties of the proposed algorithm and provide an upper mistake bound. Comprehensive experiments on two real-world datasets show that our method outperforms state-of-the-art methods by a large margin.
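The Hedge strategy mentioned above is a standard multiplicative-weights update; a minimal sketch (the learning rate `eta` and the loss values are illustrative assumptions, not taken from the paper):

```python
import math

def hedge_update(weights, losses, eta=0.5):
    # One round of Hedge: down-weight each source classifier multiplicatively
    # by the exponential of its loss, then renormalize to a distribution, so
    # sources that keep mispredicting on the target stream lose influence.
    new = [w * math.exp(-eta * l) for w, l in zip(weights, losses)]
    total = sum(new)
    return [w / total for w in new]

# Two sources start equally weighted; the one that erred loses weight.
weights = hedge_update([0.5, 0.5], [1.0, 0.0])
```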

IJCAI Conference 2019 Conference Paper

Learn to Select via Hierarchical Gate Mechanism for Aspect-Based Sentiment Analysis

  • Xiangying Ran
  • Yuanyuan Pan
  • Wei Sun
  • Chongjun Wang

Aspect-based sentiment analysis (ABSA) is a fine-grained task. Recurrent Neural Network (RNN) model armed with attention mechanism seems a natural fit for this task, and actually it achieves the state-of-the-art performance recently. However, previous attention mechanisms proposed for ABSA may attend irrelevant words and thus downgrade the performance, especially when dealing with long and complex sentences with multiple aspects. In this paper, we propose a novel architecture named Hierarchical Gate Memory Network (HGMN) for ABSA: firstly, we employ the proposed hierarchical gate mechanism to learn to select the related part about the given aspect, which can keep the original sequence structure of sentence at the same time. After that, we apply Convolutional Neural Network (CNN) on the final aspect-specific memory. We conduct extensive experiments on the SemEval 2014 and Twitter dataset, and results demonstrate that our model outperforms attention based state-of-the-art baselines.

AAMAS Conference 2019 Conference Paper

Multi-unit Budget Feasible Mechanisms for Cellular Traffic Offloading

  • Jun Wu
  • Yuan Zhang
  • Yu Qiao
  • Lei Zhang
  • Chongjun Wang
  • Junyuan Xie

Cellular traffic offloading is nowadays an important problem in mobile networking. Since the offloading resource owners (agents) are self-interested and have private costs, it is highly challenging to design procurement mechanisms that motivate agents to reveal their true costs and achieve guaranteed performance under the constraint of a strict budget. In this paper, we model cellular traffic offloading as a multi-unit budget feasible procurement auction design problem with diminishing return valuations. We design a novel greedy-based randomized mechanism and prove it is budget-feasible, truthful, individually rational and a (3 + 2 ln N)-approximation, where N is the total number of available resource units. We also propose a deterministic mechanism which achieves a (2 + ln N + √(2 + 3 ln N + ln² N))-approximation. We prove that no budget-feasible and truthful mechanism can do better than a ln N-approximation in our setting, so our mechanism approaches the optimal up to a constant factor. In addition to solving the cellular traffic offloading problem, our work successfully extends the solvable valuation class of greedy-based multi-unit budget-feasible mechanisms with performance guarantees from concave-additive valuations to more general local diminishing return valuations.

AAAI Conference 2018 Conference Paper

Multi-Entity Aspect-Based Sentiment Analysis With Context, Entity and Aspect Memory

  • Jun Yang
  • Runqi Yang
  • Chongjun Wang
  • Junyuan Xie

Inspired by recent works in Aspect-Based Sentiment Analysis(ABSA) on product reviews and faced with more complex posts on social media platforms mentioning multiple entities as well as multiple aspects, we define a novel task called Multi-Entity Aspect-Based Sentiment Analysis (ME-ABSA). This task aims at fine-grained sentiment analysis of (entity, aspect) combinations, making the well-studied ABSA task a special case of it. To address the task, we propose an innovative method that models Context memory, Entity memory and Aspect memory, called CEA method. Our experimental results show that our CEA method achieves a significant gain over several baselines, including the state-of-the-art method for the ABSA task, and their enhanced versions, on datasets for ME-ABSA and ABSA tasks. The in-depth analysis illustrates the significant advantage of the CEA method over baseline methods for several hard-to-predict post types. Furthermore, we show that the CEA method is capable of generalizing to new (entity, aspect) combinations with little loss of accuracy. This observation indicates that data annotation in real applications can be largely simplified.

AAMAS Conference 2018 Conference Paper

Optimal Constraint Collection for Core-Selecting Path Mechanism

  • Hao Cheng
  • Lei Zhang
  • Yi Zhang
  • Jun Wu
  • Chongjun Wang

In path auctions, strategic bidders make bids for commodities. Each edge of the graph stands for a commodity and the weight on the edge represents the prime cost. The auctioneer needs to purchase a sequence of edges in order to get a path from one vertex to another at a low cost. Path auctions can be considered a kind of combinatorial reverse auction. Computing prices in core-selecting combinatorial auctions is a computationally hard problem, and the same is true in core-selecting path auctions. This problem can be solved by the core constraint generation (CCG) algorithm. However, we find that there are many redundant constraints and that the constraint collection can be made conciser in core-selecting path mechanisms. In this paper, 1) we put forward a new approach to obtain the constraint collection, reducing the number of constraints from exponential O(2^n) to polynomial O(n^2), where n is the network diameter; 2) we prove that the new constraint collection is not only equivalent to the original collection but also has no redundant constraints in the worst case; 3) we validate our approach on real-world datasets and obtain excellent results. Furthermore, we provide new insights for thinking about core-selecting mechanisms in combinatorial auctions.

AAMAS Conference 2017 Conference Paper

Mechanism Design for Social Law Synthesis under Incomplete Information

  • Jun Wu
  • Lei Zhang
  • Chongjun Wang
  • Junyuan Xie

For the social law synthesis problem, when the agents are rational in the game-theoretic sense and hold information we need as private information, the problem naturally evolves into a setting that is perfectly addressed by the framework of algorithmic mechanism design. In this strategic setting, we are required not only to find a feasible social law for the objective, but also to formulate the right payments to the agents to induce incentive compatibility and individual rationality. We design a mechanism for this setting, prove that it satisfies all the required formal properties, and characterize the conditions for the existence of feasible mechanisms. Moreover, we derive an upper bound on the total payment of the proposed mechanism, and show that it can be high.

AAMAS Conference 2017 Conference Paper

Synthesizing Optimal Social Laws for Strategical Agents via Bayesian Mechanism Design

  • Jun Wu
  • Lei Zhang
  • Chongjun Wang
  • Junyuan Xie

When rational behavior of the agents and private information are considered, the optimal social law synthesis problem naturally evolves into a setting that can be handled by the framework of algorithmic mechanism design. We focus on the Bayesian case in this paper, that is, the probability distribution of each agent's cost is known. In this case our problem closely relates to path/spanning-tree auctions and Myerson's optimal auction mechanism, but the optimization objective is new: we focus on profit maximization instead of payment maximization. By studying this problem, we further extend the logic-based framework of the social law optimization problem to the strategic case and show that it becomes a new problem in algorithmic mechanism design; we construct a mechanism that is incentive compatible, individually rational, and maximizes the expected profit for all input cost profiles; however, this mechanism is computationally intractable, so we finally present a tractable constant-factor approximation mechanism.

ECAI Conference 2016 Conference Paper

False-Name-Proof Mechanisms for Path Auctions in Social Networks

  • Lei Zhang 0086
  • Haibin Chen
  • Jun Wu 0015
  • Chongjun Wang
  • Junyuan Xie

We study path auction mechanisms for buying a path between two given nodes in a social network, where edges are owned by strategic agents. The well-known VCG mechanism is the unique solution that guarantees both truthfulness and efficiency. However, in social network environments, the mechanism is vulnerable to false-name manipulations, where agents can profit from placing multiple bids under fictitious names. Moreover, the VCG mechanism often leads to high overpayment. In this paper, we present core-selecting path mechanisms that are robust against false-name bids and address the overpayment problem. Specifically, we provide a new formulation for the core, which greatly reduces the number of core constraints. Based on the new formulation, we present a Vickrey-nearest pricing rule, which finds the core payment profile that minimizes the L∞ distance to the VCG payment profile. We prove that the Vickrey-nearest core payments can be computed in polynomial time by solving linear programs. Our experimental results on real network datasets and a reported-cost dataset show that our Vickrey-nearest core-selecting path mechanism reduces VCG's overpayment by about 20%.
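
The false-name vulnerability of VCG that this abstract addresses can be illustrated with a small sketch (the graph, costs, and helper functions are assumptions for illustration; this is not the paper's core-selecting mechanism): an agent who splits one cheap edge into two fictitious edges in series collects a larger total VCG payment.

```python
def cheapest(edges, src, dst):
    """Cost of the cheapest simple src->dst path (brute-force DFS).
    edges: dict mapping directed (u, v) pairs to costs."""
    best = float("inf")
    stack = [(src, 0, {src})]
    while stack:
        node, cost, seen = stack.pop()
        if node == dst:
            best = min(best, cost)
            continue
        for (u, v), c in edges.items():
            if u == node and v not in seen:
                stack.append((v, cost + c, seen | {v}))
    return best

def vcg_payment(edges, edge, src, dst):
    """VCG payment to one winning edge: cheapest path cost with the edge
    removed, minus the rest of the winning path's cost.
    Assumes `edge` lies on the cheapest path and an alternative exists."""
    rest = cheapest(edges, src, dst) - edges[edge]
    alt = cheapest({k: c for k, c in edges.items() if k != edge}, src, dst)
    return alt - rest

# Truthful: one agent owns the direct edge of true cost 2; rival path costs 7.
honest = {("s", "t"): 2, ("s", "x"): 3, ("x", "t"): 4}
# The agent is paid 7, for a profit of 5.

# False names: the same agent re-registers the edge as two fictitious
# series edges of cost 1 each. Each now earns payment 6, total 12,
# so the agent's profit rises from 5 to 10.
split = {("s", "m"): 1, ("m", "t"): 1, ("s", "x"): 3, ("x", "t"): 4}
```

Because the payment to each fictitious edge is benchmarked against the same expensive alternative path, series-splitting inflates the total transfer, which is exactly the manipulation that false-name-proof core-selecting mechanisms rule out.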

AAMAS Conference 2013 Conference Paper

On the Complexity of Undominated Core and Farsighted Solution Concepts in Coalitional Games

  • Yusen Zhan
  • Jun Wu
  • Chongjun Wang
  • Meilin Liu
  • Junyuan Xie

In this paper, we study the computational complexity of solution concepts in the context of coalitional games. First, we distinguish two different kinds of core, the undominated core and the excess core, and investigate the difference and relationship between them. Second, we thoroughly investigate the computational complexity of the undominated core and of three farsighted solution concepts: the farsighted core, the farsighted stable set, and the largest consistent set.