Arrow Research search

Author name cluster

Wenjie Wang

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

26 papers
2 author rows

Possible papers

26

AAAI Conference 2026 Conference Paper

Beyond Missing Data Imputation: Information-Theoretic Coupling of Missingness and Class Imbalance for Optimal Irregular Time Series Classification

  • Xin Qin
  • Mengna Liu
  • Wenjie Wang
  • Shuxin Li
  • Tianjiao Li
  • Xiufeng Liu
  • Xu Cheng

Irregular time series (IRTS) are prevalent in real-world applications, where uneven sampling and missing data pose fundamental challenges to deep learning-based feature modeling. Although existing methods attempt to retain timestamp information, they often overlook the structured patterns embedded within the missingness itself, and tend to perform poorly when confronted with class imbalance exacerbated by data incompleteness. Specifically, temporal irregularity hinders the modeling of long-range dependencies and local patterns, while sparse observations limit representational capacity, disproportionately impairing minority classes and leading to severe classification bias. To address these deeply coupled challenges, we propose SPECTRA (Structured Pattern and Enriched Context-aware Temporal Representation Architecture), a unified framework for robust IRTS classification. SPECTRA introduces a frequency-guided observation encoder that reconstructs temporal dependencies in a stable manner, mitigating spectral distortion and information corruption. Complementarily, a missingness pattern encoder explicitly captures the dynamic evolution of missing data and leverages it as a discriminative signal. In addition, a prototype-constrained classification paradigm directly optimizes the geometric structure of the feature space, enhancing intra-class compactness and alleviating generalization bottlenecks caused by class imbalance. Extensive experiments on three public IRTS datasets—P12, P19, and PAM—demonstrate the superior performance of SPECTRA under both missing and imbalanced conditions.
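
The prototype-constrained idea, classifying by nearest class prototype while penalizing each sample's distance to its own-class prototype, can be sketched generically. This is a hypothetical minimal example in the spirit of the abstract, not the SPECTRA implementation; all names and shapes are illustrative:

```python
import numpy as np

def prototype_loss(features, labels, prototypes):
    # mean squared distance from each sample to its own class prototype,
    # encouraging intra-class compactness in the feature space
    return float(np.sum((features - prototypes[labels]) ** 2, axis=1).mean())

def predict_by_prototype(features, prototypes):
    # assign each sample to the class of its nearest prototype
    dists = ((features[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=-1)
    return dists.argmin(axis=1)
```

Directly optimizing this geometry, rather than a plain softmax, is what the abstract credits with alleviating class-imbalance bias.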

AAAI Conference 2026 Conference Paper

Navigating Through Paper Flood: Advancing LLM-Based Paper Evaluation Through Domain-Aware Retrieval and Latent Reasoning

  • Wuqiang Zheng
  • Yiyan Xu
  • Xinyu Lin
  • Chongming Gao
  • Wenjie Wang
  • Fuli Feng

With the rapid and continuous increase in academic publications, identifying high-quality research has become an increasingly pressing challenge. While recent methods leveraging Large Language Models (LLMs) for automated paper evaluation have shown great promise, they are often constrained by outdated domain knowledge and limited reasoning capabilities. In this work, we present PaperEval, a novel LLM-based framework for automated paper evaluation that addresses these limitations through two key components: 1) a domain-aware paper retrieval module that retrieves relevant concurrent work to support contextualized assessments of novelty and contributions, and 2) a latent reasoning mechanism that enables deep understanding of complex motivations and methodologies, along with comprehensive comparison against concurrently related work, to support more accurate and reliable evaluation. To guide the reasoning process, we introduce a progressive ranking optimization strategy that encourages the LLM to iteratively refine its predictions with an emphasis on relative comparison. Experiments on two datasets demonstrate that PaperEval consistently outperforms existing methods in both academic impact and paper quality evaluation. In addition, we deploy PaperEval in a real-world paper recommendation system for filtering high-quality papers, which has gained strong engagement on social media---amassing over 8,000 subscribers and attracting over 10,000 views for many filtered high-quality papers---demonstrating the practical effectiveness of PaperEval.

AAAI Conference 2026 Conference Paper

OmniBench: A Comprehensive Benchmark Integrating Real-World, Time-sensitive, and Multi-Hop Questions with a Multi-Dimensional Hybrid Evaluation Framework

  • Wenjie Wang
  • Yufeng Jiang
  • Ge Sun
  • Chenghang Dong
  • Zheng Jun
  • Li Mengjie
  • Lixin Chen
  • Huan Wang

Recently, with the increasing capabilities of Large Language Models (LLMs), AI applications have gradually emerged to solve various problems in people's daily lives, so accurately measuring their performance and reliability is paramount. However, existing benchmarks predominantly rely on closed-ended, multiple-choice or short-answer question formats. While useful for assessment, these formats exhibit a significant gap compared to the diverse and open-ended nature of questions posed by real-world users. To bridge this gap, we introduce OmniBench, a comprehensive open-domain benchmark. OmniBench is uniquely composed of authentic, user-generated questions harvested from real-world interactions on various websites and applications, covering 16 rigorously defined knowledge domains and 5 crucial user intents derived from a large-scale corpus analysis. Crucially, we propose three automated data construction pipelines that enable the continuous and periodic updating of the benchmark dataset. This approach not only ensures that the questions keep up with current events, but also effectively mitigates the critical issue of data contamination prevalent in static benchmarks. Moreover, a multi-dimensional hybrid evaluation framework named OmniEval is proposed for evaluating the responses. This framework combines diverse metrics and evaluation methods to capture nuanced aspects of answer performance. Extensive validation demonstrates that this evaluation framework exhibits strong alignment with human judgments, ensuring the reliability of the benchmark results.

NeurIPS Conference 2025 Conference Paper

Auto-Search and Refinement: An Automated Framework for Gender Bias Mitigation in Large Language Models

  • Yue Xu
  • Chengyan Fu
  • Li Xiong
  • Sibei Yang
  • Wenjie Wang

Pre-training large language models (LLMs) on vast text corpora enhances natural language processing capabilities but risks encoding social biases, particularly gender bias. While parameter-modification methods like fine-tuning mitigate bias, they are resource-intensive, unsuitable for closed-source models, and lack adaptability to evolving societal norms. Instruction-based approaches offer flexibility but often compromise general performance on normal tasks. To address these limitations, we propose $\textit{FaIRMaker}$, an automated and model-independent framework that employs an $\textbf{auto-search and refinement}$ paradigm to adaptively generate Fairwords, which act as instructions to reduce gender bias and enhance response quality. $\textit{FaIRMaker}$ enhances debiasing capacity by enlarging the Fairwords search space while preserving utility, and it remains applicable to closed-source models by training a sequence-to-sequence model that adaptively refines Fairwords into effective debiasing instructions for gender-related queries and into performance-boosting prompts for neutral inputs. Extensive experiments demonstrate that $\textit{FaIRMaker}$ effectively mitigates gender bias while preserving task integrity and ensuring compatibility with both open- and closed-source LLMs.

AAAI Conference 2025 Conference Paper

CrAM: Credibility-Aware Attention Modification in LLMs for Combating Misinformation in RAG

  • Boyi Deng
  • Wenjie Wang
  • Fengbin Zhu
  • Qifan Wang
  • Fuli Feng

Retrieval-Augmented Generation (RAG) can alleviate hallucinations of Large Language Models (LLMs) by referencing external documents. However, the misinformation in external documents may mislead LLMs' generation. To address this issue, we explore the task of "credibility-aware RAG", in which LLMs automatically adjust the influence of retrieved documents based on their credibility scores to counteract misinformation. To this end, we introduce a plug-and-play method named Credibility-aware Attention Modification (CrAM). CrAM identifies influential attention heads in LLMs and adjusts their attention weights based on the credibility of the documents, thereby reducing the impact of low-credibility documents. Experiments on Natural Questions and TriviaQA using Llama2-13B, Llama3-8B, and Qwen1.5-7B show that CrAM improves the RAG performance of LLMs against misinformation pollution by over 20%, even surpassing supervised fine-tuning methods.
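
The core operation of credibility-aware attention modification can be sketched in a few lines: scale each token's attention weight by the credibility of the document it came from, then renormalize. This simplified single-head, post-softmax sketch is illustrative and not the CrAM implementation:

```python
import numpy as np

def credibility_adjusted_attention(attn, doc_ids, credibility):
    # attn: softmax attention weights of one head over retrieved tokens
    # doc_ids: index of the retrieved document each token belongs to
    # credibility: per-document credibility scores in [0, 1]
    scaled = attn * credibility[doc_ids]
    return scaled / scaled.sum()  # renormalize to a distribution
```

Tokens from low-credibility documents thus receive proportionally less attention mass, which is the mechanism the abstract describes for suppressing misinformation.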

NeurIPS Conference 2025 Conference Paper

LoTA-QAF: Lossless Ternary Adaptation for Quantization-Aware Fine-Tuning

  • Junyu Chen
  • Junzhuo Li
  • Zhen Peng
  • Wenjie Wang
  • Yuxiang Ren
  • Long Shi
  • Xuming Hu

Quantization and fine-tuning are crucial for deploying large language models (LLMs) on resource-constrained edge devices. However, fine-tuning quantized models presents significant challenges, primarily stemming from: First, the mismatch in data types between the low-precision quantized weights (e.g., 4-bit) and the high-precision adaptation weights (e.g., 16-bit). This mismatch limits the computational efficiency advantage offered by quantized weights during inference. Second, potential accuracy degradation when merging these high-precision adaptation weights into the low-precision quantized weights, as the adaptation weights often necessitate approximation or truncation. Third, as far as we know, no existing methods support the lossless merging of adaptation while adjusting all quantized weights. To address these challenges, we introduce lossless ternary adaptation for quantization-aware fine-tuning (LoTA-QAF). This is a novel fine-tuning method specifically designed for quantized LLMs, enabling the lossless merging of ternary adaptation weights into quantized weights and the adjustment of all quantized weights. LoTA-QAF operates through a combination of: i) A custom-designed ternary adaptation (TA) that aligns ternary weights with the quantization grid and uses these ternary weights to adjust quantized weights. ii) A TA-based mechanism that enables the lossless merging of adaptation weights. iii) Ternary signed gradient descent (t-SignSGD) for updating the TA weights. We apply LoTA-QAF to the Llama-3.1/3.3 and Qwen-2.5 model families and validate its effectiveness on several downstream tasks. On the MMLU benchmark, our method effectively recovers performance for quantized models, surpassing 16-bit LoRA by up to 5.14%. For task-specific fine-tuning, 16-bit LoRA achieves superior results, but LoTA-QAF still outperforms other methods. Code is available at github.com/KingdalfGoodman/LoTA-QAF.
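
The lossless-merge idea can be illustrated with a minimal sketch: because the ternary adaptation lives on the same integer grid as the quantized weights, merging needs no approximation or truncation. The function name and the 4-bit grid below are illustrative assumptions, not the LoTA-QAF code:

```python
import numpy as np

def merge_ternary_adaptation(q_idx, ternary, n_levels=16):
    # q_idx: integer quantization indices (e.g. 4-bit values in 0..15)
    # ternary: adaptation weights restricted to {-1, 0, +1} on the same grid
    assert set(np.unique(ternary)) <= {-1, 0, 1}
    # the merged weights remain exactly on the quantization grid, so the
    # merge is lossless in the sense the abstract describes
    return np.clip(q_idx + ternary, 0, n_levels - 1)
```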

AAAI Conference 2025 Conference Paper

MMJ-Bench: A Comprehensive Study on Jailbreak Attacks and Defenses for Vision Language Models

  • Fenghua Weng
  • Yue Xu
  • Chengyan Fu
  • Wenjie Wang

As deep learning advances, Large Language Models (LLMs) and their multimodal counterparts, Vision-Language Models (VLMs), have shown exceptional performance in many real-world tasks. However, VLMs face significant security challenges, such as jailbreak attacks, where attackers attempt to bypass the model’s safety alignment to elicit harmful responses. The threat of jailbreak attacks on VLMs arises from both the inherent vulnerabilities of LLMs and the multiple information channels that VLMs process. While various attacks and defenses have been proposed, there is a notable gap in unified and comprehensive evaluations: each method is evaluated on different datasets and metrics, making it impossible to compare the effectiveness of each method. To address this gap, we introduce MMJ-Bench, a unified pipeline for evaluating jailbreak attacks and defense techniques for VLMs. Through extensive experiments, we assess the effectiveness of various attack methods against SoTA VLMs and evaluate the impact of defense mechanisms on both defense effectiveness and model utility for normal tasks. Our comprehensive evaluation contributes to the field by offering a unified and systematic evaluation framework and the first publicly available benchmark for VLM jailbreak research. We also present several insightful findings that highlight directions for future studies.

AAAI Conference 2025 Conference Paper

Optimize Incompatible Parameters Through Compatibility-aware Knowledge Integration

  • Zheqi Lv
  • Keming Ye
  • Zishu Wei
  • Qi Tian
  • Shengyu Zhang
  • Wenqiao Zhang
  • Wenjie Wang
  • Kun Kuang

Deep neural networks have become foundational to advancements in multiple domains, including recommendation systems, natural language processing, and so on. Despite their successes, these models often contain incompatible parameters that can be underutilized or detrimental to model performance, particularly when faced with specific, varying data distributions. Existing research excels in removing such parameters or merging the outputs of multiple different pretrained models. However, the former focuses on efficiency rather than performance, while the latter requires several times more computing and storage resources to support inference. In this paper, we aim to explicitly improve these incompatible parameters by leveraging the complementary strengths of different models, thereby directly enhancing the models without any additional parameters. Specifically, we propose Compatibility-aware Knowledge Integration (CKI), which consists of Parameter Compatibility Assessment and Parameter Splicing: the former evaluates the knowledge content of multiple models, while the latter integrates that knowledge into one model. The integrated model can be used directly for inference or for further fine-tuning. Extensive experiments on various recommendation and language datasets show that CKI can effectively optimize incompatible parameters under multiple tasks and settings to break through the training limit of the original model without increasing the inference cost.
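
Parameter splicing can be pictured as a soft per-parameter blend weighted by compatibility scores. The following generic sketch is hypothetical (the scoring and blending here are illustrative, not the CKI assessment procedure):

```python
import numpy as np

def splice_parameters(theta_a, theta_b, compat_a, compat_b):
    # blend two models' parameters element-wise; whichever model is judged
    # more "compatible" for a given parameter contributes more to the merge
    w = np.exp(compat_a) / (np.exp(compat_a) + np.exp(compat_b))
    return w * theta_a + (1.0 - w) * theta_b
```

The merged tensor has the same shape as either input, so it adds no parameters at inference time, matching the abstract's constraint.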

NeurIPS Conference 2025 Conference Paper

Practical Kernel Selection for Kernel-based Conditional Independence Test

  • Wenjie Wang
  • Mingming Gong
  • Biwei Huang
  • James Bailey
  • Bo Han
  • Kun Zhang
  • Feng Liu

Conditional independence (CI) testing is a fundamental yet challenging task in modern statistics and machine learning. One pivotal class of methods for assessing conditional independence encompasses kernel-based approaches, known for assessing CI by detecting general conditional dependence without imposing strict assumptions on relationships or data distributions. As with any method utilizing kernels, selecting appropriate kernels is crucial for precise identification. However, it remains underexplored in kernel-based CI methods, where the kernels are often determined manually or heuristically. In this paper, we analyze and propose a kernel parameter selection approach for the kernel-based conditional independence test (KCI). The kernel parameters are selected based on the ratio of the statistic to the asymptotic variance, which approximates the test power for the given parameters at large sample sizes. The search procedure is grid-based, allowing for parallelization with manageable additional computation time. We theoretically demonstrate the consistency of the proposed criterion and conduct extensive experiments on both synthetic and real data to show the effectiveness of our method.
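
The selection criterion, maximizing the ratio of the statistic to (the square root of) its asymptotic variance over a parameter grid, can be sketched generically. The `stat_and_var` interface below is a hypothetical placeholder for the KCI statistic and its variance estimate:

```python
import numpy as np

def select_kernel_param(stat_and_var, grid):
    # pick the kernel parameter maximizing statistic / sqrt(variance),
    # a large-sample proxy for test power; each grid point is independent,
    # so the loop is trivially parallelizable
    best_param, best_ratio = None, -np.inf
    for param in grid:
        stat, var = stat_and_var(param)
        ratio = stat / np.sqrt(var)
        if ratio > best_ratio:
            best_param, best_ratio = param, ratio
    return best_param, best_ratio
```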

AAAI Conference 2025 Conference Paper

Pre-trained Behavioral Model for Malicious User Prediction on Social Platform

  • Meng Jiang
  • Wenjie Wang
  • Shaofeng Hu
  • Kaishen Ou
  • Zhenjing Zheng
  • Fuli Feng

The proliferation of malicious users on social platforms poses significant financial and psychological threats, with activities ranging from scams to the dissemination of illicit content. Existing malicious user prediction comprises supervised and self-supervised learning methods. However, the former relies on extensive labeled malicious users for training, while the latter typically focuses on one form of malicious activity and depends heavily on manually crafted rules and features during pre-training. Moreover, existing pre-training methods fail to effectively capture the crucial repetitive and sporadic behavior patterns of malicious users. To address these limitations, we propose a Malicious User Behavior Pre-training framework (MaP) to build pre-trained behavior models. MaP integrates malicious pattern recognition with behavior consistency augmentation and local disruption augmentation strategies for contrastive learning to capture repetitive and sporadic malicious patterns, respectively. We instantiate MaP on a billion-level behavior pre-training scenario within an industry context. Both online and offline evaluations validate the superior performance of MaP in malicious user detection and classification.

NeurIPS Conference 2025 Conference Paper

R$^2$ec: Towards Large Recommender Models with Reasoning

  • Runyang You
  • Yongqi Li
  • Xinyu Lin
  • Xin Zhang
  • Wenjie Wang
  • Wenjie Li
  • Liqiang Nie

Large recommender models have extended LLMs as powerful recommenders via encoding or item generation, and recent breakthroughs in LLM reasoning synchronously motivate the exploration of reasoning in recommendation. In this work, we propose R$^2$ec, a unified large recommender model with intrinsic reasoning capability. R$^2$ec introduces a dual-head architecture that supports both reasoning chain generation and efficient item prediction in a single model, significantly reducing inference latency. To overcome the lack of annotated reasoning data, we design RecPO, a reinforcement learning framework that optimizes reasoning and recommendation jointly with a novel fused reward mechanism. Extensive experiments on three datasets demonstrate that R$^2$ec outperforms traditional, LLM-based, and reasoning-augmented recommender baselines, while further analyses validate its competitive efficiency among conventional LLM-based recommender baselines and strong adaptability to diverse recommendation scenarios. Code and checkpoints available at https://github.com/YRYangang/RRec.

NeurIPS Conference 2024 Conference Paper

FOOGD: Federated Collaboration for Both Out-of-distribution Generalization and Detection

  • Xinting Liao
  • Weiming Liu
  • Pengyang Zhou
  • Fengyuan Yu
  • Jiahe Xu
  • Jun Wang
  • Wenjie Wang
  • Chaochao Chen

Federated learning (FL) is a promising machine learning paradigm that collaborates with client models to capture global knowledge. However, deploying FL models in real-world scenarios remains unreliable due to the coexistence of in-distribution data and unexpected out-of-distribution (OOD) data, such as covariate-shift and semantic-shift data. Current FL research typically addresses either covariate-shift data through OOD generalization or semantic-shift data via OOD detection, overlooking the simultaneous occurrence of various OOD shifts. In this work, we propose FOOGD, a method that estimates the probability density of each client and obtains a reliable global distribution as guidance for the subsequent FL process. Firstly, SM3D in FOOGD estimates a score model for arbitrary distributions without prior constraints and powerfully detects semantic-shift data. Then SAG in FOOGD provides invariant yet diverse knowledge for both local covariate-shift generalization and client performance generalization. In empirical validations, FOOGD offers three main advantages: (1) reliably estimating non-normalized decentralized distributions, (2) detecting semantic-shift data via score values, and (3) generalizing to covariate-shift data by regularizing the feature extractor. The project is open-sourced at https://github.com/XeniaLLL/FOOGD-main.git.

AAAI Conference 2024 Conference Paper

GOODAT: Towards Test-Time Graph Out-of-Distribution Detection

  • Luzhi Wang
  • Dongxiao He
  • He Zhang
  • Yixin Liu
  • Wenjie Wang
  • Shirui Pan
  • Di Jin
  • Tat-Seng Chua

Graph neural networks (GNNs) have found widespread application in modeling graph data across diverse domains. While GNNs excel in scenarios where the testing data shares the distribution of their training counterparts (in distribution, ID), they often exhibit incorrect predictions when confronted with samples from an unfamiliar distribution (out-of-distribution, OOD). To identify and reject OOD samples with GNNs, recent studies have explored graph OOD detection, often focusing on training a specific model or modifying the data on top of a well-trained GNN. Despite their effectiveness, these methods come with heavy training resources and costs, as they need to optimize the GNN-based models on training data. Moreover, their reliance on modifying the original GNNs and accessing training data further restricts their universality. To this end, this paper introduces a method to detect Graph Out-of-Distribution At Test-time (namely GOODAT), a data-centric, unsupervised, and plug-and-play solution that operates independently of training data and modifications of GNN architecture. With a lightweight graph masker, GOODAT can learn informative subgraphs from test samples, enabling the capture of distinct graph patterns between OOD and ID samples. To optimize the graph masker, we meticulously design three unsupervised objective functions based on the graph information bottleneck principle, motivating the masker to capture compact yet informative subgraphs for OOD detection. Comprehensive evaluations confirm that our GOODAT method outperforms state-of-the-art benchmarks across a variety of real-world datasets.

AAAI Conference 2024 Conference Paper

IGAMT: Privacy-Preserving Electronic Health Record Synthesization with Heterogeneity and Irregularity

  • Wenjie Wang
  • Pengfei Tang
  • Jian Lou
  • Yuanming Shao
  • Lance Waller
  • Yi-an Ko
  • Li Xiong

Integrating electronic health records (EHR) into machine learning-driven clinical research and hospital applications is important, as it harnesses extensive and high-quality patient data to enhance outcome predictions and treatment personalization. Nonetheless, due to privacy and security concerns, the secondary use of EHR data is tightly governed and regulated, primarily limited to research purposes, thereby constraining researchers' access to EHR data. Generating synthetic EHR data with deep learning methods is a viable and promising approach to mitigate privacy concerns, offering not only a supplementary resource for downstream applications but also sidestepping the confidentiality risks associated with real patient data. While prior efforts have concentrated on EHR data synthesis, significant challenges persist in the domain of generating synthetic EHR data: balancing the heterogeneity of real EHR including temporal and non-temporal features, addressing the missing values and irregular measures, and ensuring the privacy of the real data used for model training. Existing works in this domain only focused on solving one or two of the aforementioned challenges. In this work, we propose IGAMT, an innovative framework to generate privacy-preserving synthetic EHR data that not only maintains high quality with heterogeneous features, missing values, and irregular measures but also balances the privacy-utility trade-off. Extensive experiments prove that IGAMT significantly outperforms baseline architectures in terms of visual resemblance and achieves comparable performance in downstream applications. Ablation case studies also prove the effectiveness of the techniques applied in IGAMT.

ICML Conference 2024 Conference Paper

Optimal Kernel Choice for Score Function-based Causal Discovery

  • Wenjie Wang
  • Biwei Huang
  • Feng Liu 0003
  • Xinge You
  • Tongliang Liu
  • Kun Zhang 0001
  • Mingming Gong

Score-based methods have demonstrated their effectiveness in discovering causal relationships by scoring different causal structures based on their goodness of fit to the data. Recently, Huang et al. proposed a generalized score function that can handle general data distributions and causal relationships by modeling the relations in reproducing kernel Hilbert space (RKHS). The selection of an appropriate kernel within this score function is crucial for accurately characterizing causal relationships and ensuring precise causal discovery. However, the current method involves manual heuristic selection of kernel parameters, making the process tedious and less likely to ensure optimality. In this paper, we propose a kernel selection method within the generalized score function that automatically selects the optimal kernel that best fits the data. Specifically, we model the generative process of the variables involved in each step of the causal graph search procedure as a mixture of independent noise variables. Based on this model, we derive an automatic kernel selection method by maximizing the marginal likelihood of the variables involved in each search step. We conduct experiments on both synthetic data and real-world benchmarks, and the results demonstrate that our proposed method outperforms heuristic kernel selection methods.

AAAI Conference 2024 Conference Paper

Temporally and Distributionally Robust Optimization for Cold-Start Recommendation

  • Xinyu Lin
  • Wenjie Wang
  • Jujia Zhao
  • Yongqi Li
  • Fuli Feng
  • Tat-Seng Chua

Collaborative Filtering (CF) recommender models highly depend on user-item interactions to learn CF representations, thus falling short of recommending cold-start items. To address this issue, prior studies mainly introduce item features (e.g., thumbnails) for cold-start item recommendation. They learn a feature extractor on warm-start items to align feature representations with interactions, and then leverage the feature extractor to extract the feature representations of cold-start items for interaction prediction. Unfortunately, the features of cold-start items, especially the popular ones, tend to diverge from those of warm-start ones due to temporal feature shifts, preventing the feature extractor from accurately learning feature representations of cold-start items. To alleviate the impact of temporal feature shifts, we consider using Distributionally Robust Optimization (DRO) to enhance the generalization ability of the feature extractor. Nonetheless, existing DRO methods face an inconsistency issue: the worst-case warm-start items emphasized during DRO training might not align well with the cold-start item distribution. To capture the temporal feature shifts and combat this inconsistency issue, we propose a novel temporal DRO with new optimization objectives, namely, 1) to integrate a worst-case factor to improve the worst-case performance, and 2) to devise a shifting factor to capture the shifting trend of item features and enhance the optimization of the potentially popular groups in cold-start items. Substantial experiments on three real-world datasets validate the superiority of our temporal DRO in enhancing the generalization ability of cold-start recommender models.
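
A generic distributionally robust reweighting step (in the spirit of standard group DRO; the paper's temporal DRO adds a shifting factor not modeled here) shows how worst-case groups are upweighted during training:

```python
import numpy as np

def dro_group_weights(weights, group_losses, step=0.5):
    # exponentiated-gradient update: groups with higher loss receive
    # exponentially more weight, emphasizing worst-case performance
    w = weights * np.exp(step * np.asarray(group_losses))
    return w / w.sum()
```

The per-group training loss is then averaged under these weights, so optimization focuses on the groups the model currently handles worst.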

IJCAI Conference 2021 Conference Paper

Norm-guided Adaptive Visual Embedding for Zero-Shot Sketch-Based Image Retrieval

  • Wenjie Wang
  • Yufeng Shi
  • Shiming Chen
  • Qinmu Peng
  • Feng Zheng
  • Xinge You

Zero-shot sketch-based image retrieval (ZS-SBIR), which aims to retrieve photos with sketches under the zero-shot scenario, has shown extraordinary promise in real-world applications. Most existing methods leverage language models to generate class-prototypes and use them to arrange the locations of all categories in the common space for photos and sketches. Although great progress has been made, few of them consider whether such pre-defined prototypes are necessary for ZS-SBIR, given that the locations of unseen-class samples in the embedding space are actually determined by visual appearance, so a purely visual embedding performs better. To this end, we propose a novel Norm-guided Adaptive Visual Embedding (NAVE) model for adaptively building the common space based on visual similarity instead of language-based pre-defined prototypes. To further enhance the representation quality of unseen classes for both the photo and sketch modalities, a modality norm discrepancy and a noisy label regularizer are jointly employed to measure and repair the modality bias of the learned common embedding. Experiments on two challenging datasets demonstrate the superiority of our NAVE over state-of-the-art competitors.

ECAI Conference 2016 Conference Paper

An Intelligent System for Personalized Conference Event Recommendation and Scheduling

  • Aldy Gunawan
  • Hoong Chuin Lau
  • Pradeep Varakantham
  • Wenjie Wang

Many conference mobile apps today lack the intelligent feature to automatically generate optimal schedules based on delegates' preferences. This entails two major challenges: (a) identifying the preferences of users; and (b) given the preferences, generating a schedule that optimizes them. In this paper, we specifically focus on academic conferences, where users are prompted to input their preferred keywords. Our key contribution is an integrated conference scheduling agent that automatically recognizes user preferences based on keywords, provides a list of recommended talks, and optimizes the user's schedule based on these preferences. To demonstrate the utility of our integrated conference scheduling agent, we first demonstrated the app at the International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS 2015) and conducted a survey to collect data, which we used to verify the results presented in this paper. The agent provides well-calibrated results with respect to precision, accuracy, and recall. We also tested the app at the 2015 WI-IAT International Conference (Singapore). The Android and web-based apps have been demonstrated and deployed at AAMAS 2016 (Singapore) with positive responses from users.
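
The schedule-generation step can be approximated with a simple greedy heuristic. The actual agent solves an optimization model, so this stand-in, which assumes each talk is a (start, end, preference_score) tuple, is only illustrative:

```python
def greedy_schedule(talks):
    # talks: list of (start, end, preference_score) tuples
    # greedy heuristic: take the highest-scoring talks first, skipping
    # any talk that overlaps an already chosen one
    chosen = []
    for s, e, score in sorted(talks, key=lambda t: -t[2]):
        if all(e <= cs or s >= ce for cs, ce, _ in chosen):
            chosen.append((s, e, score))
    return sorted(chosen)
```

Greedy selection is not guaranteed optimal, which is precisely why an exact optimization model is worth the extra machinery in the deployed system.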

AAMAS Conference 2016 Conference Paper

PRESS: Personalized Event Scheduling Recommender System (Demonstration)

  • Hoong Chuin Lau
  • Aldy Gunawan
  • Pradeep Varakantham
  • Wenjie Wang

This paper presents a personalized event scheduling recommender system, PRESS, for a large conference setting with multiple parallel tracks. PRESS is a mobile application that gathers personalized information from a user and recommends talks/demos to attend. The input from a user includes a list of keyword preferences and (optionally) preferred talks. We use the MALLET topic model package to analyze the set of conference papers and classify them based on automatically identified topics. We propose an algorithm to generate a list of recommended papers based on the user keywords and the MALLET topics. An optimization model is then applied to obtain a feasible schedule. The recommended set is matched against the papers selected by the user, which we obtained from a survey conducted at AAMAS-15 in Istanbul, Turkey. We show that PRESS is able to provide reasonable accuracy, precision, and recall rates. PRESS will be deployed live during AAMAS-16 in Singapore.
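
The keyword-to-topic matching step can be sketched with a simple overlap score. MALLET itself is a Java toolkit, so this Python scorer is a simplified, hypothetical stand-in for the paper's recommendation algorithm:

```python
def recommend_papers(papers, user_keywords, k=3):
    # papers: {title: set of words from the paper's inferred topics}
    # score = fraction of the user's keywords covered by a paper's topic words
    kw = {w.lower() for w in user_keywords}
    scored = [(len(kw & {w.lower() for w in words}) / max(len(kw), 1), title)
              for title, words in papers.items()]
    return [t for s, t in sorted(scored, reverse=True)[:k] if s > 0]
```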

JAAMAS Journal 2014 Journal Article

An iterative approach for makespan-minimized multi-agent path planning in discrete space

  • Wenjie Wang
  • Wooi Boon Goh

Makespan-minimized multi-agent path planning (MAPP) seeks to minimize the time taken by the slowest of n agents to reach its destination and this is essentially a minimax-constrained optimization problem. In this work, an iterative max-min improvement (IMMI) algorithm is proposed to approximate the optimal solution of the makespan-minimized MAPP problem. At each iteration, a linear maximization problem is solved using a simplex method followed by a computationally hard MAPP minimization problem that is solved using a local search approach. To keep the local search from being trapped in an infeasible solution, a Guided Local Search technique is proposed. Comparative results with other MAPP algorithms suggest that the proposed IMMI algorithm strikes a good tradeoff between the ability to find feasible solutions that can be traversed quickly and the computational time incurred in determining these paths.

AAMAS Conference 2013 Conference Paper

Time Optimized Multi-Agent Path Planning Using Guided Iterative Prioritized Planning

  • Wenjie Wang
  • Wooi Boon Goh

This paper proposes the guided iterative prioritized planning (GIPP) algorithm to address the problem of moving multiple mobile agents to their respective destinations with the lowest time-related cost. Compared to other MAPP algorithms, the GIPP algorithm strikes a good balance between various performance criteria, such as finding feasible solutions, completing the task promptly, and keeping computational cost low.

AAMAS Conference 2011 Conference Paper

Spatio-Temporal A* Algorithms for Offline Multiple Mobile Robot Path Planning

  • Wenjie Wang
  • Wooi Boon Goh

This paper presents an offline collision-free path planning algorithm for multiple mobile robots using a 2D spatial-time map. In this decoupled approach, a centralized planner uses a Spatio-Temporal A* algorithm to find the lowest-time-cost path for each robot in sequential order based on its assigned priority. Improvements in viable path solutions using wait-time insertion and adaptive priority reassignment strategies are discussed.
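
A minimal sketch of spatio-temporal A* with a reservation table, assuming vertex conflicts only (edge-swap conflicts and the paper's wait-time insertion and priority reassignment strategies are omitted):

```python
import heapq

def st_astar(grid, start, goal, reserved, max_t=50):
    # A* over (cell, time) states; `reserved` is a set of (cell, t) pairs
    # already claimed by higher-priority robots. Waiting in place is a
    # legal move, which is how lower-priority robots yield.
    def h(c):  # Manhattan-distance heuristic
        return abs(c[0] - goal[0]) + abs(c[1] - goal[1])

    rows, cols = len(grid), len(grid[0])
    open_set = [(h(start), 0, start)]  # (f = t + h, t, cell)
    parent, seen = {}, set()
    while open_set:
        _, t, cell = heapq.heappop(open_set)
        if cell == goal:
            path = [(cell, t)]         # reconstruct (cell, time) states
            while (cell, t) in parent:
                cell, t = parent[(cell, t)]
                path.append((cell, t))
            return path[::-1]
        if (cell, t) in seen or t >= max_t:
            continue
        seen.add((cell, t))
        for dx, dy in ((0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cell[0] + dx, cell[1] + dy)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and grid[nxt[0]][nxt[1]] == 0
                    and (nxt, t + 1) not in reserved):
                # time is part of the state, so every path reaching
                # (nxt, t + 1) has the same cost; any parent works
                parent.setdefault((nxt, t + 1), (cell, t))
                heapq.heappush(open_set, (t + 1 + h(nxt), t + 1, nxt))
    return None
```

Planning each robot in priority order and adding its path to `reserved` before planning the next yields the decoupled scheme the abstract describes.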