Arrow Research search

Author name cluster

Fu Lee Wang

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

10 papers
1 author row

Possible papers


NeurIPS Conference 2025 Conference Paper

DeblurDiff: Real-World Image Deblurring with Generative Diffusion Models

  • Lingshun Kong
  • Jiawei Zhang
  • Dongqing Zou
  • Fu Lee Wang
  • Jimmy S. Ren
  • Xiaohe Wu
  • Jiangxin Dong
  • Jinshan Pan

Diffusion models have achieved significant progress in image generation and the pre-trained Stable Diffusion (SD) models are helpful for image deblurring by providing clear image priors. However, directly using a blurry image or a pre-deblurred one as a conditional control for SD will either hinder accurate structure extraction or make the results overly dependent on the deblurring network. In this work, we propose a Latent Kernel Prediction Network (LKPN) to achieve robust real-world image deblurring. Specifically, we co-train the LKPN in the latent space with conditional diffusion. The LKPN learns a spatially variant kernel to guide the restoration of sharp images in the latent space. By applying element-wise adaptive convolution (EAC), the learned kernel is utilized to adaptively process the blurry feature, effectively preserving the information of the blurry input. This process thereby more effectively guides the generative process of SD, enhancing both the deblurring efficacy and the quality of detail reconstruction. Moreover, the results at each diffusion step are utilized to iteratively estimate the kernels in LKPN to better restore the sharp latent by EAC in the subsequent step. This iterative refinement enhances the accuracy and robustness of the deblurring process. Extensive experimental results demonstrate that the proposed method outperforms state-of-the-art image deblurring methods on both benchmark and real-world images.
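The element-wise adaptive convolution (EAC) mentioned in the abstract applies a distinct, spatially variant kernel at every position of a feature map. The following is an illustrative single-channel NumPy sketch of that idea, not the paper's implementation; the function name, shapes, and padding choice are assumptions:

```python
import numpy as np

def element_wise_adaptive_conv(feat, kernels):
    """Apply a spatially variant k×k kernel at every position of a feature map.

    feat:    (H, W) single-channel feature map
    kernels: (H, W, k*k) per-pixel kernels (e.g., predicted by a network)

    Single-channel sketch only; the paper operates on multi-channel latents.
    """
    H, W, kk = kernels.shape
    k = int(np.sqrt(kk))
    pad = k // 2
    padded = np.pad(feat, pad, mode="edge")
    out = np.empty_like(feat, dtype=float)
    for i in range(H):
        for j in range(W):
            # Weight the local patch by this position's own kernel
            patch = padded[i:i + k, j:j + k].ravel()
            out[i, j] = patch @ kernels[i, j]
    return out
```

With an identity kernel (a one at the center tap, zeros elsewhere) at every position, the output reproduces the input, which is a quick sanity check on the indexing.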

IS Journal 2025 Journal Article

Exploring ChatGPT-Based Augmentation Strategies for Contrastive Aspect-Based Sentiment Analysis

  • Lingling Xu
  • Haoran Xie
  • S. Joe Qin
  • Fu Lee Wang
  • Xiaohui Tao

Aspect-based sentiment analysis (ABSA) involves identifying sentiment toward specific aspect terms in a sentence and allows us to uncover people’s nuanced perspectives and attitudes on particular aspects of a product, service, or topic. However, the scarcity of labeled data poses a significant challenge to training high-quality models. To address this issue, we explore the potential of data augmentation using ChatGPT, a well-performing large language model, to enhance the sentiment classification performance toward aspect terms. Specifically, we explore three data augmentation strategies based on ChatGPT: context-focused, aspect-focused, and context–aspect data augmentation techniques. Context-focused data augmentation focuses on changing the word expression of context words in the sentence while keeping aspect terms unchanged. In contrast, aspect-focused data augmentation aims to change aspect terms but keep context words unchanged. Context–aspect data augmentation integrates these two data augmentations to generate augmented samples. Furthermore, we incorporate contrastive learning into the ABSA tasks to improve performance. Extensive experiments show that all three data augmentation techniques lead to performance improvements, with the context–aspect data augmentation strategy performing best and surpassing the performance of the baseline models.

IS Journal 2025 Journal Article

Leveraging ChatGPT-Based Augmentation and Contrastive Learning for Chinese Massive Open Online Course Sentiment Analysis

  • Xieling Chen
  • Haoran Xie
  • S. Joe Qin
  • Lingling Xu
  • Xiaohui Tao
  • Fu Lee Wang

This study addresses the unique challenges of sentiment analysis in Chinese massive open online course (MOOC) reviews, where pedagogically embedded language, intra-sentence sentiment shifts, and class imbalance complicate classification tasks. To tackle these domain-specific issues, we integrated ChatGPT-based data augmentation with contrastive learning within a Bidirectional Encoder Representations from Transformers (BERT)–Chinese framework. We evaluated ChatGPT-based augmentation (GPTaug), similar word replacement, and random word deletion under a dual-loss setup that combines supervised cross-entropy and InfoNCE (information noise-contrastive estimation) contrastive learning, focusing on how they enhance model performance across sentiment categories. The results revealed that the integration of contrastive learning with data augmentation strategies substantially improved sentiment classification in Chinese MOOC reviews. Notably, GPTaug demonstrated robust and balanced performance across polarity categories, particularly enhancing the detection of underrepresented neutral sentiments. These findings suggest that generative augmentation, when aligned with contrastive objectives, mitigates data sparsity and semantic ambiguity in educational sentiment analysis.

NeurIPS Conference 2025 Conference Paper

PairEdit: Learning Semantic Variations for Exemplar-based Image Editing

  • Haoguang Lu
  • Jiacheng Chen
  • Zhenguo Yang
  • Aurele Gnanha
  • Fu Lee Wang
  • Qing Li
  • Xudong Mao

Recent advancements in text-guided image editing have achieved notable success by leveraging natural language prompts for fine-grained semantic control. However, certain editing semantics are challenging to specify precisely using textual descriptions alone. A practical alternative involves learning editing semantics from paired source-target examples. Existing exemplar-based editing methods still rely on text prompts describing the change within paired examples or learning implicit text-based editing instructions. In this paper, we introduce PairEdit, a novel visual editing method designed to effectively learn complex editing semantics from a limited number of image pairs or even a single image pair, without using any textual guidance. We propose a target noise prediction that explicitly models semantic variations within paired images through a guidance direction term. Moreover, we introduce a content-preserving noise schedule to facilitate more effective semantic learning. We also propose optimizing distinct LoRAs to disentangle the learning of semantic variations from content. Extensive qualitative and quantitative evaluations demonstrate that PairEdit successfully learns intricate semantics while significantly improving content consistency compared to baseline methods. Code is available at https://github.com/xudonmao/PairEdit.

NeurIPS Conference 2024 Conference Paper

AttnDreamBooth: Towards Text-Aligned Personalized Text-to-Image Generation

  • Lianyu Pang
  • Jian Yin
  • Baoquan Zhao
  • Feize Wu
  • Fu Lee Wang
  • Qing Li
  • Xudong Mao

Recent advances in text-to-image models have enabled high-quality personalized image synthesis based on user-provided concepts with flexible textual control. In this work, we analyze the limitations of two primary techniques in text-to-image personalization: Textual Inversion and DreamBooth. When integrating the learned concept into new prompts, Textual Inversion tends to overfit the concept, while DreamBooth often overlooks it. We attribute these issues to the incorrect learning of the embedding alignment for the concept. To address this, we introduce AttnDreamBooth, a novel approach that separately learns the embedding alignment, the attention map, and the subject identity across different training stages. We also introduce a cross-attention map regularization term to enhance the learning of the attention map. Our method demonstrates significant improvements in identity preservation and text alignment compared to the baseline methods.

TIST Journal 2023 Journal Article

Contrastive Learning Models for Sentence Representations

  • Lingling Xu
  • Haoran Xie
  • Zongxi Li
  • Fu Lee Wang
  • Weiming Wang
  • Qing Li

Sentence representation learning is a crucial task in natural language processing, as the quality of learned representations directly influences downstream tasks, such as sentence classification and sentiment analysis. Transformer-based pretrained language models such as bidirectional encoder representations from transformers (BERT) have been extensively applied to various natural language processing tasks, and have exhibited moderately good performance. However, the anisotropy of the learned embedding space prevents BERT sentence embeddings from achieving good results in semantic textual similarity tasks. It has been shown that contrastive learning can alleviate the anisotropy problem and significantly improve sentence representation performance. Therefore, there has been a surge in the development of models that utilize contrastive learning to fine-tune BERT-like pretrained language models to learn sentence representations. However, no systematic review of contrastive learning models for sentence representations has been conducted. To fill this gap, this article summarizes and categorizes the contrastive learning-based sentence representation models, common evaluation tasks for assessing the quality of learned representations, and future research directions. Furthermore, we select several representative models for exhaustive experiments to illustrate the quantitative improvement of various strategies on sentence representations.
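The contrastive objective common to the models surveyed above is typically the InfoNCE loss: each sentence embedding is pulled toward its positive (e.g., a dropout-augmented view) and pushed away from in-batch negatives. A minimal NumPy sketch of that loss, with the function name and the temperature value as illustrative assumptions:

```python
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.05):
    """InfoNCE contrastive loss over a batch of embeddings.

    Each anchor's positive is the same-index row of `positives`;
    every other row in the batch serves as an in-batch negative.
    """
    # Normalize so dot products are cosine similarities
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    sim = a @ p.T / temperature                  # (batch, batch) similarities
    sim = sim - sim.max(axis=1, keepdims=True)   # numerical stability
    # Cross-entropy with the diagonal (matching pair) as the target class
    log_probs = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))
```

When matching pairs are far more similar than non-matching ones, the loss approaches zero; the low temperature sharpens the distribution so near-duplicates dominate the softmax.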

IS Journal 2019 Journal Article

Segment-level joint topic-sentiment model for online review analysis

  • Qinjuan Yang
  • Yanghui Rao
  • Haoran Xie
  • Jiahai Wang
  • Fu Lee Wang
  • Wai Hong Chan

With the rapid development of the Internet, an increasing number of users enjoy shopping online and expressing their reviews of products and services. Analysis of these online reviews can not only help potential users make rational purchasing decisions but also improve the quality of products and services. Hence, sentiment analysis for online reviews has become an important and meaningful research domain.

AAAI Conference 2017 Short Paper

Cross-Domain Sentiment Classification via Topic-Related TrAdaBoost

  • Xingchang Huang
  • Yanghui Rao
  • Haoran Xie
  • Tak-Lam Wong
  • Fu Lee Wang

Cross-domain sentiment classification aims to tag sentiments for a target domain by labeled data from a source domain. Due to the difference between domains, the accuracy of a trained classifier may be very low. In this paper, we propose a boosting-based learning framework named TR-TrAdaBoost for cross-domain sentiment classification. We first explore the topic distribution of documents, and then combine it with the unigram TrAdaBoost. The topic distribution captures the domain information of documents, which is valuable for cross-domain sentiment classification. Experimental results indicate that TR-TrAdaBoost represents documents well and boosts the performance and robustness of TrAdaBoost.
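The TrAdaBoost base that the abstract builds on reweights instances each boosting round: misclassified source-domain instances are down-weighted (they look unlike the target domain), while misclassified target instances are up-weighted as in ordinary AdaBoost. A sketch of one such update following the original TrAdaBoost formulation (Dai et al., 2007); the function name and argument layout are illustrative assumptions:

```python
import numpy as np

def tradaboost_update(w_src, w_tgt, miss_src, miss_tgt, eps_t, n_rounds):
    """One reweighting round of TrAdaBoost.

    w_src, w_tgt       -- current instance weights (source / target)
    miss_src, miss_tgt -- boolean arrays, True where the weak learner erred
    eps_t              -- weighted error on target data this round (< 0.5)
    n_rounds           -- total number of boosting rounds
    """
    n_src = len(w_src)
    # Fixed down-weighting factor for misclassified source instances
    beta_src = 1.0 / (1.0 + np.sqrt(2.0 * np.log(n_src) / n_rounds))
    # AdaBoost-style factor: its inverse up-weights misclassified target instances
    beta_tgt = eps_t / (1.0 - eps_t)
    new_src = w_src * np.where(miss_src, beta_src, 1.0)
    new_tgt = w_tgt * np.where(miss_tgt, 1.0 / beta_tgt, 1.0)
    return new_src, new_tgt
```

TR-TrAdaBoost, as described above, would apply this same boosting machinery to topic-augmented document representations rather than plain unigrams.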

IS Journal 2015 Journal Article

Does Summarization Help Stock Prediction? A News Impact Analysis

  • Xiaodong Li
  • Haoran Xie
  • Yangqiu Song
  • Shanfeng Zhu
  • Qing Li
  • Fu Lee Wang

The authors study the problem of how news summarization can help stock price prediction, proposing a generic stock price prediction framework to enable the use of different external signals to predict stock prices. Experiments were conducted on five years of Hong Kong Stock Exchange data, with news reported by Finet; evaluations were performed at individual stock, sector index, and market index levels. The authors' results show that prediction based on news article summarization can effectively outperform prediction based on full-length articles on both validation and independent testing sets.