Arrow Research search

Author name cluster

Kaiqun Fu

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

7 papers
1 author row

Possible papers

7

AAAI Conference 2023 Short Paper

Exploration on Physics-Informed Neural Networks on Partial Differential Equations (Student Abstract)

  • Hoa Ta
  • Shi Wen Wong
  • Nathan McClanahan
  • Jung-Han Kimn
  • Kaiqun Fu

Data-driven solutions are dominating various scientific fields with the assistance of machine learning and data analytics. Finding effective solutions has long been discussed in the area of machine learning. The recent decade has witnessed the promising performance of Physics-Informed Neural Networks (PINN) in bridging the gap between real-world scientific problems and machine learning models. In this paper, we explore the behavior of PINN across a range of diffusion coefficients under specific boundary conditions. In addition, partial differential equations with different initial conditions are solved by applying the proposed PINN. Our paper illustrates how the effectiveness of the PINN can change under various scenarios. As a result, we provide better insight into the behavior of the PINN and how to make the proposed method more robust when encountering different scientific and engineering problems.

AAAI Conference 2023 Short Paper

PanTop: Pandemic Topic Detection and Monitoring System (Student Abstract)

  • Yangxiao Bai
  • Kaiqun Fu

Diverse efforts to combat the COVID-19 pandemic have continued throughout the past two years. Governments have announced plans for unprecedentedly rapid vaccine development, quarantine measures, and economic revitalization. Determining the precise opinions of individuals regarding these mitigation measures contributes to a more effective pandemic response. In this paper, we propose a deep learning-based topic monitoring and storyline extraction system for COVID-19 that is capable of analyzing public sentiment and pandemic trends. The proposed method is able to retrieve Twitter data related to COVID-19 and conduct spatiotemporal analysis. Furthermore, a deep learning component of the system provides monitoring and modeling capabilities for topics based on advanced natural language processing models. A variety of visualization methods are applied to show the distribution of each topic. Our proposed system accurately reflects how public reactions change over time along with pandemic topics.

AAAI Conference 2022 Short Paper

Augmentation of Chinese Character Representations with Compositional Graph Learning (Student Abstract)

  • Jason Wang
  • Kaiqun Fu
  • Zhiqian Chen
  • Chang-Tien Lu

Chinese characters have semantic-rich compositional information in radical form. While almost all previous research has applied CNNs to extract this compositional information, our work utilizes deep graph learning on a compact, graph-based representation of Chinese characters. This allows us to exploit temporal information within the strict stroke order used in writing characters. Our results show that our stroke-based model has potential for helping large-scale language models on some Chinese natural language understanding tasks. In particular, we demonstrate that our graph model produces more interpretable embeddings shown through word subtraction analogies and character embedding visualizations.

AAAI Conference 2022 Short Paper

Blocking Influence at Collective Level with Hard Constraints (Student Abstract)

  • Zonghan Zhang
  • Subhodip Biswas
  • Fanglan Chen
  • Kaiqun Fu
  • Taoran Ji
  • Chang-Tien Lu
  • Naren Ramakrishnan
  • Zhiqian Chen

Influence blocking maximization (IBM) is crucial in many critical real-world problems such as rumor prevention and epidemic containment. The existing work suffers from: (1) concentrating on uniform costs at the individual level, (2) mostly utilizing greedy approaches to approximate optimization, (3) lacking a proper graph representation for influence estimates. To address these issues, this research introduces a neural network model dubbed Neural Influence Blocking (NIB) for improved approximation and enhanced influence blocking effectiveness. The code is available at https://github.com/oates9895/NIB.

AAAI Conference 2022 Short Paper

Early Forecast of Traffic Accident Impact Based on a Single-Snapshot Observation (Student Abstract)

  • Guangyu Meng
  • Qisheng Jiang
  • Kaiqun Fu
  • Beiyu Lin
  • Chang-Tien Lu
  • Zhiqian Chen

Predicting and quantifying the impact of traffic accidents is necessary and critical to Intelligent Transport Systems (ITS). As a state-of-the-art technique in graph learning, current graph neural networks heavily rely on graph Fourier transform, assuming homophily among the neighborhood. However, the homophily assumption makes it challenging to characterize abrupt signals such as traffic accidents. Our paper proposes an abrupt graph wavelet network (AGWN) to model traffic accidents and predict their time durations using only one single snapshot.

AAAI Conference 2021 Conference Paper

Dynamic Multi-Context Attention Networks for Citation Forecasting of Scientific Publications

  • Taoran Ji
  • Nathan Self
  • Kaiqun Fu
  • Zhiqian Chen
  • Naren Ramakrishnan
  • Chang-Tien Lu

Forecasting citations of scientific patents and publications is a crucial task for understanding the evolution and development of technological domains and for foresight into emerging technologies. By construing citations as a time series, the task can be cast into the domain of temporal point processes. Most existing work on forecasting with temporal point processes, both conventional and neural network-based, only performs single-step forecasting. In citation forecasting, however, the more salient goal is n-step forecasting: predicting the arrival time and the technology class of the next n citations. In this paper, we propose Dynamic Multi-Context Attention Networks (DMA-Nets), a novel deep learning sequence-to-sequence (Seq2Seq) model with a novel hierarchical dynamic attention mechanism for long-term citation forecasting. Extensive experiments on two real-world datasets demonstrate that the proposed model learns better representations of conditional dependencies over historical sequences compared to state-of-the-art counterparts and thus achieves strong performance on citation prediction.

IJCAI Conference 2019 Conference Paper

Patent Citation Dynamics Modeling via Multi-Attention Recurrent Networks

  • Taoran Ji
  • Zhiqian Chen
  • Nathan Self
  • Kaiqun Fu
  • Chang-Tien Lu
  • Naren Ramakrishnan

Modeling and forecasting forward citations to a patent is a central task for the discovery of emerging technologies and for measuring the pulse of inventive progress. Conventional methods for forecasting these forward citations cast the problem as analysis of temporal point processes which rely on the conditional intensity of previously received citations. Recent approaches model the conditional intensity as a chain of recurrent neural networks to capture memory dependency in hopes of reducing the restrictions of the parametric form of the intensity function. For the problem of patent citations, we observe that forecasting a patent's chain of citations benefits from not only the patent's history itself but also from the historical citations of assignees and inventors associated with that patent. In this paper, we propose a sequence-to-sequence model which employs an attention-of-attention mechanism to capture the dependencies of these multiple time sequences. Furthermore, the proposed model is able to forecast both the timestamp and the category of a patent's next citation. Extensive experiments on a large patent citation dataset collected from USPTO demonstrate that the proposed model outperforms state-of-the-art models at forward citation forecasting.