Arrow Research search

Author name cluster

Jinmao Wei

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

5 papers
1 author row

Possible papers (5)

JBHI Journal 2025 Journal Article

PCLT-PPI: Predicting Multi-Type Interactions Between Proteins Based on Point Cloud Structure and Local Topology Preservation

  • Minglei Li
  • Yurui Hou
  • Shuqin Wang
  • Jinmao Wei
  • Jian Liu

Protein-protein interactions (PPIs) play a crucial role in cellular biochemical reactions. Computationally mining PPIs can help us better understand cellular regulatory mechanisms. Most existing methods focus on the linear structure of proteins, ignoring the influence of native spatial structure on their properties. Furthermore, when neural networks are used to learn protein embeddings, the nonlinear transformations may change the topological relationships between proteins. To address these issues, we propose a PPI prediction method based on protein point cloud structure and local topology preservation, which we name PCLT-PPI. It extracts structural features from protein point cloud structures and relational features through graph neural networks. Throughout the process, PCLT-PPI maintains the local topology of proteins in both the original and embedding spaces. Experimental results show that, under three test set partition modes (Random, BFS, DFS) and four evaluation metrics (F1, AUC, AUPR, Hamming Loss), PCLT-PPI outperforms several state-of-the-art PPI prediction methods, especially when predicting interactions for proteins unseen during training, exhibiting stronger robustness and better generalization. The results also demonstrate that point cloud structure and local topology preservation can improve PPI prediction performance, which may serve as a reference for subsequent related research.
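The local-topology idea in this abstract can be pictured with a simple neighborhood-overlap measure: if an embedding preserves local topology, each protein's nearest neighbors in the original space should remain its neighbors in the embedding space. The NumPy sketch below is illustrative only (the function names and the overlap metric are assumptions, not taken from the paper):

```python
import numpy as np

def knn_indices(X, k):
    """Indices of each row's k nearest neighbors by Euclidean distance."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)  # a point is not its own neighbor
    return np.argsort(d, axis=1)[:, :k]

def topology_overlap(X_orig, X_emb, k=3):
    """Fraction of k-NN neighbors preserved between the two spaces (1.0 = fully preserved)."""
    a = knn_indices(X_orig, k)
    b = knn_indices(X_emb, k)
    return float(np.mean([len(set(a[i]) & set(b[i])) / k for i in range(len(a))]))
```

A topology-preserving training objective would push this overlap (or a differentiable surrogate of it) toward 1 while the embedding is learned.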

JBHI Journal 2023 Journal Article

Predicting Drug-Protein Interactions by Self-Adaptively Adjusting the Topological Structure of the Heterogeneous Network

  • Rong Tang
  • Chang Sun
  • Jipeng Huang
  • Minglei Li
  • Jinmao Wei
  • Jian Liu

Many powerful computational methods based on graph neural networks (GNNs) have been proposed to predict drug-protein interactions (DPIs). Such methods can effectively reduce laboratory workload and the cost of drug discovery and drug repurposing. However, many clinical functions of drugs and proteins are unknown because their indications have not been observed. It is therefore difficult to establish a reliable drug-protein heterogeneous network that describes the relationships between drugs and proteins based on the available information. To solve this problem, we propose a DPI prediction method that self-adaptively adjusts the topological structure of the heterogeneous network, and name it SATS. SATS establishes a representation learning module based on a graph attention network to construct the drug-protein heterogeneous network. It self-adaptively learns the relationships among the nodes from their attributes and adjusts the topological structure of the network according to the training loss of the model. Finally, SATS predicts the interaction propensity between drugs and proteins based on their embeddings. The experimental results show that SATS can effectively improve the topological structure of the network, and that it outperforms several state-of-the-art DPI prediction methods under various evaluation metrics. This demonstrates that SATS can cope with incomplete data and unreliable networks. Case studies on the top-ranked predictions further demonstrate that SATS is powerful for discovering novel DPIs.
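The self-adaptive topology can be sketched as computing edge weights from node attributes with an attention-style score and renormalizing them, so that updating the learned parameters during training reshapes the network instead of fixing it up front. A minimal NumPy illustration (function and parameter names are assumptions, not SATS's actual code):

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attention_adjacency(H, W):
    """Attribute-driven adjacency: score node pairs from projected
    features, then row-normalize into edge weights. Training would
    update W (and thus the topology) from the model loss."""
    Z = H @ W                 # project node attributes
    scores = Z @ Z.T          # pairwise compatibility scores
    return softmax(scores, axis=1)
```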

AAAI Conference 2020 Conference Paper

Document Summarization with VHTM: Variational Hierarchical Topic-Aware Mechanism

  • Xiyan Fu
  • Jun Wang
  • Jinghan Zhang
  • Jinmao Wei
  • Zhenglu Yang

Automatic text summarization focuses on distilling summary information from texts. This research field has been explored considerably over the past decades because of its significant role in many natural language processing tasks; however, two challenging issues block its further development: (1) how to build a summarization model that embeds topic inference rather than extending a pre-trained one, and (2) how to merge the latent topics into diverse granularity levels. In this study, we propose a variational hierarchical model, dubbed VHTM, that addresses both issues holistically. Unlike previous work assisted by a pre-trained single-grained topic model, VHTM is the first attempt to jointly accomplish summarization with topic inference via a variational encoder-decoder, and to merge topics at multiple granularity levels through topic embedding and attention. Comprehensive experiments validate the superior performance of VHTM over the baselines, along with semantically consistent topics.
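Training topic inference jointly with summarization in a variational encoder-decoder rests on the standard reparameterization trick: the topic latent is sampled as `z = mu + sigma * eps`, so gradients can flow through the learned `mu` and `logvar`. A minimal sketch of that standard step (not VHTM's actual implementation):

```python
import numpy as np

def reparameterize(mu, logvar, rng):
    """Draw z ~ N(mu, exp(logvar)) differentiably: sample
    eps ~ N(0, I), then shift and scale by the learned parameters."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps
```

In a framework with autodiff, writing the sample this way is what lets the encoder's `mu` and `logvar` receive gradients from the decoder's summarization loss.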

AAAI Conference 2020 Conference Paper

To Avoid the Pitfall of Missing Labels in Feature Selection: A Generative Model Gives the Answer

  • Yuanyuan Xu
  • Jun Wang
  • Jinmao Wei

In multi-label learning, instances have a large number of noisy and irrelevant features, and each instance is associated with a set of class labels whose information is generally incomplete. Like the two sides of a coin, a missing label may supply either favorable (relevant) or unfavorable (irrelevant) information for feature selection, and one cannot tell which in advance. Existing approaches either superficially treat the missing labels as negative or indiscreetly impute them with predicted values, which may overestimate unobserved labels or introduce new noise into the selection of discriminative features. To avoid the pitfall of missing labels, this paper proposes a novel unified framework that selects discriminative features and models the incomplete label matrix from a generative point of view. Concretely, we relax the Smoothness Assumption to infer label observability, which reveals the positions of unobserved labels, and employ the spike-and-slab prior to perform feature selection while excluding unobserved labels. A data-augmentation strategy yields full local conjugacy in our model, facilitating a simple and efficient Expectation-Maximization (EM) algorithm for inference. Quantitative and qualitative experimental results demonstrate the superiority of the proposed approach under various evaluation metrics.
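The spike-and-slab prior invoked in the abstract models each feature weight as exactly zero with some probability (the "spike") and as Gaussian otherwise (the "slab"); weights landing in the spike are effectively deselected. A generic sampling sketch of such a prior (illustrative only, not the paper's inference code):

```python
import numpy as np

def spike_and_slab_sample(pi, sigma, size, rng):
    """Sample weights from a spike-and-slab prior: each weight is 0
    with probability 1 - pi (spike), otherwise drawn from N(0, sigma^2)
    (slab). pi is the prior probability that a feature is relevant."""
    active = rng.random(size) < pi           # which features fall in the slab
    return active * rng.normal(0.0, sigma, size)
```

Inference then amounts to estimating, per feature, the posterior probability of being in the slab, which is the feature-selection signal.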

IJCAI Conference 2019 Conference Paper

Revealing Semantic Structures of Texts: Multi-grained Framework for Automatic Mind-map Generation

  • Yang Wei
  • Honglei Guo
  • Jinmao Wei
  • Zhong Su

A mind-map is a diagram used to represent ideas linked to and arranged around a central concept. Converting a text into a mind-map makes its knowledge and ideas easier to access visually. However, highlighting the semantic skeleton of an article remains a challenge; the key issue is to detect the relations among concepts beyond the intra-sentence level. In this paper, we propose a multi-grained framework for automatic mind-map generation. First, a novel neural network employing multi-hop self-attention and a gated recurrence network detects the directed semantic relations across sentences. A recursive algorithm is then designed to select the most salient sentences to constitute the hierarchy. The human-like mind-map is finally constructed from the key phrases in the salient sentences. Promising results have been achieved in comparison with manual mind-maps, and case studies demonstrate that the generated mind-maps reveal the underlying semantic structures of the articles.
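The recursive selection step can be pictured as: given pairwise relation scores between sentences, repeatedly pick the sentence that most strongly governs the rest as the parent, then recurse on the remainder. The toy function below is a deliberately simplified, chain-shaped stand-in for the paper's algorithm, just to show the recursion:

```python
def build_hierarchy(rel, nodes=None):
    """rel[i][j] is the strength with which sentence i governs sentence j.
    Pick the sentence with the largest total outgoing relation as the
    parent, then recurse on the remaining sentences. Returns a
    (sentence_index, children) tree."""
    if nodes is None:
        nodes = list(range(len(rel)))
    root = max(nodes, key=lambda i: sum(rel[i][j] for j in nodes if j != i))
    rest = [n for n in nodes if n != root]
    return (root, [build_hierarchy(rel, rest)] if rest else [])
```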