Arrow Research search

Author name cluster

Jin Chen

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

21 papers
1 author row

Possible papers

21

AAAI Conference 2026 Conference Paper

Adaptive Evidential Learning for Temporal-Semantic Robustness in Moment Retrieval

  • Haojian Huang
  • Kaijing Ma
  • Jin Chen
  • Haodong Chen
  • Zhou Wu
  • Xianghao Zang
  • Han Fang
  • Chao Ban

In the domain of moment retrieval, accurately identifying temporal segments within videos based on natural language queries remains challenging. Traditional methods often employ pre-trained models that struggle with fine-grained information and deterministic reasoning, leading to difficulties in aligning with complex or ambiguous moments. To overcome these limitations, we explore Deep Evidential Regression (DER) to construct a vanilla evidential baseline. However, this approach encounters two major issues: an inability to handle modality imbalance effectively, and structural differences in DER's heuristic uncertainty regularizer that adversely affect uncertainty estimation. This misalignment results in high uncertainty being incorrectly associated with accurate samples rather than challenging ones. Our observations indicate that existing methods lack the adaptability required for complex video scenarios. In response, we propose Debiased Evidential Learning for Moment Retrieval (DEMR), a novel framework that incorporates a Reflective Flipped Fusion (RFF) block for cross-modal alignment and a query reconstruction task to enhance text sensitivity, thereby reducing bias in uncertainty estimation. Additionally, we introduce a Geom-regularizer to refine uncertainty predictions, enabling adaptive alignment with difficult moments and improving retrieval accuracy. Extensive testing on standard datasets and the debiased ActivityNet-CD and Charades-CD datasets demonstrates significant enhancements in effectiveness, robustness, and interpretability, positioning our approach as a promising solution for temporal-semantic robustness in moment retrieval.

YNIMG Journal 2026 Journal Article

Multidimensional characterization of structure aberrations for biotypes of major depressive disorder

  • Jiang Zhang
  • Heng Zhang
  • Hui Sun
  • Tianwei Qin
  • Jun Pan
  • Jin Chen
  • Wei Li
  • Meiling Chen

BACKGROUND: Major depressive disorder (MDD) is a heterogeneous clinical syndrome associated with brain structural abnormalities, yet the neurobiological heterogeneity and consistent neuroimaging findings underlying these alterations remain unclear. Multilevel and multidimensional analyses are therefore needed to identify reliable structural signatures of MDD biotypes. METHODS: K-means clustering was applied to identify biotypes in 387 drug-naive MDD patients, with gray matter volume (GMV) compared to 1104 healthy controls. Causal structural covariance network (CaSCN), individual differential structural covariance network (IDSCN), and graph theory-based single-subject morphological network analyses were performed to characterize subtype-specific causal influences, individual-level covariance, and network topology. Transcriptomic and neurotransmitter association analyses were further conducted to probe the biological mechanisms underlying each subtype. RESULTS: Subtype 1 showed predominant GMV alterations in the visual network, subtype 2 in somatomotor, default mode, and limbic networks, and subtype 3 in cerebellar-limbic regions. CaSCN revealed subtype-specific directed influences, indicating differential propagation of structural abnormalities. IDSCN identified distinct altered covariance patterns, highlighting subtype-dependent thalamo-cerebellar changes and selective links to depressive severity. Graph theory showed divergent global topology, with subtype 1 exhibiting higher network integration, whereas subtypes 2 and 3 showed reduced integration and efficiency. Each biotype showed distinct neurobiological profiles, with subtype 1 enriched in cellular functions, subtype 2 in metabolic regulation, and subtype 3 in neurodevelopmental genes, alongside distinct neurotransmitter associations. 
CONCLUSIONS: These findings advance the understanding of structural and individual-level network alterations underlying MDD biotypes and provide novel insights into the neurobiological mechanisms of MDD heterogeneity.
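The biotyping pipeline above starts from K-means clustering of structural features. As a hedged illustration only (the paper's actual features, choice of k, and preprocessing are not given in the abstract), here is a minimal pure-Python sketch of Lloyd's algorithm:

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means (Lloyd's algorithm) with random initial centroids."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        # Assignment step: each point joins its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])))
            clusters[j].append(p)
        # Update step: move each centroid to the mean of its cluster.
        for j, members in enumerate(clusters):
            if members:
                centroids[j] = tuple(sum(xs) / len(members) for xs in zip(*members))
    return centroids, clusters
```

On well-separated data the assignments stabilize after a few iterations; in practice one would standardize the gray-matter features first and pick k by a cluster-validity criterion.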

NeurIPS Conference 2025 Conference Paper

Beyond the Seen: Bounded Distribution Estimation for Open-Vocabulary Learning

  • Xiaomeng Fan
  • Yuchuan Mao
  • Zhi Gao
  • Yuwei Wu
  • Jin Chen
  • Yunde Jia

Open-vocabulary learning requires modeling the data distribution in open environments, which consists of both seen-class and unseen-class data. Existing methods estimate the distribution in open environments using seen-class data, where the absence of unseen classes makes the estimation error inherently unidentifiable. Intuitively, learning beyond the seen classes is crucial for distribution estimation to bound the estimation error. We theoretically demonstrate that the distribution can be effectively estimated by generating unseen-class data, through which the estimation error is upper-bounded. Building on this theoretical insight, we propose a novel open-vocabulary learning method, which generates unseen-class data for estimating the distribution in open environments. The method consists of a class-domain-wise data generation pipeline and a distribution alignment algorithm. The data generation pipeline generates unseen-class data under the guidance of a hierarchical semantic tree and domain information inferred from the seen-class data, facilitating accurate distribution estimation. With the generated data, the distribution alignment algorithm estimates and maximizes the posterior probability to enhance generalization in open-vocabulary learning. Extensive experiments on 11 datasets demonstrate that our method outperforms baseline approaches by up to 14%, highlighting its effectiveness and superiority.

EAAI Journal 2025 Journal Article

Big-data-driven vessel destination prediction for smart port management

  • Jin Chen
  • Qiang Zhang
  • Maohan Liang
  • Chang Peng
  • Chen Chen

The accurate prediction of vessel destinations is crucial for enhancing maritime traffic efficiency, optimizing port management, and improving regional economic analysis. However, destination information in Automatic Identification System (AIS) data is often missing or inaccurate, which undermines the reliability of maritime analytics. Traditional vessel destination prediction methods primarily focus on measuring trajectory similarities, which results in high computational complexity. This study develops a deep learning approach to vessel destination prediction by transforming the problem into an image classification task. Rasterized images of historical destination ports and vessel trajectories are generated, incorporating AIS data within a fixed spatial context. A multi-scale residual convolutional network is constructed to extract relevant trajectory and port distribution features. To enhance the representation of trajectory endpoints, which are critical for predicting the destination port, a multi-attention mechanism is introduced. This mechanism increases the learning weight assigned to endpoint features, improving prediction accuracy. Finally, a classification network predicts the destination port based on the extracted features. The performance of the proposed method is evaluated using AIS data from the Denmark Strait. Experimental results demonstrate that the model outperforms existing methods, highlighting its potential for applications in smart port management and maritime traffic optimization.
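The core preprocessing idea, rasterizing AIS position fixes into a fixed spatial grid so a convolutional classifier can consume them, can be sketched roughly as follows. The grid size, bounding-box convention, and function name are illustrative assumptions, not the paper's:

```python
def rasterize_track(points, bbox, grid=(8, 8)):
    """Burn a sequence of (lon, lat) fixes into a binary occupancy grid.

    bbox = (lon_min, lat_min, lon_max, lat_max) fixes the spatial context,
    so every track is rendered into the same frame of reference.
    """
    lon_min, lat_min, lon_max, lat_max = bbox
    rows, cols = grid
    img = [[0] * cols for _ in range(rows)]
    for lon, lat in points:
        # Normalize into [0, 1] then scale to cell indices; clamp edge hits.
        c = min(int((lon - lon_min) / (lon_max - lon_min) * cols), cols - 1)
        r = min(int((lat - lat_min) / (lat_max - lat_min) * rows), rows - 1)
        img[r][c] = 1
    return img
```

A real pipeline would use a much finer grid and separate channels for the trajectory and the historical port distribution, but the fixed-frame rasterization step is the same.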

AAAI Conference 2025 Conference Paper

Trusted Unified Feature-Neighborhood Dynamics for Multi-View Classification

  • Haojian Huang
  • Chuanyu Qin
  • Zhe Liu
  • Kaijing Ma
  • Jin Chen
  • Han Fang
  • Chao Ban
  • Hao Sun

Multi-view classification (MVC) faces inherent challenges due to domain gaps and inconsistencies across different views, often resulting in uncertainties during the fusion process. While Evidential Deep Learning (EDL) has been effective in addressing view uncertainty, existing methods predominantly rely on the Dempster-Shafer combination rule, which is sensitive to conflicting evidence and often neglects the critical role of neighborhood structures within multi-view data. To address these limitations, we propose a Trusted Unified Feature-NEighborhood Dynamics (TUNED) model for robust MVC. This method effectively integrates local and global feature-neighborhood (F-N) structures for robust decision-making. Specifically, we begin by extracting local F-N structures within each view. To further mitigate potential uncertainties and conflicts in multi-view fusion, we employ a selective Markov random field that adaptively manages cross-view neighborhood dependencies. Additionally, we employ a shared parameterized evidence extractor that learns global consensus conditioned on local F-N structures, thereby enhancing the global integration of multi-view features. Experiments on benchmark datasets show that our method improves accuracy and robustness over existing approaches, particularly in scenarios with high uncertainty and conflicting views.
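The abstract's point that the Dempster-Shafer combination rule is sensitive to conflicting evidence is easy to see in code. Below is a minimal sketch of Dempster's rule in its generic textbook form (not the TUNED model), for mass functions whose focal sets are frozensets:

```python
from itertools import product

def dempster(m1, m2):
    """Combine two Dempster-Shafer mass functions (focal sets as frozensets)."""
    combined, conflict = {}, 0.0
    for (b, p), (c, q) in product(m1.items(), m2.items()):
        inter = b & c
        if inter:
            combined[inter] = combined.get(inter, 0.0) + p * q
        else:
            conflict += p * q  # mass landing on incompatible evidence
    if conflict >= 1.0:
        raise ValueError("total conflict: Dempster's rule is undefined")
    # Normalizing by 1 - K is what amplifies conflicting evidence.
    return {a: v / (1.0 - conflict) for a, v in combined.items()}
```

With Zadeh's classic near-total-conflict example (two sources each almost certain of different hypotheses, both giving 1% to a third), the normalization by 1 − K drives all mass onto the barely supported hypothesis; that pathology is the sensitivity the abstract refers to.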

TCS Journal 2024 Journal Article

On the 2-binomial complexity of the generalized Thue–Morse words

  • Xiao-Tao Lü
  • Jin Chen
  • Zhi-Xiong Wen
  • Wen Wu

In this paper, we study the 2-binomial complexity b_{t_m,2}(n) of the generalized Thue–Morse words t_m over the alphabet {0, 1, …, m − 1} for every integer m ≥ 3. By using boundary words, we fully characterize when two factors of t_m are 2-binomially equivalent. In particular, we obtain the exact value of b_{t_m,2}(n) for every integer n ≥ m². As a consequence, b_{t_m,2}(n) is ultimately periodic with period m². This result partially answers a question of Lejeune et al. (2020) [11].
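For readers unfamiliar with the terminology: two words of the same length are 2-binomially equivalent when every word of length ≤ 2 occurs as a scattered subword equally often in both. A small sketch of these standard definitions (not the paper's boundary-word machinery):

```python
from itertools import product

def word_binomial(u, x):
    """Binomial coefficient (u choose x): occurrences of x as a
    scattered subword (subsequence) of u, via dynamic programming."""
    # dp[j] = number of ways to realize the prefix x[:j] so far.
    dp = [1] + [0] * len(x)
    for ch in u:
        for j in range(len(x) - 1, -1, -1):
            if x[j] == ch:
                dp[j + 1] += dp[j]
    return dp[-1]

def binomially_equivalent(u, v, k, alphabet):
    """k-binomial equivalence: equal counts for every word of length <= k."""
    if len(u) != len(v):
        return False
    words = ("".join(w) for L in range(1, k + 1)
             for w in product(alphabet, repeat=L))
    return all(word_binomial(u, w) == word_binomial(v, w) for w in words)
```

For example, "0110" and "1001" are 2-binomially equivalent, while the abelian-equivalent pair "0011" and "0101" is not, since "01" occurs 4 versus 3 times as a scattered subword.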

JBHI Journal 2023 Journal Article

A Comparative Effectiveness Study on Opioid Use Disorder Prediction Using Artificial Intelligence and Existing Risk Models

  • Sajjad Fouladvand
  • Jeffery Talbert
  • Linda P. Dwoskin
  • Heather Bush
  • Amy L. Meadows
  • Lars E. Peterson
  • Yash R. Mishra
  • Steven K. Roggenkamp

Opioid use disorder (OUD) is a leading cause of death in the United States, placing a tremendous burden on patients, their families, and health care systems. Artificial intelligence (AI) can be harnessed with available healthcare data to produce automated OUD prediction tools. In this retrospective study, we developed AI-based models for OUD prediction and showed that AI can predict OUD more effectively than existing clinical tools, including the unweighted opioid risk tool (ORT). The data include 474,208 patients over 10 years; 269,748 were females, with an average age of 56.78 years. Cases are prescription opioid users with at least one diagnosis of OUD or at least one prescription for buprenorphine or methadone. Controls are prescription opioid users with no OUD diagnoses or buprenorphine or methadone prescriptions. On 100 randomly selected test sets including 47,396 patients, our proposed transformer-based AI model predicts OUD more accurately (AUC = 0.742 ± 0.021) than logistic regression (AUC = 0.651 ± 0.025), random forest (AUC = 0.679 ± 0.026), XGBoost (AUC = 0.690 ± 0.027), a long short-term memory model (AUC = 0.706 ± 0.026), a transformer (AUC = 0.725 ± 0.024), and the unweighted ORT model (AUC = 0.559 ± 0.025). Our results show that embedding AI algorithms into clinical care may assist clinicians in risk stratification and management of patients receiving opioid therapy.

EAAI Journal 2023 Journal Article

A Semi-Supervised Network Framework for low-light image enhancement

  • Jin Chen
  • Yong Wang
  • Yujuan Han

Existing supervised learning-based low-light image enhancement algorithms treat all degradations as a whole, resulting in limited enhancement performance, while fully unsupervised learning struggles to recover hidden details, leaving the enhanced results visually unsatisfactory. To overcome the limitations of supervised and unsupervised learning, we propose a Semi-Supervised Network Framework (SSNF) to enhance low-light images. Specifically, we decouple the low-light image enhancement task into two stages. In the first stage of the SSNF, we employ methods based on information entropy and Retinex theory to improve the visibility of images. Notably, this stage is a lightweight self-supervised network, which needs only low-light images as input and minute-level training to achieve brightness improvement. In the second stage of the SSNF, we utilize U-Net and residual networks to remove the noise and degradation remaining in the first-stage results, thereby improving the visual properties of the enhanced images. This sidesteps the challenge of enhancing low-light images directly. We conduct extensive experiments on datasets such as LOL, DICM, and synthetic data. The experimental results show that SSNF produces better visual effects and outperforms other advanced methods on performance metrics.

NeurIPS Conference 2023 Conference Paper

Knowledge Distillation for High Dimensional Search Index

  • Zepu Lu
  • Jin Chen
  • Defu Lian
  • Zaixi Zhang
  • Yong Ge
  • Enhong Chen

Lightweight compressed models are prevalent in Approximate Nearest Neighbor Search (ANNS) and Maximum Inner Product Search (MIPS) owing to their retrieval efficiency on large-scale datasets. However, results given by compressed methods are less accurate due to the curse of dimensionality and the limitations of their optimization objectives (e.g., lacking interactions between queries and documents). This motivates a new learning algorithm for compressed search indexes in high dimensions that improves retrieval performance. In this paper, we propose a novel Knowledge Distillation for high-dimensional search index framework (KDindex), with the aim of efficiently learning lightweight indexes by distilling knowledge from high-precision ANNS and MIPS models such as graph-based indexes. Specifically, the student is guided to keep the same ranking order of the top-k relevant results yielded by the teacher model, which acts as an additional supervision signal between queries and documents for learning the similarities between documents. Furthermore, to avoid the trivial solution in which all candidates are partitioned to the same centroid, a reconstruction loss that minimizes the compression error and a posting-list balance strategy that equally allocates the candidates are integrated into the learning objective. Experimental results demonstrate that KDindex outperforms existing learnable quantization-based indexes and is 40× lighter than state-of-the-art non-exhaustive methods while achieving comparable recall quality.

AAAI Conference 2022 Conference Paper

Adaptive Image-to-Video Scene Graph Generation via Knowledge Reasoning and Adversarial Learning

  • Jin Chen
  • Xiaofeng Ji
  • Xinxiao Wu

A scene graph of a video conveys a wealth of information about objects and their relationships in the scene, thus benefiting many downstream tasks such as video captioning and visual question answering. Existing methods of scene graph generation require large-scale training videos annotated with objects and relationships in each frame to learn a powerful model. However, such comprehensive annotation is time-consuming and labor-intensive. On the other hand, it is much easier and cheaper to annotate images with scene graphs, so we investigate leveraging annotated images to facilitate training a scene graph generation model for unannotated videos, namely image-to-video scene graph generation. This task presents two challenges: 1) inferring unseen dynamic relationships in videos from static relationships in images, due to the absence of motion information in images; 2) adapting objects and static relationships from images to video frames, due to the domain shift between them. To address the first challenge, we exploit external commonsense knowledge to infer unseen dynamic relationships from the temporal evolution of static relationships. We tackle the second challenge by hierarchical adversarial learning to reduce the data distribution discrepancy between images and video frames. Extensive experimental results on two benchmark video datasets demonstrate the effectiveness of our method.

NeurIPS Conference 2022 Conference Paper

Cache-Augmented Inbatch Importance Resampling for Training Recommender Retriever

  • Jin Chen
  • Defu Lian
  • Yucheng Li
  • Baoyun Wang
  • Kai Zheng
  • Enhong Chen

Recommender retrievers aim to rapidly retrieve a fraction of items from the entire item corpus when a user query arrives, with the representative two-tower model trained with the log-softmax loss. For efficiently training recommender retrievers on modern hardware, inbatch sampling, where the items in the mini-batch are shared as negatives to estimate the softmax function, has attracted growing interest. However, existing inbatch sampling strategies correct the sampling bias of inbatch items only with item frequency, so they cannot distinguish between the user queries within the mini-batch and still incur significant bias from the softmax. In this paper, we propose Cache-Augmented Inbatch Importance Resampling (XIR) for training recommender retrievers, which not only offers different negatives to user queries with inbatch items, but also adaptively achieves a more accurate estimation of the softmax distribution. Specifically, XIR resamples items from the given mini-batch training pairs based on certain probabilities, where a cache of more frequently sampled items is adopted to augment the candidate item set, with the purpose of reusing historically informative samples. XIR samples query-dependent negatives based on inbatch items and captures dynamic changes during model training, which leads to a better approximation of the softmax and further contributes to better convergence. Finally, we conduct experiments to validate the superior performance of the proposed XIR compared with competitive approaches.
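The frequency-based bias correction that the abstract says existing inbatch strategies rely on is the classic logQ correction of sampled softmax. A minimal sketch of that baseline (not XIR's cache-augmented resampling); names and shapes are illustrative:

```python
import math

def inbatch_sampled_softmax_loss(scores, item_freq):
    """Softmax cross-entropy over inbatch items with logQ correction.

    scores[i][j] : score of query i against the item of batch row j;
                   row i's own item (the diagonal) is the positive.
    item_freq[j] : sampling probability of item j, used to debias the
                   logits via logit' = logit - log q(item).
    """
    n, loss = len(scores), 0.0
    for i in range(n):
        corrected = [scores[i][j] - math.log(item_freq[j]) for j in range(n)]
        log_z = math.log(sum(math.exp(c) for c in corrected))
        loss += log_z - corrected[i]  # -log softmax probability of the positive
    return loss / n
```

Subtracting log q(item) penalizes frequently sampled items less harshly as negatives; XIR's criticism is that this correction is the same for every query in the batch.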

TCS Journal 2022 Journal Article

On the 2-abelian complexity of generalized Cantor sequences

  • Xiao-Tao Lü
  • Jin Chen
  • Zhi-Xiong Wen

In this paper, we study the generalized Cantor sequence c, which is an ℓ-automatic sequence. We prove that the abelian complexity of the 2-block sequence of c is ℓ-regular if the factor set of the sequence c is mirror invariant. As a consequence, we show that the 2-abelian complexity of a generalized Cantor sequence satisfying certain conditions is ℓ-regular.

AAAI Conference 2021 Conference Paper

Efficient Optimal Selection for Composited Advertising Creatives with Tree Structure

  • Jin Chen
  • Tiezheng Ge
  • Gangwei Jiang
  • Zhiqiang Zhang
  • Defu Lian
  • Kai Zheng

Ad creatives are one of the prominent mediums for online e-commerce advertisements. Ad creatives with an enjoyable visual appearance may increase the click-through rate (CTR) of products. Ad creatives are typically handcrafted by advertisers and then delivered to the advertising platforms for advertisement. In recent years, advertising platforms have become capable of instantly compositing ad creatives from arbitrarily designated elements of each ingredient, so advertisers are only required to provide basic materials. While this eases the advertisers' workload, a great number of potential ad creatives can be composited, making it difficult to accurately estimate their CTR given limited real-time feedback. To this end, we propose an Adaptive and Efficient ad creative Selection (AES) framework based on a tree structure. The tree structure over compositing ingredients enables dynamic programming for efficient CTR-based ad creative selection. Due to limited feedback, the CTR estimator is usually of high variance. Exploration techniques based on Thompson sampling are widely used to reduce the variance of the CTR estimator, alleviating feedback sparsity. Based on the tree structure, Thompson sampling is adapted with dynamic programming, leading to efficient exploration of potential ad creatives with the largest CTR. We finally evaluate the proposed algorithm on a synthetic dataset and a real-world dataset. The results show that our approach outperforms competing baselines in terms of convergence rate and overall CTR.
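The exploration component is standard Thompson sampling with Beta-Bernoulli posteriors. A flat (tree-free) sketch for intuition, leaving out the paper's dynamic-programming adaptation; the arm names and simulated feedback are illustrative:

```python
import random

def thompson_select(stats, rng):
    """Pick the arm whose sampled Beta posterior draw is largest.

    stats[arm] = [clicks, misses]; posterior is Beta(clicks + 1, misses + 1).
    """
    draws = {arm: rng.betavariate(c + 1, m + 1) for arm, (c, m) in stats.items()}
    return max(draws, key=draws.get)

def run_bandit(true_ctr, rounds=2000, seed=0):
    """Simulate Bernoulli click feedback against fixed per-arm CTRs."""
    rng = random.Random(seed)
    stats = {arm: [0, 0] for arm in true_ctr}
    for _ in range(rounds):
        arm = thompson_select(stats, rng)
        clicked = rng.random() < true_ctr[arm]
        stats[arm][0 if clicked else 1] += 1
    return stats
```

Posterior sampling concentrates pulls on the higher-CTR creative while still occasionally exploring, which is the variance-reduction behavior the abstract describes; AES additionally shares these draws across the ingredient tree via dynamic programming.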

AAAI Conference 2021 Conference Paper

Spatial-temporal Causal Inference for Partial Image-to-video Adaptation

  • Jin Chen
  • Xinxiao Wu
  • Yao Hu
  • Jiebo Luo

Image-to-video adaptation leverages off-the-shelf models learned on labeled images to help classification in unlabeled videos, thus alleviating the high computation overhead of training a video classifier from scratch. This task is very challenging since there exist two types of domain shifts between images and videos: 1) spatial domain shift caused by static appearance variance between images and video frames, and 2) temporal domain shift caused by the absence of dynamic motion in images. Moreover, for different video classes, these two domain shifts have different effects on the domain gap and should not be treated equally during adaptation. In this paper, we propose a spatial-temporal causal inference framework for image-to-video adaptation. We first construct a spatial-temporal causal graph to infer the effects of the spatial and temporal domain shifts by performing counterfactual causality. We then learn causality-guided bidirectional heterogeneous mappings between images and videos to adaptively reduce the two domain shifts. Moreover, to relax the assumption made by existing methods that the label spaces of the image and video domains are identical, we incorporate class-wise alignment into the learning of image-video mappings to perform partial image-to-video adaptation, where the image label space subsumes the video label space. Extensive experiments on several video datasets validate the effectiveness of our proposed method.

IS Journal 2020 Journal Article

Collaborative Filtering With Ranking-Based Priors on Unknown Ratings

  • Jin Chen
  • Defu Lian
  • Kai Zheng

Advanced collaborative filtering methods based on explicit feedback assume that unknown ratings are missing not at random. The state-of-the-art algorithm hypothesizes that unknown items are weakly rated and sets an explicit prior on unknown ratings. However, a prior assuming unknown ratings to be close to zero may be questionable, and it is challenging to set appropriate prior ratings for unknown items. In this article, to avert the use of prior ratings, we propose a ranking-based prior that hypothesizes that each user's unknown ratings are close to each other. This prior essentially acts as a regularizer that penalizes the discrepancy between the predicted ratings of any two unknown items. With the ranking-based prior, we design a generic collaborative filtering framework for explicit feedback and develop an efficient optimization algorithm for parameter learning. We finally evaluate the proposed algorithms on four real-world rating datasets. The results show that the proposed algorithms consistently outperform the state-of-the-art baselines and that the ranking-based prior leads to superior recommendation accuracy.

AAAI Conference 2020 Conference Paper

Deep Object Co-Segmentation via Spatial-Semantic Network Modulation

  • Kaihua Zhang
  • Jin Chen
  • Bo Liu
  • Qingshan Liu

Object co-segmentation is to segment the shared objects in multiple relevant images, which has numerous applications in computer vision. This paper presents a spatially and semantically modulated deep network framework for object co-segmentation. A backbone network is adopted to extract multi-resolution image features. With the multi-resolution features of the relevant images as input, we design a spatial modulator to learn a mask for each image. The spatial modulator captures the correlations of image feature descriptors via unsupervised learning. The learned mask can roughly localize the shared foreground object while suppressing the background. For the semantic modulator, we model it as a supervised image classification task. We propose a hierarchical second-order pooling module to transform the image features for classification use. The outputs of the two modulators manipulate the multi-resolution features by a shift-and-scale operation so that the features focus on segmenting co-object regions. The proposed model is trained end-to-end without any intricate post-processing. Extensive experiments on four image co-segmentation benchmark datasets demonstrate the superior accuracy of the proposed method compared to state-of-the-art methods. The code is available at http://kaihuazhang.net/.

AAAI Conference 2019 Conference Paper

Improving One-Class Collaborative Filtering via Ranking-Based Implicit Regularizer

  • Jin Chen
  • Defu Lian
  • Kai Zheng

One-class collaborative filtering (OCCF) problems are vital in many applications of recommender systems, such as news and music recommendation, but suffer from sparsity and a lack of negative examples. To address this problem, state-of-the-art methods assign smaller weights to unobserved samples and perform low-rank approximation. However, the ground-truth ratings of unobserved samples are usually set to zero, which is ill-defined. In this paper, we propose a ranking-based implicit regularizer and provide a new general framework for OCCF that averts the ground-truth ratings of unobserved samples. We then exploit it to regularize a ranking-based loss function and design efficient optimization algorithms to learn model parameters. Finally, we evaluate them on three real-world datasets. The results show that the proposed regularizer significantly improves ranking-based algorithms and that the proposed framework outperforms the state-of-the-art OCCF algorithms.

TCS Journal 2019 Journal Article

On the abelian complexity of generalized Thue-Morse sequences

  • Jin Chen
  • Zhi-Xiong Wen

In this paper, we study the abelian complexity ρ_n^ab(t(k)) of the generalized Thue–Morse sequences t(k) for every integer k ≥ 2. We obtain the exact value of ρ_n^ab(t(k)) for every integer n ≥ k. Consequently, ρ_n^ab(t(k)) is ultimately periodic with period k. Moreover, we show that the abelian complexities of a class of infinite sequences are automatic sequences.
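Under the standard definition of the generalized Thue–Morse sequence (an assumption here, since the abstract does not spell it out), t(k) at position n is the binary digit sum of n modulo k, and the abelian complexity counts distinct Parikh vectors among the length-n factors. A sketch that checks the small cases empirically on a long prefix:

```python
def gtm(k, length):
    """Prefix of the generalized Thue–Morse word t(k): s_2(n) mod k,
    where s_2(n) is the binary digit sum of n."""
    return [bin(n).count("1") % k for n in range(length)]

def abelian_complexity(word, n):
    """Number of distinct Parikh vectors among the length-n factors
    occurring in this prefix (exact for small n on a long prefix,
    since Thue–Morse-like words are uniformly recurrent)."""
    alphabet = sorted(set(word))
    seen = {tuple(word[i:i + n].count(a) for a in alphabet)
            for i in range(len(word) - n + 1)}
    return len(seen)
```

For k = 2 this recovers the classic Thue–Morse word, whose abelian complexity alternates between 2 (odd n) and 3 (even n ≥ 2), consistent with ultimate periodicity with period k.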

AAAI Conference 2018 Conference Paper

Unsupervised Deep Learning of Mid-Level Video Representation for Action Recognition

  • Jingyi Hou
  • Xinxiao Wu
  • Jin Chen
  • Jiebo Luo
  • Yunde Jia

Current deep learning methods for action recognition rely heavily on large-scale labeled video datasets. Manually annotating video datasets is laborious and may introduce unexpected bias into the complex deep models trained for learning video representations. In this paper, we propose an unsupervised deep learning method that employs unlabeled local spatial-temporal volumes extracted from action videos to learn a mid-level video representation for action recognition. Specifically, our method simultaneously discovers mid-level semantic concepts by discriminative clustering and optimizes local spatial-temporal features with two relatively small and simple deep neural networks. The clustering generates semantic visual concepts that guide the training of the deep networks, and the networks in turn guarantee the robustness of the semantic concepts. Experiments on the HMDB51 and UCF101 datasets demonstrate the superiority of the proposed method, even over several supervised learning methods.

TCS Journal 2016 Journal Article

On the permutation complexity of the Cantor-like sequences

  • Xiao-Tao Lü
  • Jin Chen
  • Ying-Jun Guo
  • Zhi-Xiong Wen

In this paper, we give a precise formula for the permutation complexity of Cantor-like sequences, which are non-uniformly recurrent automatic sequences. Since the sequences are automatic, as proved by Charlier et al. in 2012, the permutation complexity of each of them is a regular sequence. We give a precise recurrence relation and a generalized automaton for it.

AIIM Journal 2005 Journal Article

Discovering reliable protein interactions from high-throughput experimental data using network topology

  • Jin Chen
  • Wynne Hsu
  • Mong Li Lee
  • See-Kiong Ng

Objective: Current protein–protein interaction (PPI) detection via high-throughput experimental methods, such as yeast two-hybrid, has been reported to be highly erroneous, leading to potentially costly spurious discoveries. This work introduces a novel measure called IRAP, i.e., "interaction reliability by alternative path", for assessing the reliability of protein interactions based on the underlying topology of the PPI network. Methods and materials: A candidate PPI is considered reliable if it is involved in a closed loop in which the alternative path of interactions between the two interacting proteins is strong. We devise an algorithm called AlternativePathFinder to compute the IRAP value for each interaction in a complex PPI network. Validation of IRAP as a measure for assessing the reliability of PPIs is performed with extensive experiments on yeast PPI data. All the data used in our experiments can be downloaded from our supplementary data web site at http://www.comp.nus.edu.sg/~chenjin/data.html. Results: Results consistently show that the IRAP measure is an effective way to discover reliable PPIs in large datasets of error-prone, experimentally derived PPIs. Results also indicate that IRAP is better than IG2, and markedly better than the more simplistic IG1 measure. Conclusion: Experimental results demonstrate that a global, system-wide approach such as IRAP, which considers the entire interaction network instead of merely local neighbors, is a much more promising way to assess the reliability of PPIs.
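The alternative-path idea can be sketched generically: drop the candidate edge and look for the strongest indirect path between the two proteins. IRAP's exact formulation is not given in the abstract, so the product-of-reliabilities path strength below is an assumption, computed with a widest-path variant of Dijkstra's algorithm:

```python
import heapq

def alternative_path_strength(edges, u, v):
    """Strength of the best alternative path between u and v with the
    direct edge removed; path strength = product of edge reliabilities.

    edges: dict mapping frozenset({a, b}) -> reliability in (0, 1].
    Returns 0.0 if no alternative path exists.
    """
    graph = {}
    for e, w in edges.items():
        a, b = tuple(e)
        if {a, b} == {u, v}:
            continue  # drop the candidate interaction itself
        graph.setdefault(a, []).append((b, w))
        graph.setdefault(b, []).append((a, w))
    # Dijkstra maximizing the product of weights (negate for the min-heap).
    best, heap = {u: 1.0}, [(-1.0, u)]
    while heap:
        neg, node = heapq.heappop(heap)
        if node == v:
            return -neg
        if -neg < best.get(node, 0.0):
            continue  # stale heap entry
        for nxt, w in graph.get(node, []):
            s = -neg * w
            if s > best.get(nxt, 0.0):
                best[nxt] = s
                heapq.heappush(heap, (-s, nxt))
    return 0.0
```

A candidate interaction with a strong surviving alternative path is corroborated by the surrounding topology, which is the closed-loop criterion the abstract describes.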