Arrow Research search

Author name cluster

Rajgopal Kannan

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

6 papers
2 author rows

Possible papers (6)

UAI Conference 2025 Conference Paper

Conformal Prediction for Federated Graph Neural Networks with Missing Neighbor Information

  • Ömer Faruk Akgül
  • Rajgopal Kannan
  • Viktor K. Prasanna

Uncertainty quantification is essential for reliable federated graph learning, yet existing methods struggle with decentralized and heterogeneous data. In this work, we first extend Conformal Prediction (CP), a well-established method for uncertainty quantification, to federated graph learning, formalizing conditions for CP validity under partial exchangeability across distributed subgraphs. We prove that our approach maintains rigorous coverage guarantees even with client-specific data distributions. Building on this foundation, we address a key challenge in federated graph learning: missing neighbor information, which inflates CP set sizes and reduces efficiency. To mitigate this, we propose a variational autoencoder (VAE)-based architecture that reconstructs missing neighbors while preserving data privacy. Empirical evaluations on real-world datasets demonstrate the effectiveness of our method: our theoretically grounded federated training strategy reduces CP set sizes by 15.4%, with the VAE-based reconstruction providing an additional 4.9% improvement, all while maintaining rigorous coverage guarantees.
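The core split conformal procedure the paper builds on can be sketched as follows. This is the standard centralized version with the finite-sample quantile correction, not the paper's partial-exchangeability federated extension:

```python
import numpy as np

def split_conformal_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Standard split conformal prediction for classification.

    cal_probs:  (n, K) predicted class probabilities on a calibration set
    cal_labels: (n,) true labels for the calibration set
    test_probs: (m, K) predicted probabilities for new points
    Returns a list of prediction sets (arrays of class indices) whose
    marginal coverage is >= 1 - alpha under exchangeability.
    """
    n = len(cal_labels)
    # Nonconformity score: 1 - probability assigned to the true class.
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample-corrected quantile level, clipped to 1 for small n.
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q = np.quantile(scores, level, method="higher")
    # A class enters the set when its own score is within the threshold.
    return [np.where(1.0 - p <= q)[0] for p in test_probs]
```

With a calibration set where the model puts 0.9 on the true class, the threshold is 0.1, so a confident test prediction yields a singleton set; the "set size" the abstract reports shrinking is the average size of these sets.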

NeurIPS Conference 2025 Conference Paper

Mixture of Scope Experts at Test: Generalizing Deeper Graph Neural Networks with Shallow Variants

  • Gangda Deng
  • Hongkuan Zhou
  • Rajgopal Kannan
  • Viktor Prasanna

Heterophilous graphs, where dissimilar nodes tend to connect, pose a challenge for graph neural networks (GNNs). Increasing the GNN depth can expand the scope (i.e., receptive field), potentially finding homophily from the higher-order neighborhoods. However, GNNs suffer from performance degradation as depth increases. Despite having better expressivity, state-of-the-art deeper GNNs achieve only marginal improvements compared to their shallow variants. Through theoretical and empirical analysis, we systematically demonstrate a shift in GNN generalization preferences across nodes with different homophily levels as depth increases. This creates a disparity in generalization patterns between GNN models with varying depth. Based on these findings, we propose to improve deeper GNN generalization while maintaining high expressivity by Mixture of scope experts at test (Moscat). Experimental results show that Moscat works flexibly with various GNN architectures across a wide range of datasets while significantly improving accuracy.
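The test-time mixing idea can be illustrated with a minimal sketch: per-node gate weights combine logits from experts of different depths (scopes). The gate matrix here is simply given; Moscat's actual gating mechanism and the expert GNNs themselves are abstracted away:

```python
import numpy as np

def mix_scope_experts(expert_logits, gate_weights):
    """Combine per-node predictions from GNNs of different depths ("scopes").

    expert_logits: (E, N, C) logits from E depth experts over N nodes, C classes
    gate_weights:  (N, E) per-node mixture weights (rows sum to 1)
    Returns (N, C) mixed logits.
    """
    # Weight each expert's logits per node, then sum over experts.
    return np.einsum("ne,enc->nc", gate_weights, expert_logits)
```

A one-hot gate row reduces to hard expert selection, which matches the abstract's finding that different nodes (by homophily level) are best served by different depths.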

NeurIPS Conference 2021 Conference Paper

Decoupling the Depth and Scope of Graph Neural Networks

  • Hanqing Zeng
  • Muhan Zhang
  • Yinglong Xia
  • Ajitesh Srivastava
  • Andrey Malevich
  • Rajgopal Kannan
  • Viktor Prasanna
  • Long Jin

State-of-the-art Graph Neural Networks (GNNs) have limited scalability with respect to the graph and model sizes. On large graphs, increasing the model depth often means exponential expansion of the scope (i.e., receptive field). Beyond just a few layers, two fundamental challenges emerge: 1. degraded expressivity due to oversmoothing, and 2. expensive computation due to neighborhood explosion. We propose a design principle to decouple the depth and scope of GNNs: to generate representation of a target entity (i.e., a node or an edge), we first extract a localized subgraph as the bounded-size scope, and then apply a GNN of arbitrary depth on top of the subgraph. A properly extracted subgraph consists of a small number of critical neighbors, while excluding irrelevant ones. The GNN, no matter how deep it is, smooths the local neighborhood into informative representation rather than oversmoothing the global graph into “white noise”. Theoretically, decoupling improves the GNN expressive power from the perspectives of graph signal processing (GCN), function approximation (GraphSAGE) and topological learning (GIN). Empirically, on seven graphs (with up to 110M nodes) and six backbone GNN architectures, our design achieves significant accuracy improvement with orders of magnitude reduction in computation and hardware cost.
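The decoupling principle amounts to: first bound the scope, then apply depth. A minimal sketch of the scope-extraction step, using a plain hop-limited BFS in place of the paper's more careful selection of critical neighbors:

```python
from collections import deque

def extract_scope(adj, target, num_hops=2, max_nodes=50):
    """Extract a bounded-size localized subgraph around a target node.

    adj: dict mapping node -> list of neighbors
    Returns the set of nodes forming the scope; a GNN of arbitrary depth
    then message-passes only within this subgraph, so depth no longer
    expands the receptive field.
    """
    scope, frontier = {target}, deque([(target, 0)])
    while frontier and len(scope) < max_nodes:
        node, depth = frontier.popleft()
        if depth == num_hops:
            continue
        for nbr in adj[node]:
            if nbr not in scope:
                scope.add(nbr)
                frontier.append((nbr, depth + 1))
                if len(scope) == max_nodes:
                    break
    return scope
```

Because the scope is fixed before the GNN runs, stacking more layers deepens computation on the same small neighborhood instead of triggering neighborhood explosion.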

ICLR Conference 2020 Conference Paper

GraphSAINT: Graph Sampling Based Inductive Learning Method

  • Hanqing Zeng
  • Hongkuan Zhou
  • Ajitesh Srivastava
  • Rajgopal Kannan
  • Viktor K. Prasanna

Graph Convolutional Networks (GCNs) are powerful models for learning representations of attributed graphs. To scale GCNs to large graphs, state-of-the-art methods use various layer sampling techniques to alleviate the "neighbor explosion" problem during minibatch training. We propose GraphSAINT, a graph sampling based inductive learning method that improves training efficiency and accuracy in a fundamentally different way. By changing perspective, GraphSAINT constructs minibatches by sampling the training graph, rather than the nodes or edges across GCN layers. In each iteration, a complete GCN is built from the properly sampled subgraph. Thus, we ensure a fixed number of well-connected nodes in all layers. We further propose a normalization technique to eliminate bias, and sampling algorithms for variance reduction. Importantly, we can decouple the sampling from the forward and backward propagation, and extend GraphSAINT with many architecture variants (e.g., graph attention, jumping connection). GraphSAINT demonstrates superior performance in both accuracy and training time on five large graphs, and achieves new state-of-the-art F1 scores for PPI (0.995) and Reddit (0.970).
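The minibatch construction can be sketched as a random-walk sampler over the training graph: take the union of nodes visited by a few short walks as one subgraph, and train a full GCN on it. This omits the paper's normalization coefficients that debias the sampling:

```python
import random

def sample_training_subgraph(adj, num_roots=3, walk_len=2, seed=0):
    """GraphSAINT-style minibatch: sample the training graph, not the layers.

    adj: dict mapping node -> list of neighbors
    Starts `num_roots` random walks of length `walk_len` and returns the
    induced subgraph over all visited nodes as one minibatch.
    """
    rng = random.Random(seed)
    nodes = set()
    for _ in range(num_roots):
        v = rng.choice(list(adj))  # walk root, sampled uniformly
        nodes.add(v)
        for _ in range(walk_len):
            v = rng.choice(adj[v])
            nodes.add(v)
    # Induced edges: keep only edges whose both endpoints were sampled.
    edges = [(u, w) for u in nodes for w in adj[u] if w in nodes]
    return nodes, edges
```

Every layer of the GCN then operates on this same fixed node set, which is exactly how the method sidesteps per-layer neighbor explosion.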

AAMAS Conference 2018 Conference Paper

FActCheck: Keeping Activation of Fake News at Check

  • Ajitesh Srivastava
  • Rajgopal Kannan
  • Charalampos Chelmis
  • Viktor K. Prasanna

The diffusion of fake news has become a crucial problem in recent years. One way to battle it is to propagate the corresponding real news. To achieve this goal, we find a set of individuals who are likely to receive the fake news so that they can test its credibility, and when they propagate the corresponding real news, it is likely to reach a large number of individuals. For this problem, we propose a polynomial time greedy algorithm (AFC) which provides a (1 − 1/e − ϵ)-approximation. We further optimize the runtime of AFC by developing a fast graph-pruning heuristic (RAFC) that performs as well as AFC in checking the spread of fake news. Our experiments on real-world networks demonstrate that our approach outperforms popular methods in social network analysis literature.
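The (1 − 1/e)-style guarantee comes from the standard greedy template for submodular maximization, of which maximum coverage is the simplest instance. A sketch of that template (the paper's AFC objective additionally models exposure probabilities under the diffusion process, simplified away here):

```python
def greedy_seed_selection(reach, k):
    """Greedy (1 - 1/e)-approximate seed selection (max-coverage instance).

    reach: dict mapping candidate individual -> set of users they reach
    k: number of seeds to select
    Repeatedly picks the candidate with the largest marginal gain in
    newly covered users, the classic submodular greedy step.
    """
    covered, seeds = set(), []
    for _ in range(k):
        # Candidate whose reach adds the most not-yet-covered users.
        best = max(reach, key=lambda c: len(reach[c] - covered))
        if not reach[best] - covered:
            break  # no remaining marginal gain
        seeds.append(best)
        covered |= reach[best]
    return seeds, covered
```

Note the greedy step deliberately skips the candidate with the second-largest raw reach when its audience overlaps what is already covered, which is what the approximation guarantee relies on.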

IJCAI Conference 2016 Conference Paper

Implementation of Learning-Based Dynamic Demand Response on a Campus Micro-Grid

  • Sanmukh R. Kuppannagari
  • Rajgopal Kannan
  • Charalampos Chelmis
  • Viktor K. Prasanna

Demand Response (DR) allows utilities to curtail electricity consumption during peak demand periods. Real-time automated DR can offer utilities a scalable solution for fine-grained control of curtailment over small intervals for the duration of the entire DR event. In this work, we demonstrate a system for real-time automated Dynamic DR (D2R). Our system has already been integrated with the electrical infrastructure of the University of Southern California, which offers a unique environment to study the impact of automated DR in a complex social and cultural environment including 170 buildings in a city-within-a-city scenario. Our large scale information processing system coupled with accurate forecasting models for sparse data and fast polynomial time optimization algorithms for curtailment maximization provide the ability to adapt and respond to changing curtailment requirements in near real-time. Our D2R algorithms automatically and dynamically select customers for load curtailment to guarantee the achievement of a curtailment target over a given DR interval.
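The per-interval selection step can be sketched as a greedy pick of buildings by forecast curtailable load until the target is met. This is only an illustrative simplification of the curtailment-maximization optimization the abstract describes, re-run each interval as forecasts update:

```python
def select_for_curtailment(predicted_kw, target_kw):
    """Pick buildings to curtail until a DR target is met (greedy sketch).

    predicted_kw: dict mapping building -> forecast curtailable load (kW)
    target_kw: curtailment target for this DR interval (kW)
    Returns the chosen buildings and their summed forecast curtailment.
    """
    chosen, total = [], 0.0
    # Largest forecast curtailment first, stop once the target is covered.
    for bldg, kw in sorted(predicted_kw.items(), key=lambda x: -x[1]):
        if total >= target_kw:
            break
        chosen.append(bldg)
        total += kw
    return chosen, total
```

Because the forecasts are sparse and change during the event, a deployed system would redo this selection every interval, which is the "dynamic" part of D2R.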