
IJCAI 2022

Adversarial Explanations for Knowledge Graph Embeddings

Conference Paper · Machine Learning · Artificial Intelligence

Abstract

We propose a novel black-box approach for performing adversarial attacks against knowledge graph embedding models. An adversarial attack is a small perturbation of the data at training time intended to cause model failure at test time. We make use of an efficient rule learning approach and use abductive reasoning to identify triples that serve as logical explanations for a particular prediction. The proposed attack is then based on the simple idea of suppressing or modifying one of the triples in the most confident explanation. Although our attack scheme is model-independent and only needs access to the training data, we report results on par with state-of-the-art white-box attack methods that additionally require full access to the model architecture, the learned embeddings, and the loss functions. This surprising result indicates that knowledge graph embedding models can partly be explained post hoc with the help of symbolic methods.
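The attack pipeline the abstract describes can be sketched in a few lines. The sketch below is illustrative only, not the authors' implementation: rule representation, grounding scheme, and all names are assumptions. It takes a target prediction (h, r, t), a list of learned Horn rules with confidence scores, and the training triples; it finds the most confident rule whose grounded body holds in the training graph (an abductive explanation) and returns one supporting triple to delete.

```python
# Illustrative sketch of the abstract's attack idea (all names hypothetical).
# A rule is (confidence, head_relation, body), where each body atom is
# (subject_var, relation, object_var) over the variables "X" and "Y".

def explain_and_attack(target, rules, train_triples):
    """Return a training triple to suppress for the target triple (h, r, t),
    or None if no learned rule explains the prediction."""
    h, r, t = target
    graph = set(train_triples)
    # Scan rules from most to least confident.
    for conf, head_rel, body in sorted(rules, key=lambda x: -x[0]):
        if head_rel != r:
            continue
        # Ground each body atom with the target's head/tail entities.
        grounded = [(h if s == "X" else t, rel, t if o == "Y" else h)
                    for s, rel, o in body]
        # If every grounded atom is a training triple, the rule body is an
        # abductive explanation; attack by deleting one of its triples.
        if all(g in graph for g in grounded):
            return grounded[0]
    return None
```

For example, if the model predicts (a, spouse, b) and the most confident learned rule is married_to(X, Y) => spouse(X, Y), the attack would remove (a, married_to, b) from the training set, depriving the model of the pattern that supports the prediction.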

Keywords

  • Knowledge Representation and Reasoning: Diagnosis and Abductive Reasoning
  • Knowledge Representation and Reasoning: Learning and Reasoning
  • Machine Learning: Adversarial Machine Learning
  • Machine Learning: Explainable/Interpretable Machine Learning
  • Machine Learning: Relational Learning

Context

Venue
International Joint Conference on Artificial Intelligence
Archive span
1969-2025
Indexed papers
14525
Paper id
149832063114494157