
IJCAI 2017

Image-embodied Knowledge Representation Learning

Conference Paper

Abstract

Entity images could provide significant visual information for knowledge representation learning. Most conventional methods learn knowledge representations merely from structured triples, ignoring rich visual information extracted from entity images. In this paper, we propose a novel Image-embodied Knowledge Representation Learning model (IKRL), where knowledge representations are learned with both triple facts and images. More specifically, we first construct representations for all images of an entity with a neural image encoder. These image representations are then integrated into an aggregated image-based representation via an attention-based method. We evaluate our IKRL models on knowledge graph completion and triple classification. Experimental results demonstrate that our models outperform all baselines on both tasks, which indicates the significance of visual information for knowledge representations and the capability of our models in learning knowledge representations with images.
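The abstract describes aggregating multiple image representations of an entity into one image-based representation via attention. Below is a minimal NumPy sketch of such an attention-weighted aggregation, assuming dot-product attention scores between each image embedding and the entity's structure-based embedding; the exact scoring function used by IKRL may differ.

```python
import numpy as np

def aggregate_image_embeddings(image_embs, structure_emb):
    """Attention-weighted aggregation of an entity's image embeddings.

    image_embs:    (n_images, d) array of neural image-encoder outputs
    structure_emb: (d,) structure-based entity embedding, used here as
                   the attention query (an illustrative assumption)
    Returns a single (d,) aggregated image-based representation.
    """
    scores = image_embs @ structure_emb        # (n_images,) attention scores
    scores = scores - scores.max()             # subtract max for stability
    weights = np.exp(scores) / np.exp(scores).sum()  # softmax over images
    return weights @ image_embs                # convex combination of embeddings
```

Images whose embeddings align better with the entity's structural embedding receive higher weight, so noisy or off-topic images contribute less to the aggregated representation.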


Keywords

  • Machine Learning: Data Mining
  • Natural Language Processing: Information Extraction
  • Natural Language Processing: Natural Language Semantics

Context

Venue
International Joint Conference on Artificial Intelligence
Archive span
1969-2025
Indexed papers
14525
Paper id
104638344787681927