
AAAI 2023

MNER-QG: An End-to-End MRC Framework for Multimodal Named Entity Recognition with Query Grounding

Conference Paper · AAAI Technical Track on Machine Learning II · Artificial Intelligence

Abstract

Multimodal named entity recognition (MNER) is a critical step in information extraction, which aims to detect entity spans and classify them into the corresponding entity types given a sentence-image pair. Existing methods either (1) obtain named entities with coarse-grained visual clues from attention mechanisms, or (2) first detect fine-grained visual regions with toolkits and then recognize named entities. However, these methods suffer from improper alignment between entity types and visual regions, or from error propagation in the two-stage pipeline, which ultimately introduces irrelevant visual information into the text. In this paper, we propose a novel end-to-end framework named MNER-QG that simultaneously performs MRC-based multimodal named entity recognition and query grounding. Specifically, with the assistance of queries, MNER-QG provides prior knowledge of entity types and visual regions, and further enhances the representations of both text and image. To conduct the query grounding task, we provide manual annotations as well as weak supervision labels obtained by training a highly flexible visual grounding model with transfer learning. We conduct extensive experiments on two public MNER datasets, Twitter2015 and Twitter2017. Experimental results show that MNER-QG outperforms the current state-of-the-art models on the MNER task and also improves query grounding performance.
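The MRC framing the abstract describes recasts NER as span extraction: each entity type becomes a natural-language query paired with the sentence. A minimal sketch of that query construction is below; the query wordings and function names are hypothetical illustrations, not the paper's actual implementation.

```python
# Illustrative sketch of MRC-style query formulation for NER:
# each entity type is cast as a natural-language query, so the model
# predicts answer spans per (query, sentence) pair instead of per-token
# tags. Query texts here are made-up examples, not MNER-QG's wording.

TYPE_QUERIES = {
    "PER": "Find person entities such as names of people.",
    "LOC": "Find location entities such as countries and cities.",
    "ORG": "Find organization entities such as companies and teams.",
    "MISC": "Find other named entities outside person, location, organization.",
}

def build_mrc_inputs(sentence: str) -> list[tuple[str, str, str]]:
    """Pair the sentence with one query per entity type.

    Returns (type, query, sentence) triples; a span-extraction model
    would predict start/end positions for each triple independently.
    """
    return [(etype, query, sentence) for etype, query in TYPE_QUERIES.items()]

inputs = build_mrc_inputs("Kevin Durant joins the Golden State Warriors.")
for etype, query, sent in inputs:
    print(etype, "->", query)
```

In MNER-QG the same queries also carry visual priors, since the grounding head localizes the image region relevant to each entity-type query.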

Authors

Keywords

  • CV: Multi-modal Vision
  • DMKM: Mining of Visual, Multimedia & Multimodal Data
  • ML: Multimodal Learning
  • SNLP: Information Extraction
  • SNLP: Language Grounding

Context

Venue
AAAI Conference on Artificial Intelligence
Archive span
1980-2026
Indexed papers
28718
Paper id
597170525490589324