
NAI 2025

Graphic Improvements: Adding Explicit Syntactic Graphs to Neural Machine Translation

Journal Article · Artificial Intelligence · Neurosymbolic AI

Abstract

Neural language models such as Bidirectional Encoder Representations from Transformers (BERT) or the Generative Pre-trained Transformer (GPT) operate on sequences of words. Pretraining on a large corpus endows them with implicit knowledge about the relationships between words. This study explores the extent to which explicitly incorporating knowledge about syntactic relations, represented as a dependency graph, can enhance machine translation (MT). Specifically, it employs a graph attention network (GAT), trained on a Universal Dependencies corpus, to evaluate whether explicit syntactic knowledge, even when derived from a smaller corpus, adds value beyond the implicit knowledge acquired through pretraining on a massive corpus. An experiment integrating GAT models into the MT framework demonstrates robust improvements in translation quality for three language pairs, opening up possibilities for neurosymbolic approaches to natural language processing.
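As a rough illustration of the approach the abstract describes, the sketch below shows a minimal single-head graph attention layer in the style of Velickovic et al. (2018) that restricts each token's attention to its neighbors in a dependency graph. This is not the authors' implementation; the function name, dimensions, and the toy dependency arcs are all illustrative assumptions.

```python
import numpy as np

def gat_layer(h, edges, W, a, leaky=0.2):
    """Single-head graph attention layer over a dependency graph (sketch).

    h     : (n, d_in) token states
    edges : list of (head, dependent) dependency arcs
    W     : (d_in, d_out) shared projection
    a     : (2 * d_out,) attention vector
    """
    n = h.shape[0]
    z = h @ W                                   # project token states
    # Neighborhood sets with self-loops; arcs are used in both directions
    # so heads and dependents attend to each other.
    neigh = {i: {i} for i in range(n)}
    for head, dep in edges:
        neigh[dep].add(head)
        neigh[head].add(dep)
    out = np.zeros_like(z)
    for i in range(n):
        js = sorted(neigh[i])
        # Unnormalized attention scores over the syntactic neighborhood
        e = np.array([a @ np.concatenate([z[i], z[j]]) for j in js])
        e = np.where(e > 0, e, leaky * e)       # LeakyReLU
        alpha = np.exp(e - e.max())             # softmax over the arcs only
        alpha /= alpha.sum()
        out[i] = (alpha[:, None] * z[js]).sum(axis=0)
    return out

# Toy usage (hypothetical names and shapes): dependency arcs for "The cat sat",
# written as (head, dependent): det(cat, The), nsubj(sat, cat).
rng = np.random.default_rng(0)
h = rng.normal(size=(3, 8))                     # token states from the MT encoder
arcs = [(1, 0), (2, 1)]
W = 0.1 * rng.normal(size=(8, 8))
a = 0.1 * rng.normal(size=(16,))
h_syn = gat_layer(h, arcs, W, a)                # syntax-aware states, shape (3, 8)
```

The design point this sketch captures is that attention is computed only over explicit dependency arcs (plus self-loops), rather than over all token pairs, which is one way syntactic knowledge can be injected into the states an MT encoder consumes.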

Authors

Keywords

No keywords are indexed for this paper.

Context

Venue
Neurosymbolic Artificial Intelligence
Archive span
2024-2026
Indexed papers
43
Paper id
1056418512746173317