AAAI 2021 Conference Paper
A Unified Multi-Task Learning Framework for Joint Extraction of Entities and Relations
- Tianyang Zhao
- Zhao Yan
- Yunbo Cao
- Zhoujun Li
Joint extraction of entities and relations has achieved great success in recent years through task decomposition and multi-task learning. Previous works perform the task effectively with different extraction orders, such as relation-last, relation-first, and relation-middle manners. However, these methods still suffer from the template dependency, non-entity detection, and non-predefined relation prediction problems. To overcome these challenges, in this paper, we propose a unified multi-task learning framework, which decomposes the task into three interrelated sub-tasks. Specifically, we first introduce a type-attentional method for subject extraction to provide prior type information explicitly. Then, subject-aware relation prediction is presented to select useful relations based on the combination of global and local semantics. Third, we propose a question-generation-based QA method for object extraction to obtain diverse queries automatically. Notably, our method detects subjects and objects without relying on NER models and is thus capable of handling the non-entity scenario. Finally, the three sub-tasks are integrated into a unified model through parameter sharing. Extensive experiments demonstrate that the proposed framework outperforms all baseline methods on four benchmark datasets, and further achieves excellent performance on non-predefined relations.
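The three-stage decomposition described in the abstract (subject extraction with type priors, subject-aware relation prediction, then QA-style object extraction) can be sketched as a toy pipeline. Everything below is an illustrative assumption, not the paper's actual architecture: the type lexicon, relation schema, fixed question templates, and the rule-based "reader" stand in for the learned type-attentional extractor, relation classifier, question generator, and QA model.

```python
# Toy, rule-based stand-in for the three-sub-task decomposition.
# All names and data here are invented for illustration only.

def extract_subjects(sentence, type_lexicon):
    """Sub-task 1 (type-attentional subject extraction, sketched):
    return candidate subject spans with their prior type."""
    return [(tok, type_lexicon[tok]) for tok in sentence.split()
            if tok in type_lexicon]

def predict_relations(subj_type, relation_schema):
    """Sub-task 2 (subject-aware relation prediction, sketched):
    keep only relations whose domain matches the subject's type."""
    return [r for r, (dom, _rng) in relation_schema.items()
            if dom == subj_type]

def extract_objects(sentence, subject, relation, templates):
    """Sub-task 3 (QA-based object extraction, sketched): turn the
    (subject, relation) pair into a question; a fixed template stands
    in for the learned question generator, and capitalized tokens to
    the subject's right stand in for the QA reader's answer spans."""
    question = templates[relation].format(subject=subject)
    tokens = sentence.split()
    idx = tokens.index(subject)
    answers = [t for t in tokens[idx + 1:] if t[0].isupper()]
    return question, answers

# Toy inputs (invented).
sentence = "Alice founded Acme"
type_lexicon = {"Alice": "PER", "Acme": "ORG"}
relation_schema = {"founder_of": ("PER", "ORG"),
                   "located_in": ("ORG", "LOC")}
templates = {"founder_of": "What did {subject} found?",
             "located_in": "Where is {subject} located?"}

# Chain the three sub-tasks into (subject, relation, object) triples.
triples = []
for subj, styp in extract_subjects(sentence, type_lexicon):
    for rel in predict_relations(styp, relation_schema):
        _question, objs = extract_objects(sentence, subj, rel, templates)
        triples.extend((subj, rel, o) for o in objs)
```

In the paper's framework these stages share encoder parameters and are trained jointly; here they are plain functions purely to make the data flow between the sub-tasks concrete.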