Arrow Research

Author name cluster

Yinuo Guo

Papers possibly associated with this exact author name in Arrow. This page groups case-insensitive exact-name matches and is not a full identity-disambiguation profile.

4 papers
1 author row

Possible papers (4)

AAAI 2021 · Conference Paper

Iterative Utterance Segmentation for Neural Semantic Parsing

  • Yinuo Guo
  • Zeqi Lin
  • Jian-Guang Lou
  • Dongmei Zhang

Neural semantic parsers usually fail to parse long and complex utterances into correct meaning representations, because they do not exploit the principle of compositionality. To address this issue, we present a novel framework for boosting neural semantic parsers via iterative utterance segmentation. Given an input utterance, our framework iterates between two neural modules: a segmenter that segments a span from the utterance, and a parser that maps the span into a partial meaning representation. These intermediate parsing results are then composed into the final meaning representation. One key advantage is that this framework does not require any handcrafted templates or additional labeled data for utterance segmentation: we achieve this by proposing a novel training method in which the parser provides pseudo supervision for the segmenter. Experiments on GEO, COMPLEXWEBQUESTIONS, and FORMULAS show that our framework consistently improves the performance of neural semantic parsers across domains. On data splits that require compositional generalization, it brings significant accuracy gains: GEO 63.1 → 81.2, FORMULAS 59.7 → 72.7, COMPLEXWEBQUESTIONS 27.1 → 56.3.
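
The segment-then-parse loop described in the abstract can be pictured as a simple control flow. Below is a minimal sketch, assuming hypothetical `segment`, `parse`, and `compose` callables that stand in for the paper's neural segmenter, neural parser, and composition step:

```python
# Minimal sketch of the segment-then-parse loop; `segment`, `parse`, and
# `compose` are hypothetical stand-ins for the paper's neural modules.
from typing import Callable, List

def iterative_parse(
    utterance: str,
    segment: Callable[[str], str],        # picks a span out of the utterance
    parse: Callable[[str], str],          # maps a span to a partial meaning representation
    compose: Callable[[List[str]], str],  # composes partial MRs into the final MR
    max_iters: int = 10,
) -> str:
    partial_mrs: List[str] = []
    remaining = utterance
    for _ in range(max_iters):
        span = segment(remaining)
        if not span or span not in remaining:  # nothing left to split off
            break
        partial_mrs.append(parse(span))
        remaining = remaining.replace(span, "", 1).strip()
        if not remaining:
            break
    if remaining:                              # parse the residue as one last span
        partial_mrs.append(parse(remaining))
    return compose(partial_mrs)
```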

AAAI 2021 · Conference Paper

Revisiting Iterative Back-Translation from the Perspective of Compositional Generalization

  • Yinuo Guo
  • Hualei Zhu
  • Zeqi Lin
  • Bei Chen
  • Jian-Guang Lou
  • Dongmei Zhang

Human intelligence exhibits compositional generalization (i.e., the capacity to understand and produce unseen combinations of seen components), but current neural seq2seq models lack this ability. In this paper, we revisit iterative back-translation, a simple yet effective semi-supervised method, to investigate whether and how it can improve compositional generalization. In this work: (1) we first empirically show that iterative back-translation substantially improves performance on compositional generalization benchmarks (CFQ and SCAN); (2) to understand why iterative back-translation is useful, we carefully examine the performance gains and find that it increasingly corrects errors in pseudo-parallel data; (3) to further encourage this mechanism, we propose curriculum iterative back-translation, which further improves the quality of pseudo-parallel data and thus the final performance.
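
For intuition, here is a minimal sketch of an iterative back-translation loop under assumed model interfaces (the `translate` and `train` methods are hypothetical, not the paper's exact training recipe). Each round regenerates the pseudo-parallel data with the latest models, which is the mechanism by which earlier translation errors can be progressively corrected:

```python
# Minimal sketch of an iterative back-translation loop; the model objects
# and their `translate`/`train` methods are assumptions, not the paper's API.
from typing import List, Tuple

Pair = Tuple[str, str]  # (source sentence, target representation)

def iterative_back_translation(
    fwd,                    # source -> target model (hypothetical interface)
    bwd,                    # target -> source model (hypothetical interface)
    parallel: List[Pair],   # small labeled set of (source, target) pairs
    mono_src: List[str],    # unlabeled source-side data
    mono_tgt: List[str],    # unlabeled target-side data
    rounds: int = 3,
):
    for _ in range(rounds):
        # Regenerate pseudo-parallel data with the latest models; as both
        # models improve, errors in these pseudo pairs get corrected.
        pseudo_for_fwd = [(bwd.translate(t), t) for t in mono_tgt]
        pseudo_for_bwd = [(fwd.translate(s), s) for s in mono_src]
        fwd.train(parallel + pseudo_for_fwd)
        bwd.train([(t, s) for (s, t) in parallel] + pseudo_for_bwd)
    return fwd, bwd
```

The curriculum variant named in the abstract would, roughly speaking, feed these pseudo-parallel examples from easy to hard rather than all at once.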

AAAI 2020 · Conference Paper

Fact-Aware Sentence Split and Rephrase with Permutation Invariant Training

  • Yinuo Guo
  • Tao Ge
  • Furu Wei

Sentence Split and Rephrase aims to break down a complex sentence into several simple sentences while preserving its meaning. Previous studies tend to address the task with seq2seq learning from parallel sentence pairs, taking a complex sentence as input and sequentially generating a series of simple sentences. However, conventional seq2seq learning has two limitations for this task: (1) it does not take into account the facts stated in the long sentence, so the generated simple sentences may miss or inaccurately state those facts; (2) the order variance of the simple sentences to be generated may confuse the seq2seq model during training, because the simple sentences derived from the long source sentence could be in any order. To overcome these challenges, we first propose Fact-aware Sentence Encoding, which enables the model to learn facts from the long sentence and thus improves the precision of the split; we then introduce Permutation Invariant Training to alleviate the effects of order variance in seq2seq learning for this task. Experiments on the WebSplit-v1.0 benchmark dataset show that our approaches substantially improve performance over previous seq2seq learning approaches. Moreover, an extrinsic evaluation on the OIE benchmark verifies the effectiveness of our approaches, showing that splitting long sentences with our state-of-the-art model as a preprocessing step improves OpenIE performance.
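
The permutation-invariant idea can be illustrated with a small loss function: score the predicted simple sentences against every ordering of the references and train on the best match. A minimal sketch, with `pairwise_loss` as a hypothetical per-sentence loss (e.g., token-level cross-entropy):

```python
# Minimal sketch of a permutation-invariant loss; `pairwise_loss` is a
# hypothetical per-sentence loss (e.g. token-level cross-entropy).
from itertools import permutations
from typing import Callable, Sequence

def pit_loss(
    predictions: Sequence[str],
    references: Sequence[str],
    pairwise_loss: Callable[[str, str], float],
) -> float:
    # Try every ordering of the reference simple sentences and keep the
    # cheapest alignment, so the model is not penalized for order variance.
    best = float("inf")
    for perm in permutations(references):
        total = sum(pairwise_loss(p, r) for p, r in zip(predictions, perm))
        best = min(best, total)
    return best
```

Enumerating permutations is factorial in the number of sentences, which stays cheap here because a split typically yields only a handful of simple sentences; much larger output sets would call for an assignment-style matching instead.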

NeurIPS 2020 · Conference Paper

Hierarchical Poset Decoding for Compositional Generalization in Language

  • Yinuo Guo
  • Zeqi Lin
  • Jian-Guang Lou
  • Dongmei Zhang

We formalize human language understanding as a structured prediction task where the output is a partially ordered set (poset). Current encoder-decoder architectures do not properly take the poset structure of semantics into account, and thus suffer from poor compositional generalization. In this paper, we propose a novel hierarchical poset decoding paradigm for compositional generalization in language. Intuitively: (1) the proposed paradigm enforces partial permutation invariance in semantics, thus avoiding overfitting to biased ordering information; (2) the hierarchical mechanism allows the decoder to capture the high-level structure of posets. We evaluate our proposed decoder on Compositional Freebase Questions (CFQ), a large and realistic natural language question answering dataset specifically designed to measure compositional generalization. Results show that it outperforms current decoders.
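
To see what partial permutation invariance means for a poset, note that any topological order of the semantic dependencies is an equally valid linearization. A minimal, brute-force sketch for illustration only:

```python
# Every topological order of a poset is an equally valid linearization,
# which is the partial permutation invariance a poset decoder should respect.
from itertools import permutations
from typing import Dict, List, Set

def linearizations(poset: Dict[str, Set[str]]) -> List[List[str]]:
    """Enumerate all topological orders; poset[x] is the set of elements
    that must come before x."""
    valid = []
    for perm in permutations(poset):
        seen: Set[str] = set()
        ok = True
        for node in perm:
            if not poset[node] <= seen:  # a prerequisite has not appeared yet
                ok = False
                break
            seen.add(node)
        if ok:
            valid.append(list(perm))
    return valid

# Tiny example: "b" and "c" both depend on "a" but not on each other,
# so two orders are equally correct decoding targets.
# linearizations({"a": set(), "b": {"a"}, "c": {"a"}})
# -> [['a', 'b', 'c'], ['a', 'c', 'b']]
```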