
Author name cluster

Julien Kloetzer

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

Possible papers (5)

AAAI Conference 2019 Conference Paper

Exploiting Background Knowledge in Compact Answer Generation for Why-Questions

  • Ryu Iida
  • Canasai Kruengkrai
  • Ryo Ishida
  • Kentaro Torisawa
  • Jong-Hoon Oh
  • Julien Kloetzer

This paper proposes a novel method for generating compact answers to open-domain why-questions, such as the answer “Because deep learning technologies were introduced” to the question “Why did Google’s machine translation service improve so drastically?” Although many works have dealt with why-question answering, most have focused on retrieving as answers relatively long text passages that consist of several sentences. Because of their length, such passages are not appropriate to be read aloud by spoken dialog systems and smart speakers; hence, we need a method that generates compact answers. We developed a novel neural summarizer for this compact answer generation task. It combines a recurrent neural network-based encoder-decoder model with stacked convolutional neural networks and was designed to effectively exploit background knowledge, in this case a set of causal relations (e.g., “[Microsoft’s machine translation has made great progress over the last few years]effect since [it started to use deep learning]cause”) that was extracted from a large web data archive (4 billion web pages). Our experimental results show that our method achieved significantly better ROUGE F-scores than existing encoder-decoder models and their variations augmented with query-attention and memory networks, which are used to exploit the background knowledge.
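The ROUGE F-scores this abstract reports measure n-gram overlap between a generated answer and a human reference; a minimal ROUGE-1 F sketch (the function name is illustrative, not from the paper):

```python
from collections import Counter

def rouge1_f(candidate: str, reference: str) -> float:
    """ROUGE-1 F-score: harmonic mean of unigram precision and recall."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

score = rouge1_f("because deep learning technologies were introduced",
                 "because deep learning was introduced")  # ≈ 0.73
```

The published evaluation also uses higher-order ROUGE variants; this sketch shows only the unigram case.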

AAAI Conference 2018 Conference Paper

Semi-Distantly Supervised Neural Model for Generating Compact Answers to Open-Domain Why Questions

  • Ryo Ishida
  • Kentaro Torisawa
  • Jong-Hoon Oh
  • Ryu Iida
  • Canasai Kruengkrai
  • Julien Kloetzer

This paper proposes a neural network-based method for generating compact answers to open-domain why-questions (e.g., “Why was Mr. Trump elected as the president of the US?”). Unlike factoid question answering methods that provide short text spans as answers, existing work on why-question answering has aimed at answering questions by retrieving relatively long text passages, each of which often consists of several sentences, from a text archive. While the actual answer to a why-question may be expressed over several consecutive sentences, these often contain redundant and/or unrelated parts. Such answers would not be suitable for spoken dialog systems and smart speakers such as Amazon Echo, which have received much attention recently. In this work, we aim at generating non-redundant compact answers to why-questions from answer passages retrieved from a very large web data corpus (4 billion web pages) by an existing open-domain why-question answering system, using a novel neural network obtained by extending existing summarization methods. We also automatically generate training data using a large number of causal relations automatically extracted from the same 4 billion web pages by an existing supervised causality recognizer. This data is used to train our neural network together with manually created training data. Through a series of experiments, we show that both our novel neural network and the auto-generated training data improve the quality of the generated answers, both in ROUGE score and in a subjective evaluation.

AAAI Conference 2017 Conference Paper

Improving Event Causality Recognition with Multiple Background Knowledge Sources Using Multi-Column Convolutional Neural Networks

  • Canasai Kruengkrai
  • Kentaro Torisawa
  • Chikara Hashimoto
  • Julien Kloetzer
  • Jong-Hoon Oh
  • Masahiro Tanaka

We propose a method for recognizing such event causalities as “smoke cigarettes” → “die of lung cancer” using background knowledge taken from web texts as well as original sentences from which candidates for the causalities were extracted. We retrieve texts related to our event causality candidates from four billion web pages by three distinct methods, including a why-question answering system, and feed them to our multi-column convolutional neural networks. This allows us to identify the useful background knowledge scattered in web texts and effectively exploit the identified knowledge to recognize event causalities. We empirically show that the combination of our neural network architecture and background knowledge significantly improves average precision, while the previous state-of-the-art method gains just a small benefit from such background knowledge.
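The multi-column idea in this abstract is that each input (the causality candidate plus each retrieved background-knowledge text) gets its own convolutional column, and the pooled column outputs are combined for the final decision. A toy sketch of that structure, with made-up dimensions and random data standing in for real embeddings and learned filters:

```python
import numpy as np

def column(emb, kernel):
    """One convolutional column: a 1-D convolution over a token-embedding
    matrix followed by max-over-time pooling (nonlinearity omitted)."""
    n, _ = emb.shape
    k = kernel.shape[0]
    feats = [float(np.sum(emb[i:i + k] * kernel)) for i in range(n - k + 1)]
    return max(feats)

rng = np.random.default_rng(0)
# One column per input: the causality candidate and three background texts.
inputs = [rng.normal(size=(12, 8)) for _ in range(4)]   # toy token embeddings
kernels = [rng.normal(size=(3, 8)) for _ in range(4)]   # per-column filters
joint = np.array([column(e, k) for e, k in zip(inputs, kernels)])
# 'joint' would feed a classification layer deciding causal vs. not causal.
```

A real model would use many filters per column and train everything end to end; only the column-then-concatenate shape is what the abstract describes.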

AAAI Conference 2016 Conference Paper

A Semi-Supervised Learning Approach to Why-Question Answering

  • Jong-Hoon Oh
  • Kentaro Torisawa
  • Chikara Hashimoto
  • Ryu Iida
  • Masahiro Tanaka
  • Julien Kloetzer

We propose a semi-supervised learning method for improving why-question answering (why-QA). The key to our method is to generate training data (question–answer pairs) from causal relations in texts such as “[Tsunamis are generated]effect because [the ocean’s water mass is displaced by an earthquake]cause.” A naive method for the generation would be to make a question–answer pair by simply converting the effect part of the causal relation into a why-question, like “Why are tsunamis generated?” from the above example, and using the source text of the causal relation as an answer. However, in our preliminary experiments, this naive method actually failed to improve why-QA performance. The main reasons were that the machine-generated questions were often incomprehensible, like “Why does (it) happen?”, and that the system suffered from overfitting to the results of our automatic causality recognizer. Hence, we developed a novel method that effectively filters out incomprehensible questions and retrieves from texts answers that are likely to be paraphrases of a given causal relation. Through a series of experiments, we showed that our approach significantly improved the precision of the top answer by 8% over the current state-of-the-art system for Japanese why-QA.
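The naive baseline this abstract describes, converting the effect part of a tagged causal relation into a why-question, can be sketched as follows. This is a toy illustration only (the bracket-tag format mirrors the abstract's example; the regex, auxiliary list, and function name are assumptions, and the paper's actual system additionally filters incomprehensible questions):

```python
import re

AUX = {"is", "are", "was", "were", "can", "will", "has", "have", "does", "did"}

def causal_to_qa(tagged):
    """Naively turn '[X]effect because [Y]cause' into a (why-question, answer) pair."""
    m = re.match(r"\[(.+?)\]effect because \[(.+?)\]cause", tagged)
    if not m:
        return None
    effect, cause = m.groups()
    toks = effect.split()
    if len(toks) > 2 and toks[1] in AUX:
        # Subject-auxiliary inversion: "Tsunamis are generated" -> "are tsunamis generated"
        question = f"Why {toks[1]} {toks[0].lower()} {' '.join(toks[2:])}?"
    else:
        question = f"Why {effect[0].lower() + effect[1:]}?"
    return question, f"Because {cause}."

q, a = causal_to_qa("[Tsunamis are generated]effect because "
                    "[the ocean's water mass is displaced by an earthquake]cause")
# q: "Why are tsunamis generated?"
```

As the abstract notes, pairs generated this naively often come out incomprehensible, which is exactly why the proposed method adds filtering and paraphrase retrieval on top.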

AAAI Conference 2015 Conference Paper

Generating Event Causality Hypotheses through Semantic Relations

  • Chikara Hashimoto
  • Kentaro Torisawa
  • Julien Kloetzer
  • Jong-Hoon Oh

Event causality knowledge is indispensable for intelligent natural language understanding. The problem is that any method for extracting event causalities from text is insufficient; it is likely that some event causalities that we can recognize in this world are not written in any corpus, no matter its size. We propose a method of hypothesizing unseen event causalities from known event causalities extracted from the web via the semantic relations between nouns. For example, our method can hypothesize “deploy a security camera → avoid crimes” from “deploy a mosquito net → avoid malaria” through the semantic relation A PREVENTS B. Our experiments show that, from 2.4 million event causalities extracted from the web, our method generated more than 300,000 hypotheses, which were not in the input, with 70% precision. We also show that our method outperforms a state-of-the-art hypothesis generation method.
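The substitution pattern in the abstract's example, licensing a new causality by swapping noun pairs that share a semantic relation such as A PREVENTS B, can be sketched minimally. The data structures and names here are illustrative, not the paper's actual resources:

```python
# Known event causalities: (cause phrase, effect phrase, noun in cause, noun in effect).
known = [("deploy a mosquito net", "avoid malaria", "mosquito net", "malaria")]

# Noun pairs holding the semantic relation A PREVENTS B.
prevents = {("mosquito net", "malaria"), ("security camera", "crimes")}

def hypothesize(known, prevents):
    """If a known causality is licensed by A PREVENTS B, substitute other
    noun pairs with the same relation to hypothesize unseen causalities."""
    out = []
    for cause, effect, a, b in known:
        if (a, b) not in prevents:
            continue  # the known causality is not explained by this relation
        for a2, b2 in prevents:
            if (a2, b2) != (a, b):
                out.append((cause.replace(a, a2), effect.replace(b, b2)))
    return out

hyps = hypothesize(known, prevents)
# [('deploy a security camera', 'avoid crimes')]
```

The paper's method additionally scores hypotheses to reach the reported 70% precision; this sketch shows only the generation step.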