Arrow Research search

Author name cluster

Bassem Makni

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

3 papers

Possible papers


AAAI Conference 2021 Conference Paper

A Deep Reinforcement Learning Approach to First-Order Logic Theorem Proving

  • Maxwell Crouse
  • Ibrahim Abdelaziz
  • Bassem Makni
  • Spencer Whitehead
  • Cristina Cornelio
  • Pavan Kapanipathi
  • Kavitha Srinivas
  • Veronika Thost

Automated theorem provers have traditionally relied on manually tuned heuristics to guide how they perform proof search. Deep reinforcement learning has been proposed as a way to obviate the need for such heuristics; however, its deployment in automated theorem proving remains a challenge. In this paper we introduce TRAIL, a system that applies deep reinforcement learning to saturation-based theorem proving. TRAIL leverages (a) a novel neural representation of the state of a theorem prover and (b) a novel characterization of the inference selection process in terms of an attention-based action policy. We show through systematic analysis that these mechanisms allow TRAIL to significantly outperform previous reinforcement-learning-based theorem provers on two benchmark datasets for first-order logic automated theorem proving (proving around 15% more theorems).
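The "attention-based action policy" in the abstract can be illustrated with a minimal sketch (this is not the authors' code; the embeddings and function name are hypothetical): each candidate inference is scored by scaled dot-product attention against the prover-state embedding, and the scores are normalized into a probability distribution from which the next inference is selected.

```python
import math

def attention_policy(state_vec, action_vecs):
    """Toy attention-based action policy (illustrative only).

    state_vec:   embedding of the current prover state
    action_vecs: one embedding per candidate inference
    Returns a softmax probability over the candidate inferences.
    """
    d = len(state_vec)
    # Scaled dot-product attention scores between state and each action.
    scores = [sum(s * a for s, a in zip(state_vec, av)) / math.sqrt(d)
              for av in action_vecs]
    # Softmax with the max subtracted for numerical stability.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

# The policy prefers actions whose embedding aligns with the state.
state = [1.0, 0.0, 0.0]
actions = [[1.0, 0.0, 0.0],   # aligned with the state
           [0.0, 1.0, 0.0],   # orthogonal
           [-1.0, 0.0, 0.0]]  # opposed
probs = attention_policy(state, actions)
```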

AAAI Conference 2020 Conference Paper

Infusing Knowledge into the Textual Entailment Task Using Graph Convolutional Networks

  • Pavan Kapanipathi
  • Veronika Thost
  • Siva Sankalp Patel
  • Spencer Whitehead
  • Ibrahim Abdelaziz
  • Avinash Balakrishnan
  • Maria Chang
  • Kshitij Fadnis

Textual entailment is a fundamental task in natural language processing. Most approaches for solving this problem use only the textual content present in training data. A few approaches have shown that information from external knowledge sources like knowledge graphs (KGs) can add value, in addition to the textual content, by providing background knowledge that may be critical for a task. However, the proposed models do not fully exploit the information in the usually large and noisy KGs, and it is not clear how such information can be effectively encoded to be useful for entailment. We present an approach that complements text-based entailment models with information from KGs by (1) using Personalized PageRank to generate contextual subgraphs with reduced noise and (2) encoding these subgraphs using graph convolutional networks to capture the structural and semantic information in KGs. We evaluate our approach on multiple textual entailment datasets and show that the use of external knowledge helps the model to be robust and improves prediction accuracy. This is particularly evident in the challenging BreakingNLI dataset, where we see an absolute improvement of 5-20% over multiple text-based entailment models.
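Step (1) in the abstract, extracting a low-noise contextual subgraph with Personalized PageRank, can be sketched as follows (a hedged illustration, not the paper's implementation; the graph, seed concepts, and function names are invented for the example): random-walk restarts are concentrated on KG nodes mentioned in the premise and hypothesis, and only the top-ranked nodes are kept.

```python
def personalized_pagerank(graph, seeds, alpha=0.85, iters=50):
    """Power iteration with restarts concentrated on the seed nodes.

    graph: dict mapping each node to its list of out-neighbours
    seeds: KG nodes mentioned in the premise/hypothesis text
    """
    restart = {n: (1.0 / len(seeds) if n in seeds else 0.0) for n in graph}
    rank = dict(restart)
    for _ in range(iters):
        nxt = {n: (1 - alpha) * restart[n] for n in graph}
        for n, out in graph.items():
            if not out:
                continue
            share = alpha * rank[n] / len(out)  # spread mass to neighbours
            for m in out:
                nxt[m] += share
        rank = nxt
    return rank

def contextual_subgraph(graph, seeds, k=3):
    """Keep the k highest-ranked nodes and the edges among them."""
    rank = personalized_pagerank(graph, seeds)
    keep = set(sorted(rank, key=rank.get, reverse=True)[:k])
    return {n: [m for m in graph[n] if m in keep] for n in keep}

# Toy KG: nodes reachable from the seed outrank unrelated nodes.
kg = {"cat": ["animal"], "animal": ["cat", "organism"],
      "organism": ["animal"], "rock": ["mineral"], "mineral": ["rock"]}
sub = contextual_subgraph(kg, seeds={"cat"}, k=3)
```

The resulting subgraph (here the `cat`/`animal`/`organism` component) is what would then be fed to a graph convolutional network in step (2).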

AAAI Conference 2019 Conference Paper

Improving Natural Language Inference Using External Knowledge in the Science Questions Domain

  • Xiaoyan Wang
  • Pavan Kapanipathi
  • Ryan Musa
  • Mo Yu
  • Kartik Talamadupula
  • Ibrahim Abdelaziz
  • Maria Chang
  • Achille Fokoue

Natural Language Inference (NLI) is fundamental to many Natural Language Processing (NLP) applications including semantic search and question answering. The NLI problem has gained significant attention due to the release of large scale, challenging datasets. Present approaches to the problem largely focus on learning-based methods that use only textual information in order to classify whether a given premise entails, contradicts, or is neutral with respect to a given hypothesis. Surprisingly, the use of methods based on structured knowledge – a central topic in artificial intelligence – has not received much attention vis-a-vis the NLI problem. While there are many open knowledge bases that contain various types of reasoning information, their use for NLI has not been well explored. To address this, we present a combination of techniques that harness external knowledge to improve performance on the NLI problem in the science questions domain. We present the results of applying our techniques on text, graph, and text-and-graph based models; and discuss the implications of using external knowledge to solve the NLI problem. Our model achieves close to state-of-the-art performance for NLI on the SciTail science questions dataset.