Arrow Research search

Author name cluster

Ayush Shrivastava

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

2 papers
1 author row

Possible papers

AAAI 2022 · Conference Paper

TEACh: Task-Driven Embodied Agents That Chat

  • Aishwarya Padmakumar
  • Jesse Thomason
  • Ayush Shrivastava
  • Patrick Lange
  • Anjali Narayan-Chen
  • Spandana Gella
  • Robinson Piramuthu
  • Gokhan Tur

Robots operating in human spaces must be able to engage in natural language interaction, both understanding and executing instructions, and using conversation to resolve ambiguity and correct mistakes. To study this, we introduce TEACh, a dataset of over 3,000 human–human, interactive dialogues to complete household tasks in simulation. A Commander with access to oracle information about a task communicates in natural language with a Follower. The Follower navigates through and interacts with the environment to complete tasks varying in complexity from MAKE COFFEE to PREPARE BREAKFAST, asking questions and getting additional information from the Commander. We propose three benchmarks using TEACh to study embodied intelligence challenges, and we evaluate initial models' abilities in dialogue understanding, language grounding, and task execution.

NeurIPS 2019 · Conference Paper

Chasing Ghosts: Instruction Following as Bayesian State Tracking

  • Peter Anderson
  • Ayush Shrivastava
  • Devi Parikh
  • Dhruv Batra
  • Stefan Lee

A visually grounded navigation instruction can be interpreted as a sequence of expected observations and actions that an agent following the correct trajectory would encounter and perform. Based on this intuition, we formulate the problem of finding the goal location in Vision-and-Language Navigation (VLN) within the framework of Bayesian state tracking: learning observation and motion models conditioned on these expectable events. Together with a mapper that constructs a semantic spatial map on the fly during navigation, we formulate an end-to-end differentiable Bayes filter and train it to identify the goal by predicting the most likely trajectory through the map according to the instructions. The resulting navigation policy constitutes a new approach to instruction following that explicitly models a probability distribution over states, encoding strong geometric and algorithmic priors while enabling greater explainability. Our experiments show that our approach outperforms a strong LingUNet baseline when predicting the goal location on the map. On the full VLN task, i.e., navigating to the goal location, our approach achieves promising results with less reliance on navigation constraints.