Arrow Research search

Author name cluster

Dileep Kini

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

3 papers
1 author row

Possible papers

3

IJCAI Conference 2015 Conference Paper

FlashNormalize: Programming by Examples for Text Normalization

  • Dileep Kini
  • Sumit Gulwani

Several applications including text-to-speech require some normalized format of non-standard words in various domains such as numbers, dates, and currencies and in various human languages. The traditional approach of manually constructing a program for such a normalization task requires expertise in both programming and target (human) language and further does not scale to a large number of domain, format, and target language combinations. We propose to learn programs for such normalization tasks through examples. We present a domain-specific programming language that offers appropriate abstractions for succinctly describing such normalization tasks, and then present a novel search algorithm that can effectively learn programs in this language from input-output examples. We also briefly describe domain-specific heuristics for guiding users of our system to provide representative examples for normalization tasks related to that domain. Our experiments show that we are able to effectively learn desired programs for a variety of normalization tasks.
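The core idea of the abstract, enumerating programs in a small domain-specific language until one is consistent with every input-output example, can be sketched minimally. The primitives and names below are hypothetical toy stand-ins, not FlashNormalize's actual language or search algorithm:

```python
from itertools import product

# Toy "DSL": named string-to-string primitives (illustrative only).
DIGITS = {"0": "zero", "1": "one", "2": "two", "3": "three", "4": "four",
          "5": "five", "6": "six", "7": "seven", "8": "eight", "9": "nine"}
PRIMS = {
    "lower": str.lower,
    "strip_commas": lambda s: s.replace(",", ""),
    "spell_digits": lambda s: " ".join(DIGITS.get(c, c) for c in s),
}

def synthesize(examples, max_depth=3):
    """Enumerate compositions of primitives, shortest first, and return
    the first one consistent with every input-output example."""
    for depth in range(1, max_depth + 1):
        for combo in product(PRIMS, repeat=depth):
            def prog(s, combo=combo):
                for name in combo:
                    s = PRIMS[name](s)
                return s
            if all(prog(i) == o for i, o in examples):
                return list(combo)
    return None  # no program of bounded size fits the examples

examples = [("1,234", "one two three four"), ("56", "five six")]
print(synthesize(examples))  # ['strip_commas', 'spell_digits']
```

A real synthesizer prunes this exponential search with the kind of domain-specific abstractions and search heuristics the abstract describes; brute-force enumeration is only viable for tiny DSLs like this one.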

IJCAI Conference 2013 Conference Paper

Automated Grading of DFA Constructions

  • Rajeev Alur
  • Loris D'Antoni
  • Sumit Gulwani
  • Dileep Kini
  • Mahesh Viswanathan

One challenge in making online education more effective is to develop automatic grading software that can provide meaningful feedback. This paper provides a solution to automatic grading of the standard computation-theory problem that asks a student to construct a deterministic finite automaton (DFA) from the given description of its language. We focus on how to assign partial grades for incorrect answers. Each student’s answer is compared to the correct DFA using a hybrid of three techniques devised to capture different classes of errors. First, in an attempt to catch syntactic mistakes, we compute the edit distance between the two DFA descriptions. Second, we consider the entropy of the symmetric difference of the languages of the two DFAs, and compute a score that estimates the fraction of the number of strings on which the student answer is wrong. Our third technique is aimed at capturing mistakes in reading of the problem description. For this purpose, we consider a description language MOSEL, which adds syntactic sugar to the classical Monadic Second Order Logic, and allows defining regular languages in a concise and natural way. We provide algorithms, along with optimizations, for transforming MOSEL descriptions into DFAs and vice-versa. These allow us to compute the syntactic edit distance of the incorrect answer from the correct one in terms of their logical representations. We report an experimental study that evaluates hundreds of answers submitted by (real) students by comparing grades/feedback computed by our tool with human graders. Our conclusion is that the tool is able to assign partial grades in a meaningful way, and should be preferred over the human graders for both scalability and consistency.
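The paper's second technique, scoring by how much of the symmetric difference of the two DFAs' languages the student got wrong, can be approximated with a brute-force sketch over bounded-length strings. The DFA encoding and function names here are illustrative assumptions, not the paper's implementation:

```python
from itertools import product

def run_dfa(dfa, s):
    """Run a DFA given as (start, accepting_states, delta) on string s."""
    start, accepting, delta = dfa
    state = start
    for ch in s:
        state = delta[(state, ch)]
    return state in accepting

def disagreement_fraction(dfa_a, dfa_b, alphabet, max_len):
    """Fraction of strings of length <= max_len on which the two DFAs
    disagree -- a crude estimate of the symmetric-difference density."""
    total = wrong = 0
    for n in range(max_len + 1):
        for tup in product(alphabet, repeat=n):
            s = "".join(tup)
            total += 1
            if run_dfa(dfa_a, s) != run_dfa(dfa_b, s):
                wrong += 1
    return wrong / total

# Correct answer: strings over {0,1} with an even number of 1s.
even_ones = ("e", {"e"}, {("e", "0"): "e", ("e", "1"): "o",
                          ("o", "0"): "o", ("o", "1"): "e"})
# Student answer: a DFA that accepts every string.
always = ("q", {"q"}, {("q", "0"): "q", ("q", "1"): "q"})

print(disagreement_fraction(even_ones, always, "01", 6))  # ~0.496
```

The paper instead uses the entropy of the symmetric-difference language, which handles infinite languages without a length cutoff; this bounded enumeration only illustrates the quantity being estimated.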

Highlights Conference 2013 Conference Abstract

Automated grading of DFA constructions

  • Rajeev Alur
  • Loris D'Antoni
  • Sumit Gulwani
  • Dileep Kini
  • Mahesh Viswanathan

One challenge in making online education more effective is to develop automatic grading software that can provide meaningful feedback. This paper provides a solution to automatic grading of the standard computation-theory problem that asks a student to construct a deterministic finite automaton from the given description of its language. We focus on how to assign partial grades for incorrect answers. We experimentally show that the tool is able to assign partial grades in a meaningful way, and should be preferred over the human graders for both scalability and consistency.