Arrow Research search

Author name cluster

Efim Kinber

Possible papers associated with this exact author name in Arrow. This page groups case-insensitive exact name matches and is not a full identity disambiguation profile.

10 papers
1 author row

Possible papers (10)

TCS Journal · 2019 · Journal Article

Intrinsic complexity of partial learning

  • Sanjay Jain
  • Efim Kinber

A partial learner in the limit [25], given a representation of the target language (a text), outputs a sequence of conjectures, where one correct conjecture appears infinitely many times and other conjectures each appear a finite number of times. Following [7] and [21], we define intrinsic complexity of partial learning, based on reducibilities between learning problems. Although the whole class of recursively enumerable languages is partially learnable (see [25]) and, thus, belongs to the complete learnability degree, we discovered a rich structure of incomplete degrees, reflecting different types of learning strategies (based, to some extent, on topological structures of the target language classes). We also exhibit examples of complete classes that illuminate the character of the strategies for partial learning of the hardest classes.
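
As an illustration of the success criterion above (a sketch under assumptions, not code from the paper): on an infinite run, exactly one conjecture must recur infinitely often and be correct, which no finite prefix can verify; over a prefix one can at least inspect the frequency counts of the conjectures emitted so far.

    # Illustrative sketch only: conjectures are modeled as integers and the
    # learner's output stream as a finite prefix of an infinite run. The
    # limit condition itself is not decidable from any finite prefix.
    from collections import Counter

    def conjecture_frequencies(prefix: list[int]) -> list[tuple[int, int]]:
        # In a successful partial-learning run, one conjecture's count grows
        # without bound while every other conjecture's count stays finite.
        return Counter(prefix).most_common()

    # Example: conjecture 7 recurs; conjectures 3 and 5 appear finitely often.
    print(conjecture_frequencies([3, 7, 5, 7, 5, 7, 7]))  # [(7, 4), (5, 2), (3, 1)]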

TCS Journal · 2016 · Journal Article

Parallel learning of automatic classes of languages

  • Sanjay Jain
  • Efim Kinber

We introduce and explore a model for parallel learning of families of languages computable by finite automata. In this model, an algorithmic or automatic learner receives n different input languages and must identify at least m of them correctly. For finite parallel learning, for large enough families, we establish a full characterization of learnability in terms of characteristic samples of languages. Based on this characterization, we show that it is the difference n − m, the number of languages which may go unidentified, that is crucial. Similar results are also obtained for parallel learning in the limit. We also consider parallel finite learnability by finite automata and obtain some partial results. A number of problems for the automatic variant of parallel learning remain open.
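
A minimal sketch of the (n, m) success criterion just described (names and the representation are assumptions, not taken from the paper): the learner emits one final conjecture per input language, and the run succeeds if at least m of the n conjectures are correct.

    # Illustrative sketch only: conjectures and targets are paired up, one
    # per input language; success means at least m correct answers, so
    # n - m is the number of languages allowed to go unidentified.
    def parallel_success(conjectures: list, targets: list, m: int) -> bool:
        assert len(conjectures) == len(targets)  # n conjectures for n languages
        correct = sum(c == t for c, t in zip(conjectures, targets))
        return correct >= m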

TCS Journal · 2013 · Journal Article

Mind change speed-up for learning languages from positive data

  • Sanjay Jain
  • Efim Kinber

Within the frameworks of learning in the limit of indexed classes of recursive languages from positive data and automatic learning in the limit of indexed classes of regular languages (with automatically computable sets of indices), we study the problem of minimizing the maximum number F_M(n) of mind changes made by a learner M on all languages with indices not exceeding n. For inductive inference of recursive languages, we establish two conditions under which F_M(n) can be made smaller than any recursive unbounded non-decreasing function. We also establish how F_M(n) is affected if at least one of these two conditions does not hold. In the case of automatic learning, some partial results addressing speeding up the function F_M(n) are obtained.
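
One way to render the quantity being minimized (a sketch under assumed names, not code from the paper): F_M(n) is the worst-case number of mind changes M makes over all languages whose index does not exceed n.

    # Illustrative sketch only: mind_changes[j] stands for the number of
    # mind changes M makes on (texts for) the language with index j; the
    # table is an assumption made for illustration.
    def F(mind_changes: list[int], n: int) -> int:
        # Maximum over all indices j <= n.
        return max(mind_changes[: n + 1])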

TCS Journal · 2009 · Journal Article

One-shot learners using negative counterexamples and nearest positive examples

  • Sanjay Jain
  • Efim Kinber

As some cognitive research suggests, in the process of learning languages, in addition to overt explicit negative evidence, a child often receives covert explicit evidence in the form of corrected or rephrased sentences. In this paper, we suggest one approach to formalizing overt and covert evidence within the framework of one-shot learners via subset and membership queries to a teacher (oracle). We compare and explore the general capabilities of our models, as well as the complexity advantages of learnability models of one type over models of other types, where complexity is measured in terms of the number of queries. In particular, we establish that “correcting” positive examples are sometimes more helpful to a learner than just negative (counter)examples and access to full positive data.
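
A minimal sketch of the teacher (oracle) interface described above; the class name, method names, and the finite-set representation are all assumptions made for illustration. A failed subset query returns a negative counterexample, the kind of evidence the paper's learners consume.

    # Illustrative sketch only: the target language is modeled as a finite
    # set so that the oracle is trivially computable here.
    class Teacher:
        def __init__(self, target: set):
            self.target = target

        def membership_query(self, x) -> bool:
            # "Does x belong to the target language?"
            return x in self.target

        def subset_query(self, hypothesis: set):
            # "Is the hypothesis contained in the target language?"
            # If not, return a negative counterexample from the difference.
            diff = hypothesis - self.target
            return (True, None) if not diff else (False, min(diff))

    teacher = Teacher({0, 2, 4, 6})
    print(teacher.membership_query(2))      # True
    print(teacher.subset_query({0, 2, 3}))  # (False, 3)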

TCS Journal · 2008 · Journal Article

Learning and extending sublanguages

  • Sanjay Jain
  • Efim Kinber

A number of natural models for learning in the limit are introduced to deal with the situation where a learner is required to provide a grammar covering the input even if only a part of the target language is available. Examples of language families are exhibited that are learnable in one model but not in another. Some characterizations of learnability of algorithmically enumerable families of languages for the models in question are obtained. Since learnability of any part of the target language does not imply monotonicity of the learning process, we also consider our models under an additional monotonicity constraint.
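
The basic requirement admits a one-line rendering (a sketch; the set representation is an assumption): whatever part of the target language the learner has seen, its conjecture must cover it.

    # Illustrative sketch only: the language generated by the conjectured
    # grammar is modeled as a set, and the requirement is that it covers
    # all input data seen so far, even if the input is only a sublanguage.
    def covers_input(conjectured_language: set, seen_data: list) -> bool:
        return set(seen_data) <= conjectured_language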

TCS Journal · 2007 · Journal Article

Learning languages from positive data and a limited number of short counterexamples

  • Sanjay Jain
  • Efim Kinber

We consider two variants of a model for learning languages in the limit from positive data and a limited number of short negative counterexamples (counterexamples are considered short if they are smaller than the largest element of the input seen so far). Negative counterexamples to a conjecture are examples which belong to the conjectured language but do not belong to the input language. Within this framework, we explore how and when learners using n short (or arbitrary) negative counterexamples can be simulated by, or can themselves simulate, learners that receive least short counterexamples or just ‘no’ answers from a teacher. We also study how a limited number of short counterexamples fares against unconstrained counterexamples, and compare their capabilities with the data that can be obtained from subset, superset, and equivalence queries (possibly with counterexamples). A surprising result is that just one short counterexample can sometimes be more useful than any bounded number of counterexamples of arbitrary size. Most of the results exhibit salient examples of languages learnable or not learnable within the corresponding variants of our models.
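
The notion of a "short" counterexample admits a direct rendering (a sketch; the numeric representation of language elements is an assumption):

    # Illustrative sketch only: a negative counterexample is "short" if it
    # is smaller than the largest element of the input seen so far.
    def is_short(counterexample: int, seen: list[int]) -> bool:
        return bool(seen) and counterexample < max(seen)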

TCS Journal · 2007 · Journal Article

Learning multiple languages in groups

  • Sanjay Jain
  • Efim Kinber

We consider a variant of Gold’s learning paradigm where a learner receives as input n different languages (in the form of one text in which all input languages are interleaved). Our goal is to explore the situation when a more “coarse” classification of the input languages is possible, whereas a more refined classification is not. More specifically, we answer the following question: under which conditions can a learner, fed n different languages, produce m grammars covering all input languages while being unable to produce k grammars covering the input languages for any k > m. We also consider a variant of this task where each output grammar may cover at most r input languages. Our main results indicate that the major factor affecting classification capabilities is the difference n − m between the number n of input languages and the number m of output grammars. We also explore the relationship between classification capabilities for smaller and larger groups of input languages. For the variant of our model with an upper bound on the number of languages allowed to be represented by one output grammar, for classes consisting of disjoint languages, we obtained a complete picture of the relationship between classification capabilities for the different parameters n (the number of input languages), m (the number of output grammars), and r (the bound on the number of languages represented by each output grammar). This picture includes a combinatorial characterization of classification capabilities for certain types of the parameters n, m, and r.
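
A minimal sketch of the grouping criterion with parameters n, m, and r (the set representation of languages and grammars is an assumption, not taken from the paper):

    # Illustrative sketch only: every one of the n input languages must be
    # covered by some of the m output grammars, and no single grammar may
    # cover more than r input languages.
    def grouping_success(inputs: list[set], outputs: list[set], r: int) -> bool:
        covered = all(any(lang <= g for g in outputs) for lang in inputs)
        load_ok = all(sum(lang <= g for lang in inputs) <= r for g in outputs)
        return covered and load_ok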

TCS Journal · 2003 · Journal Article

On learning of functions refutably

  • Sanjay Jain
  • Efim Kinber
  • Rolf Wiehagen
  • Thomas Zeugmann

Learning of recursive functions refutably means, informally, that for every recursive function the learning machine has either to learn this function or to refute it, that is, to signal that it is not able to learn it. Three modes of making the notion of refuting precise are considered. We show that the corresponding types of learning refutably are of strictly increasing power, where already the most stringent of them turns out to be of remarkable topological and algorithmic richness. Furthermore, all these types are closed under union, though in different strengths. Also, these types are shown to differ with respect to their intrinsic complexity; two of them do not contain function classes that are “most difficult” to learn, while the third one does. Moreover, we present several characterizations for these types of learning refutably. Some of these characterizations make clear where the refuting ability of the corresponding learning machines comes from and how it can be realized in general. For learning with anomalies refutably, we show that several results from standard learning without refutation carry over to the refutable setting. From this we derive some hierarchies for refutable learning. Finally, we prove that, in general, one cannot trade stricter refutability constraints for more liberal learning criteria.
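
The informal definition in the first sentence can be pictured as follows (a sketch; the signal name and dataflow are assumptions, and the three precise modes in the paper differ in when the signal may or must be emitted):

    # Illustrative sketch only: on each data prefix the machine outputs
    # either a program conjecture or the special refutation signal.
    REFUTE = "refute"

    def run(machine, prefixes):
        out = None
        for prefix in prefixes:
            out = machine(prefix)
            if out == REFUTE:
                return REFUTE  # the machine declares it cannot learn this function
        return out             # otherwise its conjectures must converge in the limit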

TCS Journal · 2000 · Journal Article

Learning languages and functions by erasing

  • Sanjay Jain
  • Efim Kinber
  • Steffen Lange
  • Rolf Wiehagen
  • Thomas Zeugmann

Learning by erasing means the process of eliminating potential hypotheses from further consideration, thereby converging to the least hypothesis never eliminated; this hypothesis must be a solution to the actual learning problem. The capabilities of learning by erasing are investigated in relation to two factors: the choice of the overall hypothesis space itself, and which sets of hypotheses must or may be erased. These capabilities are studied for two fundamental kinds of objects to be learned, namely languages and functions. For learning languages by erasing, the case of learning indexed families is investigated. A complete picture of all separations and coincidences of the considered models is derived. Learning by erasing is compared with standard models of language learning such as learning in the limit, finite learning, and conservative learning, and the exact location of these types within the hierarchy of the models of learning by erasing is established. Necessary and sufficient conditions for language learning by erasing are presented. For learning functions by erasing, mainly the case of learning minimal programs is studied. Various relationships and differences between the considered types of function learning by erasing, and also to standard function learning, are exhibited. In particular, these types are explored in Kolmogorov numberings, which can be viewed as natural Gödel numberings of the partial recursive functions. Necessary and sufficient conditions for function learning by erasing are derived.
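
A minimal sketch of the erasing process just described (all names are assumptions; is_inconsistent stands for whatever erasing rule a particular model imposes):

    # Illustrative sketch only: hypotheses are indices, erased hypotheses
    # are never reconsidered, and the current conjecture is always the
    # least hypothesis not yet erased.
    def learn_by_erasing(text, hypotheses, is_inconsistent):
        erased, data = set(), []
        for x in text:
            data.append(x)
            for h in hypotheses:
                if h not in erased and is_inconsistent(h, data):
                    erased.add(h)
            remaining = [h for h in hypotheses if h not in erased]
            if remaining:
                yield min(remaining)  # converges to the least never-erased hypothesis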

TCS Journal · 1996 · Journal Article

Frequency computation and bounded queries

  • Richard Beigel
  • William Gasarch
  • Efim Kinber

There have been several papers over the last ten years that consider the number of queries needed to compute a function as a measure of its complexity. The following function has been studied extensively in that light: F_a^A(x_1, …, x_a) = A(x_1)A(x_2)⋯A(x_a). We are interested in the complexity (in terms of the number of queries) of approximating F_a^A. Let b ⩽ a and let f be any function such that F_a^A(x_1, …, x_a) and f(x_1, …, x_a) agree on at least b bits. For a general set A we have matching upper and lower bounds on the query complexity of such an f; the bounds depend on coding theory. These are applied to obtain exact bounds for the cases where A is semirecursive, A is superterse, and (assuming P ≠ NP) A = SAT. We obtain exact bounds when A is the halting problem using different methods.
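
The approximation condition admits a direct rendering (a sketch; strings over {0, 1} standing for the a-bit values are an assumed representation):

    # Illustrative sketch only: f "b-approximates" F_a^A if, on every input
    # tuple, the two a-bit strings agree in at least b positions.
    def agrees_on_b_bits(f_value: str, true_value: str, b: int) -> bool:
        assert len(f_value) == len(true_value)  # both strings have a bits
        return sum(u == v for u, v in zip(f_value, true_value)) >= b

    print(agrees_on_b_bits("1011", "1001", 3))  # True: agreement in 3 of 4 positions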