ICML Conference 1998 Conference Paper
Multiple-Instance Learning for Natural Scene Classification
- Oded Maron
- Aparna Lakshmi Ratan
NeurIPS Conference 1997 Conference Paper
Multiple-instance learning is a variation on supervised learning, where the task is to learn a concept given positive and negative bags of instances. Each bag may contain many instances, but a bag is labeled positive even if only one of the instances in it falls within the concept. A bag is labeled negative only if all the instances in it are negative. We describe a new general framework, called Diverse Density, for solving multiple-instance learning problems. We apply this framework to learn a simple description of a person from a series of images (bags) containing that person, to a stock selection problem, and to the drug activity prediction problem.
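The Diverse Density idea the abstract describes can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the Gaussian instance model, the noisy-or combination, and the toy one-dimensional bags are assumptions made here for concreteness.

```python
import math

def instance_prob(instance, x):
    # Gaussian-shaped "closeness" of one instance to the candidate concept point x
    return math.exp(-sum((a - b) ** 2 for a, b in zip(instance, x)))

def diverse_density(x, positive_bags, negative_bags):
    # Score a candidate point x: every positive bag should have at least one
    # instance near x, and every negative bag should have none.
    dd = 1.0
    for bag in positive_bags:
        # noisy-or: the bag is explained if any one of its instances is near x
        dd *= 1.0 - math.prod(1.0 - instance_prob(inst, x) for inst in bag)
    for bag in negative_bags:
        # a negative bag must contain no instance near x
        dd *= math.prod(1.0 - instance_prob(inst, x) for inst in bag)
    return dd
```

A point shared by all positive bags but absent from the negative ones maximizes this score; learning then amounts to searching instance space for such maxima.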
AAAI Conference 1994 Short Paper
After a learning system has been trained, the usual procedure is to average the testing errors in order to obtain an estimate of how well the system has learned. However, that is tossing away a lot of potentially useful information. We present an algorithm which exploits the distribution of errors in order to find where the algorithm performs badly and partition the space into parts which can be learned easily. We will show a simple example which gives the intuition of the algorithm, and then a more complex one which brings forth some of the details of the algorithm. Let us suppose that we are trying to learn the absolute value function. Almost all learning algorithms perform well along the arms of the function, but do badly around the cusp. If we notice the 'hill' of errors around x = 0, then we can partition the space which we are trying to learn into two parts which fall on either side of the hill. Those two partitions have the property of not only being linear, but of being learnable. Each partition can be trained separately, and when tested separately gives a better answer since irrelevant and misleading training points from other partitions have not been included.
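The absolute-value example above can be sketched concretely. This is an illustration of the intuition only, not the paper's algorithm: a single least-squares line is fit to |x|, the peak of the error "hill" locates the split, and each partition is then fit on its own.

```python
def fit_line(xs, ys):
    # ordinary least squares for y = a*x + b
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

xs = [i / 10 for i in range(-20, 21)]   # training points on [-2, 2]
ys = [abs(x) for x in xs]               # target: the absolute value function

# One global linear fit does badly around the cusp at x = 0.
a, b = fit_line(xs, ys)
errors = [abs(a * x + b - y) for x, y in zip(xs, ys)]
split = xs[errors.index(max(errors))]   # peak of the error "hill"

# Partition at the peak and train each side separately.
left = [(x, y) for x, y in zip(xs, ys) if x <= split]
right = [(x, y) for x, y in zip(xs, ys) if x > split]
al, bl = fit_line(*zip(*left))
ar, br = fit_line(*zip(*right))

def piecewise(x):
    # the two separately trained partitions, glued at the split point
    return al * x + bl if x <= split else ar * x + br

worst = max(abs(piecewise(x) - y) for x, y in zip(xs, ys))
```

Each partition is linear and hence easily learnable, so the worst-case error of the piecewise model collapses compared to the single global fit.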
NeurIPS Conference 1993 Conference Paper
Selecting a good model of a set of input points by cross validation is a computationally intensive process, especially if the number of possible models or the number of training points is high. Techniques such as gradient descent are helpful in searching through the space of models, but problems such as local minima, and more importantly, lack of a distance metric between various models reduce the applicability of these search methods. Hoeffding Races is a technique for finding a good model for the data by quickly discarding bad models, and concentrating the computational effort at differentiating between the better ones. This paper focuses on the special case of leave-one-out cross validation applied to memory-based learning algorithms, but we also argue that it is applicable to any class of model selection problems.
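The racing idea can be sketched with the Hoeffding bound directly. This is a generic sketch, not the paper's leave-one-out / memory-based setting: models are assumed to report a bounded per-point error, and a model is discarded once its optimistic error bound is already worse than the best model's pessimistic bound.

```python
import math

def hoeffding_eps(n, delta=0.05, err_range=1.0):
    # Hoeffding confidence radius on a mean of n errors bounded in [0, err_range]
    return err_range * math.sqrt(math.log(2.0 / delta) / (2.0 * n))

def hoeffding_race(models, points, delta=0.05):
    # models: callables mapping a test point to an error in [0, 1]
    alive = list(range(len(models)))
    sums = [0.0] * len(models)
    for n, p in enumerate(points, start=1):
        for i in alive:
            sums[i] += models[i](p)
        eps = hoeffding_eps(n, delta)
        best_upper = min(sums[i] / n for i in alive) + eps
        # discard any model whose lower confidence bound already exceeds
        # the best model's upper confidence bound
        alive = [i for i in alive if sums[i] / n - eps <= best_upper]
    return alive
```

Clearly bad models fall out after a handful of points, so the remaining test points are spent differentiating between the better ones.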
AAAI Conference 1992 Conference Paper
We assume that it is useful for a robot to construct a spatial representation of its environment for navigation purposes. In addition, we assume that robots, like people, make occasional errors in perceiving the spatial features of their environment. Typical perceptual errors include confusing two distinct locations or failing to identify the same location seen at different times. We are interested in the consequences of perceptual uncertainty in terms of the time and space required to learn a map with a given accuracy. We measure accuracy in terms of the probability that the robot correctly identifies a particular underlying spatial configuration. We derive considerable power by providing the robot with routines that allow it to identify landmarks on the basis of local features. We provide a mathematical model of the problem and algorithms that are guaranteed to learn the underlying spatial configuration for a given class of environments with probability 1 - δ in time polynomial in 1/δ and some measure of the structural complexity of the environment and the robot's ability to discern that structure. Our algorithms apply to a variety of environments that can be modeled as labeled graphs or deterministic finite automata.
AAAI Conference 1992 Conference Paper
In this paper we introduce an extension of the Probably Approximately Correct (PAC) learning model to study the problem of learning inclusion hierarchies of concepts (sometimes called is-a hierarchies) from random examples. Using only the hypothesis representations output over many different runs of a learning algorithm, we wish to reconstruct the partial order (with respect to generality) among the different target concepts used to train the algorithm. We give an efficient algorithm for this problem with the property that each run is oblivious of all other runs: each run can take place in isolation, without access to any examples except those of the current target concept, and without access to the current pool of hypothesis representations. Thus, additional mechanisms providing shared information between runs are not necessary for the inference of some nontrivial hierarchies.
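The reconstruction task can be illustrated with a toy concept class. This sketch is not the paper's algorithm: interval hypotheses, containment as the generality test, and the example concept names are all assumptions made here, but it shows how a partial order can be read off from hypothesis representations alone, without revisiting any examples.

```python
def is_a(h_child, h_parent):
    # interval containment: the child concept is a specialization of the parent
    (lo_c, hi_c), (lo_p, hi_p) = h_child, h_parent
    return lo_p <= lo_c and hi_c <= hi_p

# Hypothetical interval hypotheses, each output by an isolated run of a learner
hypotheses = {
    "animal":  (0.0, 10.0),
    "bird":    (2.0, 5.0),
    "sparrow": (3.0, 4.0),
    "fish":    (6.0, 9.0),
}

# Reconstruct the partial order by pairwise generality tests on the
# hypothesis representations only
partial_order = {(a, b) for a in hypotheses for b in hypotheses
                 if a != b and is_a(hypotheses[a], hypotheses[b])}
```

Because each hypothesis was learned in isolation, no shared information between runs was needed to infer the hierarchy.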