
AAAI 1994

Using Errors to Create Piecewise Learnable Partitions

Short Paper Student Abstracts Artificial Intelligence

Abstract

After a learning system has been trained, the usual procedure is to average the testing errors to obtain an estimate of how well the system has learned. However, this discards a great deal of potentially useful information. We present an algorithm that exploits the distribution of errors to find where a learner performs badly and to partition the space into parts that can be learned easily. We first show a simple example that gives the intuition behind the algorithm, and then a more complex one that brings out some of its details. Suppose we are trying to learn the absolute value function. Almost all learning algorithms perform well along the arms of the function but badly around the cusp. If we notice the 'hill' of errors around x = 0, we can partition the space we are trying to learn into two parts that fall on either side of the hill. These two partitions are not only linear but learnable. Each partition can be trained separately, and when tested separately gives a better answer, since irrelevant and misleading training points from other partitions have not been included.
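The absolute-value example can be sketched in a few lines. The following is a minimal illustration of the idea, not the paper's actual algorithm: fit one global linear model to |x|, locate the 'hill' of per-point errors, split the space at its peak, and refit each side separately. The restriction of the peak search to the interior of the domain, and the helper name fit_line, are simplifying assumptions for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(-1.0, 1.0, 200))
y = np.abs(x)  # target: the absolute value function

def fit_line(xs, ys):
    """Least-squares line a*x + b; returns (a, b)."""
    a, b = np.polyfit(xs, ys, 1)
    return a, b

# Step 1: one global linear fit does badly around the cusp at x = 0.
a, b = fit_line(x, y)
errors = np.abs((a * x + b) - y)

# Step 2: the 'hill' of errors peaks where the model fails; split there.
# (Simplification: search only the interior of the domain so the cusp's
# hill, rather than boundary error, determines the split.)
n = len(x)
interior = slice(n // 5, 4 * n // 5)
split = x[interior][np.argmax(errors[interior])]

# Step 3: train each partition separately, on its own points only.
left = x < split
a_l, b_l = fit_line(x[left], y[left])
a_r, b_r = fit_line(x[~left], y[~left])

pred = np.where(x < split, a_l * x + b_l, a_r * x + b_r)
print("split near:", round(float(split), 2))
print("global mean error:", errors.mean())
print("partitioned mean error:", np.abs(pred - y).mean())
```

Each partition is (nearly) linear, so the per-partition fits succeed where the global fit cannot, and the mean testing error drops sharply once the misleading points from the other arm are excluded.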

Authors

Keywords

No keywords are indexed for this paper.

Context

Venue
AAAI Conference on Artificial Intelligence
Archive span
1980-2026
Indexed papers
28718
Paper id
425449457008739987