
AAAI 2002

Minimum Majority Classification and Boosting

Conference Paper Learning Artificial Intelligence

Abstract

Motivated by a theoretical analysis of the generalization of boosting, we examine learning algorithms that work by trying to fit the data using a simple majority vote over a small number of hypotheses drawn from a larger collection. We provide experimental evidence that an algorithm based on this principle outputs hypotheses that often generalize nearly as well as those output by boosting, and sometimes better. We also provide experimental evidence for an additional reason that boosting algorithms generalize well: they take advantage of cases in which there are many simple hypotheses with independent errors.
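The independent-errors point in the abstract can be illustrated with a short calculation (a sketch for intuition, not code from the paper): if each of k hypotheses errs independently with probability eps < 1/2, the probability that a simple majority vote errs is the binomial tail above k/2, which shrinks rapidly as k grows.

```python
import math

def majority_vote_error(k, eps):
    """Probability that a majority vote over k hypotheses errs,
    assuming each hypothesis errs independently with probability eps.
    (k is taken to be odd so there are no ties.)"""
    # The majority is wrong exactly when more than half of the voters err.
    return sum(
        math.comb(k, m) * eps**m * (1 - eps) ** (k - m)
        for m in range(k // 2 + 1, k + 1)
    )

# A single hypothesis with 30% error vs. majority votes over 3 and 11
# such hypotheses: the error of the vote drops quickly with k.
for k in (1, 3, 11):
    print(k, majority_vote_error(k, 0.3))
```

Real base hypotheses produced by boosting are of course correlated, which is why the benefit shown here is an idealized upper bound on what independence alone can buy.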

Authors

Keywords

No keywords are indexed for this paper.

Context

Venue: AAAI Conference on Artificial Intelligence
Archive span: 1980-2026
Indexed papers: 28718
Paper id: 990372280321776460