Publication:
Generating ensembles of heterogeneous classifiers using Stacked Generalization

Publication date
2015-02-28
Publisher
John Wiley & Sons
Abstract
Over the last two decades, the machine learning and related communities have conducted numerous studies to improve the performance of a single classifier by combining several classifiers generated from one or more learning algorithms. Bagging and Boosting are the most representative examples of algorithms for generating homogeneous ensembles of classifiers. However, Stacking has become a commonly used technique for generating ensembles of heterogeneous classifiers since Wolpert presented his study entitled Stacked Generalization in 1992. Studies addressing Stacking have shown that the choice of base learning algorithms used to generate the ensemble members, their learning parameters, and the learning algorithm used to generate the meta-classifier are all critical issues. Most studies on this topic select the combination of base learning algorithms and their learning parameters manually. Other approaches, however, determine good Stacking configurations automatically rather than starting from these strong initial assumptions. In this paper, we describe Stacking and its variants and present several examples of application domains.
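
As a concrete illustration of the scheme the abstract describes, the sketch below builds an ensemble of heterogeneous base classifiers whose cross-validated predictions train a meta-classifier, in the spirit of Wolpert's Stacked Generalization. It uses scikit-learn's StackingClassifier; the particular base learners and the logistic-regression meta-classifier are illustrative assumptions, not the configurations evaluated in the paper.

# A minimal sketch of Stacking with heterogeneous base learners.
# The chosen learners and meta-classifier are assumptions for
# illustration, not the paper's evaluated configurations.
from sklearn.datasets import make_classification
from sklearn.ensemble import StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Level-0: heterogeneous base classifiers generated by
# different learning algorithms.
base_learners = [
    ("tree", DecisionTreeClassifier(max_depth=5, random_state=0)),
    ("knn", KNeighborsClassifier(n_neighbors=5)),
    ("svm", SVC(probability=True, random_state=0)),
]

# Level-1: the meta-classifier is trained on the base learners'
# cross-validated predictions rather than on the raw features.
stack = StackingClassifier(
    estimators=base_learners,
    final_estimator=LogisticRegression(),
    cv=5,
)
stack.fit(X_train, y_train)
print("Stacking accuracy: %.3f" % stack.score(X_test, y_test))

Note that the 5-fold cross-validation inside StackingClassifier keeps the meta-classifier from being trained on predictions the base learners made for their own training points, which is the central idea of Stacked Generalization.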
Keywords
Combining Classifiers, Feature-Selection, Decision Trees, Classification, Prediction, Algorithm, Accuracy
Bibliographic citation
Sesmero, M.P., Ledezma, A.I. and Sanchis, A. (2015), Generating ensembles of heterogeneous classifiers using Stacked Generalization. WIREs Data Mining Knowl Discov, 5: 21-34.