Publication:
On building ensembles of stacked denoising auto-encoding classifiers and their further improvement

Publication date
2018-01
Publisher
Elsevier
Abstract
Aggregating diverse learners and training deep architectures are the two principal avenues for increasing the expressive capability of neural networks, so their combination merits attention. In this contribution, we study how to apply conventional diversity methods (bagging and label switching) to a general deep machine, the stacked denoising auto-encoding classifier, in order to solve a number of appropriately selected image recognition problems. The main conclusion of our work is that binarizing multi-class problems is the key to obtaining benefits from these diversity methods. Additionally, we verify that adding other kinds of performance improvement procedures, such as pre-emphasizing training samples and elastic distortion mechanisms, further increases the quality of the results. In particular, an appropriate combination of all the above methods leads to a new absolute record in classifying MNIST handwritten digits. These facts reveal clear opportunities for designing more powerful classifiers by combining different improvement techniques. (C) 2017 Elsevier B.V. All rights reserved.
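The core recipe the abstract describes, binarizing a multi-class task one-vs-rest and then bagging an ensemble of binary learners, can be sketched as follows. This is a hypothetical, minimal illustration only: simple nearest-centroid models stand in for the paper's stacked denoising auto-encoder base classifiers, and all names and toy data are assumptions, not the authors' code.

```python
# Hypothetical sketch: one-vs-rest binarization + bagging (bootstrap resampling),
# with nearest-centroid base learners standing in for the paper's deep classifiers.
import numpy as np

rng = np.random.default_rng(0)

def fit_centroid(X, y):
    # Binary base learner: one centroid per binary label {0, 1}.
    return {c: X[y == c].mean(axis=0) for c in (0, 1)}

def predict_centroid(model, X):
    # Score = distance to the negative centroid minus distance to the positive
    # one, so larger means "more positive".
    d_neg = np.linalg.norm(X - model[0], axis=1)
    d_pos = np.linalg.norm(X - model[1], axis=1)
    return d_neg - d_pos

def bagged_ovr_fit(X, y, n_classes, n_bags=5):
    ensembles = []
    for c in range(n_classes):          # binarize: class c vs. the rest
        y_bin = (y == c).astype(int)
        bag = []
        for _ in range(n_bags):         # bagging: train on bootstrap resamples
            idx = rng.integers(0, len(X), size=len(X))
            bag.append(fit_centroid(X[idx], y_bin[idx]))
        ensembles.append(bag)
    return ensembles

def bagged_ovr_predict(ensembles, X):
    # Average each class's bagged scores; predict the highest-scoring class.
    scores = np.stack([
        np.mean([predict_centroid(m, X) for m in bag], axis=0)
        for bag in ensembles
    ])
    return scores.argmax(axis=0)

# Toy 3-class problem: three well-separated Gaussian blobs (assumed data).
centers = np.array([[0.0, 0.0], [6.0, 0.0], [0.0, 6.0]])
X = np.concatenate([rng.normal(loc=centers[c], scale=0.5, size=(30, 2))
                    for c in range(3)])
y = np.repeat(np.arange(3), 30)

model = bagged_ovr_fit(X, y, n_classes=3)
pred = bagged_ovr_predict(model, X)
print((pred == y).mean())  # training accuracy on this separable toy problem
```

Swapping the nearest-centroid learner for a stacked denoising auto-encoding classifier, as the paper does, leaves the binarize-then-bag structure unchanged.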
Description
Keywords
Augmentation, Classification, Deep, Diversity, Learning, Pre-Emphasis
Bibliographic citation
Alvear Sandoval, R. F. and Figueiras Vidal, A. R. (2018). On building ensembles of stacked denoising auto-encoding classifiers and their further improvement. Information Fusion, 39, pp. 41-52.