Please use this identifier to cite or link to this item:
Title: A selective learning method to improve the generalization of multilayer feedforward neural networks
Author(s): Galván, Inés M.; Valls, José M.
Publisher: World Scientific Publishing Company
Issued date: Apr-2001
Citation: International Journal of Neural Systems, 2001, vol. 11, no. 2, pp. 167-177
Abstract: Multilayer feedforward neural networks trained with the backpropagation algorithm have been used successfully in many applications. However, the level of generalization is heavily dependent on the quality of the training data; some of the training patterns can be redundant or irrelevant. It has been shown that careful dynamic selection of training patterns can yield better generalization performance. Nevertheless, such selection is carried out independently of the novel patterns to be approximated. In this paper, we present a learning method that automatically selects the training patterns most appropriate to the new sample to be predicted. This training method follows a lazy learning strategy, in the sense that it builds approximations centered around the novel sample. The proposed method has been applied to three different domains: two artificial approximation problems and a real time series prediction problem. Results have been compared to standard backpropagation using the complete training data set, and the new method shows better generalization abilities.
Publisher version: http://web.ebscohost.com/ehost/detail?vid=1&hid=102&sid=329db7e0-5da7-4c17-9ed5-b718b41ea578%40sessionmgr102&bdata=JnNpdGU9ZWhvc3QtbGl2ZQ%3d%3d#db=aph&AN=7084469
Rights: © World Scientific Publishing Company
Appears in Collections: DI - GCERN - Artículos de revistas científicas
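The abstract describes a lazy learning strategy: for each novel sample, select the training patterns closest to it and build an approximation centered on that sample. The sketch below is a hypothetical illustration of that general idea, not the authors' exact algorithm; the function names, the distance-based selection rule, and the tiny numpy backpropagation network are all assumptions introduced here for clarity.

```python
import numpy as np

# Hypothetical sketch (not the paper's exact method): for each novel sample,
# pick the k training patterns nearest to it, then train a small feedforward
# network by backpropagation on that subset only.

rng = np.random.default_rng(0)

# Synthetic training data: y = sin(x) sampled on [0, 2*pi]
X_train = rng.uniform(0.0, 2 * np.pi, size=(200, 1))
y_train = np.sin(X_train)

def select_patterns(x_query, X, y, k):
    """Lazy selection: return the k training patterns nearest to the query."""
    d = np.linalg.norm(X - x_query, axis=1)
    idx = np.argsort(d)[:k]
    return X[idx], y[idx]

def train_mlp(X, y, hidden=8, lr=0.05, epochs=3000, seed=1):
    """Train a 1-hidden-layer tanh network with plain batch backpropagation."""
    r = np.random.default_rng(seed)
    W1 = r.normal(0.0, 0.5, (X.shape[1], hidden)); b1 = np.zeros(hidden)
    W2 = r.normal(0.0, 0.5, (hidden, 1)); b2 = np.zeros(1)
    n = len(X)
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)            # forward pass
        out = h @ W2 + b2
        err = out - y                        # gradient of 0.5*MSE w.r.t. out
        # backward pass
        gW2 = h.T @ err / n; gb2 = err.mean(axis=0)
        dh = (err @ W2.T) * (1.0 - h**2)     # tanh derivative
        gW1 = X.T @ dh / n; gb1 = dh.mean(axis=0)
        W1 -= lr * gW1; b1 -= lr * gb1
        W2 -= lr * gW2; b2 -= lr * gb2
    return lambda Xq: np.tanh(Xq @ W1 + b1) @ W2 + b2

x_new = np.array([[1.3]])                    # novel sample to be predicted
X_sel, y_sel = select_patterns(x_new, X_train, y_train, k=30)
predict = train_mlp(X_sel, y_sel)            # approximation centered on x_new
print(float(predict(x_new)))                 # should be close to sin(1.3)
```

Training on the local subset rather than the complete data set is the key contrast the abstract draws with standard backpropagation; here the local network only has to fit a smooth neighborhood of the query, which is what gives the approach its generalization advantage on the novel sample.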
Items in E-Archivo are protected by copyright, with all rights reserved, unless otherwise indicated.