Publication:
Model architecture can transform catastrophic forgetting into positive transfer

Publication date
2022-06-24
Publisher
Nature Research
Abstract
The work of McCloskey and Cohen popularized the concept of catastrophic interference. They used a neural network that tried to learn addition from two groups of examples presented as two different tasks; in their experiments, learning the second task rapidly destroyed the knowledge acquired on the first. We hypothesize that this could be a symptom of a fundamental problem: addition is an algorithmic task that should not be learned through pattern recognition. Other model architectures, better suited to this task, should therefore avoid catastrophic forgetting. We use a neural network with a different architecture that can be trained to recover the correct algorithm for the addition of binary numbers. This neural network includes conditional clauses that are handled naturally within the back-propagation algorithm. We test it both in the setting proposed by McCloskey and Cohen and by training on random additions one at a time. The neural network not only avoids catastrophic forgetting but also improves its predictive power on unseen pairs of numbers as training progresses. We further show that this effect is robust, persisting when results are averaged over many simulations. This work highlights the importance of model architecture for the emergence of catastrophic forgetting and introduces a neural network that is able to learn an algorithm.
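
To make the sequential-task setting concrete, below is a minimal sketch of a McCloskey-and-Cohen-style experiment: a plain feed-forward network is trained on the "+1" addition facts, then on the "+2" facts, and its accuracy on the first task is measured after each phase. The network size, one-hot encoding, and hyperparameters are illustrative assumptions, not the paper's model.

```python
# Minimal sketch of a McCloskey-and-Cohen-style sequential-training
# experiment. Everything here (network size, one-hot encoding, learning
# rate) is an illustrative assumption, not the paper's model.
import numpy as np

rng = np.random.default_rng(0)

def one_hot(n, size):
    v = np.zeros(size)
    v[n] = 1.0
    return v

def make_task(addend):
    # All facts "a + addend" for a in 0..9, inputs as concatenated one-hots.
    X = np.array([np.concatenate([one_hot(a, 10), one_hot(addend, 10)])
                  for a in range(10)])
    y = np.array([one_hot(a + addend, 19) for a in range(10)])
    return X, y

# Plain two-layer network: 20 -> 32 (sigmoid) -> 19 (linear), MSE loss.
W1 = rng.normal(0, 0.5, (20, 32)); b1 = np.zeros(32)
W2 = rng.normal(0, 0.5, (32, 19)); b2 = np.zeros(19)

def forward(X):
    h = 1.0 / (1.0 + np.exp(-(X @ W1 + b1)))
    return h, h @ W2 + b2

def train(X, y, epochs=3000, lr=0.5):
    global W1, b1, W2, b2
    n = len(X)
    for _ in range(epochs):
        h, out = forward(X)
        err = out - y                      # gradient of 0.5 * MSE
        dh = (err @ W2.T) * h * (1.0 - h)  # back-propagate through sigmoid
        W2 -= lr * h.T @ err / n; b2 -= lr * err.mean(0)
        W1 -= lr * X.T @ dh / n; b1 -= lr * dh.mean(0)

def accuracy(X, y):
    _, out = forward(X)
    return (out.argmax(1) == y.argmax(1)).mean()

X1, y1 = make_task(1)  # task 1: the "+1" facts
X2, y2 = make_task(2)  # task 2: the "+2" facts

train(X1, y1)
print("after task 1, accuracy on +1 facts:", accuracy(X1, y1))
train(X2, y2)  # training on task 2 typically erodes task-1 performance
print("after task 2, accuracy on +1 facts:", accuracy(X1, y1))
```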
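
The abstract does not spell out how the conditional clauses are implemented, so the following is only a hypothetical sketch of one common way to make an if/else differentiable: replace the hard branch with a sigmoid-gated blend of the two branches, here applied to the carry bit of a binary full adder. The function soft_if and its sharpness parameter are assumptions introduced for illustration.

```python
# Hypothetical sketch of a differentiable "conditional clause": a hard
# if/else replaced by a sigmoid-gated blend of the two branches, so that
# back-propagation can flow through the choice. soft_if and its sharpness
# parameter are assumptions for illustration; the paper's exact
# construction is not given in the abstract.
import numpy as np

def soft_if(condition_score, then_value, else_value, sharpness=10.0):
    """Smooth surrogate for `then_value if condition_score > 0 else else_value`."""
    gate = 1.0 / (1.0 + np.exp(-sharpness * condition_score))
    return gate * then_value + (1.0 - gate) * else_value

# Example: the carry bit of a binary full adder,
# carry = 1 if a + b + c_in >= 2 else 0.
a, b, c_in = 1.0, 0.0, 1.0
carry = soft_if(a + b + c_in - 1.5, 1.0, 0.0)
print(carry)  # ~0.993: close to the hard carry, yet differentiable in a, b, c_in
```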
Keywords
Computational science, Computer science, Scientific data
Bibliographic citation
Ruiz-Garcia, M. (2022). Model architecture can transform catastrophic forgetting into positive transfer. Scientific Reports, 12, 10736.