Measuring Catastrophic Forgetting in Neural Networks [PDF]
Deep neural networks are used in many state-of-the-art systems for machine perception. Once a network is trained to do a specific task, e.g., bird classification, it cannot easily be trained to do new tasks, e.g., incrementally learning to recognize ...
Abitino, Angelina +4 more
core +5 more sources
Overcoming catastrophic forgetting in neural networks. [PDF]
Deep neural networks are currently the most successful machine-learning technique for solving a variety of tasks, including language translation, image classification, and image generation. One weakness of such models is that, unlike humans, they are unable to learn multiple tasks sequentially.
Kirkpatrick J +13 more
europepmc +9 more sources
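The method in this entry is elastic weight consolidation (EWC): after learning task A, each parameter is anchored near its task-A value with a strength proportional to its Fisher-information importance. A minimal sketch of the quadratic EWC penalty, assuming precomputed Fisher diagonals (the function name and example values are illustrative, not from the paper):

```python
import numpy as np

def ewc_penalty(theta, theta_star, fisher, lam=1.0):
    """EWC regularizer: (lam / 2) * sum_i F_i * (theta_i - theta*_i)^2.

    theta      -- current parameters while training task B
    theta_star -- parameters learned on task A
    fisher     -- diagonal Fisher information, estimated on task A
    lam        -- how strongly old-task knowledge is protected
    """
    theta = np.asarray(theta, dtype=float)
    theta_star = np.asarray(theta_star, dtype=float)
    fisher = np.asarray(fisher, dtype=float)
    return 0.5 * lam * np.sum(fisher * (theta - theta_star) ** 2)

# Moving a low-importance weight is cheap; moving a high-importance
# (high-Fisher) weight away from its task-A value is penalized heavily.
penalty = ewc_penalty(theta=[1.0, 2.0], theta_star=[1.0, 0.0],
                      fisher=[10.0, 1.0], lam=1.0)
# → 2.0  (= 0.5 * (10 * 0.0 + 1 * 4.0))
```

In training, this penalty would be added to the new task's loss, so gradients trade off task-B accuracy against staying close to task-A solutions along important directions.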
Diffusion-based neuromodulation can eliminate catastrophic forgetting in simple neural networks. [PDF]
A long-term goal of AI is to produce agents that can learn a diversity of skills throughout their lifetimes and continuously improve those skills via experience. A longstanding obstacle towards that goal is catastrophic forgetting, which is when learning ...
Roby Velez, Jeff Clune
doaj +5 more sources
Catastrophic forgetting: still a problem for DNNs [PDF]
We investigate the performance of DNNs when trained on class-incremental visual problems consisting of initial training, followed by retraining with added visual classes.
Abdullah, S. +3 more
core +4 more sources
Can sleep protect memories from catastrophic forgetting? [PDF]
Continual learning remains an unsolved problem in artificial neural networks. The brain has evolved mechanisms to prevent catastrophic forgetting of old knowledge during new training.
Oscar C González +4 more
doaj +4 more sources
Sleep prevents catastrophic forgetting in spiking neural networks by forming a joint synaptic weight representation. [PDF]
Artificial neural networks overwrite previously learned tasks when trained sequentially, a phenomenon known as catastrophic forgetting. In contrast, the brain learns continuously, and typically learns best when new training is interleaved with periods of ...
Ryan Golden +3 more
doaj +2 more sources
Model architecture can transform catastrophic forgetting into positive transfer [PDF]
The work of McCloskey and Cohen popularized the concept of catastrophic interference. They used a neural network that tried to learn addition using two groups of examples as two different tasks.
Miguel Ruiz-Garcia
doaj +2 more sources
Overcoming Catastrophic Forgetting in Incremental Object Detection via Elastic Response Distillation [PDF]
Traditional object detectors are ill-equipped for incremental learning: fine-tuning a well-trained detection model on only new data leads to catastrophic forgetting.
Tao Feng, Mang Wang, Hangjie Yuan
openalex +3 more sources
Natural Way to Overcome Catastrophic Forgetting in Neural Networks [PDF]
The problem of catastrophic forgetting first manifested in connectionist neural network models, which have been actively studied since the second half of the 20th century.
Alexey Kutalev
doaj +4 more sources
A brain-inspired algorithm that mitigates catastrophic forgetting of artificial and spiking neural networks with low computational cost. [PDF]
Neuromodulators in the brain act globally on many forms of synaptic plasticity, a phenomenon known as metaplasticity, which is rarely considered in existing spiking neural networks (SNNs) and nonspiking artificial neural networks (ANNs).
Zhang T +5 more
europepmc +2 more sources

