Adaptive multi-mode locomotion for bipedal wheel-legged robots via sparse mixture-of-experts deep reinforcement learning. [PDF]
He P, Zhao Z, Duan S, Wang P, Lei H.
Dual-Stage Clean-Sample Selection for Incremental Noisy Label Learning. [PDF]
Li J, Ma X, Shi Y.
Parallel trade-offs in human cognition and neural networks: The dynamic interplay between in-context and in-weight learning. [PDF]
Russin J, Pavlick E, Frank MJ.
Editorial: Unraveling information encoding and representation in memory formation and learning. [PDF]
Montani F.
FM-LoRA: Factorized Low-Rank Meta-Prompting for Continual Learning. [PDF]
Yu X, Yang J, Wu X, Qiu P, Liu X.
Catastrophic Forgetting, Rehearsal and Pseudorehearsal [PDF]
This paper reviews the problem of catastrophic forgetting (the loss or disruption of previously learned information when new information is learned) in neural networks, and explores rehearsal mechanisms (the retraining of some of the previously learned information as the new information is added) as a potential solution.
Anthony Robins
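The rehearsal mechanism Robins describes, re-presenting a sample of previously learned items alongside the new material, can be sketched as a batch generator. The buffer size and mixing ratio below are illustrative assumptions, not values from the paper.

```python
import random

def rehearsal_batches(old_data, new_data, batch_size=8, old_fraction=0.5):
    """Yield training batches that mix new-task examples with a
    rehearsal sample of previously learned examples."""
    n_old = int(batch_size * old_fraction)  # items replayed per batch
    n_new = batch_size - n_old
    for i in range(0, len(new_data), n_new):
        batch = list(new_data[i:i + n_new])
        # Rehearsal: re-present a random subset of old items with the new ones.
        batch += random.sample(old_data, min(n_old, len(old_data)))
        random.shuffle(batch)
        yield batch
```

Pseudorehearsal, also explored in the paper, replaces `old_data` with synthetic items generated by probing the network itself, so no stored examples are needed.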
Catastrophic forgetting in connectionist networks
Trends in Cognitive Sciences, 1999
All natural cognitive systems, and, in particular, our own, gradually forget previously learned information. Plausible models of human cognition should therefore exhibit similar patterns of gradual forgetting of old information as new information is acquired.
Catastrophic forgetting and mode collapse in GANs
2020 International Joint Conference on Neural Networks (IJCNN), 2020
In this paper, we show that Generative Adversarial Networks (GANs) suffer from catastrophic forgetting even when they are trained to approximate a single target distribution. We show that GAN training is a continual learning problem in which the sequence of changing model distributions is the sequence of tasks to the discriminator.
Hoang Thanh-Tung, Truyen Tran
Avoiding Catastrophic Forgetting
Trends in Cognitive Sciences, 2017
Humans regularly perform new learning without losing memory for previous information, but neural network models suffer from the phenomenon of catastrophic forgetting in which new learning impairs prior function. A recent article presents an algorithm that spares learning at synapses important for previously learned function, reducing catastrophic ...
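The synapse-sparing idea summarized in this entry, likely referring to elastic weight consolidation, amounts to penalizing changes to weights in proportion to their estimated importance for earlier tasks. A minimal numeric sketch, with made-up importance values standing in for the per-weight importance estimates:

```python
def importance_penalty(weights, anchors, importance, lam=1.0):
    """Quadratic penalty discouraging movement of weights judged
    important for previously learned function away from their anchored
    (post-old-task) values: (lam / 2) * sum(F_i * (w_i - a_i) ** 2).
    The importance values F_i here are hypothetical placeholders."""
    return 0.5 * lam * sum(
        f * (w - a) ** 2
        for w, a, f in zip(weights, anchors, importance)
    )
```

Adding this penalty to the new task's loss leaves unimportant weights free to change while anchoring important ones, which is how the algorithm reduces forgetting.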
Measuring Catastrophic Forgetting in Visual Question Answering
2021
Catastrophic forgetting is a ubiquitous problem for the current generation of Artificial Neural Networks: when a network is asked to learn multiple tasks in a sequence, it fails dramatically as it tends to forget past knowledge. Little is known about how far multimodal conversational agents suffer from this phenomenon.
Claudio Greco et al.
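Measuring forgetting, as in this entry, is commonly done by comparing the best accuracy a task ever reached during sequential training with its accuracy at the end; the exact metric used by this paper is not given here, so the following is only a sketch of that standard definition.

```python
def forgetting(acc_history):
    """Per-task forgetting after sequential training.
    acc_history[t][k] = accuracy on task k measured after training
    on task t. Forgetting for task k is its best accuracy seen at any
    point minus its accuracy after the final training stage."""
    final = acc_history[-1]
    return [
        max(stage[k] for stage in acc_history) - final[k]
        for k in range(len(final))
    ]
```

A value near zero means the task was retained; large positive values indicate catastrophic forgetting.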

