Results 41 to 50 of about 33,100

Overcoming Catastrophic Forgetting With Unlabeled Data in the Wild [PDF]

open access: yes; 2019 IEEE/CVF International Conference on Computer Vision (ICCV), 2019
ICCV 2019; v3 updated Figure ...
Kibok Lee   +3 more
openaire   +2 more sources

How catastrophic can catastrophic forgetting be in linear regression?

open access: yes; CoRR, 2022
To better understand catastrophic forgetting, we study fitting an overparameterized linear model to a sequence of tasks with different input distributions. We analyze how much the model forgets the true labels of earlier tasks after training on subsequent tasks, obtaining exact expressions and bounds. We establish connections between continual learning ...
Itay Evron   +4 more
openaire   +3 more sources

Unified Probabilistic Deep Continual Learning through Generative Replay and Open Set Recognition

open access: yes, 2020
We introduce a probabilistic approach to unify open set recognition with the prevention of catastrophic forgetting in deep continual learning, based on variational Bayesian inference.
Yong Won Hong   +4 more
core   +1 more source

Incremental Learning With Adaptive Model Search and a Nominal Loss Model

open access: yes; IEEE Access, 2022
This paper addresses an incremental learning problem, in which tasks are learned sequentially without access to the previously trained dataset. Catastrophic forgetting is a significant bottleneck to incremental learning as the network performs poorly on ...
Chanho Ahn, Eunwoo Kim, Songhwai Oh
doaj   +1 more source

Continual Learning Objective for Analyzing Complex Knowledge Representations

open access: yes; Sensors, 2022
Human beings tend to incrementally learn from a rapidly changing environment without compromising or forgetting already learned representations. Although deep learning also has the potential to mimic such human behaviors to some extent, it suffers ...
Asad Mansoor Khan   +4 more
doaj   +1 more source

On the role of neurogenesis in overcoming catastrophic forgetting

open access: yes; CoRR, 2018
Lifelong learning capabilities are crucial for artificial autonomous agents operating on real-world data, which is typically non-stationary and temporally correlated. In this work, we demonstrate that dynamically grown networks outperform static networks in incremental learning scenarios, even when bounded by the same amount of memory in both cases ...
German Ignacio Parisi   +2 more
openaire   +2 more sources

Investigating Catastrophic Forgetting of Deep Learning Models Within Office 31 Dataset

open access: yes; IEEE Access
Deep learning models have shown impressive performance in various tasks. However, they are prone to a phenomenon called catastrophic forgetting. This means they do not remember what they have learned when training on new tasks. In this research paper, we ...
Hidayaturrahman   +3 more
doaj   +1 more source

Catastrophic Importance of Catastrophic Forgetting

open access: yes; CoRR, 2018
This paper describes some of the possibilities of artificial neural networks that open up after solving the problem of catastrophic forgetting. A simple model and reinforcement learning applications of existing methods are also proposed.
openaire   +2 more sources

Overcoming Catastrophic Forgetting in Graph Neural Networks

open access: yes; Proceedings of the AAAI Conference on Artificial Intelligence, 2021
Catastrophic forgetting refers to the tendency of a neural network to "forget" previously learned knowledge upon learning new tasks. Prior methods have focused on overcoming this problem in convolutional neural networks (CNNs), where input samples like images lie in a grid domain, but have largely overlooked graph neural networks (GNNs ...
Huihui Liu, Yiding Yang, Xinchao Wang
openaire   +2 more sources

Neural modularity helps organisms evolve to learn new skills without forgetting old skills. [PDF]

open access: yes; PLoS Computational Biology, 2015
A long-standing goal in artificial intelligence is creating agents that can learn a variety of different skills for different problems. In the artificial intelligence subfield of neural networks, a barrier to that goal is that when agents learn a new ...
Kai Olav Ellefsen   +2 more
doaj   +1 more source
