Results 11 to 20 of about 33,100

Catastrophic forgetting: still a problem for DNNs [PDF]

open access: yes, 2019
We investigate the performance of DNNs when trained on class-incremental visual problems consisting of initial training, followed by retraining with added visual classes.
Abdullah, S.   +3 more
core   +2 more sources

Combating catastrophic forgetting with developmental compression [PDF]

open access: yes, Proceedings of the Genetic and Evolutionary Computation Conference, 2018
Generally intelligent agents exhibit successful behavior across problems in several settings. Endemic in approaches to realize such intelligence in machines is catastrophic forgetting: sequential learning corrupts knowledge obtained earlier in the ...
Bongard J.   +4 more
core   +2 more sources

Measuring Catastrophic Forgetting in Neural Networks

open access: yes, Proceedings of the AAAI Conference on Artificial Intelligence, 2017
Deep neural networks are used in many state-of-the-art systems for machine perception. Once a network is trained to do a specific task, e.g., bird classification, it cannot easily be trained to do new tasks, e.g., incrementally learning to recognize ...
Abitino, Angelina   +4 more
core   +2 more sources

Alleviating catastrophic forgetting using context-dependent gating and synaptic stabilization [PDF]

open access: yes, Proceedings of the National Academy of Sciences of the United States of America, 2018
Nicolas Y Masse, David J Freedman
exaly   +2 more sources

Domain-incremental white blood cell classification with privacy-aware continual learning [PDF]

open access: yes, Scientific Reports
White blood cell (WBC) classification plays a vital role in hematology for diagnosing various medical conditions. However, it faces significant challenges due to domain shifts caused by variations in sample sources (e.g., blood or bone marrow) and ...
Pratibha Kumari   +6 more
doaj   +2 more sources

Overcoming catastrophic forgetting in neural networks. [PDF]

open access: yes, Proc Natl Acad Sci U S A, 2017
Significance: Deep neural networks are currently the most successful machine-learning technique for solving a variety of tasks, including language translation, image classification, and image generation. One weakness of such models is that, unlike humans, they are unable to learn multiple tasks sequentially.
Kirkpatrick J   +13 more
europepmc   +6 more sources

Do not Forget to Attend to Uncertainty while Mitigating Catastrophic Forgetting [PDF]

open access: yes, 2021 IEEE Winter Conference on Applications of Computer Vision (WACV), 2021
One of the major limitations of deep learning models is that they face catastrophic forgetting in an incremental learning scenario. There have been several approaches proposed to tackle the problem of incremental learning. Most of these methods are based on knowledge distillation and do not adequately utilize the information provided by older task ...
Vinod K. Kurmi   +3 more
openaire   +2 more sources

Variational Continuous Bayesian Meta-learning Based Algorithm for Recommendation [PDF]

open access: yes, Jisuanji kexue, 2023
Meta-learning methods have been introduced into recommendation algorithms in recent years to alleviate the cold-start problem. Existing meta-learning algorithms can only improve the ability of the algorithm to deal with a set of statically ...
ZHU Wentao, LIU Wei, LIANG Shangsong, ZHU Huaijie, YIN Jian
doaj   +1 more source

Incremental Fault Diagnosis Method Based on Metric Feature Distillation and Improved Sample Memory

open access: yes, IEEE Access, 2023
Incremental learning-based fault diagnosis systems (IFD) are widely used because of their ability to handle constantly updated fault data and types. However, the catastrophic forgetting problem remains the most crucial contemporary challenge facing IFD ...
Qilang Min   +3 more
doaj   +1 more source

On Sequential Bayesian Inference for Continual Learning

open access: yes, Entropy, 2023
Sequential Bayesian inference can be used for continual learning to prevent catastrophic forgetting of past tasks and provide an informative prior when learning new tasks.
Samuel Kessler   +4 more
doaj   +1 more source
