Results 51 to 60 of about 33,100 (246)
Reducing catastrophic forgetting with learning on synthetic data [PDF]
Catastrophic forgetting is a problem caused by neural networks' inability to learn data in sequence: after learning two tasks in sequence, performance on the first drops significantly. This is a serious limitation that prevents the application of deep learning to many real-life problems where not all object classes are known beforehand, or change in ...
Wojciech Masarczyk, Ivona Tautkute
openaire +2 more sources
Localizing Catastrophic Forgetting in Neural Networks
Artificial neural networks (ANNs) suffer from catastrophic forgetting when trained on a sequence of tasks. While this phenomenon has been studied in the past, recent research on it remains very limited. We propose a method for determining the contribution of individual parameters in an ANN to catastrophic forgetting. The method is used to
Felix Wiewel, Bin Yang 0009
openaire +2 more sources
Self-Organizing Multiple Readouts for Reservoir Computing
With advancements in deep learning (DL), artificial intelligence (AI) technology has become an indispensable tool. However, the application of DL incurs significant computational costs, making it less viable for edge AI scenarios.
Yuichiro Tanaka, Hakaru Tamukoh
doaj +1 more source
Overcoming Catastrophic Forgetting by Generative Regularization
In this paper, we propose a new method to overcome catastrophic forgetting by adding generative regularization to the Bayesian inference framework. Bayesian methods provide a general framework for continual learning. We can further construct a generative regularization term for all given classification models by leveraging energy-based models and ...
Patrick H. Chen +3 more
openaire +2 more sources
Label-Guided relation prototype generation for Continual Relation Extraction [PDF]
Continual relation extraction (CRE) aims to extract relations as new data arrives continuously and iteratively. To address the problem of catastrophic forgetting, some existing research has focused on exploring memory replay methods by
Shuang Liu +3 more
doaj +2 more sources
Overcoming Catastrophic Forgetting Using Sparse Coding and Meta Learning
Continuous learning occurs naturally in human beings. However, deep learning methods suffer from a problem known as Catastrophic Forgetting (CF), in which a model's performance on previously learned tasks drops drastically when it is ...
Julio Hurtado, Hans Lobel, Alvaro Soto
doaj +1 more source
Continual Learning for Multimodal Data Fusion of a Soft Gripper
Models trained on a single data modality often struggle to generalize when exposed to a different modality. This work introduces a continual learning algorithm capable of incrementally learning different data modalities by leveraging both class‐incremental and domain‐incremental learning scenarios in an artificial environment where labeled data is ...
Nilay Kushawaha, Egidio Falotico
wiley +1 more source
An Empirical Investigation of Catastrophic Forgetting in Gradient-Based Neural Networks [PDF]
Catastrophic forgetting is a problem faced by many machine learning models and algorithms. When trained on one task, then trained on a second task, many machine learning models "forget" how to perform the first task.
Bengio, Yoshua +4 more
core
Addressing catastrophic forgetting for medical domain expansion
Model brittleness is a key concern when deploying deep learning models in real-world medical settings. A model that has high performance at one institution may suffer a significant decline in performance when tested at other institutions. While pooling datasets from multiple institutions and retraining may provide a straightforward solution, it is ...
Sharut Gupta +15 more
openaire +2 more sources
Physics‐Embedded Neural Network: A Novel Approach to Design Polymeric Materials
Traditional black‐box models for polymer mechanics rely solely on data and lack physical interpretability. This work presents a physics‐embedded neural network (PENN) that integrates constitutive equations into machine learning. The approach ensures reliable stress predictions, provides interpretable parameters, and enables performance‐driven, inverse ...
Siqi Zhan +8 more
wiley +1 more source

