Investigating Catastrophic Forgetting of Deep Learning Models Within Office 31 Dataset
Deep learning models have shown impressive performance across a variety of tasks; however, they are prone to a phenomenon called catastrophic forgetting: when trained on new tasks, they fail to retain what they learned previously. In this research paper, we ...
Hidayaturrahman +3 more
Neural modularity helps organisms evolve to learn new skills without forgetting old skills.
A long-standing goal in artificial intelligence is creating agents that can learn a variety of different skills for different problems. In the subfield of neural networks, a barrier to that goal is that when agents learn a new ...
Kai Olav Ellefsen +2 more
Quantum Continual Learning Overcoming Catastrophic Forgetting
Catastrophic forgetting refers to the tendency of machine learning models to forget knowledge of previously learned tasks after learning a new one. It is a central problem in the continual learning setting and has recently attracted considerable attention across different communities.
Wenjie Jiang +2 more
Overcoming Catastrophic Forgetting Using Sparse Coding and Meta Learning
Continuous learning occurs naturally in human beings. However, deep learning methods suffer from a problem known as Catastrophic Forgetting (CF), in which a model's performance on previously learned tasks drops drastically when it is ...
Julio Hurtado, Hans Lobel, Alvaro Soto
Label-Guided Relation Prototype Generation for Continual Relation Extraction
Continual relation extraction (CRE) aims to extract relations as new data arrives continuously and incrementally. To address the problem of catastrophic forgetting, some existing research has explored memory-replay methods by ...
Shuang Liu +3 more
Self-Organizing Multiple Readouts for Reservoir Computing
With advancements in deep learning (DL), artificial intelligence (AI) technology has become an indispensable tool. However, the application of DL incurs significant computational costs, making it less viable for edge AI scenarios.
Yuichiro Tanaka, Hakaru Tamukoh
On Robustness of Generative Representations Against Catastrophic Forgetting
Catastrophic forgetting of previously learned knowledge while learning new tasks is a widely observed limitation of contemporary neural networks. Although many continual learning methods have been proposed to mitigate this drawback, the main question remains unanswered: what is the root cause of catastrophic forgetting? In this work, we aim to answer this ...
Wojciech Masarczyk +2 more
Continual Learning for Multimodal Data Fusion of a Soft Gripper
Models trained on a single data modality often struggle to generalize when exposed to a different modality. This work introduces a continual learning algorithm capable of incrementally learning different data modalities by leveraging both class‐incremental and domain‐incremental learning scenarios in an artificial environment where labeled data is ...
Nilay Kushawaha, Egidio Falotico
An Empirical Investigation of Catastrophic Forgetting in Gradient-Based Neural Networks
Catastrophic forgetting is a problem faced by many machine learning models and algorithms. When trained on one task, then trained on a second task, many machine learning models "forget" how to perform the first task.
Yoshua Bengio +4 more
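The two-task failure this abstract describes can be reproduced in a toy setting. The sketch below is a minimal illustration (plain NumPy logistic regression, with a second task whose decision boundary is the reverse of the first; the setup and all names are illustrative, not the method of any paper listed here): training on task B alone overwrites the weights that solved task A.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task(n, flip):
    """Toy binary task: label = (x0 > 0); flip=True reverses the boundary."""
    X = rng.normal(size=(n, 2))
    y = (X[:, 0] > 0).astype(float)
    return X, (1.0 - y) if flip else y

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(w, X, y, lr=0.5, epochs=200):
    """Full-batch gradient descent on the logistic loss."""
    for _ in range(epochs):
        w = w - lr * X.T @ (sigmoid(X @ w) - y) / len(y)
    return w

def accuracy(w, X, y):
    return float(np.mean((sigmoid(X @ w) > 0.5) == (y > 0.5)))

X_a, y_a = make_task(500, flip=False)   # task A
X_b, y_b = make_task(500, flip=True)    # task B: opposite boundary

w = np.zeros(2)
w = train(w, X_a, y_a)
acc_A_before = accuracy(w, X_a, y_a)    # high: the model has learned task A

w = train(w, X_b, y_b)                  # continue training on task B only
acc_A_after = accuracy(w, X_a, y_a)     # collapses: task A is "forgotten"

print(acc_A_before, acc_A_after)
```

Because no task-A data is revisited while training on task B, nothing constrains the weights that encoded task A; replay and regularization methods such as those surveyed in the entries above exist precisely to add such a constraint.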
Physics‐Embedded Neural Network: A Novel Approach to Design Polymeric Materials
Traditional black‐box models for polymer mechanics rely solely on data and lack physical interpretability. This work presents a physics‐embedded neural network (PENN) that integrates constitutive equations into machine learning. The approach ensures reliable stress predictions, provides interpretable parameters, and enables performance‐driven, inverse ...
Siqi Zhan +8 more

