Results 11 to 20 of about 339,627

Addressing catastrophic forgetting in payload parameter identification using incremental ensemble learning [PDF]

open access: yes · Frontiers in Robotics and AI
Collaborative robots (cobots) are increasingly integrated into dynamic Industry 4.0 manufacturing environments that require frequent system reconfiguration due to changes in cobot paths and payloads. This necessitates fast methods for identifying payload
Wael Taie   +3 more
doaj   +2 more sources

Continual learning and catastrophic forgetting [PDF]

open access: yes · arXiv.org
Preprint of a book chapter; 21 pages, 4 ...
Gido M. van de Ven   +2 more
openaire   +4 more sources

A Data-Free Approach to Mitigate Catastrophic Forgetting in Federated Class Incremental Learning for Vision Tasks [PDF]

open access: yes · Adv Neural Inf Process Syst, 2023
Deep learning models often suffer from forgetting previously learned information when trained on new data. This problem is exacerbated in federated learning (FL), where the data is distributed and can change independently for each user.
Babakniya S   +4 more
europepmc   +2 more sources

Combating catastrophic forgetting with developmental compression [PDF]

open access: yes · Proceedings of the Genetic and Evolutionary Computation Conference, 2018
Generally intelligent agents exhibit successful behavior across problems in several settings. Endemic in approaches to realize such intelligence in machines is catastrophic forgetting: sequential learning corrupts knowledge obtained earlier in the ...
Bongard J.   +4 more
core   +2 more sources

Catastrophic Forgetting in Deep Graph Networks: A Graph Classification Benchmark [PDF]

open access: yes · Frontiers in Artificial Intelligence, 2022
In this work, we study the phenomenon of catastrophic forgetting in the graph representation learning scenario. The primary objective of the analysis is to understand whether classical continual learning techniques for flat and sequential data have a ...
Antonio Carta   +4 more
doaj   +2 more sources

Incremental Learning of Object Detectors without Catastrophic Forgetting [PDF]

open access: yes · IEEE International Conference on Computer Vision, 2017
Despite their success for object detection, convolutional neural networks are ill-equipped for incremental learning, i.e., adapting the original model trained on a set of classes to additionally detect objects of new classes, in the absence of the ...
Karteek Alahari   +2 more
core   +7 more sources

Array heterogeneity prevents catastrophic forgetting in infants [PDF]

open access: yes · Cognition, 2015
Working memory is limited in adults and infants. But unlike adults, infants whose working memory capacity is exceeded often fail in a particularly striking way: they do not represent any of the presented objects, rather than simply remembering as many objects as they can and ignoring anything further (Feigenson & Carey, 2003, 2005).
Zosh JM, Feigenson L.
europepmc   +4 more sources

Investigating the Catastrophic Forgetting in Multimodal Large Language Models [PDF]

open access: green · arXiv.org, 2023
Following the success of GPT4, there has been a surge in interest in multimodal large language model (MLLM) research. This line of research focuses on developing general-purpose LLMs through fine-tuning pre-trained LLMs and vision models.
Yuexiang Zhai   +6 more
openalex   +3 more sources

How catastrophic can catastrophic forgetting be in linear regression? [PDF]

open access: yes · Annual Conference Computational Learning Theory, 2022
To better understand catastrophic forgetting, we study fitting an overparameterized linear model to a sequence of tasks with different input distributions. We analyze how much the model forgets the true labels of earlier tasks after training on subsequent tasks, obtaining exact expressions and bounds. We establish connections between continual learning
Itay Evron   +4 more
openaire   +3 more sources
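To make the setup described in the entry above concrete, here is a minimal numpy sketch of one common formalization of continual linear regression: each task is fit exactly with a minimum-norm update, and forgetting is measured as the loss on the first task after training on later ones. The dimensions, the specific update rule, and all names are illustrative assumptions, not code or notation taken from the paper.

import numpy as np

rng = np.random.default_rng(0)
d, n_per_task, num_tasks = 100, 20, 5

w_star = rng.normal(size=d)            # shared "true" labeler (realizable case)
tasks = []
for t in range(num_tasks):
    X = rng.normal(size=(n_per_task, d))
    X[:, t * 10:(t + 1) * 10] *= 5.0   # give each task a different input distribution
    tasks.append((X, X @ w_star))

w = np.zeros(d)
for t, (X, y) in enumerate(tasks):
    # Minimum-norm update: move w just enough to fit task t exactly,
    # i.e. project w onto the solution set {v : X v = y}.
    w = w + np.linalg.pinv(X) @ (y - X @ w)
    X1, y1 = tasks[0]
    forgetting = np.mean((X1 @ w - y1) ** 2)
    print(f"after task {t}: loss on task 0 = {forgetting:.4f}")

Running this, the loss on task 0 is near zero right after task 0 and then grows as subsequent tasks pull the weights away, which is the forgetting quantity the paper analyzes exactly in this regime.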

More Than Catastrophic Forgetting: Integrating General Capabilities For Domain-Specific LLMs [PDF]

open access: gold · Conference on Empirical Methods in Natural Language Processing
Performance on general tasks decreases after Large Language Models (LLMs) are fine-tuned on domain-specific tasks, a phenomenon known as Catastrophic Forgetting (CF).
Chengyuan Liu   +8 more
openalex   +2 more sources
