Results 271 to 280 of about 339,627 (329)

Catastrophic forgetting in connectionist networks

Trends in Cognitive Sciences, 1999
All natural cognitive systems, and, in particular, our own, gradually forget previously learned information. Plausible models of human cognition should therefore exhibit similar patterns of gradual forgetting of old information as new information is acquired.
R. French
semanticscholar   +5 more sources

An Empirical Study of Catastrophic Forgetting in Large Language Models During Continual Fine-Tuning

IEEE Transactions on Audio, Speech, and Language Processing, 2023
Catastrophic forgetting (CF) is a phenomenon that occurs in machine learning when a model forgets previously learned information while acquiring new knowledge to achieve satisfactory performance on downstream tasks.
Yun Luo   +5 more
semanticscholar   +1 more source
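The phenomenon defined in the abstract above can be shown with a toy example (my own sketch, not drawn from the paper itself): a single parameter fit to task A drifts away when the same parameter is then fit to task B, so task-A error shoots up.

```python
# Toy illustration of catastrophic forgetting with one shared parameter.

def sgd_fit(w, data, lr=0.1, steps=100):
    """Fit y ~ w*x by per-example gradient steps on 0.5*(w*x - y)**2."""
    for _ in range(steps):
        for x, y in data:
            w -= lr * (w * x - y) * x
    return w

def mse(w, data):
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

task_a = [(x, 2.0 * x) for x in (1.0, 2.0, 3.0)]    # task A: y = 2x
task_b = [(x, -2.0 * x) for x in (1.0, 2.0, 3.0)]   # task B: y = -2x

w = sgd_fit(0.0, task_a)          # learn task A: w converges to ~2
loss_a_before = mse(w, task_a)    # near zero
w = sgd_fit(w, task_b)            # sequentially learn task B: w drifts to ~-2
loss_a_after = mse(w, task_a)     # large: task A has been "forgotten"
print(loss_a_before, loss_a_after)
```

The same dynamic, scaled up to millions of parameters, is what the continual fine-tuning study measures in large language models.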

Avoiding Catastrophic Forgetting

Trends in Cognitive Sciences, 2017
Humans regularly perform new learning without losing memory for previous information, but neural network models suffer from the phenomenon of catastrophic forgetting in which new learning impairs prior function. A recent article presents an algorithm that spares learning at synapses important for previously learned function, reducing catastrophic ...
openaire   +2 more sources
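The algorithm the article discusses, which spares learning at synapses important for previously learned function, can be sketched as an importance-weighted quadratic penalty (in the spirit of elastic weight consolidation; the function name and importance values here are illustrative, not the article's exact formulation):

```python
def consolidation_penalty(theta, theta_old, importance, lam=1.0):
    """Quadratic penalty (lam/2) * sum_i F_i * (theta_i - theta_old_i)**2."""
    return 0.5 * lam * sum(f * (t - t0) ** 2
                           for f, t, t0 in zip(importance, theta, theta_old))

theta_old  = [1.0, -0.5, 2.0]
importance = [10.0, 0.1, 5.0]   # e.g. diagonal Fisher estimates (illustrative)

# Moving a high-importance weight by 0.5 costs far more than moving a
# low-importance weight by the same amount, so new learning is steered
# toward parameters that matter little for the old task.
cost_important   = consolidation_penalty([1.5, -0.5, 2.0], theta_old, importance)
cost_unimportant = consolidation_penalty([1.0,  0.0, 2.0], theta_old, importance)
print(cost_important, cost_unimportant)   # 1.25 vs 0.0125
```

Adding this penalty to the new-task loss is what "spares" the important synapses while leaving the rest free to change.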

Catastrophic Forgetting in Deep Learning: A Comprehensive Taxonomy

Journal of the Brazilian Computer Society, 2023
Deep Learning models have achieved remarkable performance in tasks such as image classification or generation, often surpassing human accuracy. However, they can struggle to learn new tasks and update their knowledge without access to previous data ...
Everton L. Aleixo   +3 more
semanticscholar   +1 more source

Overcoming Catastrophic Forgetting in Continual Learning by Exploring Eigenvalues of Hessian Matrix

IEEE Transactions on Neural Networks and Learning Systems, 2023
Neural networks tend to suffer performance deterioration on previous tasks when they are applied to multiple tasks sequentially without access to previous data.
Yajing Kong   +4 more
semanticscholar   +1 more source

Actions as Language: Fine-Tuning VLMs into VLAs Without Catastrophic Forgetting

arXiv.org
Fine-tuning vision-language models (VLMs) on robot teleoperation data to create vision-language-action (VLA) models is a promising paradigm for training generalist policies, but it suffers from a fundamental tradeoff: learning to produce actions often ...
Asher Hancock   +4 more
semanticscholar   +1 more source

Speech-IFEval: Evaluating Instruction-Following and Quantifying Catastrophic Forgetting in Speech-Aware Language Models

Interspeech
We introduce Speech-IFEval, an evaluation framework designed to assess instruction-following capabilities and quantify catastrophic forgetting in speech-aware language models (SLMs).
Ke-Han Lu, Chun-Yi Kuan, Hung-yi Lee
semanticscholar   +1 more source
