Results 281 to 290 of about 339,627
Some of the following articles may not be open access.

Mitigating Catastrophic Forgetting in Cross-Domain Fault Diagnosis: An Unsupervised Class Incremental Learning Network Approach

IEEE Transactions on Instrumentation and Measurement
While deep learning has found widespread application in fault diagnosis, it continues to face three primary challenges. First, it assumes that training and test datasets adhere to the same distribution, which is often not the case in industries with ...
Yifan Zhan   +3 more

Mitigating Catastrophic Forgetting in Large Language Models with Self-Synthesized Rehearsal

Annual Meeting of the Association for Computational Linguistics
Large language models (LLMs) suffer from catastrophic forgetting during continual learning. Conventional rehearsal-based methods rely on previous training data to retain the model's ability, which may not be feasible in real-world applications.
Jianheng Huang   +7 more
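Rehearsal-based continual learning, which the entry above builds on, mixes stored examples from earlier tasks into each new-task batch so the model keeps seeing old data. A minimal sketch of the conventional data-based variant (the paper's contribution replaces this stored data with self-synthesized rehearsal examples; all names below are illustrative):

```python
import random

class RehearsalBuffer:
    """Reservoir-sampled store of past-task examples (illustrative sketch)."""

    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.items = []
        self.seen = 0
        self.rng = random.Random(seed)

    def add(self, example):
        # Reservoir sampling: every example seen so far remains in the
        # buffer with equal probability capacity / seen.
        self.seen += 1
        if len(self.items) < self.capacity:
            self.items.append(example)
        else:
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.items[j] = example

    def mixed_batch(self, new_batch, rehearsal_fraction=0.5):
        # Interleave stored old-task examples with the new-task batch,
        # so gradient updates also rehearse earlier tasks.
        k = min(len(self.items), int(len(new_batch) * rehearsal_fraction))
        return new_batch + self.rng.sample(self.items, k)

buf = RehearsalBuffer(capacity=100)
for x in range(1000):                      # stream of "task A" examples
    buf.add(("task_a", x))
batch = buf.mixed_batch([("task_b", i) for i in range(8)])
```

Each training batch on task B then carries a sample of task-A data, which is the mechanism the self-synthesized variant emulates without retaining the original training set.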

Model Editing at Scale leads to Gradual and Catastrophic Forgetting

Annual Meeting of the Association for Computational Linguistics
Editing knowledge in large language models is an attractive capability, allowing us to correct facts learnt incorrectly during pre-training and to update the model with an ever-growing list of new facts.
Akshat Gupta   +2 more

Analyzing and Reducing Catastrophic Forgetting in Parameter Efficient Tuning

arXiv.org
Existing research has shown that large language models (LLMs) exhibit remarkable performance in language understanding and generation. However, when LLMs are continuously fine-tuned on complex and diverse domain-specific downstream tasks, the inference ...
Weijieying Ren   +4 more

Continual Named Entity Recognition without Catastrophic Forgetting

Conference on Empirical Methods in Natural Language Processing, 2023
Continual Named Entity Recognition (CNER) is a burgeoning area, which involves updating an existing model by incorporating new entity types sequentially. Nevertheless, continual learning approaches are often severely afflicted by catastrophic forgetting.
Duzhen Zhang   +6 more

Model Tailor: Mitigating Catastrophic Forgetting in Multi-modal Large Language Models

International Conference on Machine Learning
Catastrophic forgetting emerges as a critical challenge when fine-tuning multi-modal large language models (MLLMs), where improving performance on unseen tasks often leads to a significant performance drop on the original tasks.
Didi Zhu   +7 more

Alleviating Catastrophic Forgetting of Incremental Object Detection via Within-Class and Between-Class Knowledge Distillation

IEEE International Conference on Computer Vision, 2023
The incremental object detection (IOD) task requires a model to learn continually from newly added data. However, directly fine-tuning a well-trained detection model on a new task will sharply decrease the performance on old tasks, which is known as ...
Mengxue Kang   +6 more
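Knowledge distillation, which the entry above builds on, constrains the updated model's predictions to stay close to those of the frozen old model, so old-task behavior survives fine-tuning. A minimal, framework-free sketch of the generic distillation term (the paper's within-class and between-class variants are not reproduced here; `temperature` softens the distributions as in standard distillation):

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-softened softmax over a list of raw scores.
    exps = [math.exp(z / temperature) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(old_logits, new_logits, temperature=2.0):
    """KL(old || new) between softened class distributions.

    Penalizes the updated model for drifting from the frozen old
    model's predictions; adding this term to the new-task loss is the
    standard distillation remedy for catastrophic forgetting.
    """
    p = softmax(old_logits, temperature)
    q = softmax(new_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

# Identical predictions incur zero penalty; divergent ones do not.
same = distillation_loss([2.0, 0.5, -1.0], [2.0, 0.5, -1.0])
drift = distillation_loss([2.0, 0.5, -1.0], [-1.0, 0.5, 2.0])
```

The total training objective then becomes the new-task loss plus a weighted copy of this term evaluated against the frozen pre-update model.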

Addressing Loss of Plasticity and Catastrophic Forgetting in Continual Learning

International Conference on Learning Representations
Deep representation learning methods struggle with continual learning, suffering from both catastrophic forgetting of useful units and loss of plasticity, often due to rigid and unuseful units. While many methods address these two issues separately, only ...
Mohamed Elsayed, A. Mahmood

Revisiting Catastrophic Forgetting in Large Language Model Tuning

Conference on Empirical Methods in Natural Language Processing
Catastrophic Forgetting (CF) means models forgetting previously acquired knowledge when learning new data. It compromises the effectiveness of large language models (LLMs) during fine-tuning, yet the underlying causes have not been thoroughly ...
Hongyu Li, Liang Ding, Meng Fang, D. Tao

SelfAug: Mitigating Catastrophic Forgetting in Retrieval-Augmented Generation via Distribution Self-Alignment

Conference on Empirical Methods in Natural Language Processing
Recent advancements in large language models (LLMs) have revolutionized natural language processing through their remarkable capabilities in understanding and executing diverse tasks.
Yuqing Huang   +11 more
