
Overcoming catastrophic forgetting in neural networks

open access: yesCoRR
Catastrophic forgetting is the primary challenge that hinders continual learning, which refers to a neural network's ability to sequentially learn multiple tasks while retaining previously acquired knowledge. Elastic Weight Consolidation, a regularization-based approach inspired by synaptic consolidation in biological neural systems, has been used to ...
Brandon Shuen Yi Loke   +4 more
openaire   +2 more sources
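The abstract above names Elastic Weight Consolidation but is truncated before describing it. As background only, the core of EWC is a quadratic penalty that anchors parameters deemed important for earlier tasks; a minimal sketch with a diagonal Fisher importance (function name and toy values are illustrative, not from the paper):

```python
import numpy as np

def ewc_penalty(params, star_params, fisher, lam=1.0):
    # (lam / 2) * sum_i F_i * (theta_i - theta_star_i)^2
    return 0.5 * lam * np.sum(fisher * (params - star_params) ** 2)

theta_star = np.array([1.0, -0.5, 2.0])  # parameters consolidated after task A
theta = np.array([1.2, -0.5, 1.0])       # parameters while training task B
fisher = np.array([10.0, 0.1, 5.0])      # diagonal Fisher: per-parameter importance

penalty = ewc_penalty(theta, theta_star, fisher, lam=1.0)
```

In training, this penalty is added to the new task's loss, so drift in high-Fisher parameters is punished more heavily than drift in unimportant ones.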

AT‐AER: Adversarial Training With Adaptive Example Reuse

open access: yesCAAI Transactions on Intelligence Technology, EarlyView.
ABSTRACT Adversarial training (AT) is widely regarded as a crucial defense method for deep neural networks against adversarial attacks. Most of the existing AT methods suffer from the problems of insufficient coverage of perturbation space and robust overfitting.
Meng Hu   +5 more
wiley   +1 more source
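The abstract describes adversarial training as a defense but not its mechanics. As generic background (not the AT-AER method itself), adversarial examples are typically generated by perturbing the input in the direction that increases the loss, e.g. an FGSM-style step; a toy sketch on a linear logistic model where the input gradient is analytic (all values illustrative):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.array([0.5, -1.0])  # fixed linear model weights
b = 0.0
x = np.array([1.0, 2.0])   # clean input
y = 1.0                    # label in {-1, +1}

# Gradient of L = -log sigmoid(y * (w.x + b)) with respect to x.
grad_x = -y * (1 - sigmoid(y * (w @ x + b))) * w

eps = 0.1
x_adv = x + eps * np.sign(grad_x)  # FGSM: step in the loss-increasing direction
```

Adversarial training then minimizes the loss on such perturbed inputs rather than (or in addition to) clean ones.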

LaGu-RCL: Language-Guided Resolution-Continual Learning for Semantic Segmentation of Remote Sensing Images

open access: yesRemote Sensing
Remote sensing image semantic segmentation faces substantial challenges in training and transferring models across images with varying resolutions. This issue can be effectively mitigated by continuously learning knowledge derived from new resolutions ...
Penglong Li   +3 more
doaj   +1 more source

M3LoRA: Flexible Task Adaptation via Multiple Low‐Rank Matrices With Mixture‐of‐Subspaces and Minor Singular Components Initialization

open access: yesCAAI Transactions on Intelligence Technology, EarlyView.
ABSTRACT Parameter‐efficient fine‐tuning (PEFT) has become a crucial paradigm for domain adaptation, achieving strong performance by updating only a small fraction of model parameters. Among various PEFT methods, low‐rank adaptation (LoRA) is widely adopted due to its structural simplicity and computational efficiency.
Xu Luo   +4 more
wiley   +1 more source
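The abstract mentions LoRA's structural simplicity without showing it. As background, LoRA freezes the pretrained weight W and learns a low-rank update B @ A, so only d*r + r*k parameters train instead of d*k; a minimal numpy sketch (dimensions and initialization scale are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, r = 512, 512, 8  # r << min(d, k)

W = rng.standard_normal((d, k))          # frozen pretrained weight
A = rng.standard_normal((r, k)) * 0.01   # trainable low-rank factor
B = np.zeros((d, r))                     # trainable, zero-init so the update starts at 0

W_adapted = W + B @ A                    # effective weight after adaptation

full_params = d * k                      # parameters in a full fine-tune
lora_params = d * r + r * k              # parameters LoRA actually trains
```

Zero-initializing B means the adapted model starts identical to the pretrained one, which is the usual LoRA setup this entry's initialization schemes build on.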

Continual Classification Learning Using Generative Models

open access: yes, 2018
Continual learning is the ability to sequentially learn over time by accommodating knowledge while retaining previously learned experiences. Neural networks can learn multiple tasks when trained on them jointly, but cannot maintain performance on ...
Gregorova, Magda   +3 more
core  

Amidst (waste) abundance: world‐making and struggles in hosting a municipal landfill in the Bolivian lowlands

open access: yesJournal of the Royal Anthropological Institute, EarlyView.
In recent decades, solid waste has proliferated worldwide, becoming a pressing global issue. This article explores the role of Indigenous people dwelling within and upon emerging waste scenarios, with a specific focus on involved forms of sociality and ontological contestation. Drawing on the case of a municipal landfill sited on a Guarani community in
Vanesa Martín Galán
wiley   +1 more source

EXACFS - A CIL Method to Mitigate Catastrophic Forgetting

open access: yesProceedings of the Fifteenth Indian Conference on Computer Vision Graphics and Image Processing
Deep neural networks (DNNs) excel at learning from static datasets but struggle with continual learning, where data arrives sequentially. Catastrophic forgetting, the phenomenon of forgetting previously learned knowledge, is a primary challenge. This paper introduces EXponentially Averaged Class-wise Feature Significance (EXACFS) to mitigate this issue
S. Balasubramanian   +6 more
openaire   +3 more sources
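The abstract names exponential averaging of class-wise feature significance but cuts off before defining it. As a generic sketch of the averaging mechanism only (how the per-class significance scores are computed is not given in this snippet, so the inputs here are placeholders):

```python
import numpy as np

def ema_update(running, new, alpha=0.9):
    # Exponential moving average: old estimates decay by alpha each step.
    return alpha * running + (1 - alpha) * new

sig = np.zeros(4)  # running per-feature significance for one class
for batch_sig in [np.ones(4), 2 * np.ones(4)]:  # toy per-batch scores
    sig = ema_update(sig, batch_sig)
```

The averaging smooths noisy per-batch estimates, so distillation or regularization driven by these scores changes gradually across incremental tasks.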

From talking tools to metahumans: social interaction, semiotic skill, and the authority of AI chatbots

open access: yesJournal of the Royal Anthropological Institute, EarlyView.
What does it take to turn a tool into a talking tool and that into an ultimate authority? Generative artificial intelligence (GenAI) in its diverse forms, such as large language models (LLMs), is celebrated as a useful tool. But LLM‐based conversational agents, or chatbots, the software applications through which ordinary users are likely to engage ...
Webb Keane
wiley   +1 more source

Continual Test-Time Adaptation for Robust Remote Photoplethysmography Estimation

open access: yesIEEE Access
Remote photoplethysmography (rPPG) estimation has made considerable progress by leveraging deep learning, yet its performance remains highly susceptible to the domain shifts caused by lighting, skin tone and movement, particularly during inference ...
Hyunwoo Lee   +3 more
doaj   +1 more source

From Nominalisation to Passive in Old Tibetan: Reconstructing Grammatical Meaning in an Extinct Language

open access: yesTransactions of the Philological Society, EarlyView.
Abstract Based on an analysis of the Old Literary Tibetan corpus—a corpus of the oldest documented Tibetic language—the present study provides evidence that literary Tibetan v3 verb stems (commonly termed ‘future’) initially encoded passive voice. New arguments put forward in this article range from Trans‐Himalayan nominal morphology to early Tibetan ...
Joanna Bialek
wiley   +1 more source
