Results 21 to 30 of about 337,942

A review of continual learning for robotics

open access: yes, 智能科学与技术学报 (Chinese Journal of Intelligent Science and Technology), 2022
One limitation of robotics is that robots struggle to adapt to changing tasks. A robot will inevitably forget knowledge from old environments or tasks when facing new ones. In order to summarize research in continual ...
Chao ZHAO   +4 more
doaj  

Survey of Pre-training-based Continual Learning Methods (Invited) [PDF]

open access: yes, Jisuanji gongcheng (Computer Engineering)
Traditional machine learning algorithms perform well only when the training and testing sets are identically distributed. They cannot perform incremental learning for new categories or tasks that were not present in the original training set.
LU Yue, ZHOU Xiangyu, ZHANG Shizhou, LIANG Guoqiang, XING Yinghui, CHENG De, ZHANG Yanning
doaj   +1 more source

Continuous-Action Q-Learning [PDF]

open access: yes, Machine Learning, 2002
del R. Millán, José   +2 more
openaire   +1 more source

Decentralized Federated Continual Learning Method Combined with Meta-learning [PDF]

open access: yes, Jisuanji kexue (Computer Science)
For the problems of continual learning and data security in federated continual scenarios, a decentralized federated continual learning framework combined with meta-learning is constructed. First, in order to solve the problem of catastrophic forgetting in ...
HUANG Nan, LI Dongdong, YAO Jia, WANG Zhe
doaj   +1 more source
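As background for this entry, the basic federated averaging step can be sketched in a few lines. This is a generic, centralized FedAvg illustration under assumed least-squares local objectives, not the paper's decentralized, meta-learning-based framework; all function names here are hypothetical.

```python
import numpy as np

# Generic federated averaging (FedAvg) sketch: each client takes local
# gradient steps, then the server averages models weighted by data size.
# This is an illustration only, not the decentralized method of the paper.

def local_update(w, X, y, lr=0.1, steps=50):
    """One client's local gradient descent on a least-squares objective."""
    w = w.copy()
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fedavg_round(w_global, clients):
    """Average locally updated models, weighted by each client's data size."""
    updates = [local_update(w_global, X, y) for X, y in clients]
    sizes = np.array([len(y) for _, y in clients], dtype=float)
    return np.average(updates, axis=0, weights=sizes)

# Two clients whose data share the same underlying linear model.
rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0])
clients = []
for n in (100, 150):
    X = rng.normal(size=(n, 2))
    clients.append((X, X @ w_true))

w = np.zeros(2)
for _ in range(3):
    w = fedavg_round(w, clients)   # converges toward w_true
```

The weighting by client size is what makes the aggregate equivalent to training on the pooled data when local objectives agree; continual and decentralized variants replace the central averaging step.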

Continual Reinforcement Learning in 3D Non-stationary Environments

open access: yes, 2020
High-dimensional, always-changing environments constitute a hard challenge for current reinforcement learning techniques. Artificial agents nowadays are often trained offline in very static and controlled simulated conditions, such that training ...
Culurciello, Eugenio   +3 more
core   +1 more source

Subspace distillation for continual learning

open access: yesNeural Networks, 2023
An ultimate objective in continual learning is to preserve knowledge learned in preceding tasks while learning new tasks. To mitigate forgetting of prior knowledge, we propose a novel knowledge distillation technique that takes into account the manifold structure of the latent/output space of a neural network in learning novel tasks.
Kaushik Roy   +3 more
openaire   +4 more sources
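The distillation idea referenced in this abstract can be made concrete with the standard output-space distillation penalty: keep the new model's softened outputs close to the frozen old model's. This is a minimal generic sketch, not the subspace/manifold-aware method the paper proposes; the temperature `T` and function names are assumptions.

```python
import numpy as np

def softmax(z, T=2.0):
    """Temperature-softened softmax over the last axis."""
    e = np.exp((z - z.max(axis=-1, keepdims=True)) / T)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) on softened outputs: the usual penalty that
    discourages the new (student) model from drifting away from the frozen
    old (teacher) model while it learns a new task."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return float((p * (np.log(p) - np.log(q))).sum(axis=-1).mean())

z_old = np.array([[2.0, 0.5, -1.0]])        # frozen old-task outputs
loss_same = distillation_loss(z_old, z_old) # identical outputs: zero loss
loss_diff = distillation_loss(np.array([[0.0, 2.0, 0.0]]), z_old)
```

In a continual-learning loop this term is added to the new-task loss, so gradients trade off new-task accuracy against output stability on what was learned before.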

Variational Continual Learning

open access: yes, 2017
Published at International Conference on Learning Representations (ICLR ...
Turner, RE   +3 more
openaire   +2 more sources

Autonomous Deep Learning: Continual Learning Approach for Dynamic Environments

open access: yes, 2020
The feasibility of deep neural networks (DNNs) to address data stream problems still requires intensive study because of the static and offline nature of conventional deep learning approaches.
Ashfahani, Andri, Pratama, Mahardhika
core   +1 more source

Unsupervised Learning to Overcome Catastrophic Forgetting in Neural Networks

open access: yesIEEE Journal on Exploratory Solid-State Computational Devices and Circuits, 2019
Continual learning is the ability to acquire a new task or knowledge without losing any previously collected information. Achieving continual learning in artificial intelligence (AI) is currently prevented by catastrophic forgetting, where training of a ...
Irene Munoz-Martin   +5 more
doaj   +1 more source
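The catastrophic forgetting described in this abstract is easy to reproduce on a toy problem: train one model sequentially on two conflicting tasks and watch first-task accuracy collapse. The setup below is a hypothetical illustration (a linear logistic classifier on synthetic data), not taken from the paper.

```python
import numpy as np

# Toy demonstration of catastrophic forgetting: a logistic-regression
# classifier trained on task A, then on task B (whose labels conflict
# with A), loses nearly all of its task-A accuracy.

rng = np.random.default_rng(0)

def make_task(flip):
    """Synthetic 2-D task: label is the sign of the first feature,
    optionally flipped so the second task contradicts the first."""
    X = rng.normal(size=(200, 2))
    y = (X[:, 0] > 0).astype(float)
    return X, (1 - y) if flip else y

def train(w, X, y, lr=0.5, steps=200):
    """Plain gradient descent on the logistic loss."""
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))   # sigmoid predictions
        w = w - lr * X.T @ (p - y) / len(y)  # logistic-loss gradient
    return w

def accuracy(w, X, y):
    return float((((X @ w) > 0).astype(float) == y).mean())

Xa, ya = make_task(flip=False)   # task A
Xb, yb = make_task(flip=True)    # task B: conflicting labels

w = train(np.zeros(2), Xa, ya)
acc_after_A = accuracy(w, Xa, ya)   # high: task A is learned
w = train(w, Xb, yb)
acc_after_B = accuracy(w, Xa, ya)   # low: task A is forgotten
```

Continual-learning methods (regularization, replay, or the unsupervised consolidation studied in this paper) all aim to keep `acc_after_B` from collapsing like this.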

Scalable Recollections for Continual Lifelong Learning

open access: yes, 2018
Given the recent success of Deep Learning applied to a variety of single tasks, it is natural to consider more human-realistic settings. Perhaps the most difficult of these settings is that of continual lifelong learning, where the model must learn ...
Bouneffouf, Djallel   +3 more
core   +1 more source
