Results 11 to 20 of about 1,489,144

Latent Multi-task Architecture Learning

open access: yes | Proceedings of the AAAI Conference on Artificial Intelligence, 2018
Multi-task learning (MTL) allows deep neural networks to learn from related tasks by sharing parameters with other networks. In practice, however, MTL involves searching an enormous space of possible parameter sharing architectures to find (a) the layers ...
Augenstein, Isabelle   +3 more
core   +4 more sources
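The "sharing parameters with other networks" in this snippet usually means hard parameter sharing: a trunk whose weights are common to all tasks, plus a private head per task. The sketch below illustrates only that baseline structure; the layer sizes, task names, and random weights are assumptions for illustration, not the architecture-learning method of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions and task names (assumed, not from the paper).
D_IN, D_HID, D_OUT = 8, 16, 1

# Hard parameter sharing: one hidden layer is shared by every task...
W_shared = rng.normal(size=(D_IN, D_HID)) * 0.1
# ...while each task keeps its own private output head.
heads = {task: rng.normal(size=(D_HID, D_OUT)) * 0.1
         for task in ("task_a", "task_b")}

def forward(x, task):
    """Shared trunk followed by a task-specific head."""
    h = np.tanh(x @ W_shared)   # parameters shared across tasks
    return h @ heads[task]      # parameters private to this task

x = rng.normal(size=(4, D_IN))  # a batch of 4 examples
y_a = forward(x, "task_a")
y_b = forward(x, "task_b")
```

Gradients from every task's loss flow into `W_shared`, which is how the shared layers learn from all tasks at once; architecture search of the kind the abstract mentions decides which layers are shared and which stay private.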

Learning Gait Representations with Noisy Multi-Task Learning

open access: yes | Sensors, 2022
Gait analysis is proven to be a reliable way to perform person identification without relying on subject cooperation. Walking is a biometric that does not significantly change in short periods of time and can be regarded as unique to each person. So far, ...
Adrian Cosma, Emilian Radoi
doaj   +3 more sources

Multi-Task Learning Based Network Embedding [PDF]

open access: yes | Frontiers in Neuroscience, 2020
The goal of network representation learning, also called network embedding, is to encode network structure information into a continuous low-dimensional embedding space where geometric relationships among the vectors can reflect the relationships ...
Shanfeng Wang, Qixiang Wang, Maoguo Gong
doaj   +3 more sources

Model-Protected Multi-Task Learning [PDF]

open access: yes | IEEE Transactions on Pattern Analysis and Machine Intelligence, 2022
Multi-task learning (MTL) refers to the paradigm of learning multiple related tasks together. In contrast, in single-task learning (STL) each individual task is learned independently. MTL often leads to better trained models because they can leverage the commonalities among related tasks.
Jian Liang   +5 more
openaire   +3 more sources

Polymer informatics with multi-task learning [PDF]

open access: yes | Patterns, 2021
Modern data-driven tools are transforming application-specific polymer development cycles. Surrogate models that can be trained to predict the properties of new polymers are becoming commonplace. Nevertheless, these models do not utilize the full breadth of the knowledge available in datasets, which are oftentimes sparse; inherent correlations between ...
Künneth, Christopher   +5 more
openaire   +3 more sources

Asynchronous Multi-task Learning [PDF]

open access: yes | 2016 IEEE 16th International Conference on Data Mining (ICDM), 2016
Many real-world machine learning applications involve several learning tasks that are interrelated. For example, in the healthcare domain, we need to learn a predictive model of a certain disease for many hospitals. The models for each hospital may be different because of the inherent differences in the distributions of the patient populations.
Baytas, Inci M.   +3 more
openaire   +2 more sources

Sparse multi-task reinforcement learning [PDF]

open access: yes | Intelligenza Artificiale, 2015
In multi-task reinforcement learning (MTRL), the objective is to simultaneously learn multiple tasks and exploit their similarity to improve performance relative to single-task learning. In this paper we investigate the case when all the tasks can be accurately represented in a linear approximation space using the same small subset of the ...
Calandriello, Daniele   +2 more
openaire   +4 more sources
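The structural assumption in this snippet (every task's function is linear in the same features, with all tasks active on the same small subset) can be shown directly. The sizes and support set below are made up for illustration, and the l2,1 mixed norm shown is the standard group-sparse penalty for this structure, not necessarily the exact formulation of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up sizes and shared support, for illustration only.
N_TASKS, N_FEATURES = 4, 10
SUPPORT = [1, 4, 7]   # the small feature subset every task uses

# Each task's weight vector is nonzero only on the shared support.
W = np.zeros((N_TASKS, N_FEATURES))
W[:, SUPPORT] = rng.normal(size=(N_TASKS, len(SUPPORT)))

# The l2,1 mixed norm (sum of per-feature column norms) is the usual
# penalty that promotes "a feature is used by all tasks or by none".
l21 = np.linalg.norm(W, axis=0).sum()

# Features actually used across the task group:
used = np.flatnonzero(np.linalg.norm(W, axis=0))
```

Because entire columns of `W` are zero, the l2,1 penalty stays small even though individual tasks have distinct weights on the shared support.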

Multi-task gradient descent for multi-task learning [PDF]

open access: yes | Memetic Computing, 2020
Multi-Task Learning (MTL) aims to simultaneously solve a group of related learning tasks by leveraging the salutary knowledge memes contained in the multiple tasks to improve the generalization performance. Many prevalent approaches focus on designing a sophisticated cost function, which integrates all the learning tasks and explores the task-task ...
Lu Bai   +3 more
openaire   +2 more sources
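One simple reading of transferring knowledge between tasks during gradient descent is that each task takes its own gradient step plus a small pull toward a cross-task consensus. The toy quadratic objectives, step size, and transfer weight below are assumptions for illustration, not the algorithm of the paper.

```python
import numpy as np

# Toy per-task objectives f_t(w) = ||w - c_t||^2 with nearby optima;
# targets, step size, and transfer weight are made-up illustration values.
targets = {"t1": np.array([1.0, 0.0]), "t2": np.array([0.8, 0.2])}
params = {t: np.zeros(2) for t in targets}
LR, TRANSFER = 0.1, 0.05

for _ in range(200):
    # Consensus of all tasks' current parameters.
    mean_w = np.mean(list(params.values()), axis=0)
    for t, c in targets.items():
        grad = 2.0 * (params[t] - c)     # own-task gradient
        pull = params[t] - mean_w        # inter-task transfer term
        params[t] = params[t] - LR * grad - TRANSFER * pull
```

Each task converges to a point between its own optimum and the consensus, so related tasks regularize one another without a single shared cost function.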

Pareto Multi-task Deep Learning [PDF]

open access: yes | 2020
Neuroevolution has been used to train Deep Neural Networks on reinforcement learning problems. A few attempts have been made to extend it to address either multi-task or multi-objective optimization problems. This research work presents the Multi-Task Multi-Objective Deep Neuroevolution method, a highly parallelizable algorithm that can be adopted for ...
Riccio S. D.   +4 more
openaire   +2 more sources

Convex multi-task feature learning [PDF]

open access: yes | Machine Learning, 2007
Argyriou, Andreas   +2 more
openaire   +2 more sources
