
Flow Straight and Fast: Learning to Generate and Transfer Data with Rectified Flow [PDF]

open access: yes | International Conference on Learning Representations, 2022
We present rectified flow, a surprisingly simple approach to learning (neural) ordinary differential equation (ODE) models to transport between two empirically observed distributions π₀ and π₁, hence providing a unified solution to generative ...
Xingchao Liu, Chengyue Gong, Qiang Liu
semanticscholar   +1 more source
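As a rough illustration of the idea in this abstract, here is a minimal sketch of one rectified-flow training step on toy 2-D data: a velocity network is regressed onto the straight-line displacement x1 − x0 along the linear interpolation between samples of π₀ and π₁. The network architecture, optimizer, and dimensions are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

# Toy velocity field v(x_t, t); the architecture is an illustrative choice.
velocity_net = nn.Sequential(
    nn.Linear(2 + 1, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, 2),
)
opt = torch.optim.Adam(velocity_net.parameters(), lr=1e-3)

def rectified_flow_step(x0, x1):
    """One training step: regress the velocity field onto the straight-line
    displacement x1 - x0 along the linear interpolation x_t."""
    t = torch.rand(x0.size(0), 1)          # t ~ U[0, 1]
    x_t = t * x1 + (1.0 - t) * x0          # straight-line interpolation
    target = x1 - x0                       # constant "straight" velocity
    pred = velocity_net(torch.cat([x_t, t], dim=1))
    loss = ((pred - target) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Example: transport Gaussian noise (pi_0) toward a shifted cloud (pi_1).
x0 = torch.randn(256, 2)
x1 = torch.randn(256, 2) + 4.0
print(rectified_flow_step(x0, x1))
```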

Faculty and Student Attitudes about Transfer of Learning [PDF]

open access: yes | InSight, 2008
Transfer of learning is using previous knowledge in novel contexts. While this is a basic assumption of the educational process, students may not always perceive all the options for using what they have learned in different, novel situations.
Robin Lightner   +2 more
doaj   +6 more sources

A Comprehensive Survey of Deep Transfer Learning for Anomaly Detection in Industrial Time Series: Methods, Applications, and Directions [PDF]

open access: yes | IEEE Access, 2023
Automating the monitoring of industrial processes has the potential to enhance efficiency and optimize quality by promptly detecting abnormal events and thus facilitating timely interventions.
Peng Yan   +6 more
semanticscholar   +1 more source

Factors associated with positive attitude towards hypertension control in Hawassa city administration: Community based cross‐sectional study

open access: yes | Health Science Reports, 2022
Background and Aims: In low-income countries where there is a shortage of appropriate medical care to manage hypertension (HTN), understanding the dynamics of communities' knowledge and attitudes toward prevention through lifestyle is crucial.
Tsegab Paulose   +2 more
doaj   +1 more source

VL-ADAPTER: Parameter-Efficient Transfer Learning for Vision-and-Language Tasks [PDF]

open access: yes | Computer Vision and Pattern Recognition, 2021
Recently, fine-tuning language models pre-trained on large text corpora has provided huge improvements on vision-and-language (V&L) tasks as well as on pure language tasks.
Yi-Lin Sung, Jaemin Cho, Mohit Bansal
semanticscholar   +1 more source
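The parameter-efficient idea behind this line of work can be sketched with a generic bottleneck adapter: a small down-project/up-project module with a residual connection, inserted into a frozen pretrained backbone so that only the adapter weights are trained. This is a minimal sketch of the general adapter pattern, not the VL-ADAPTER implementation; module names and sizes are assumptions.

```python
import torch.nn as nn

class BottleneckAdapter(nn.Module):
    """Generic bottleneck adapter: down-project, nonlinearity, up-project,
    plus a residual connection. Only these few parameters are trained."""
    def __init__(self, hidden_dim: int, bottleneck_dim: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck_dim)
        self.up = nn.Linear(bottleneck_dim, hidden_dim)
        self.act = nn.ReLU()

    def forward(self, hidden_states):
        return hidden_states + self.up(self.act(self.down(hidden_states)))

# Usage sketch: freeze a pretrained backbone and train only the adapters.
# `backbone` is a placeholder for any pretrained transformer.
# for p in backbone.parameters():
#     p.requires_grad = False
# adapter = BottleneckAdapter(hidden_dim=768)
```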

RadImageNet: An Open Radiologic Deep Learning Research Dataset for Effective Transfer Learning

open access: yes | Radiology: Artificial Intelligence, 2022
Purpose: To demonstrate the value of pretraining with millions of radiologic images compared with ImageNet photographic images on downstream medical applications when using transfer learning.
X. Mei   +14 more
semanticscholar   +1 more source
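The transfer-learning pattern this study evaluates (reuse pretrained weights, retrain a new task head) can be sketched as follows. The snippet uses torchvision's ImageNet ResNet-50 purely as a stand-in; RadImageNet weights are not part of torchvision, and the downstream task here is an invented example.

```python
import torch.nn as nn
from torchvision import models

# Generic transfer-learning setup: start from pretrained weights (ImageNet
# here as a stand-in for RadImageNet), freeze the feature extractor, and
# retrain only a newly attached classification head.
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
for param in backbone.parameters():
    param.requires_grad = False

num_classes = 2  # illustrative downstream task, e.g. normal vs. abnormal
backbone.fc = nn.Linear(backbone.fc.in_features, num_classes)  # trainable head
# An optimizer would then be given only the parameters of backbone.fc.
```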

A Proposal of Transfer Learning for Monthly Macroeconomic Time Series Forecast

open access: yes | Engineering Proceedings, 2023
Transfer learning has not been widely explored with time series. However, it could boost the application and performance of deep learning models for predicting macroeconomic time series with few observations, like monthly variables.
Martín Solís   +1 more
doaj   +1 more source

Deep Transfer Learning for Bearing Fault Diagnosis: A Systematic Review Since 2016

open access: yes | IEEE Transactions on Instrumentation and Measurement, 2023
The traditional deep learning-based bearing fault diagnosis approaches assume that the training and test data follow the same distribution. This assumption, however, is not always true for the bearing data collected in practical scenarios, leading to a ...
Xiaohan Chen   +5 more
semanticscholar   +1 more source

A Study of CNN and Transfer Learning in Medical Imaging: Advantages, Challenges, Future Scope

open access: yes | Sustainability, 2023
This paper presents a comprehensive study of Convolutional Neural Networks (CNN) and transfer learning in the context of medical imaging. Medical imaging plays a critical role in the diagnosis and treatment of diseases, and CNN-based models have ...
A. Salehi   +7 more
semanticscholar   +1 more source

Multitask Prompt Tuning Enables Parameter-Efficient Transfer Learning [PDF]

open access: yes | International Conference on Learning Representations, 2023
Prompt tuning, in which a base pretrained model is adapted to each task via conditioning on learned prompt vectors, has emerged as a promising approach for efficiently adapting large language models to multiple downstream tasks. However, existing methods ...
Zhen Wang   +5 more
semanticscholar   +1 more source
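The conditioning mechanism described in this abstract, soft prompt tuning in its generic form, amounts to prepending a small set of learned vectors to the input embeddings of a frozen base model and optimizing only those vectors per task. The sketch below shows that generic mechanism, not the paper's multitask prompt-transfer method; all dimensions and names are illustrative.

```python
import torch
import torch.nn as nn

class SoftPrompt(nn.Module):
    """Learned prompt vectors prepended to input embeddings; the base model
    stays frozen and only these vectors are optimized for each task."""
    def __init__(self, num_prompt_tokens: int, embed_dim: int):
        super().__init__()
        self.prompt = nn.Parameter(torch.randn(num_prompt_tokens, embed_dim) * 0.02)

    def forward(self, input_embeds):            # (batch, seq_len, embed_dim)
        batch = input_embeds.size(0)
        prompt = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        return torch.cat([prompt, input_embeds], dim=1)

# Usage sketch with fake input embeddings (the frozen base model is omitted):
soft_prompt = SoftPrompt(num_prompt_tokens=20, embed_dim=768)
x = torch.randn(4, 16, 768)
print(soft_prompt(x).shape)                     # torch.Size([4, 36, 768])
```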
