Results 101 to 110 of about 5,885,991

A General Approach for Achieving Supervised Subspace Learning in Sparse Representation

open access: yes (IEEE Access, 2019)
Over the past few decades, a large family of subspace learning algorithms based on dictionary learning has been designed to provide different solutions for learning subspace features.
Jianshun Sang   +2 more
doaj   +1 more source

Parental Guidance and Supervised Learning [PDF]

open access: yes (SSRN Electronic Journal, 2006)
We propose a simple theoretical model of supervised learning that is potentially useful to interpret a number of empirical phenomena relevant to the nature-nurture debate. The model captures a basic trade-off between sheltering the child from the consequences of his mistakes, and allowing him to learn from experience.
Alessandro Lizzeri, Marciano Siniscalchi
openaire   +5 more sources

ICU‐EEG Pattern Detection by a Convolutional Neural Network

open access: yes (Annals of Clinical and Translational Neurology, EarlyView)
ABSTRACT Objective Patients in the intensive care unit (ICU) often require continuous EEG (cEEG) monitoring due to the high risk of seizures and rhythmic and periodic patterns (RPPs). However, interpreting cEEG in real time is resource‐intensive and heavily relies on specialized expertise, which is not always available.
Giulio Degano   +5 more
wiley   +1 more source

Association of Elevated Serum S100A8/A9 Levels and Cognitive Impairment in Patients With Systemic Lupus Erythematosus

open access: yes (Arthritis Care & Research, EarlyView)
Objective Cognitive impairment (CI) is common in patients with systemic lupus erythematosus (SLE). Despite its prevalence, the immune mechanisms are not well understood. We previously reported elevated serum levels of S100A8/A9 and matrix metalloproteinase 9 (MMP‐9) in patients with SLE and CI.
Carolina Muñoz‐Grajales   +18 more
wiley   +1 more source

Adversarial Dropout for Supervised and Semi-Supervised Learning

open access: yes (Proceedings of the AAAI Conference on Artificial Intelligence, 2018)
Recently, training with adversarial examples, which are generated by adding a small but worst-case perturbation on input examples, has improved the generalization performance of neural networks. In contrast to the biased individual inputs to enhance the generality, this paper introduces adversarial dropout, which is a minimal set of ...
Park, Sungrae   +3 more
openaire   +2 more sources

Is Self-Supervised Learning More Robust Than Supervised Learning?

open access: yes (2022)
Self-supervised contrastive learning is a powerful tool to learn visual representation without labels. Prior work has primarily focused on evaluating the recognition accuracy of various pre-training algorithms, but has overlooked other behavioral aspects.
Zhong, Yuanyi   +4 more
openaire   +2 more sources

Hip morphology‐based osteoarthritis risk prediction models: development and external validation using individual participant data from the World COACH Consortium

open access: yes (Arthritis Care & Research, Accepted Article)
Objectives This study aims to develop hip morphology‐based radiographic hip osteoarthritis (RHOA) risk prediction models and to investigate the added predictive value of hip morphology measurements and their generalizability to different populations. Methods We combined data from nine prospective cohort studies participating in the World COACH consortium ...
Myrthe A. van den Berg   +26 more
wiley   +1 more source

A survey of the impact of self-supervised pretraining for diagnostic tasks in medical X-ray, CT, MRI, and ultrasound

open access: yes (BMC Medical Imaging)
Self-supervised pretraining has been observed to be effective at improving feature representations for transfer learning, leveraging large amounts of unlabelled data.
Blake VanBerlo   +2 more
doaj   +1 more source
