Results 51 to 60 of about 82,164

A Systematic Review of Evidence on the Clinical Effectiveness of Surveillance Imaging in Children With Medulloblastoma and Ependymoma

open access: yes, Pediatric Blood & Cancer, EarlyView.
ABSTRACT Surveillance imaging aims to detect tumour relapse before symptoms develop, but it is unclear whether earlier detection of relapse leads to better outcomes in children and young people (CYP) with medulloblastoma and ependymoma. This systematic review aims to identify relevant literature to determine the efficacy of surveillance magnetic ...
Lucy Shepherd   +3 more
wiley   +1 more source

Permeability Prediction Using Vision Transformers

open access: yes, Mathematical and Computational Applications
Accurate permeability predictions remain pivotal for understanding fluid flow in porous media, influencing crucial operations across petroleum engineering, hydrogeology, and related fields.
Cenk Temizel   +5 more
doaj   +1 more source

Comparative Analysis of Deep Learning Architectures and Vision Transformers for Musical Key Estimation

open access: yes, Information, 2023
The musical key serves as a crucial element in a piece, offering vital insights into the tonal center, harmonic structure, and chord progressions while enabling tasks such as transposition and arrangement.
Manav Garg   +6 more
doaj   +1 more source

Developing evidence‐based, cost‐effective P4 cancer medicine for driving innovation in prevention, therapeutics, patient care and reducing healthcare inequalities

open access: yes, Molecular Oncology, EarlyView.
The global cancer burden is increasing, with projections up to the year 2050 showing unfavourable outcomes in terms of incidence and cancer‐related deaths. The main challenges are prevention and improved therapeutics resulting in increased cure rates and enhanced health‐related quality of life.
Ulrik Ringborg   +43 more
wiley   +1 more source

A vision transformer machine learning model for COVID-19 diagnosis using chest X-ray images

open access: yes, Healthcare Analytics
This study leverages machine learning to enhance the diagnostic accuracy of COVID-19 using chest X-rays. The study evaluates various architectures, including efficient neural networks (EfficientNet), multiscale vision transformers (MViT), efficient ...
Tianyi Chen   +6 more
doaj   +1 more source

Neural Architecture Search for Transformers: A Survey

open access: yes, IEEE Access, 2022
Transformer-based Deep Neural Network architectures have gained tremendous interest due to their effectiveness in various applications across Natural Language Processing (NLP) and Computer Vision (CV) domains.
Krishna Teja Chitty-Venkata   +3 more
doaj   +1 more source

Visual Recovery Reflects Cortical MeCP2 Sensitivity in Rett Syndrome

open access: yes, Annals of Clinical and Translational Neurology, EarlyView.
ABSTRACT Objective Rett syndrome (RTT) is a devastating neurodevelopmental disorder with developmental regression affecting motor, sensory, and cognitive functions. Sensory disruptions contribute to the complex behavioral and cognitive difficulties and represent an important target for therapeutic interventions.
Alex Joseph Simon   +12 more
wiley   +1 more source

Vision Transformers for Image Classification: A Comparative Survey

open access: yes, Technologies
Transformers were initially introduced for natural language processing, leveraging the self-attention mechanism. They require minimal inductive biases in their design and can function effectively as set-based architectures.
Yaoli Wang   +4 more
doaj   +1 more source

Waterline Extraction for Artificial Coast With Vision Transformers

open access: yes, Frontiers in Environmental Science, 2022
Accurate acquisition of waterline positions plays a critical role in coastline extraction. However, waterline extraction from high-resolution images is very challenging because it is easily influenced by complex backgrounds.
Le Yang, Xing Wang, Jingsheng Zhai
doaj   +1 more source

Adaptive Attention Span in Transformers

open access: yes, 2019
We propose a novel self-attention mechanism that can learn its optimal attention span. This allows us to significantly extend the maximum context size used in Transformers while maintaining control over their memory footprint and computational time.
Bojanowski, Piotr   +3 more
core   +1 more source
