Results 31 to 40 of about 14,848 (289)

CORSD: Class-Oriented Relational Self Distillation

open access: yes · ICASSP 2023 - 2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2023
Knowledge distillation is an effective model compression method, but it has some limitations: (1) feature-based distillation methods focus only on distilling the feature map and do not transfer the relations among data examples; (2) relational distillation methods are either limited to handcrafted functions for relation ...
Yu, Muzhou   +5 more
openaire   +2 more sources

SdAE: Self-distillated Masked Autoencoder

open access: yes, 2022
With the development of generative self-supervised learning (SSL) approaches such as BeiT and MAE, how to learn good representations by masking random patches of the input image and reconstructing the missing information has attracted growing interest. However, BeiT and PeCo need a "pre-pretraining" stage to produce discrete codebooks for masked patches ...
Chen, Yabo   +6 more
openaire   +2 more sources

Production of ethanol from sugars and lignocellulosic biomass by Thermoanaerobacter J1 Isolated from a hot spring in Iceland [PDF]

open access: yes, 2012
Thermophilic bacteria have gained increased attention as candidates for bioethanol production from lignocellulosic biomass. This study investigated ethanol production by Thermoanaerobacter strain J1 from hydrolysates made from lignocellulosic biomass in ...
Jessen, Jan Eric, Orlygsson, Johann
core   +2 more sources

Self-distilled Vision Transformer for Domain Generalization

open access: yes, 2023
23 pages, 12 ...
Sultana, Maryam   +4 more
openaire   +2 more sources

Multisensor Land Cover Classification With Sparsely Annotated Data Based on Convolutional Neural Networks and Self-Distillation

open access: yes · IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2021
Extensive research studies have been conducted in recent years to exploit the complementarity among multisensor (or multimodal) remote sensing data for prominent applications such as land cover mapping.
Yawogan Jean Eudes Gbodjo   +4 more
doaj   +1 more source

Recent advances in second generation ethanol production by thermophilic bacteria [PDF]

open access: yes, 2014
There is an increased interest in using thermophilic bacteria for the production of bioethanol from complex lignocellulosic biomass due to their higher operating temperatures and broad substrate range.
Orlygsson, Johann, Scully, Sean
core   +2 more sources

LGD: Label-Guided Self-Distillation for Object Detection

open access: yes · Proceedings of the AAAI Conference on Artificial Intelligence, 2022
In this paper, we propose the first self-distillation framework for general object detection, termed LGD (Label-Guided self-Distillation). Previous studies rely on a strong pretrained teacher to provide instructive knowledge that could be unavailable in real-world scenarios.
Zhang, Peizhen   +5 more
openaire   +2 more sources

Be Your Own Teacher: Improve the Performance of Convolutional Neural Networks via Self Distillation

open access: yes, 2019
Convolutional neural networks have been widely deployed in various application scenarios. In order to extend the applications' boundaries to some accuracy-crucial domains, researchers have been investigating approaches to boost accuracy through either ...
Bao, Chenglong   +5 more
core   +1 more source

Global-Local Self-Distillation for Visual Representation Learning

open access: yes · 2023 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2023
The downstream accuracy of self-supervised methods is tightly linked to the proxy task solved during training and the quality of the gradients extracted from it. Richer and more meaningful gradient updates are key to allowing self-supervised methods to learn better and more efficiently.
Lebailly, Tim, Tuytelaars, Tinne
openaire   +2 more sources

Exploring complementary information of self‐supervised pretext tasks for unsupervised video pre‐training

open access: yes · IET Computer Vision, 2022
This study addresses the problem of the unsupervised pre‐training of video representation learning. The authors' focus is on two common approaches: knowledge distillation and self‐supervised learning.
Wei Zhou   +3 more
doaj   +1 more source
