Results 11 to 20 of about 14,848

Reverse Self-Distillation: Overcoming the Self-Distillation Barrier

open access: yes - IEEE Open Journal of the Computer Society, 2023
With limited data, deep neural networks generally cannot gather enough helpful information for image classification, resulting in poor performance. Self-distillation, a novel knowledge distillation technique, integrates the roles of teacher and student ... [a minimal sketch of the generic objective follows this entry]
Shuiping Ni   +4 more
doaj   +2 more sources
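For context on the technique named above: a minimal PyTorch sketch of the generic self-distillation objective, under the usual assumption that the network serves as its own teacher (e.g. a frozen earlier snapshot or a deeper classifier branch supplies the soft targets). The function name and the hyperparameters T and alpha are illustrative, not taken from the paper.

```python
import torch
import torch.nn.functional as F

def self_distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Hard-label cross-entropy plus KL to the model's own softened predictions."""
    ce = F.cross_entropy(student_logits, labels)
    kl = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits.detach() / T, dim=1),  # teacher = the model itself
        reduction="batchmean",
    ) * (T * T)  # standard temperature rescaling
    return alpha * ce + (1 - alpha) * kl
```

This is only the standard recipe that self-distillation work starts from; the reverse variant the entry proposes is not shown.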

Improving Differentiable Architecture Search via Self-Distillation

open access: yes - Neural Networks, 2023
Differentiable Architecture Search (DARTS) is a simple yet efficient Neural Architecture Search (NAS) method. During the search stage, DARTS trains a supernet by jointly optimizing architecture parameters and network parameters. [a sketch of this continuous relaxation follows this entry]
Li, Jian   +3 more
core   +3 more sources
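The supernet training mentioned in this entry rests on DARTS's continuous relaxation: every edge computes a softmax-weighted mixture of candidate operations, and the mixture weights are the architecture parameters. A hedged sketch, with illustrative class and attribute names:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedOp(nn.Module):
    """One DARTS edge: a softmax-weighted sum over candidate operations."""
    def __init__(self, ops):
        super().__init__()
        self.ops = nn.ModuleList(ops)                     # candidate operations
        self.alpha = nn.Parameter(torch.zeros(len(ops)))  # architecture parameters

    def forward(self, x):
        weights = F.softmax(self.alpha, dim=0)            # continuous relaxation
        return sum(w * op(x) for w, op in zip(weights, self.ops))
```

In DARTS proper, the network weights are updated on training batches and the alpha parameters on validation batches in alternation (bi-level optimization); the self-distillation improvement the entry proposes is not shown.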

Towards Generalized Multi-stage Clustering: Multi-view Self-distillation

open access: yes - IEEE Transactions on Neural Networks and Learning Systems, 2023
Existing multi-stage clustering methods independently learn salient features from multiple views and then perform the clustering task. In particular, multi-view clustering (MVC) has attracted considerable attention in multi-view or multi-modal scenarios ...
Li, Tao   +3 more
core   +3 more sources

SILC: Improving Vision Language Pretraining with Self-Distillation

open access: yes, 2023
Image-Text pretraining on web-scale image-caption datasets has become the default recipe for open-vocabulary classification and retrieval models, thanks to the success of CLIP and its variants. [a compact sketch of the CLIP-style objective follows this entry]
Hoyer, Lukas   +5 more
core   +3 more sources
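As background for this entry, a compact sketch of the CLIP-style contrastive objective it builds on (illustrative names; SILC's added self-distillation term is not shown here):

```python
import torch
import torch.nn.functional as F

def clip_contrastive_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE over the matched image-text pairs in a batch."""
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = image_emb @ text_emb.t() / temperature       # (B, B) similarities
    targets = torch.arange(logits.size(0), device=logits.device)
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))
```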

Similarity and Consistency by Self-distillation Method [PDF]

open access: yes - Jisuanji kexue (Computer Science), 2023
Self-distillation methods for model compression suffer from high data pre-processing costs and fail to detect local features, so a similarity-and-consistency self-distillation (SCD) method is proposed to improve model classification accuracy. Firstly ...
WAN Xu, MAO Yingchi, WANG Zibo, LIU Yi, PING Ping
doaj   +1 more source

Knowledge Distillation With Feature Self Attention

open access: yes - IEEE Access, 2023
With the rapid development of deep learning technology, networks continue to grow in size and performance, making network compression essential for commercial applications. [a generic sketch of feature-level distillation follows this entry]
Sin-Gu Park, Dong-Joong Kang
doaj   +1 more source
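The paper's exact feature-self-attention module is not reproduced here; as a hedged stand-in, this sketch matches spatial self-attention maps computed from intermediate feature tensors, which is one generic way to distill feature-level knowledge and works even when teacher and student channel widths differ:

```python
import torch
import torch.nn.functional as F

def attention_map(feat):
    """(B, C, H, W) features -> (B, HW, HW) self-attention over spatial positions."""
    b, c, h, w = feat.shape
    q = feat.flatten(2).transpose(1, 2)               # (B, HW, C)
    sim = torch.bmm(q, q.transpose(1, 2)) / c ** 0.5  # scaled dot-product similarity
    return F.softmax(sim, dim=-1)

def feature_attention_kd_loss(student_feat, teacher_feat):
    return F.mse_loss(attention_map(student_feat),
                      attention_map(teacher_feat).detach())
```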

A self‐distillation object segmentation method via frequency domain knowledge augmentation

open access: yes - IET Computer Vision, 2023
Most self-distillation methods need complex auxiliary teacher structures and require many training samples for the object segmentation task. To address this challenge, a self-distillation object segmentation method via frequency-domain knowledge ... [a hedged sketch of one common frequency-domain augmentation follows this entry]
Lei Chen   +3 more
doaj   +1 more source
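Which frequency-domain operation the paper uses for knowledge augmentation is not stated in the snippet; purely as an assumption, a common choice is to jitter the amplitude spectrum while preserving the phase, sketched below:

```python
import torch

def frequency_augment(x, strength=0.1):
    """Perturb the amplitude spectrum of an image batch, keep the phase.
    An assumed, generic augmentation; not necessarily the paper's method."""
    spec = torch.fft.fft2(x)                             # complex spectrum (last two dims)
    amp, phase = spec.abs(), spec.angle()
    amp = amp * (1.0 + strength * torch.randn_like(amp))
    return torch.fft.ifft2(torch.polar(amp, phase)).real
```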

Self-Distilled Self-supervised Representation Learning

open access: yes - 2023 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2023
WACV 23, 11 ...
Jang, Jiho   +5 more
openaire   +2 more sources

On Self-Distilling Graph Neural Network [PDF]

open access: yes - Proceedings of the Thirtieth International Joint Conference on Artificial Intelligence, 2021
Recently, the teacher-student knowledge distillation framework has demonstrated its potential in training Graph Neural Networks (GNNs). However, due to the difficulty of training over-parameterized GNN models, one may not easily obtain a satisfactory teacher model for distillation.
Chen, Yuzhao   +5 more
openaire   +2 more sources

Self-Learning for Few-Shot Remote Sensing Image Captioning

open access: yes - Remote Sensing, 2022
Large-scale caption-labeled remote sensing image samples are expensive to acquire, and the training samples available in practical application scenarios are generally limited.
Haonan Zhou   +3 more
doaj   +1 more source
