Results 11 to 20 of about 14,848
Reverse Self-Distillation Overcoming the Self-Distillation Barrier
With limited training data, deep neural networks generally cannot extract enough useful information for image classification, resulting in poor performance. Self-distillation, as a novel knowledge distillation technique, integrates the roles of teacher and student ...
Shuiping Ni +4 more
doaj +2 more sources
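For orientation, here is a minimal self-distillation loss sketch assuming the common setup in which the network's own deepest classifier acts as teacher for a shallower auxiliary head (illustrative names throughout; this is not the paper's specific "reverse" variant):

```python
# Minimal self-distillation loss sketch: the deepest classifier's softened
# outputs supervise a shallower auxiliary classifier of the same network,
# so no separate teacher model is needed.
import torch.nn.functional as F

def self_distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Cross-entropy on ground-truth labels plus KL divergence to the softened
    outputs of the deeper (teacher) head, which is detached from the graph."""
    ce = F.cross_entropy(student_logits, labels)
    kd = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits.detach() / T, dim=1),
        reduction="batchmean",
    ) * (T * T)  # rescale so gradients stay comparable to the CE term
    return (1 - alpha) * ce + alpha * kd
```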
Improving Differentiable Architecture Search via Self-Distillation
Differentiable Architecture Search (DARTS) is a simple yet efficient Neural Architecture Search (NAS) method. During the search stage, DARTS trains a supernet by jointly optimizing architecture parameters and network parameters.
Li, Jian +3 more
core +3 more sources
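A minimal sketch of the DARTS mixed operation the snippet refers to (the paper's self-distillation extension is not reproduced; candidate operations and names are assumptions): each supernet edge outputs a softmax-weighted sum of candidate operations, so the architecture parameters can be optimized by gradient descent together with the network weights.

```python
# DARTS-style mixed operation (supernet edge only; sketch under assumed names).
import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedOp(nn.Module):
    def __init__(self, candidate_ops):
        super().__init__()
        self.ops = nn.ModuleList(candidate_ops)
        # architecture parameters: one logit per candidate operation
        self.alpha = nn.Parameter(torch.zeros(len(candidate_ops)))

    def forward(self, x):
        weights = F.softmax(self.alpha, dim=0)
        # the softmax-weighted sum relaxes the discrete operation choice
        return sum(w * op(x) for w, op in zip(weights, self.ops))
```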
Towards Generalized Multi-stage Clustering: Multi-view Self-distillation
Existing multi-stage clustering methods independently learn salient features from multiple views and then perform the clustering task. In particular, multi-view clustering (MVC) has attracted a lot of attention in multi-view or multi-modal scenarios ...
Li, Tao +3 more
core +3 more sources
SILC: Improving Vision Language Pretraining with Self-Distillation
Image-Text pretraining on web-scale image caption datasets has become the default recipe for open vocabulary classification and retrieval models thanks to the success of CLIP and its variants.
Hoyer, Lukas +5 more
core +3 more sources
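For context, a minimal sketch of the CLIP-style image-text contrastive objective the snippet builds on (SILC's added self-distillation term is not shown; embeddings are assumed to be L2-normalized, and names are illustrative):

```python
# CLIP-style symmetric contrastive loss (background sketch only).
import torch
import torch.nn.functional as F

def clip_contrastive_loss(image_emb, text_emb, temperature=0.07):
    """image_emb, text_emb: (batch, dim) L2-normalized embeddings of matched pairs."""
    logits = image_emb @ text_emb.t() / temperature
    # matching image-text pairs lie on the diagonal
    targets = torch.arange(logits.size(0), device=logits.device)
    loss_i2t = F.cross_entropy(logits, targets)
    loss_t2i = F.cross_entropy(logits.t(), targets)
    return (loss_i2t + loss_t2i) / 2
```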
Similarity and Consistency by Self-distillation Method [PDF]
Due to the high data pre-processing costs and missed local-feature detection of self-distillation methods for model compression, a similarity and consistency by self-distillation (SCD) method is proposed to improve model classification accuracy. Firstly ...
WAN Xu, MAO Yingchi, WANG Zibo, LIU Yi, PING Ping
doaj +1 more source
Knowledge Distillation With Feature Self Attention
With the rapid development of deep learning technology, network size and performance continue to grow, making network compression essential for commercial applications.
Sin-Gu Park, Dong-Joong Kang
doaj +1 more source
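As background on feature-level distillation, a FitNets-style sketch under assumed names (the paper's feature self-attention module is not reproduced): student features are projected to the teacher's channel dimension and matched directly.

```python
# FitNets-style feature distillation sketch (background only).
# Spatial dimensions of the two feature maps are assumed to match.
import torch.nn as nn
import torch.nn.functional as F

class FeatureDistillLoss(nn.Module):
    def __init__(self, student_channels, teacher_channels):
        super().__init__()
        # 1x1 convolution aligns the student's channel count with the teacher's
        self.proj = nn.Conv2d(student_channels, teacher_channels, kernel_size=1)

    def forward(self, student_feat, teacher_feat):
        return F.mse_loss(self.proj(student_feat), teacher_feat.detach())
```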
A self‐distillation object segmentation method via frequency domain knowledge augmentation
Most self‐distillation methods need complex auxiliary teacher structures and require large numbers of training samples for object segmentation tasks. To address this challenge, a self‐distillation object segmentation method via frequency domain knowledge ...
Lei Chen +3 more
doaj +1 more source
Self-Distilled Self-supervised Representation Learning
WACV 23, 11 ...
Jang, Jiho +5 more
openaire +2 more sources
On Self-Distilling Graph Neural Network [PDF]
Recently, the teacher-student knowledge distillation framework has demonstrated its potential in training Graph Neural Networks (GNNs). However, due to the difficulty of training over-parameterized GNN models, one may not easily obtain a satisfactory teacher model for distillation.
Chen, Yuzhao +5 more
openaire +2 more sources
Self-Learning for Few-Shot Remote Sensing Image Captioning
Large-scale caption-labeled remote sensing image samples are expensive to acquire, and the training samples available in practical application scenarios are generally limited.
Haonan Zhou +3 more
doaj +1 more source

