Results 301 to 310 of about 4,930,132
Self-Distillation of Hidden Layers for Self-Supervised Representation Learning
Scott Lowe +4 more
openalex +1 more source
Adaptive Lightweight Network Construction Method for Self-Knowledge Distillation
Siyuan Lu +3 more
openalex +1 more source
Hybrid model integrating LeViT transformer and distillation techniques for pattern detection and dance classification. [PDF]
Wang Y.
europepmc +1 more source
Ensemble Distribution Distillation for Self-Supervised Human Activity Recognition
Matthew Nolan, Lina Yao, Damien Robert
openalex +1 more source
Dual-graph knowledge distillation for few-shot class-incremental microorganism recognition. [PDF]
Xu S, Hu Y, Zhang Y, Chen L, Yin Y.
europepmc +1 more source
Some of the following articles may not be open access.
Intra-class progressive and adaptive self-distillation
Neural Networks
In recent years, knowledge distillation (KD) has become widely used for compressing models, training compact and efficient students to reduce the computational load and training time that come with the growing parameter counts of deep neural networks. To minimize training costs, self-distillation has been proposed, with methods like offline-KD and online-KD requiring ...
Jianping Gou +5 more
openaire +3 more sources
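For context on the standard KD objective this abstract alludes to, here is a minimal sketch of a common distillation loss in PyTorch. It is not taken from any of the listed papers; the function name, temperature T, and mixing weight alpha are illustrative assumptions.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Weighted sum of hard-label cross-entropy and KL divergence
    between temperature-softened teacher and student distributions."""
    # Hard-label supervision for the student.
    ce = F.cross_entropy(student_logits, labels)
    # Soft-label term at temperature T; the T^2 factor keeps
    # gradient magnitudes comparable across temperatures.
    kd = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    return alpha * ce + (1.0 - alpha) * kd
```

In self-distillation variants such as the one above, the "teacher" logits come from the same network (e.g., an earlier snapshot or a deeper layer) rather than a separately trained model, which avoids the cost of training a teacher first.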

