Results 1 to 10 of about 142,324 (226)

Decoupled Classifier Knowledge Distillation [PDF]

open access: yes (PLoS ONE)
Mainstream knowledge distillation methods primarily include self-distillation, offline distillation, online distillation, output-based distillation, and feature-based distillation.
Hairui Wang   +3 more
doaj   +2 more sources

MTAKD: multi-teacher agreement knowledge distillation for edge AI skin disease diagnosis [PDF]

open access: yes (Scientific Reports)
Skin disease diagnosis remains challenging in remote areas due to limited access to dermatology specialists and unreliable internet connectivity. Edge AI offers a potential solution by offloading the inference process from cloud servers to mobile devices.
Andreas Winata   +4 more
doaj   +2 more sources

A contrast enhanced representation normalization approach to knowledge distillation [PDF]

open access: yes (Scientific Reports)
Within the research scope of knowledge distillation, contrastive representation distillation has achieved remarkable results by introducing the Contrastive Representation Distillation loss.
Zhiqiang Bao, Di Zhu, Leying Du, Yang Li
doaj   +2 more sources

Stepwise self-knowledge distillation for skin lesion image classification [PDF]

open access: yes (Scientific Reports)
Self-knowledge distillation, which involves using the same network structure for both the teacher and student models, has gained considerable attention in the field of medical image classification.
Jian Zheng   +4 more
doaj   +2 more sources
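
The same-network teacher/student setup described in this abstract can be sketched numerically. One common flavor of self-distillation (an assumption here, not necessarily this paper's stepwise variant) uses predictions from a frozen earlier snapshot of the same model as soft targets alongside the hard labels:

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def self_distill_loss(logits, labels, prev_logits, alpha=0.5):
    # Mix cross-entropy on hard labels with cross-entropy against the
    # model's own earlier (frozen) predictions -- one common flavor of
    # self-distillation where teacher and student share one architecture.
    q = softmax(logits)
    ce_hard = -np.log(q[np.arange(len(labels)), labels]).mean()
    p_prev = softmax(prev_logits)  # soft targets from the earlier snapshot
    ce_soft = -(p_prev * np.log(q)).sum(axis=-1).mean()
    return (1 - alpha) * ce_hard + alpha * ce_soft
```

Because teacher and student are the same network, no separate teacher model has to be trained or stored; only one earlier set of logits per batch is needed.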

Mutual Learning Knowledge Distillation Based on Multi-stage Multi-generative Adversarial Network [PDF]

open access: yes (Jisuanji kexue, 2022)
Aiming at the problems of insufficient knowledge distillation efficiency, single-stage training methods, complex training processes, and difficult convergence of traditional knowledge distillation methods in image classification tasks, this paper designs a ...
HUANG Zhong-hao, YANG Xing-yao, YU Jiong, GUO Liang, LI Xiang
doaj   +1 more source

Multiple-Stage Knowledge Distillation

open access: yes (Applied Sciences, 2022)
Knowledge distillation (KD) is a method in which a teacher network guides the learning of a student network, thereby resulting in an improvement in the performance of the student network.
Chuanyun Xu   +6 more
doaj   +1 more source

Memory-Replay Knowledge Distillation

open access: yes (Sensors, 2021)
Knowledge Distillation (KD), which transfers the knowledge from a teacher to a student network by penalizing their Kullback–Leibler (KL) divergence, is a widely used tool for Deep Neural Network (DNN) compression in intelligent sensor systems ...
Jiyue Wang, Pei Zhang, Yanxiong Li
doaj   +1 more source
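
The KL-divergence penalty named in this abstract is the standard distillation objective from Hinton et al.; a minimal NumPy sketch follows (generic, not this paper's memory-replay variant; the temperature T and the T² gradient-scaling factor are the standard choices, assumed here):

```python
import numpy as np

def softmax(z, T=1.0):
    # Temperature-scaled softmax: higher T softens the distribution
    z = z / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kd_kl_loss(teacher_logits, student_logits, T=4.0):
    # KL(p_teacher || p_student) on temperature-softened distributions,
    # scaled by T^2 so soft-target gradients keep a comparable magnitude
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return (T ** 2) * np.sum(p * (np.log(p) - np.log(q)), axis=-1).mean()
```

The loss is zero when student and teacher logits agree and grows as the student's softened distribution drifts from the teacher's, which is what drives the compression: the small student is trained to mimic the large teacher's output distribution rather than only the hard labels.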

Similarity and Consistency by Self-distillation Method [PDF]

open access: yes (Jisuanji kexue, 2023)
Due to the high data pre-processing costs and the missed detection of local features in self-distillation methods for model compression, a similarity and consistency by self-distillation (SCD) method is proposed to improve model classification accuracy. Firstly ...
WAN Xu, MAO Yingchi, WANG Zibo, LIU Yi, PING Ping
doaj   +1 more source

Review of Recent Distillation Studies [PDF]

open access: yes (MATEC Web of Conferences, 2023)
Knowledge distillation has gained a lot of interest in recent years because it allows for compressing a large deep neural network (teacher DNN) into a smaller DNN (student DNN), while maintaining its accuracy.
Gao Minghong
doaj   +1 more source

A Virtual Knowledge Distillation via Conditional GAN

open access: yes (IEEE Access, 2022)
Knowledge distillation aims at transferring the knowledge from a pre-trained complex model, called teacher, to a relatively smaller and faster one, called student. Unlike previous works that transfer the teacher's softened distributions or feature ...
Sihwan Kim
doaj   +1 more source
