
What Knowledge Gets Distilled in Knowledge Distillation?

open access: yes, 2022
Knowledge distillation aims to transfer useful information from a teacher network to a student network, with the primary goal of improving the student's performance for the task at hand. Over the years, there has been a deluge of novel techniques and use cases of knowledge distillation.
Ojha, Utkarsh   +4 more
openaire   +2 more sources
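
For orientation, the following is a minimal sketch of the classic soft-target distillation loss that this line of work builds on, written in PyTorch purely as an assumed framework; the temperature T and mixing weight alpha are illustrative defaults, not values taken from the paper above.

import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Blend hard-label cross-entropy with KL to the teacher's softened outputs."""
    # Soft targets: KL between temperature-scaled teacher and student distributions.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)                      # rescale so gradients stay comparable across T
    # Hard targets: ordinary cross-entropy on the ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

if __name__ == "__main__":
    # Toy usage with random logits for a 10-class problem.
    s = torch.randn(8, 10, requires_grad=True)
    t = torch.randn(8, 10)
    y = torch.randint(0, 10, (8,))
    print(distillation_loss(s, t, y).item())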

Knowledge distillation in deep learning and its applications [PDF]

open access: yes, PeerJ Computer Science, 2021
Deep learning-based models are relatively large, and it is hard to deploy such models on resource-limited devices such as mobile phones and embedded devices.
Abdolmaged Alkhulaifi   +2 more
doaj   +2 more sources

Reverse Self-Distillation: Overcoming the Self-Distillation Barrier

open access: yes, IEEE Open Journal of the Computer Society, 2023
With limited data, deep neural networks generally cannot extract enough useful information for image classification, resulting in poor performance. Self-distillation, a novel knowledge distillation technique, integrates the roles of teacher and student ...
Shuiping Ni   +4 more
doaj   +1 more source

SMKD: Selective Mutual Knowledge Distillation

open access: yes, 2023 International Joint Conference on Neural Networks (IJCNN), 2023
Mutual knowledge distillation (MKD) is a technique used to transfer knowledge between multiple models in a collaborative manner. However, it is important to note that not all knowledge is accurate or reliable, particularly under challenging conditions such as label noise, which can lead to models that memorize undesired information. This problem can be
Li, Z   +5 more
openaire   +2 more sources
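
As a point of reference, plain mutual knowledge distillation (deep mutual learning) can be sketched as below: two peer models exchange softened predictions at each step. The selective filtering that SMKD adds for noisy labels is not reproduced here, since the snippet does not spell out its selection rule, and the PyTorch framing is an assumption.

import torch
import torch.nn as nn
import torch.nn.functional as F

def mutual_step(model_a, model_b, opt_a, opt_b, x, y, T=2.0):
    """One collaborative update: each peer fits the labels and the other's softened output."""
    logits_a, logits_b = model_a(x), model_b(x)

    def peer_loss(own, peer):
        ce = F.cross_entropy(own, y)
        kl = F.kl_div(F.log_softmax(own / T, dim=1),
                      F.softmax(peer.detach() / T, dim=1),   # peer acts as a fixed target
                      reduction="batchmean") * T * T
        return ce + kl

    opt_a.zero_grad(); peer_loss(logits_a, logits_b).backward(); opt_a.step()
    opt_b.zero_grad(); peer_loss(logits_b, logits_a.detach()).backward(); opt_b.step()

if __name__ == "__main__":
    # Toy usage: two small classifiers on random data.
    a, b = nn.Linear(16, 5), nn.Linear(16, 5)
    opt_a = torch.optim.SGD(a.parameters(), lr=0.1)
    opt_b = torch.optim.SGD(b.parameters(), lr=0.1)
    x, y = torch.randn(32, 16), torch.randint(0, 5, (32,))
    mutual_step(a, b, opt_a, opt_b, x, y)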

Multi-assistant Dynamic Setting Method for Knowledge Distillation [PDF]

open access: yes, Jisuanji kexue
Knowledge distillation is increasingly gaining attention in key areas such as model compression for object recognition. Through in-depth research into the efficiency of knowledge distillation and an analysis of the characteristics of knowledge transfer ...
SI Yuehang, CHENG Qing, HUANG Jincai
doaj   +1 more source

A novel model compression method based on joint distillation for deepfake video detection

open access: yes, Journal of King Saud University: Computer and Information Sciences, 2023
In recent years, deepfake videos have been abused to create fake news, threatening the integrity of digital video. Although existing detection methods leverage cumbersome neural networks to achieve promising detection performance, they cannot be ...
Xiong Xu   +5 more
doaj   +1 more source

Distilling Knowledge by Mimicking Features [PDF]

open access: yes, IEEE Transactions on Pattern Analysis and Machine Intelligence, 2021
To appear in IEEE Trans ...
Wang, Guo-Hua, Ge, Yifan, Wu, Jianxin
openaire   +3 more sources

Sub-Band Knowledge Distillation Framework for Speech Enhancement

open access: yes, 2020
In single-channel speech enhancement, methods based on full-band spectral features have been widely studied. However, only a few methods pay attention to non-full-band spectral features.
Gao, Guanglai   +5 more
core   +1 more source
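
The snippet above only motivates sub-band features, so the following is a rough sketch, under stated assumptions, of what a per-band distillation term could look like: the frequency axis of a magnitude spectrogram is split into bands and the student is matched to the teacher band by band. The band count, the MSE criterion, and the PyTorch tensors are illustrative choices, not the paper's exact formulation.

import torch
import torch.nn.functional as F

def subband_distillation_loss(student_spec, teacher_spec, num_bands=4):
    """student_spec, teacher_spec: (batch, freq, time) magnitude spectrograms."""
    bands_s = torch.chunk(student_spec, num_bands, dim=1)   # split along frequency
    bands_t = torch.chunk(teacher_spec, num_bands, dim=1)
    # Average the per-band errors so each sub-band contributes equally.
    return sum(F.mse_loss(s, t) for s, t in zip(bands_s, bands_t)) / num_bands

if __name__ == "__main__":
    s = torch.rand(2, 257, 100)   # e.g. 512-point STFT gives 257 frequency bins
    t = torch.rand(2, 257, 100)
    print(subband_distillation_loss(s, t).item())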

Forest Fire Object Detection Analysis Based on Knowledge Distillation

open access: yes, Fire, 2023
This paper investigates the application of the YOLOv7 object detection model combined with knowledge distillation techniques in forest fire detection.
Jinzhou Xie, Hongmin Zhao
doaj   +1 more source

A non-negative feedback self-distillation method for salient object detection [PDF]

open access: yes, PeerJ Computer Science, 2023
Self-distillation methods utilize a Kullback-Leibler (KL) divergence loss to transfer knowledge from the network itself, which can improve model performance without increasing computational resources or complexity. However, when applied to salient
Lei Chen   +6 more
doaj   +2 more sources
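
To make the KL-based self-distillation idea concrete for saliency maps, here is a hedged sketch in which the network's own final prediction, detached, teaches an auxiliary output through a pixel-wise Bernoulli KL term; the paper's non-negative feedback correction is not reproduced, because the snippet breaks off before defining it, and the PyTorch form is an assumption.

import torch

def self_distill_kl(student_map, teacher_map, eps=1e-6):
    # Pixel-wise KL between Bernoulli saliency predictions; the "teacher" is the
    # network's own final output, detached so it acts purely as a fixed target.
    p = teacher_map.detach().clamp(eps, 1 - eps)
    q = student_map.clamp(eps, 1 - eps)
    kl = p * torch.log(p / q) + (1 - p) * torch.log((1 - p) / (1 - q))
    return kl.mean()

if __name__ == "__main__":
    aux = torch.rand(2, 1, 64, 64, requires_grad=True)   # auxiliary saliency map
    fin = torch.rand(2, 1, 64, 64)                       # final saliency map
    print(self_distill_kl(aux, fin).item())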
