
Knowledge Condensation Distillation

open access: yes, 2022
Knowledge Distillation (KD) transfers knowledge from a high-capacity teacher network to strengthen a smaller student. Existing methods focus on excavating knowledge hints and transferring all of that knowledge to the student. However, knowledge redundancy arises because the knowledge has different value to the student at different learning ...
Li, Chenxin   +7 more
openaire   +2 more sources
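Since this snippet summarizes the standard teacher-student setup, a minimal sketch of the base KD objective (cross-entropy on hard labels plus KL divergence on temperature-softened logits, after Hinton et al.) may help; the condensation step that selects which knowledge to keep is the paper's own contribution and is not reproduced here. The function name, `T`, and `alpha` are illustrative assumptions.

```python
# Hedged sketch of the vanilla KD loss that condensation-style methods build on.
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Cross-entropy on hard labels plus KL divergence on softened logits."""
    soft_targets = F.softmax(teacher_logits / T, dim=1)
    log_probs = F.log_softmax(student_logits / T, dim=1)
    # T**2 rescales gradients so the soft term stays comparable across temperatures.
    distill = F.kl_div(log_probs, soft_targets, reduction="batchmean") * T * T
    hard = F.cross_entropy(student_logits, labels)
    return alpha * distill + (1.0 - alpha) * hard
```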

Chemoselective Sequential Polymerization: An Approach Toward Mixed Plastic Waste Recycling

open access: yes, Advanced Functional Materials, EarlyView.
Inspired by biological protein metabolism, this study demonstrates the closed‐loop recycling of mixed synthetic polymers via ring‐closing depolymerization followed by a chemoselective sequential polymerization process. The approach recovers pure polymers from mixed feedstocks, even in multilayer formats, highlighting a promising strategy to overcome a ...
Gadi Slor   +5 more
wiley   +1 more source

Self-knowledge distillation via dropout

open access: yes, Computer Vision and Image Understanding, 2023
Hyoje Lee   +3 more
openaire   +2 more sources
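The snippet for this entry did not survive extraction, so the following is only a guess grounded in the title: a common way to self-distill via dropout is to run the same network twice with independent dropout masks and penalize disagreement between the two predictive distributions (R-Drop-style consistency). The paper's actual formulation may well differ.

```python
# Hedged sketch: self-distillation from two dropout-perturbed forward passes.
import torch
import torch.nn.functional as F

def dropout_self_distill_loss(model, x, labels, alpha=1.0):
    # model must be in train mode so each pass samples a fresh dropout mask.
    logits1 = model(x)  # stochastic pass 1
    logits2 = model(x)  # stochastic pass 2, independent mask
    ce = F.cross_entropy(logits1, labels)
    # Symmetric KL between the two dropout-perturbed distributions.
    p1 = F.log_softmax(logits1, dim=1)
    p2 = F.log_softmax(logits2, dim=1)
    kl = 0.5 * (F.kl_div(p1, p2.exp(), reduction="batchmean")
                + F.kl_div(p2, p1.exp(), reduction="batchmean"))
    return ce + alpha * kl
```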

Germanane Quantum Dots Promote Metabolic Reprogramming of Immune Cells Toward Regulatory T Cells and Suppress Inflammation In Vitro and In Vivo

open access: yes, Advanced Functional Materials, EarlyView.
Metabolic changes in immune cells direct the phenotype and function of the host immune system. Smart nanomaterials must target metabolic pathways to direct immune cell fate. This study reports the fabrication and first application of germanane quantum dots (GeHQDs) to modulate inflammation in vitro and in vivo.
Abhay Srivastava   +7 more
wiley   +1 more source

NTCE-KD: Non-Target-Class-Enhanced Knowledge Distillation

open access: yes, Sensors
Most logit-based knowledge distillation methods transfer soft labels from the teacher model to the student model via Kullback–Leibler divergence based on softmax, an exponential normalization function. However, the exponential nature of softmax tends to ...
Chuan Li, Xiao Teng, Yan Ding, Long Lan
doaj   +1 more source
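To make the softmax issue concrete, the small demo below shows how exponential normalization crushes non-target-class probabilities, and one common remedy from the literature (renormalizing over the non-target classes only). This illustrates the motivation, not necessarily NTCE-KD's exact method; the logit values are made up.

```python
# Hedged demo: softmax concentrates mass on the target class, leaving
# little transferable signal among the non-target classes.
import torch
import torch.nn.functional as F

logits = torch.tensor([[9.0, 2.0, 1.5, 0.5]])  # class 0 is the target
probs = F.softmax(logits, dim=1)
print(probs)  # ~[0.9983, 0.0009, 0.0006, 0.0002]: non-target info is crushed

# Non-target distribution: drop the target logit and renormalize, so the
# relative ordering among the wrong classes is preserved and transferable.
non_target = F.softmax(logits[:, 1:], dim=1)
print(non_target)  # ~[0.547, 0.331, 0.122]
```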

Transformer-Based Knowledge Distillation with Ghost Attention for Multimodal Edge-Based Smart Surveillance [PDF]

open access: yes, ITM Web of Conferences
Knowledge distillation has gained attention as an important technique for edge-based smart surveillance, enabling accurate yet lightweight models to be deployed on resource-constrained devices.
Sataar Zahrah
doaj   +1 more source

SI‐bioATRP in Mesoporous Silica for Size‐Exclusion Driven Local Polymer Placement

open access: yes, Advanced Functional Materials, EarlyView.
An enzyme‐catalyzed surface‐initiated atom transfer radical polymerization (SI‐bioATRP) of an anionic monomer within mesoporous silica particles, using hemoglobin as a catalyst, allows for controlling the location of the formed polymer via size‐exclusion effects between the nanopores and the biomacromolecules, thereby opening routes to functional ...
Oleksandr Wondra   +8 more
wiley   +1 more source

Knowledge distillation of face recognition via attention cosine similarity review

open access: yes, IET Computer Vision
Deep learning‐based face recognition models have demonstrated remarkable performance in benchmark tests, and knowledge distillation has frequently been employed to obtain high‐precision, real‐time face recognition models specifically designed ...
Zhuo Wang, SuWen Zhao, WanYi Guo
doaj   +1 more source
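A minimal sketch of attention-map distillation with a cosine-similarity objective is given below. The map construction (channel-wise mean of squared activations) and the assumption that teacher and student maps share spatial size are illustrative choices; the review above may cover different variants.

```python
# Hedged sketch: cosine-similarity loss between teacher and student attention maps.
import torch
import torch.nn.functional as F

def attention_map(feat):
    """Collapse a (B, C, H, W) feature tensor to a normalized (B, H*W) map."""
    amap = feat.pow(2).mean(dim=1).flatten(1)   # (B, H*W)
    return F.normalize(amap, dim=1)

def attention_cosine_loss(student_feat, teacher_feat):
    s = attention_map(student_feat)
    t = attention_map(teacher_feat)
    # 1 - cosine similarity per sample, averaged over the batch.
    return (1.0 - F.cosine_similarity(s, t, dim=1)).mean()
```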

Distilling Diverse Knowledge for Deep Ensemble Learning

open access: yes, IEEE Access
Bidirectional knowledge distillation improves network performance by sharing knowledge between networks during the training of multiple networks. Additionally, performance is further improved by using an ensemble of multiple networks during inference ...
Naoki Okamoto   +3 more
doaj   +1 more source
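A minimal sketch of bidirectional (mutual) distillation between two peer networks is shown below, in the spirit of deep mutual learning: each network learns from the other's detached soft predictions during joint training. The diversity-promoting component this paper adds for ensembling is omitted, and all names are illustrative.

```python
# Hedged sketch: one training step of bidirectional distillation between two peers.
import torch
import torch.nn.functional as F

def mutual_kd_step(net_a, net_b, opt_a, opt_b, x, labels, alpha=1.0):
    logits_a, logits_b = net_a(x), net_b(x)
    # Each network mimics the other's (detached) predictive distribution.
    kl_ab = F.kl_div(F.log_softmax(logits_a, dim=1),
                     F.softmax(logits_b, dim=1).detach(), reduction="batchmean")
    kl_ba = F.kl_div(F.log_softmax(logits_b, dim=1),
                     F.softmax(logits_a, dim=1).detach(), reduction="batchmean")
    loss_a = F.cross_entropy(logits_a, labels) + alpha * kl_ab
    loss_b = F.cross_entropy(logits_b, labels) + alpha * kl_ba
    opt_a.zero_grad(); opt_b.zero_grad()
    (loss_a + loss_b).backward()  # cross terms are detached, so grads stay separate
    opt_a.step(); opt_b.step()
    return loss_a.item(), loss_b.item()
```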

Triplet Knowledge Distillation

open access: yes, 2023
In Knowledge Distillation, the teacher is generally much larger than the student, which can make the teacher's solution difficult for the student to learn. To ease the mimicking difficulty, we introduce a triplet knowledge distillation mechanism named TriKD. Besides the teacher and student, TriKD employs a third role called the anchor model.
Wang, Xijun   +5 more
openaire   +2 more sources
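The abstract only names the three roles, so the sketch below is one plausible reading rather than TriKD's actual mechanism: the student distills from both the large teacher and a smaller anchor whose function is assumed easier to mimic. How the paper actually couples the three models may differ.

```python
# Hedged sketch: a triplet scheme where the student mimics teacher and anchor.
import torch
import torch.nn.functional as F

def triplet_kd_loss(student_logits, teacher_logits, anchor_logits,
                    labels, T=4.0, w_teacher=0.5, w_anchor=0.5):
    def soft_kl(s, t):
        # Temperature-softened KL, rescaled by T**2 as in standard KD.
        return F.kl_div(F.log_softmax(s / T, dim=1),
                        F.softmax(t / T, dim=1),
                        reduction="batchmean") * T * T
    return (F.cross_entropy(student_logits, labels)
            + w_teacher * soft_kl(student_logits, teacher_logits)
            + w_anchor * soft_kl(student_logits, anchor_logits))
```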
