A Deep Hierarchical Approach to Lifelong Learning in Minecraft
We propose a lifelong learning system that can reuse and transfer knowledge from one task to another while efficiently retaining the previously learned knowledge base.
Givony, Shahar +4 more
core +1 more source
Be Your Own Teacher: Improve the Performance of Convolutional Neural Networks via Self Distillation
Convolutional neural networks have been widely deployed in various application scenarios. To extend their applicability to accuracy-critical domains, researchers have been investigating approaches to boost accuracy through either ...
Bao, Chenglong +5 more
core +1 more source
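The abstract above describes self-distillation within a single network. A minimal sketch of that idea, assuming a PyTorch-style setup where the deepest classifier teaches shallower auxiliary heads (the temperature, weights, and head structure are illustrative, not taken from the paper):

```python
import torch
import torch.nn.functional as F

def self_distillation_loss(shallow_logits, deep_logits, labels, T=3.0, alpha=0.5):
    """shallow_logits: list of logits from auxiliary heads attached to earlier blocks.
    deep_logits: logits from the final (deepest) classifier."""
    # Standard cross-entropy for the deepest head.
    loss = F.cross_entropy(deep_logits, labels)
    soft_teacher = F.softmax(deep_logits.detach() / T, dim=1)
    for logits in shallow_logits:
        # Each shallow head learns both from the labels and from the deepest head.
        ce = F.cross_entropy(logits, labels)
        kd = F.kl_div(F.log_softmax(logits / T, dim=1), soft_teacher,
                      reduction="batchmean") * T * T
        loss = loss + alpha * ce + (1 - alpha) * kd
    return loss

# Toy usage with random tensors.
labels = torch.randint(0, 10, (8,))
deep = torch.randn(8, 10)
shallow = [torch.randn(8, 10), torch.randn(8, 10)]
print(self_distillation_loss(shallow, deep, labels))
```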
Time-dimensional self-distillation seeks to transfer knowledge from earlier historical models to subsequent ones with minimal computational overhead.
Yingchao Wang, Wenqi Niu, Hanpo Hou
doaj +1 more source
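A hedged sketch of distilling from a "historical" model: a frozen snapshot of the network from an earlier training step serves as the teacher for the current model. The snapshot schedule, loss weight, and toy model are assumptions, not details from the paper:

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 10))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
snapshot = copy.deepcopy(model).eval()   # "historical" model, refreshed periodically
T, beta = 2.0, 0.3

for step in range(100):
    x = torch.randn(32, 20)
    y = torch.randint(0, 10, (32,))
    logits = model(x)
    with torch.no_grad():
        past_logits = snapshot(x)        # predictions of the earlier model
    kd = F.kl_div(F.log_softmax(logits / T, dim=1),
                  F.softmax(past_logits / T, dim=1),
                  reduction="batchmean") * T * T
    loss = F.cross_entropy(logits, y) + beta * kd
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    if (step + 1) % 25 == 0:             # periodically refresh the historical teacher
        snapshot = copy.deepcopy(model).eval()
```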
Knowledge distillation is an effective approach to compressing deep learning models. However, current distillation methods are relatively monotonous, and there are still few studies on combining distillation strategies that use multiple types of ...
Ziyi Chen +5 more
doaj +1 more source
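One common way to combine multiple distillation strategies is to add a response-based term (softened logits) and a feature-based term (intermediate representations) into a single objective. A sketch under that assumption, with illustrative layer choices and weights:

```python
import torch
import torch.nn.functional as F

def combined_kd_loss(s_logits, t_logits, s_feat, t_feat, labels,
                     T=4.0, w_logit=0.7, w_feat=0.3):
    ce = F.cross_entropy(s_logits, labels)
    # Response-based term: match the teacher's softened logits.
    kd = F.kl_div(F.log_softmax(s_logits / T, dim=1),
                  F.softmax(t_logits / T, dim=1),
                  reduction="batchmean") * T * T
    # Feature-based term: match an intermediate representation (hint).
    feat = F.mse_loss(s_feat, t_feat)
    return ce + w_logit * kd + w_feat * feat

labels = torch.randint(0, 10, (8,))
loss = combined_kd_loss(torch.randn(8, 10), torch.randn(8, 10),
                        torch.randn(8, 64), torch.randn(8, 64), labels)
print(loss)
```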
Knowledge Transfer via Distillation of Activation Boundaries Formed by Hidden Neurons
An activation boundary for a neuron refers to a separating hyperplane that determines whether the neuron is activated or deactivated. It has been long considered in neural networks that the activations of neurons, rather than their exact output values ...
Choi, Jin Young +3 more
core +1 more source
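The abstract above frames a neuron's activation boundary as a separating hyperplane. A hedged sketch of transferring that boundary: push the student to reproduce the teacher's on/off pattern rather than its exact values, via a hinge-style penalty on pre-activations. The margin and squared penalty are assumptions, not the paper's exact formulation:

```python
import torch

def activation_boundary_loss(student_pre, teacher_pre, margin=1.0):
    """student_pre, teacher_pre: pre-activation tensors of matching shape."""
    # 1 where the teacher neuron is active, 0 where it is inactive.
    teacher_on = (teacher_pre > 0).float()
    # Active teacher neurons want student_pre > +margin,
    # inactive ones want student_pre < -margin.
    loss_on = teacher_on * torch.clamp(margin - student_pre, min=0) ** 2
    loss_off = (1 - teacher_on) * torch.clamp(margin + student_pre, min=0) ** 2
    return (loss_on + loss_off).mean()

print(activation_boundary_loss(torch.randn(8, 128), torch.randn(8, 128)))
```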
MKD: Mixup-Based Knowledge Distillation for Mandarin End-to-End Speech Recognition
Large-scale automatic speech recognition models have achieved impressive performance. However, huge computational resources and a massive amount of data are required to train an ASR model.
Xing Wu +4 more
doaj +1 more source
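A sketch of the mixup-plus-distillation idea: mix pairs of inputs, then train the student to match the teacher's predictions on the mixed inputs. A simple linear classifier stands in for the ASR models here; the Beta parameters, temperature, and model names are assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

teacher = nn.Linear(40, 10).eval()       # stand-in for a large trained model
student = nn.Linear(40, 10)
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

x = torch.randn(32, 40)
lam = torch.distributions.Beta(0.4, 0.4).sample().item()
x_mixed = lam * x + (1 - lam) * x[torch.randperm(x.size(0))]   # mixup of input pairs

with torch.no_grad():
    t_logits = teacher(x_mixed)
s_logits = student(x_mixed)
T = 2.0
loss = F.kl_div(F.log_softmax(s_logits / T, dim=1),
                F.softmax(t_logits / T, dim=1),
                reduction="batchmean") * T * T
opt.zero_grad(); loss.backward(); opt.step()
```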
Knowledge Distillation with Adversarial Samples Supporting Decision Boundary
Many recent works on knowledge distillation have provided ways to transfer the knowledge of a trained network for improving the learning process of a new one, but finding a good technique for knowledge distillation is still an open problem. In this paper, ...
Choi, Jin Young +3 more
core +1 more source
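A hedged sketch of generating samples that support the decision boundary: take an FGSM-style step toward the teacher's runner-up class to move inputs closer to the boundary, then distill the teacher's behaviour on those samples. The step size, target choice, and toy models are illustrative assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

teacher = nn.Linear(20, 5).eval()
student = nn.Linear(20, 5)

x = torch.randn(16, 20, requires_grad=True)
t_logits = teacher(x)
runner_up = t_logits.topk(2, dim=1).indices[:, 1]        # second most likely class
# Raise the runner-up class score to push samples toward the boundary.
loss_to_boundary = -F.cross_entropy(t_logits, runner_up)
loss_to_boundary.backward()
x_bs = (x + 0.05 * x.grad.sign()).detach()               # boundary-supporting samples

# Distill the teacher's behaviour on the boundary samples.
with torch.no_grad():
    t_soft = F.softmax(teacher(x_bs) / 2.0, dim=1)
kd = F.kl_div(F.log_softmax(student(x_bs) / 2.0, dim=1), t_soft,
              reduction="batchmean")
```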
NeuRes: Highly Activated Neurons Responses Transfer via Distilling Sparse Activation Maps
In recent years, knowledge distillation has attracted significant interest for mobile, edge, and IoT devices due to its ability to transfer knowledge from a large, complex teacher to a lightweight student network.
Sharmen Akhter +3 more
doaj +1 more source
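A hedged sketch of transferring "highly activated" responses: keep only the most strongly activated channels of the teacher's feature map (a sparse activation map) and have the student match those. The top-k ratio and MSE loss are assumptions, not the paper's exact method:

```python
import torch
import torch.nn.functional as F

def sparse_activation_loss(s_feat, t_feat, keep_ratio=0.25):
    """s_feat, t_feat: feature maps of shape (batch, channels, H, W)."""
    b, c = t_feat.shape[:2]
    k = max(1, int(c * keep_ratio))
    # Rank teacher channels by mean activation strength and keep the top-k.
    strength = t_feat.abs().mean(dim=(2, 3))             # (batch, channels)
    top_idx = strength.topk(k, dim=1).indices            # (batch, k)
    mask = torch.zeros(b, c, device=t_feat.device)
    mask.scatter_(1, top_idx, 1.0)
    mask = mask[:, :, None, None]                        # broadcast over H, W
    # Match only the selected (highly activated) teacher channels.
    return F.mse_loss(s_feat * mask, t_feat * mask)

print(sparse_activation_loss(torch.randn(4, 16, 8, 8), torch.randn(4, 16, 8, 8)))
```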
Progressive Label Distillation: Learning Input-Efficient Deep Neural Networks [PDF]
Much of the focus in the area of knowledge distillation has been on distilling knowledge from a larger teacher network to a smaller student network. However, there has been little research on how the concept of distillation can be leveraged to distill ...
Lin, Zhong Qiu, Wong, Alexander
core +2 more sources
CRISPRi‐Mediated Gene Silencing and Phenotypic Exploration in Nontuberculous Mycobacteria
In this Research Protocol, we describe approaches to control, monitor, and quantitatively assess CRISPRi‐mediated gene silencing in M. smegmatis and M. abscessus model organisms.
Vanessa Point +7 more
wiley +1 more source