Results 21 to 30 of about 219,795

Making Monolingual Sentence Embeddings Multilingual Using Knowledge Distillation [PDF]

open access: yesConference on Empirical Methods in Natural Language Processing, 2020
We present an easy and efficient method to extend existing sentence embedding models to new languages. This makes it possible to create multilingual versions of previously monolingual models.
Nils Reimers, Iryna Gurevych
semanticscholar   +1 more source
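
The method behind this entry pairs a fixed monolingual teacher with a multilingual student trained on parallel sentences. A minimal sketch of that objective, assuming `teacher` and `student` are sentence encoders mapping lists of strings to (batch, dim) tensors (illustrative names, not the authors' code):

```python
# Sketch: the student is pulled toward the teacher's embedding on the source
# sentence, and the translation is pulled toward that same vector, so both
# languages land in the teacher's embedding space.
import torch
import torch.nn.functional as F

def distillation_loss(teacher, student, src_sentences, tgt_sentences):
    """Parallel data: tgt_sentences[i] is a translation of src_sentences[i]."""
    with torch.no_grad():
        target = teacher(src_sentences)                     # fixed monolingual teacher
    loss_src = F.mse_loss(student(src_sentences), target)   # mimic teacher on source
    loss_tgt = F.mse_loss(student(tgt_sentences), target)   # map translation to same vector
    return loss_src + loss_tgt
```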

The Advances in the Special Microwave Effects of the Heterogeneous Catalytic Reactions

open access: yesFrontiers in Chemistry, 2020
At present, the microwave field is widely used in chemical processes as an important means of process intensification. Heterogeneous catalysts coupled with microwaves have been shown to offer many advantages, such as high catalytic performance and ...
Hong Li   +5 more
doaj   +1 more source

Relational Knowledge Distillation [PDF]

open access: yesComputer Vision and Pattern Recognition, 2019
Knowledge distillation aims at transferring knowledge acquired in one model (a teacher) to another model (a student) that is typically smaller. Previous approaches can be expressed as a form of training the student to mimic output activations of ...
Wonpyo Park   +3 more
semanticscholar   +1 more source
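
To illustrate the relational idea named in this abstract, here is a hedged sketch of a distance-wise relational loss: rather than matching individual activations, the student matches the teacher's pairwise-distance structure over a batch. Tensor and function names are assumptions, not the paper's code.

```python
# Sketch: compare normalized pairwise-distance matrices of teacher and student
# embeddings with a Huber loss, transferring relations instead of activations.
import torch
import torch.nn.functional as F

def pairwise_distances(x):                   # x: (batch, dim) embeddings
    d = torch.cdist(x, x, p=2)               # (batch, batch) Euclidean distances
    mean = d[d > 0].mean()                   # normalize by mean non-zero distance
    return d / mean

def rkd_distance_loss(teacher_emb, student_emb):
    t = pairwise_distances(teacher_emb.detach())
    s = pairwise_distances(student_emb)
    return F.smooth_l1_loss(s, t)            # Huber loss over relational structure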

Fine-tuning Global Model via Data-Free Knowledge Distillation for Non-IID Federated Learning [PDF]

open access: yesComputer Vision and Pattern Recognition, 2022
Federated Learning (FL) is an emerging distributed learning paradigm under privacy constraints. Data heterogeneity is one of the main challenges in FL, which results in slow convergence and degraded performance.
Lin Zhang   +4 more
semanticscholar   +1 more source
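
As a rough sketch of the data-free fine-tuning idea in the title (under assumed names, not the paper's implementation): a generator synthesizes pseudo-samples from noise, and the aggregated global model is distilled toward the ensemble of client models on those samples, so no real client data is needed on the server.

```python
# Sketch: one server-side fine-tuning step. Only the global model is updated;
# synthetic inputs are detached so the generator receives no gradients here.
import torch
import torch.nn.functional as F

def finetune_step(generator, global_model, client_models, opt, z_dim=100, batch=64):
    z = torch.randn(batch, z_dim)
    x = generator(z).detach()                          # synthetic inputs, no real data
    with torch.no_grad():                              # client ensemble acts as teacher
        teacher_logits = torch.stack([m(x) for m in client_models]).mean(0)
    student_logits = global_model(x)
    loss = F.kl_div(F.log_softmax(student_logits, -1),
                    F.softmax(teacher_logits, -1), reduction="batchmean")
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```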

Distilling Nonlocality [PDF]

open access: yesPhysical Review Letters, 2009
Revised abstract, introduction and conclusion.
Manuel Forster   +2 more
openaire   +3 more sources

TinyViT: Fast Pretraining Distillation for Small Vision Transformers [PDF]

open access: yesEuropean Conference on Computer Vision, 2022
Vision transformers (ViTs) have recently drawn great attention in computer vision due to their remarkable model capability. However, most prevailing ViT models suffer from huge numbers of parameters, restricting their applicability on devices with limited ...
Kan Wu   +6 more
semanticscholar   +1 more source

Symbolic Chain-of-Thought Distillation: Small Models Can Also “Think” Step-by-Step [PDF]

open access: yesAnnual Meeting of the Association for Computational Linguistics, 2023
Chain-of-thought prompting (e.g., “Let’s think step-by-step”) primes large language models to verbalize rationales for their predictions. While chain-of-thought can lead to dramatic performance gains, benefits appear to emerge only for sufficiently ...
Liunian Harold Li   +5 more
semanticscholar   +1 more source
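
An illustrative sketch of the distillation recipe the title suggests: sample rationales from a large teacher with a chain-of-thought prompt, then fine-tune a small student on the resulting text. `teacher_generate` is a placeholder for any text-generation call, not a real API.

```python
# Sketch: build a (question, rationale) corpus from the teacher's sampled
# chains of thought, to be used as plain language-modeling targets for the
# small student model.
COT_PROMPT = "Q: {question}\nA: Let's think step by step."

def build_distillation_corpus(questions, teacher_generate):
    corpus = []
    for q in questions:
        rationale = teacher_generate(COT_PROMPT.format(question=q))  # sampled CoT
        corpus.append(f"Q: {q}\nA: Let's think step by step. {rationale}")
    return corpus
```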

Focal and Global Knowledge Distillation for Detectors [PDF]

open access: yesComputer Vision and Pattern Recognition, 2021
Knowledge distillation has been applied successfully to image classification. However, object detection is much more sophisticated, and most knowledge distillation methods have failed on it.
Zhendong Yang   +6 more
semanticscholar   +1 more source

Improving the Quality of Environmentally Friendly Liquid Smoke Distillation Process to Increase Artisan Income

open access: yesWarta LPM, 2023
The problems the Smoke-liquid Craftsman Group often faces are low production capacity, product quality, and low selling prices. The low production capacity is due to the limited capability of the distillation machine.
Sukamta Sukamta   +4 more
doaj   +1 more source

On Distillation of Guided Diffusion Models [PDF]

open access: yesComputer Vision and Pattern Recognition, 2022
Classifier-free guided diffusion models have recently been shown to be highly effective at high-resolution image generation, and they have been widely used in large-scale diffusion frameworks including DALL·E 2, Stable Diffusion and Imagen.
Chenlin Meng   +5 more
semanticscholar   +1 more source
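
For context on what is being distilled here: classifier-free guidance runs the denoiser twice per sampling step and combines the two predictions, and the distilled student learns to match that guided output in a single pass. A minimal sketch with placeholder names (`eps_model`, `student`), not the authors' code:

```python
# Sketch: the teacher target is the guided noise prediction (two forward
# passes); the student is conditioned on the guidance weight w and trained
# to reproduce it in one pass.
import torch

def guided_eps(eps_model, x_t, t, cond, w):
    """Teacher target: two forward passes combined with guidance weight w."""
    eps_cond = eps_model(x_t, t, cond)
    eps_uncond = eps_model(x_t, t, None)           # null-conditioned pass
    return eps_uncond + w * (eps_cond - eps_uncond)

def distill_loss(student, eps_model, x_t, t, cond, w):
    target = guided_eps(eps_model, x_t, t, cond, w).detach()
    return torch.mean((student(x_t, t, cond, w) - target) ** 2)
```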
