Results 51 to 60 of about 4,930,132
LGD: Label-Guided Self-Distillation for Object Detection
In this paper, we propose the first self-distillation framework for general object detection, termed LGD (Label-Guided self-Distillation). Previous studies rely on a strong pretrained teacher to provide instructive knowledge that could be unavailable in real-world scenarios.
Zhang, Peizhen +5 more
openaire +2 more sources
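The LGD entry above describes distilling from label-derived "instructive knowledge" instead of a pretrained teacher. A minimal PyTorch sketch of that idea follows; the label encoder, feature sizes, pooling, and loss are illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LabelGuidedDistiller(nn.Module):
    """Illustrative label-guided self-distillation: instructive features are
    generated from ground-truth labels rather than a pretrained teacher."""

    def __init__(self, num_classes: int = 80, feat_dim: int = 256):
        super().__init__()
        # Hypothetical label encoder: embeds (class id, box) pairs as tokens.
        self.class_embed = nn.Embedding(num_classes, feat_dim)
        self.box_proj = nn.Linear(4, feat_dim)
        self.mixer = nn.TransformerEncoderLayer(feat_dim, nhead=8, batch_first=True)

    def forward(self, labels: torch.Tensor, boxes: torch.Tensor) -> torch.Tensor:
        # labels: (B, N) class ids; boxes: (B, N, 4) normalized coordinates.
        tokens = self.class_embed(labels) + self.box_proj(boxes)
        tokens = self.mixer(tokens)   # label tokens attend to each other
        return tokens.mean(dim=1)     # (B, feat_dim) "instructive" feature

def distillation_loss(student_feat, label_feat):
    # Pull pooled student detector features toward label-derived features.
    return F.mse_loss(F.normalize(student_feat, dim=-1),
                      F.normalize(label_feat, dim=-1).detach())

# Toy usage: 2 images, 5 ground-truth objects each, 256-d pooled student feature.
distiller = LabelGuidedDistiller()
labels = torch.randint(0, 80, (2, 5))
boxes = torch.rand(2, 5, 4)
student_feat = torch.randn(2, 256, requires_grad=True)  # stand-in detector feature
loss = distillation_loss(student_feat, distiller(labels, boxes))
loss.backward()
```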
A Mixed-Scale Self-Distillation Network for Accurate Ship Detection in SAR Images
Ship detection in synthetic aperture radar (SAR) images has attracted extensive attention due to its promising applications. While numerous methods for ship detection have been proposed, detecting ships in complex scenarios remains challenging.
Shuang Liu +6 more
doaj +1 more source
Global-Local Self-Distillation for Visual Representation Learning
The downstream accuracy of self-supervised methods is tightly linked to the proxy task solved during training and the quality of the gradients extracted from it. Richer and more meaningful gradient updates are key to allowing self-supervised methods to learn better and more efficiently.
Lebailly, Tim, Tuytelaars, Tinne
openaire +2 more sources
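A DINO-style sketch of the global-local self-distillation pattern the entry above fits into: a momentum (EMA) teacher produces soft targets from global crops, and the student matches them from both global and local crops. The backbone, crop handling, and temperatures are assumptions for illustration.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

# Tiny backbone stand-in; any vision encoder would do here.
student = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128))
teacher = copy.deepcopy(student)
for p in teacher.parameters():
    p.requires_grad_(False)  # teacher is updated by EMA only, never backprop

def ema_update(teacher, student, m: float = 0.996):
    for pt, ps in zip(teacher.parameters(), student.parameters()):
        pt.mul_(m).add_(ps.detach(), alpha=1 - m)

def distill(global_views, local_views, t_temp=0.04, s_temp=0.1):
    # Teacher targets come from global views only; the student matches them
    # from all views (real implementations skip same-view teacher/student pairs).
    with torch.no_grad():
        targets = [F.softmax(teacher(g) / t_temp, dim=-1) for g in global_views]
    loss, n = 0.0, 0
    for t in targets:
        for v in global_views + local_views:
            logp = F.log_softmax(student(v) / s_temp, dim=-1)
            loss = loss + (-(t * logp).sum(-1)).mean()
            n += 1
    return loss / n

# Toy usage: local crops resized to the same resolution for simplicity.
g = [torch.randn(4, 3, 32, 32) for _ in range(2)]
l = [torch.randn(4, 3, 32, 32) for _ in range(4)]
distill(g, l).backward()
ema_update(teacher, student)
```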
Nemesis: Neural Mean Teacher Learning-Based Emotion-Centric Speaker
Image captioning is the multi-modal task of automatically describing a digital image based on its contents and their semantic relationship. This research area has gained increasing popularity over the past few years; however, most of the previous studies ...
Aryan Yousefi, Kalpdrum Passi
doaj +1 more source
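The title above invokes mean teacher learning. As a general recipe, the mean teacher is an exponential moving average (EMA) of the student, and a consistency loss keeps the student's predictions close to the teacher's; the sketch below assumes a toy classification head rather than the paper's captioning model.

```python
import copy
import torch
import torch.nn.functional as F
from torch import nn

model = nn.Linear(16, 4)            # student (stand-in for a full captioner)
mean_teacher = copy.deepcopy(model)
for p in mean_teacher.parameters():
    p.requires_grad_(False)

@torch.no_grad()
def update_mean_teacher(alpha: float = 0.99):
    # Teacher weights are an exponential moving average of student weights.
    for pt, ps in zip(mean_teacher.parameters(), model.parameters()):
        pt.mul_(alpha).add_(ps, alpha=1 - alpha)

x = torch.randn(8, 16)
# Consistency loss: the student tracks the smoother teacher's predictions.
student_logp = F.log_softmax(model(x), dim=-1)
with torch.no_grad():
    teacher_p = F.softmax(mean_teacher(x), dim=-1)
loss = F.kl_div(student_logp, teacher_p, reduction="batchmean")
loss.backward()
update_mean_teacher()
```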
Domain-Agnostic Clustering with Self-Distillation
NeurIPS 2021 Workshop: Self-Supervised Learning - Theory and ...
Adnan, Mohammed +3 more
openaire +2 more sources
Iterative Graph Self-Distillation
Recently, there has been increasing interest in the challenge of how to discriminatively vectorize graphs. To address this, we propose a method called Iterative Graph Self-Distillation (IGSD) which learns graph-level representation in an unsupervised manner through instance discrimination using a self-supervised contrastive learning approach.
Hanlin Zhang +6 more
openaire +3 more sources
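A hedged sketch of the instance-discrimination objective the IGSD entry describes: embeddings of the same graph from a student and a teacher encoder form a positive pair, with the rest of the batch acting as negatives (an InfoNCE-style contrastive loss). The embeddings below are stand-ins, not the output of the paper's GNN.

```python
import torch
import torch.nn.functional as F

def info_nce(student_emb, teacher_emb, temperature: float = 0.2):
    """InfoNCE over a batch of graph embeddings: row i of the student should
    match row i of the teacher; all other rows serve as negatives."""
    s = F.normalize(student_emb, dim=-1)
    t = F.normalize(teacher_emb, dim=-1)
    logits = s @ t.t() / temperature      # (B, B) similarity matrix
    labels = torch.arange(s.size(0))      # positives lie on the diagonal
    return F.cross_entropy(logits, labels)

# Toy usage: 16 graphs embedded to 64-d by student and (EMA) teacher encoders.
student_emb = torch.randn(16, 64, requires_grad=True)
teacher_emb = torch.randn(16, 64)         # would come from a momentum teacher
loss = info_nce(student_emb, teacher_emb.detach())
loss.backward()
```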
MISSU: 3D Medical Image Segmentation via Self-Distilling TransUNet
U-Nets have achieved tremendous success in medical image segmentation. Nevertheless, they may suffer from limitations in global (long-range) contextual interactions and edge-detail preservation. In contrast, the Transformer has an excellent ability to capture long-range dependencies by leveraging the self-attention mechanism in the encoder. Although the Transformer ...
Nan Wang +6 more
openaire +3 more sources
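The entry above contrasts the U-Net's local processing with the Transformer's long-range self-attention. A minimal sketch of the hybrid pattern follows, with CNN features flattened into tokens for a Transformer encoder; all shapes and layer sizes are illustrative, not MISSU's actual configuration.

```python
import torch
from torch import nn

class TinyHybridEncoder(nn.Module):
    """CNN stem for local detail, Transformer for long-range context."""

    def __init__(self, channels: int = 64):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(1, channels, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, stride=2, padding=1), nn.ReLU(),
        )
        layer = nn.TransformerEncoderLayer(channels, nhead=4, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, x):
        feat = self.stem(x)                       # (B, C, H/4, W/4) local features
        b, c, h, w = feat.shape
        tokens = feat.flatten(2).transpose(1, 2)  # (B, H*W/16, C) token sequence
        tokens = self.transformer(tokens)         # global self-attention mixing
        return tokens.transpose(1, 2).reshape(b, c, h, w)

out = TinyHybridEncoder()(torch.randn(2, 1, 64, 64))
print(out.shape)                                  # torch.Size([2, 64, 16, 16])
```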
MDCS: More Diverse Experts with Consistency Self-distillation for Long-tailed Recognition [PDF]
Recently, multi-expert methods have led to significant improvements in long-tail recognition (LTR). We summarize two aspects that need further enhancement to contribute to LTR boosting: (1) More diverse experts; (2) Lower model variance.
Hu, Wei +4 more
core
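A sketch of consistency self-distillation between experts, in the spirit of the entry's second point (lower model variance): one expert's prediction on a weakly augmented view serves as a soft target for another expert on a strongly augmented view. The expert heads, views, and temperature below are assumptions.

```python
import torch
import torch.nn.functional as F
from torch import nn

# Two hypothetical expert heads over a shared feature space.
expert_a = nn.Linear(32, 10)
expert_b = nn.Linear(32, 10)

def consistency_self_distillation(feat_weak, feat_strong, temperature=2.0):
    # Expert A on the weak view provides a soft target (no gradient);
    # expert B on the strong view is regularized toward it via KL divergence.
    with torch.no_grad():
        target = F.softmax(expert_a(feat_weak) / temperature, dim=-1)
    logp = F.log_softmax(expert_b(feat_strong) / temperature, dim=-1)
    return F.kl_div(logp, target, reduction="batchmean")

feat_weak = torch.randn(8, 32)     # features of weakly augmented images
feat_strong = torch.randn(8, 32)   # features of strongly augmented images
loss = consistency_self_distillation(feat_weak, feat_strong)
loss.backward()
```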
MaskCLIP: Masked Self-Distillation Advances Contrastive Language-Image Pretraining
This paper presents a simple yet effective framework MaskCLIP, which incorporates a newly proposed masked self-distillation into contrastive language-image pretraining.
Bao, Jianmin +11 more
core
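A rough sketch combining the two objectives the MaskCLIP entry names: a CLIP-style image-text contrastive loss plus a masked self-distillation term in which a student encoding a masked image matches an EMA teacher encoding the intact image. The encoders, mask ratio, and dimensions are assumptions, not the paper's configuration.

```python
import copy
import torch
import torch.nn.functional as F
from torch import nn

image_enc = nn.Linear(256, 128)     # stand-ins for the vision / text towers
text_enc = nn.Linear(64, 128)
teacher = copy.deepcopy(image_enc)
for p in teacher.parameters():
    p.requires_grad_(False)         # updated by EMA, not by backprop

def maskclip_losses(img_tokens, txt, mask_ratio: float = 0.5):
    # Contrastive term: pooled image vs. text embeddings, CLIP-style.
    img = F.normalize(image_enc(img_tokens).mean(1), dim=-1)
    t = F.normalize(text_enc(txt), dim=-1)
    logits = img @ t.t() / 0.07
    labels = torch.arange(img.size(0))
    contrastive = (F.cross_entropy(logits, labels)
                   + F.cross_entropy(logits.t(), labels)) / 2

    # Masked self-distillation: zero out a random subset of patch tokens for
    # the student and match the teacher's features on the intact image.
    mask = torch.rand(img_tokens.shape[:2]) < mask_ratio
    masked = img_tokens.masked_fill(mask.unsqueeze(-1), 0.0)
    with torch.no_grad():
        target = teacher(img_tokens)
    distill = F.mse_loss(image_enc(masked)[mask], target[mask])
    return contrastive + distill

loss = maskclip_losses(torch.randn(4, 49, 256), torch.randn(4, 64))
loss.backward()
```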
Rethinking plastic waste: innovations in enzymatic breakdown of oil‐based polyesters and bioplastics
Plastic pollution remains a critical environmental challenge, and current mechanical and chemical recycling methods are insufficient to achieve a fully circular economy. This review highlights recent breakthroughs in the enzymatic depolymerization of both oil‐derived polyesters and bioplastics, including high‐throughput protein engineering, de novo ...
Elena Rosini +2 more
wiley +1 more source