A Mutually Auxiliary Multitask Model With Self-Distillation for Emotion-Cause Pair Extraction
Emotion-cause pair extraction (ECPE), which aims to extract emotions and their corresponding causes from documents, has a wide range of applications in online public-opinion analysis.
Jiaxin Yu et al.
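To make the ECPE task concrete, here is a minimal illustration, with a hypothetical example not drawn from the paper: a document split into clauses, and the extracted (emotion, cause) index pairs the task asks for.

```python
# A hypothetical document split into clauses (indices are 1-based here).
document = {
    1: "I failed the exam,",
    2: "so I felt really sad,",
    3: "and then I went straight home.",
}
# ECPE output: (emotion clause, cause clause) index pairs.
emotion_cause_pairs = [(2, 1)]   # clause 2 expresses sadness caused by clause 1
```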
Domain-Agnostic Clustering with Self-Distillation
NeurIPS 2021 Workshop: Self-Supervised Learning - Theory and ...
Mohammed Adnan et al.
A Mixed-Scale Self-Distillation Network for Accurate Ship Detection in SAR Images
Ship detection in synthetic aperture radar (SAR) images has attracted extensive attention due to its promising applications. While numerous methods for ship detection have been proposed, detecting ships in complex scenarios remains challenging.
Shuang Liu et al.
MDCS: More Diverse Experts with Consistency Self-distillation for Long-tailed Recognition
Recently, multi-expert methods have led to significant improvements in long-tailed recognition (LTR). We identify two aspects that can be further improved to boost LTR performance: (1) more diverse experts; (2) lower model variance.
Wei Hu et al.
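For readers unfamiliar with the consistency self-distillation named in the MDCS entry above, here is a minimal, generic sketch in PyTorch; the temperature and the weak/strong augmented-view setup are illustrative assumptions, not the authors' exact recipe.

```python
import torch
import torch.nn.functional as F

def consistency_self_distillation(logits_weak, logits_strong, temperature=2.0):
    """KL(teacher || student); the weak view serves as the teacher signal."""
    teacher = F.softmax(logits_weak.detach() / temperature, dim=-1)
    student = F.log_softmax(logits_strong / temperature, dim=-1)
    # T^2 scaling keeps gradient magnitudes comparable across temperatures.
    return F.kl_div(student, teacher, reduction="batchmean") * temperature ** 2

# Toy usage: one expert's logits for weak/strong views of an 8-example batch.
weak, strong = torch.randn(8, 10), torch.randn(8, 10)
print(consistency_self_distillation(weak, strong))
```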
Iterative Graph Self-Distillation
Recently, there has been increasing interest in learning discriminative vector representations of graphs. To address this challenge, we propose Iterative Graph Self-Distillation (IGSD), which learns graph-level representations in an unsupervised manner through instance discrimination, using a self-supervised contrastive learning approach.
Hanlin Zhang et al.
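The instance discrimination mentioned in the IGSD entry above is typically realized as an InfoNCE-style contrastive loss between student and teacher embeddings of the same graph. A generic sketch, assuming graph-level embeddings have already been produced by some encoder; IGSD's actual GNN, augmentations, and teacher update are not reproduced here.

```python
import torch
import torch.nn.functional as F

def info_nce(student_z, teacher_z, tau=0.1):
    """Instance discrimination: each graph's positive is its own teacher view."""
    s = F.normalize(student_z, dim=-1)
    t = F.normalize(teacher_z.detach(), dim=-1)   # teacher provides targets only
    logits = s @ t.T / tau                        # (B, B) cosine similarities
    labels = torch.arange(s.size(0))              # positives on the diagonal
    return F.cross_entropy(logits, labels)

# Toy usage with stand-in graph-level embeddings for a batch of 16 graphs.
z_student, z_teacher = torch.randn(16, 128), torch.randn(16, 128)
print(info_nce(z_student, z_teacher))
```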
MISSU: 3D Medical Image Segmentation via Self-Distilling TransUNet
U-Nets have achieved tremendous success in medical image segmentation. Nevertheless, they may be limited in global (long-range) contextual interactions and edge-detail preservation. In contrast, the Transformer captures long-range dependencies well by leveraging the self-attention mechanism in its encoder. Although Transformer …
Nan Wang et al.
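The self-attention mechanism the MISSU entry credits with long-range dependency modeling can be sketched in a few lines. This is a generic single-head encoder block, not MISSU's architecture or its self-distillation of multi-scale features.

```python
import torch
import torch.nn as nn

class SelfAttention(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.qkv = nn.Linear(dim, 3 * dim)
        self.proj = nn.Linear(dim, dim)
        self.scale = dim ** -0.5

    def forward(self, x):                 # x: (B, N, D) token sequence
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        # Every token attends to every other token, regardless of distance.
        attn = (q @ k.transpose(-2, -1) * self.scale).softmax(dim=-1)
        return self.proj(attn @ v)

x = torch.randn(2, 196, 64)               # e.g. 14x14 patch tokens
print(SelfAttention(64)(x).shape)          # torch.Size([2, 196, 64])
```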
MaskCLIP: Masked Self-Distillation Advances Contrastive Language-Image Pretraining
This paper presents MaskCLIP, a simple yet effective framework that incorporates a newly proposed masked self-distillation into contrastive language-image pretraining.
Jianmin Bao et al.
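Masked self-distillation, as the term is generally used, means a teacher encodes the full input while the student encodes a masked version and regresses the teacher's features at the masked positions. A minimal sketch under those assumptions, with placeholder encoders; MaskCLIP additionally keeps the CLIP contrastive objective, which is omitted here.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

def masked_self_distillation(student, teacher, tokens, mask_ratio=0.5):
    B, N, _ = tokens.shape
    mask = torch.rand(B, N) < mask_ratio              # True = masked position
    masked = tokens.masked_fill(mask.unsqueeze(-1), 0.0)
    with torch.no_grad():
        target = teacher(tokens)                      # teacher sees the full input
    pred = student(masked)                            # student sees the masked input
    # Only masked positions contribute to the distillation loss.
    return F.smooth_l1_loss(pred[mask], target[mask])

# Toy usage: a linear layer stands in for the real image encoder.
student = nn.Linear(32, 32)
teacher = copy.deepcopy(student)                      # in practice, an EMA copy
print(masked_self_distillation(student, teacher, torch.randn(4, 49, 32)))
```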
Anchorage‐independent and faster growth in clonal population from UV‐irradiated NER‐deficient cells
UV‐irradiated cells expressing a DDB2 mutant protein unable to interact with PCNA (DDB2^PCNA−) form clones able to grow without anchorage. Different experimental approaches reveal heterogeneity in cell cycle regulation and drug response within these clones, emphasizing the crucial role of the DDB2‐PCNA interaction in preventing cellular transformation…
Paola Perucca et al.
Nemesis: Neural Mean Teacher Learning-Based Emotion-Centric Speaker
Image captioning is the multi-modal task of automatically describing a digital image based on its contents and their semantic relationships. This research area has gained increasing popularity over the past few years; however, most of the previous studies …
Aryan Yousefi, Kalpdrum Passi
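The mean-teacher pattern in the Nemesis title is well established: the teacher is an exponential moving average (EMA) of the student, and a consistency loss ties the two models' predictions. A minimal sketch, assuming a placeholder model in place of the actual captioning network.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

student = nn.Linear(16, 4)            # placeholder for the real model
teacher = copy.deepcopy(student)
for p in teacher.parameters():
    p.requires_grad_(False)           # the teacher is never trained directly

@torch.no_grad()
def ema_update(momentum=0.999):
    # Teacher weights track an exponential moving average of the student's.
    for pt, ps in zip(teacher.parameters(), student.parameters()):
        pt.mul_(momentum).add_(ps, alpha=1.0 - momentum)

x = torch.randn(8, 16)
consistency = F.mse_loss(student(x), teacher(x))   # consistency loss
consistency.backward()                             # update the student...
ema_update()                                       # ...then refresh the teacher
```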
Spatial Self-Distillation for Object Detection with Inaccurate Bounding Boxes
Object detection supervised by inaccurate bounding boxes has attracted broad interest, owing to the expense of high-quality annotation and the occasional inevitability of low annotation quality (e.g., tiny objects).
Pengfei Chen et al.

