Intra-class patch swap for self-distillation
Accepted for publication in ...
Hongjun Choi +3 more
openaire +2 more sources
Deep Contrastive Representation Learning With Self-Distillation
Recently, contrastive learning (CL) has emerged as a promising way of learning discriminative representations from time series data. In the representation hierarchy, semantic information extracted at lower levels forms the basis of that captured at higher levels ...
Zhiwen Xiao +7 more
semanticscholar +1 more source
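The contrastive objective this entry refers to can be sketched with a minimal InfoNCE-style loss: an anchor embedding is pulled toward its positive and pushed away from negatives via temperature-scaled cosine similarity. This is a generic sketch of contrastive learning, not the paper's specific method; the function name `info_nce` and the parameter values are illustrative.

```python
import numpy as np

def info_nce(anchor, positive, negatives, tau=0.1):
    """InfoNCE-style contrastive loss for one anchor embedding."""
    def cos(a, b):
        a, b = np.asarray(a, float), np.asarray(b, float)
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    # similarity of anchor to [positive, negative_1, ..., negative_k], scaled by tau
    sims = np.array([cos(anchor, positive)] + [cos(anchor, n) for n in negatives]) / tau
    sims -= sims.max()  # numerical stability before exponentiation
    p = np.exp(sims) / np.exp(sims).sum()
    # loss is low when the positive dominates the softmax
    return float(-np.log(p[0]))
```

A well-aligned positive yields a loss near zero, while a misaligned one yields a large loss, which is what drives the representation hierarchy the snippet describes.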
Robust Representation Learning with Self-Distillation for Domain Generalization
Domain generalization is a challenging problem in machine learning, where the goal is to train a model that can generalize well to unseen target domains without prior knowledge of these domains.
Senthilnath Jayavelu, Ankur Singh
core
A hydrogel–liquid metal composite peripheral nerve interface (HLB‐PNI) combines electrically durable electrodes and tissue‐adhesive hydrogel for tissue‐adaptive implantation. In nerve‐injured rats, it enables the diagnosis of sensory‐motor connectivity via stimulation and neural signal recording.
Yewon Kim +5 more
wiley +1 more source
Lying at the intersection of self-supervised learning (SSL) and knowledge distillation (KD), Self-supervised KD (SSKD) differs from classical KD frameworks by assuming the teacher model is pretrained without labels.
Taegoo Kang, Sung-Ho Bae, Chaoning Zhang
doaj +1 more source
Teaching Yourself: A Self-Knowledge Distillation Approach to Action Recognition
Knowledge distillation, which is a process of transferring complex knowledge learned by a heavy network, i.e., a teacher, to a lightweight network, i.e., a student, has emerged as an effective technique for compressing neural networks.
Duc-Quang Vu, Ngan Le, Jia-Ching Wang
doaj +1 more source
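The teacher-to-student knowledge transfer summarized above is commonly implemented as a KL divergence between temperature-softened output distributions. The following is a minimal generic sketch of that loss, not the exact formulation of any paper listed here; `kd_loss` and its defaults are illustrative.

```python
import numpy as np

def softmax(z, T=1.0):
    # temperature-scaled softmax over the last axis
    z = np.asarray(z, dtype=float) / T
    z -= z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kd_loss(student_logits, teacher_logits, T=2.0):
    # KL(teacher || student) on softened distributions, scaled by T^2
    # as in standard knowledge distillation
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return float((T * T) * np.sum(p * (np.log(p) - np.log(q))))
```

In self-distillation, teacher and student are the same architecture (or the same network at different training stages), so the loss is applied between a model and its own earlier or auxiliary predictions.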
Efficient Semantic Segmentation via Self-Attention and Self-Distillation [PDF]
Lightweight models are pivotal for efficient semantic segmentation, but they often suffer from insufficient context information due to limited convolutional capacity and small receptive fields.
S. An, Q. Liao, Z. Lu, J.-H. Xue
core
This study presents a new hole transporting material (HTM) mechanism for self‐assembled monolayers in near‐infrared organic photodetectors. The formation of zwitterions induces a strong electric field that significantly increases the work function of HTM‐coated indium tin oxide substrates. The devices exhibit low dark current and noise, along with high
Jiyoung Shin +9 more
wiley +1 more source
Data-Efficient Language-Supervised Zero-Shot Learning with Self-Distillation [PDF]
Ruizhe Cheng +4 more
openalex +1 more source
Personalized Federated Learning via Backbone Self-Distillation
In practical scenarios, federated learning frequently necessitates training personalized models for each client using heterogeneous data. This paper proposes a backbone self-distillation approach to facilitate personalized federated learning. In this approach, each client trains its local model and only sends the backbone weights to the server.
Pengju Wang +4 more
openaire +2 more sources
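The communication pattern this snippet describes (clients share only backbone weights; personalized heads stay local) can be sketched with a FedAvg-style server step. This is a minimal illustration of the pattern under that assumption, not the paper's algorithm; `average_backbones` is a hypothetical helper.

```python
import numpy as np

def average_backbones(client_backbones):
    """Server-side aggregation: average only the shared backbone
    parameters across clients, leaving each client's personalized
    head untouched on-device."""
    keys = client_backbones[0].keys()
    return {k: np.mean([cb[k] for cb in client_backbones], axis=0)
            for k in keys}

# each client uploads just its backbone parameter dict
clients = [
    {"conv1.w": np.array([1.0, 2.0])},
    {"conv1.w": np.array([3.0, 4.0])},
]
global_backbone = average_backbones(clients)
```

Each client would then distill from the returned global backbone into its local model, which is the "backbone self-distillation" step the abstract refers to.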

