Results 81 to 90 of about 4,930,132

Intra-class patch swap for self-distillation

open access: yesNeurocomputing
Accepted for publication in ...
Hongjun Choi   +3 more
openaire   +2 more sources

Deep Contrastive Representation Learning With Self-Distillation

open access: yesIEEE Transactions on Emerging Topics in Computational Intelligence
Recently, contrastive learning (CL) has emerged as a promising way of learning discriminative representations from time series data. In the representation hierarchy, semantic information extracted at lower levels is the basis of that captured at higher levels ...
Zhiwen Xiao   +7 more
semanticscholar   +1 more source
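As an aside, a minimal sketch of the kind of generic contrastive objective the abstract above refers to: an InfoNCE loss over two augmented views of the same time-series windows, written in PyTorch. This is a standard illustrative formulation, not the authors' exact loss; the function name and temperature value are assumptions.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z1, z2, temperature=0.1):
    """Generic InfoNCE contrastive loss between two augmented views.

    z1, z2: (batch, dim) embeddings of the same time-series windows under
    different augmentations. Matching rows are positive pairs; every other
    row in the batch serves as a negative.
    (Illustrative sketch only; not the paper's exact objective.)
    """
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature                      # (batch, batch) similarities
    targets = torch.arange(z1.size(0), device=z1.device)    # positives on the diagonal
    return F.cross_entropy(logits, targets)
```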

Robust Representation Learning with Self-Distillation for Domain Generalization

open access: yes, 2023
Domain generalization is a challenging problem in machine learning, where the goal is to train a model that can generalize well to unseen target domains without prior knowledge of these domains.
Senthilnath Jayavelu, Ankur Singh
core  

Ionic‐Electronic Hydrogel‐Liquid Metal Composite Bilayer with Tissue‐Adaptive and Adhesive Properties for Closed‐Loop Neuroprosthetic System

open access: yesAdvanced Functional Materials, EarlyView.
A hydrogel–liquid metal composite peripheral nerve interface (HLB‐PNI) combines electrically durable electrodes and tissue‐adhesive hydrogel for tissue‐adaptive implantation. In nerve‐injured rats, it enables the diagnosis of sensory‐motor connectivity via stimulation and neural signal recording.
Yewon Kim   +5 more
wiley   +1 more source

Contrastive Learning or Masked Autoencoder? Understanding and Improving Self-Supervised Knowledge Distillation

open access: yesIEEE Access
Lying at the intersection of self-supervised learning (SSL) and knowledge distillation (KD), self-supervised KD (SSKD) differs from classical KD frameworks by assuming the teacher model is pretrained without labels.
Taegoo Kang, Sung-Ho Bae, Chaoning Zhang
doaj   +1 more source
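For illustration only: one common way to distill from a teacher pretrained without labels is to match features rather than class probabilities. The sketch below uses a cosine-similarity feature-matching loss in PyTorch; it is a generic SSKD-style objective, not necessarily the loss studied in the paper above.

```python
import torch.nn.functional as F

def feature_distillation_loss(student_feat, teacher_feat):
    """Label-free distillation: align the student's features with those of a
    frozen self-supervised teacher via cosine similarity.
    (A common SSKD choice for illustration; not the paper's specific loss.)
    """
    s = F.normalize(student_feat, dim=1)
    t = F.normalize(teacher_feat.detach(), dim=1)   # teacher stays frozen
    return (1.0 - (s * t).sum(dim=1)).mean()
```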

Teaching Yourself: A Self-Knowledge Distillation Approach to Action Recognition

open access: yesIEEE Access, 2021
Knowledge distillation, the process of transferring the complex knowledge learned by a heavy network (the teacher) to a lightweight network (the student), has emerged as an effective technique for compressing neural networks.
Duc-Quang Vu, Ngan Le, Jia-Ching Wang
doaj   +1 more source
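For context, a minimal sketch of the classical teacher-student distillation objective the abstract above describes: a temperature-softened KL term plus cross-entropy on hard labels, following Hinton et al. The temperature and weighting values are placeholder assumptions, and this is not the paper's specific self-distillation scheme.

```python
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Standard knowledge-distillation objective: weighted sum of
    cross-entropy on hard labels and KL divergence between the
    temperature-softened teacher and student distributions.
    (Classical KD for illustration; T and alpha are assumed values.)
    """
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)                                   # rescale gradients (Hinton et al.)
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard
```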

Efficient Semantic Segmentation via Self-Attention and Self-Distillation [PDF]

open access: yes, 2022
Lightweight models are pivotal in efficient semantic segmentation, but they often suffer from insufficient context information due to limited convolutional capacity and small receptive fields.
S. An, Q. Liao, Z. Lu, J.-H. Xue
core  

Zwitterionic Self‐Assembled Monolayer for Simultaneous Noise Suppression and Hole Extraction in High‐Performance Near‐Infrared Organic Photodetectors

open access: yesAdvanced Functional Materials, EarlyView.
This study presents a new hole transporting material (HTM) mechanism for self‐assembled monolayers in near‐infrared organic photodetectors. The formation of zwitterions induces a strong electric field that significantly increases the work function of HTM‐coated indium tin oxide substrates. The devices exhibit low dark current and noise, along with high ...
Jiyoung Shin   +9 more
wiley   +1 more source

Data-Efficient Language-Supervised Zero-Shot Learning with Self-Distillation [PDF]

open access: green, 2021
Ruizhe Cheng   +4 more
openalex   +1 more source

Personalized Federated Learning via Backbone Self-Distillation

open access: yesACM Multimedia Asia 2023, 2023
In practical scenarios, federated learning frequently necessitates training personalized models for each client using heterogeneous data. This paper proposes a backbone self-distillation approach to facilitate personalized federated learning. In this approach, each client trains its local model and only sends the backbone weights to the server.
Pengju Wang   +4 more
openaire   +2 more sources
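As an illustration of the backbone-only exchange described above, here is a minimal PyTorch sketch assuming a client model split into a shared backbone and a private classification head; the names ClientModel, client_upload, and client_download are hypothetical, not from the paper.

```python
import copy
import torch.nn as nn

class ClientModel(nn.Module):
    """Toy client model split into a shared backbone and a private head
    (hypothetical names; sketch of the backbone-only exchange idea)."""
    def __init__(self, feat_dim=128, num_classes=10):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(784, feat_dim), nn.ReLU())
        self.head = nn.Linear(feat_dim, num_classes)   # never leaves the client

    def forward(self, x):
        return self.head(self.backbone(x))

def client_upload(model):
    # Only the backbone parameters are sent to the server.
    return copy.deepcopy(model.backbone.state_dict())

def client_download(model, global_backbone_state):
    # Load the aggregated global backbone; the personalized head is kept as-is.
    model.backbone.load_state_dict(global_backbone_state)
```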
