Results 121 to 130 of about 4,930,132
Self-Distillation Amplifies Regularization in Hilbert Space
Knowledge distillation, as introduced in the deep learning context, is a method for transferring knowledge from one architecture to another. When the two architectures are identical, this is called self-distillation: the trained model's predictions are fed back in as new target values for retraining, and this loop is possibly iterated a few times (a minimal sketch follows this entry).
Mobahi, Hossein +2 more
openaire +2 more sources
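The loop described in the abstract above is easy to state in code. Below is a minimal sketch under stated assumptions: a kernel ridge regressor stands in for the paper's Hilbert-space (RKHS) regularized fitting, and `make_model`, `self_distill`, and the synthetic data are illustrative, not the paper's implementation.

```python
# Minimal self-distillation sketch: retrain the same architecture on the
# previous round's predictions instead of the original labels.
import numpy as np
from sklearn.kernel_ridge import KernelRidge  # RKHS-regularized regressor

def make_model():
    # Illustrative stand-in for the paper's Hilbert-space setting:
    # kernel ridge regression with an RBF kernel.
    return KernelRidge(kernel="rbf", alpha=1.0)

def self_distill(X, y, rounds=3):
    targets = y
    models = []
    for _ in range(rounds):
        model = make_model().fit(X, targets)
        # The next round trains an identical model on this round's
        # predictions, used as the new target values.
        targets = model.predict(X)
        models.append(model)
    return models

# Toy data, purely for demonstration.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(64, 1))
y = np.sin(3.0 * X[:, 0]) + 0.3 * rng.normal(size=64)
models = self_distill(X, y, rounds=3)
```

Roughly speaking, the paper's result is that iterating this loop amplifies the implicit regularization of each fit, progressively restricting the solution.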
Learning from All Sides: Diversified Positive Augmentation via Self-distillation in Recommendation
Personalized recommendation relies on users' historical behaviors to surface items of interest, and therefore struggles seriously with data sparsity.
Lin, Leyu +5 more
core
The role of novel thiophene‐based ligands with halogen substitutions in enhancing the chiroptical and optoelectronic properties of 2D chiral HOIPs has been investigated. By tailoring ligand design, enhanced CD and CPL properties are achieved, with improved CPL discrimination in photodetectors.
Boesung Kwon +4 more
wiley +1 more source
Timestamp-Guided Knowledge Distillation for Robust Sensor-Based Time-Series Forecasting
Accurate time-series forecasting plays a vital role in sensor-driven applications such as energy monitoring, traffic flow prediction, and environmental sensing.
Jiahe Yan +5 more
doaj +1 more source
SDPose: Tokenized Pose Estimation via Circulation-Guide Self-Distillation [PDF]
Recently, transformer-based methods have achieved state-of-the-art prediction quality on human pose estimation (HPE). Nonetheless, most of these top-performing transformer-based models are too compute- and storage-intensive to deploy on edge ...
Sicheng Chen +9 more
semanticscholar +1 more source
Federated Learning on Heterogeneous Data via Adaptive Self-Distillation
Federated Learning (FL) is a machine learning paradigm that enables clients to jointly train a global model by aggregating the locally trained models without sharing any local training data. In practice, there can often be substantial heterogeneity (e.g.,
Chakraborty, Anirban +4 more
core
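The snippet above describes the basic federated setup: clients train locally and the server aggregates the resulting weights, so no raw data leaves a client. Below is a generic sketch of that setup, assuming PyTorch; the KL distillation term toward the global model is a generic stand-in, not the paper's adaptive self-distillation scheme, and all names here are illustrative.

```python
# Generic federated learning round with FedAvg-style aggregation and a
# simple self-distillation penalty toward the global model (a stand-in,
# NOT the paper's adaptive scheme).
import copy
import torch
import torch.nn.functional as F

def local_update(global_model, loader, epochs=1, lr=0.01, alpha=0.5):
    # Each client trains a copy of the global model on its private data.
    model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        for x, y in loader:
            logits = model(x)
            with torch.no_grad():
                teacher_logits = global_model(x)  # global model as teacher
            # Task loss plus a KL term pulling the local model's outputs
            # toward the global model's predictions.
            loss = F.cross_entropy(logits, y) + alpha * F.kl_div(
                F.log_softmax(logits, dim=1),
                F.softmax(teacher_logits, dim=1),
                reduction="batchmean",
            )
            opt.zero_grad()
            loss.backward()
            opt.step()
    return model.state_dict()

def fed_avg(state_dicts):
    # Server side: parameter-wise mean of the clients' trained weights;
    # only weights are shared, never training data.
    avg = copy.deepcopy(state_dicts[0])
    for key in avg:
        avg[key] = torch.stack([sd[key].float() for sd in state_dicts]).mean(0)
    return avg
```

In this sketch the server would call `fed_avg` on the state dicts returned by `local_update` from each client, then load the result into the global model for the next round.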
Coagulative granular hydrogels are composed of packed thrombin‐functionalized microgels that catalyze the conversion of fibrinogen into a secondary fibrin network, filling the interstitial voids. This bio‐inspired approach stabilizes the biomaterial to match the robustness of bulk hydrogels without compromising injectability, mimicking the initial ...
Zhipeng Deng +16 more
wiley +1 more source
Knowledge Distillation Using Hierarchical Self-Supervision Augmented Distribution [PDF]
Yang, Chuanguang +3 more
openalex +1 more source
Models of human skin must combine the relevant biological content and suitable biomaterials in the correct spatial organization. Compound screening on such in vitro models also requires fast and reproducible production methods.
Elisa Lenzi +7 more
wiley +1 more source
Noise robust distillation of self-supervised speech models via correlation metrics [PDF]
Fabian Ritter-Gutierrez +6 more
openalex +1 more source