Results 41 to 50 of about 215,342
Regularizing deep networks using efficient layerwise adversarial training
Adversarial training has been shown to regularize deep neural networks in addition to increasing their robustness to adversarial examples. However, its impact on very deep, state-of-the-art networks has not been fully investigated.
Chellappa, Rama +3 more
core +1 more source
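For orientation while skimming these entries, below is a minimal sketch of single-step (FGSM-style) adversarial training in PyTorch; `model`, `x`, `y`, `optimizer`, and `epsilon` are placeholders, and the paper's layerwise variant, which perturbs intermediate activations rather than only the input, is not reproduced here.

```python
import torch
import torch.nn.functional as F

def fgsm_adversarial_step(model, x, y, optimizer, epsilon=0.03):
    """One training step that mixes clean and FGSM-perturbed examples (illustrative sketch)."""
    model.train()

    # Craft adversarial inputs with the fast gradient sign method.
    x_adv = x.clone().detach().requires_grad_(True)
    adv_loss = F.cross_entropy(model(x_adv), y)
    grad = torch.autograd.grad(adv_loss, x_adv)[0]
    x_adv = (x_adv + epsilon * grad.sign()).detach()

    # Train on clean and adversarial batches; the adversarial term
    # acts as a regularizer in addition to improving robustness.
    optimizer.zero_grad()
    loss = 0.5 * (F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y))
    loss.backward()
    optimizer.step()
    return loss.item()
```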
A3T: Adversarially Augmented Adversarial Training
Accepted for an oral presentation at the Machine Deception Workshop, NIPS ...
Erraqabi, Akram +3 more
openaire +2 more sources
Multi-Class Triplet Loss With Gaussian Noise for Adversarial Robustness
The performance of Deep Neural Network (DNN) classifiers degrades under adversarial attacks; such attacks are indistinguishably perturbed relative to the original data.
Benjamin Appiah +4 more
doaj +1 more source
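As a rough illustration of the idea named in this title, the sketch below computes a triplet loss on Gaussian-noise-perturbed anchors in PyTorch; `embed_net`, `noise_std`, and `margin` are assumed placeholders, and the paper's exact multi-class formulation is not reproduced.

```python
import torch
import torch.nn as nn

def gaussian_noise_triplet_loss(embed_net, anchor, positive, negative,
                                noise_std=0.1, margin=1.0):
    """Triplet loss on noise-perturbed anchors (illustrative sketch only)."""
    # Perturb the anchor so the learned margin tolerates small input perturbations.
    noisy_anchor = anchor + noise_std * torch.randn_like(anchor)
    triplet = nn.TripletMarginLoss(margin=margin)
    return triplet(embed_net(noisy_anchor), embed_net(positive), embed_net(negative))
```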
A Robust Method to Protect Text Classification Models against Adversarial Attacks
Text classification is one of the main tasks in natural language processing. Recently, adversarial attacks have shown a substantial negative impact on neural network-based text classification models. There are few defenses to strengthen model predictions ...
Bala Mallikarjunarao Garlapati +2 more
doaj +1 more source
Active Learning‐Guided Accelerated Discovery of Ultra‐Efficient High‐Entropy Thermoelectrics
An active learning framework is introduced for the accelerated discovery of high‐entropy chalcogenides with superior thermoelectric performance. Only 80 targeted syntheses, selected from 16206 possible combinations, led to three high‐performance compositions, demonstrating the remarkable efficiency of data‐driven guidance in experimental materials ...
Hanhwi Jang +8 more
wiley +1 more source
Increasing the Robustness of Image Quality Assessment Models Through Adversarial Training
The robustness of image quality assessment (IQA) models to adversarial attacks is emerging as a critical issue. Adversarial training has been widely used to improve the robustness of neural networks to adversarial attacks, but little in-depth ...
Anna Chistyakova +6 more
doaj +1 more source
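For context on what such adversarial training defends against, here is a generic PGD-style attack that inflates an IQA model's predicted quality score; `iqa_model` is assumed to map an image tensor in [0, 1] to a scalar score, and the step sizes are illustrative only.

```python
import torch

def iqa_quality_attack(iqa_model, image, epsilon=2 / 255, steps=10, alpha=0.5 / 255):
    """PGD-style attack that pushes an IQA model's score upward (illustrative sketch)."""
    adv = image.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        score = iqa_model(adv).mean()
        grad = torch.autograd.grad(score, adv)[0]
        # Ascend the predicted quality score while staying in an L-infinity ball.
        adv = adv.detach() + alpha * grad.sign()
        adv = image + torch.clamp(adv - image, -epsilon, epsilon)
        adv = torch.clamp(adv, 0.0, 1.0).detach()
    return adv
```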
Disentangling Factors of Variation with Cycle-Consistent Variational Auto-Encoders
Generative models that learn disentangled representations for different factors of variation in an image can be very useful for targeted data augmentation.
D He +4 more
core +1 more source
This review highlights the role of self‐assembled monolayers (SAMs) in perovskite solar cells, covering molecular engineering, multifunctional interface regulation, machine learning (ML) accelerated discovery, advanced device architectures, and pathways toward scalable fabrication and commercialization for high‐efficiency and stable single‐junction and ...
Asmat Ullah, Ying Luo, Stefaan De Wolf
wiley +1 more source
Adversarial Discriminative Domain Adaptation
Adversarial learning methods are a promising approach to training robust deep networks, and can generate complex samples across diverse domains. They also can improve recognition despite the presence of domain shift or dataset bias: several adversarial ...
Darrell, Trevor +3 more
core +1 more source
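A minimal sketch of one adversarial adaptation step in the spirit of this entry is shown below: a domain discriminator learns to separate source from target features, and the target encoder is then updated to fool it. The function and optimizer names are assumptions; network definitions and data loading are left to the caller.

```python
import torch
import torch.nn.functional as F

def adversarial_adaptation_step(source_encoder, target_encoder, discriminator,
                                x_source, x_target, disc_opt, tgt_opt):
    """One discriminator/encoder update pair (illustrative sketch only)."""
    # 1) Update the domain discriminator to separate source from target features.
    with torch.no_grad():
        feat_s = source_encoder(x_source)
        feat_t = target_encoder(x_target)
    logits = discriminator(torch.cat([feat_s, feat_t], dim=0))
    labels = torch.cat([torch.ones(len(feat_s)), torch.zeros(len(feat_t))]).long()
    disc_opt.zero_grad()
    F.cross_entropy(logits, labels).backward()
    disc_opt.step()

    # 2) Update the target encoder (tgt_opt holds only its parameters) to fool
    #    the discriminator by labeling target features as "source".
    feat_t = target_encoder(x_target)
    fool_labels = torch.ones(len(feat_t)).long()
    tgt_opt.zero_grad()
    F.cross_entropy(discriminator(feat_t), fool_labels).backward()
    tgt_opt.step()
```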
The article overviews past and current efforts on caloric materials and systems, highlighting the contributions of Ames National Laboratory to the field. Solid‐state caloric heat pumping is an innovative method that can be implemented in a wide range of cooling and heating applications.
Agata Czernuszewicz +5 more
wiley +1 more source

