Results 41 to 50 of about 217,331
Regularizing deep networks using efficient layerwise adversarial training
Adversarial training has been shown to regularize deep neural networks in addition to increasing their robustness to adversarial examples. However, its impact on very deep state-of-the-art networks has not been fully investigated.
Chellappa, Rama +3 more
core +1 more source
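Adversarial training of the kind the entry above describes can be sketched minimally on a logistic-regression model: perturb each input with the Fast Gradient Sign Method (FGSM) and take the gradient step on the perturbed copy. The function names, the choice of FGSM, and the `eps`/`lr` parameters below are illustrative assumptions, not the paper's actual layerwise method.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_example(w, x, y, eps):
    """FGSM: move x along the sign of the input-gradient of the
    logistic loss, with labels y in {-1, +1}."""
    grad_x = -y * w * sigmoid(-y * np.dot(w, x))
    return x + eps * np.sign(grad_x)

def adversarial_training_step(w, x, y, eps, lr):
    """One SGD step taken on the FGSM-perturbed copy of the example;
    training on such copies regularizes the model in addition to
    improving robustness."""
    x_adv = fgsm_example(w, x, y, eps)
    grad_w = -y * x_adv * sigmoid(-y * np.dot(w, x_adv))
    return w - lr * grad_w
```

Because the logistic loss is convex in the input, the FGSM step is guaranteed not to decrease the loss, which is what makes the perturbed copy a harder training example than the clean one.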
A3T: Adversarially Augmented Adversarial Training
accepted for an oral presentation in Machine Deception Workshop, NIPS ...
Erraqabi, Akram +3 more
openaire +2 more sources
Improving Adversarial Robustness via Distillation-Based Purification
Despite the impressive performance of deep neural networks on many different vision tasks, they are known to be vulnerable to noise intentionally added to input images.
Inhwa Koo, Dong-Kyu Chae, Sang-Chul Lee
doaj +1 more source
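Purification, as the entry above uses the term, means denoising an input before it reaches the classifier. The mean filter below is only a stand-in sketch of that pipeline shape (a trained, distillation-based denoiser would replace it), and `purify`/`robust_predict` are hypothetical names, not the paper's API.

```python
import numpy as np

def purify(x, kernel=3):
    """Toy purifier: a mean filter that suppresses high-frequency
    noise in a 2D image. A learned purifier would replace this."""
    pad = kernel // 2
    xp = np.pad(x, pad, mode="edge")
    out = np.empty_like(x, dtype=float)
    h, w = x.shape
    for i in range(h):
        for j in range(w):
            # Average the kernel x kernel window centered on (i, j).
            out[i, j] = xp[i:i + kernel, j:j + kernel].mean()
    return out

def robust_predict(classifier, x):
    """Run the classifier on the purified input rather than the raw one."""
    return classifier(purify(x))
```

The design point is that the classifier itself is untouched; robustness comes from the preprocessing stage, so any denoiser with the same signature can be swapped in.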
Computational Modeling Meets 3D Bioprinting: Emerging Synergies in Cardiovascular Disease Modeling
Emerging advances in three‐dimensional bioprinting and computational modeling are reshaping cardiovascular (CV) research by enabling more realistic, patient‐specific tissue platforms. This review surveys cutting‐edge approaches that merge biomimetic CV constructs with computational simulations to overcome the limitations of traditional models, improve ...
Tanmay Mukherjee +7 more
wiley +1 more source
Increasing the Robustness of Image Quality Assessment Models Through Adversarial Training
The adversarial robustness of image quality assessment (IQA) models to adversarial attacks is emerging as a critical issue. Adversarial training has been widely used to improve the robustness of neural networks to adversarial attacks, but little in-depth
Anna Chistyakova +6 more
doaj +1 more source
Multi-Class Triplet Loss With Gaussian Noise for Adversarial Robustness
The performance of Deep Neural Network (DNN) classifiers degrades under adversarial attacks, in which inputs are perturbed so as to be indistinguishable from the original data.
Benjamin Appiah +4 more
doaj +1 more source
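The combination the entry above names, a triplet loss computed on Gaussian-noise-augmented embeddings, can be sketched as follows; the function name and the `margin`/`sigma` defaults are assumptions for illustration, not the paper's settings.

```python
import numpy as np

def triplet_loss_with_noise(anchor, positive, negative,
                            margin=1.0, sigma=0.1, rng=None):
    """Triplet loss on Gaussian-noise-augmented embeddings: pull the
    anchor toward the positive and push it at least `margin` away from
    the negative, with the added noise smoothing the embedding space."""
    if rng is None:
        rng = np.random.default_rng()
    a = anchor + rng.normal(scale=sigma, size=anchor.shape)
    p = positive + rng.normal(scale=sigma, size=positive.shape)
    n = negative + rng.normal(scale=sigma, size=negative.shape)
    d_ap = np.linalg.norm(a - p)  # anchor-positive distance
    d_an = np.linalg.norm(a - n)  # anchor-negative distance
    return max(0.0, d_ap - d_an + margin)
```

With `sigma=0` this reduces to the standard triplet loss: a well-separated triplet incurs zero loss, while one whose negative sits closer than the positive is penalized.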
A Robust Method to Protect Text Classification Models against Adversarial Attacks
Text classification is one of the main tasks in natural language processing. Recently, adversarial attacks have shown a substantial negative impact on neural network-based text classification models. There are few defenses to strengthen model predictions
Bala Mallikarjunarao Garlapati +2 more
doaj +1 more source
This review highlights the role of self‐assembled monolayers (SAMs) in perovskite solar cells, covering molecular engineering, multifunctional interface regulation, machine learning (ML) accelerated discovery, advanced device architectures, and pathways toward scalable fabrication and commercialization for high‐efficiency and stable single‐junction and ...
Asmat Ullah, Ying Luo, Stefaan De Wolf
wiley +1 more source
GanDef: A GAN based Adversarial Training Defense for Neural Network Classifier
Machine learning models, especially neural network (NN) classifiers, are widely used in many applications including natural language processing, computer vision and cybersecurity.
Khalil, Issa +2 more
core +3 more sources
All‐Optical Reconfigurable Physical Unclonable Function for Sustainable Security
An all‐optical reconfigurable physical unclonable function (PUF) is demonstrated using plasmonic coupling–induced sintering of optically trapped gold nanoparticles, where Brownian motion serves as a robust entropy source. The resulting optical PUF exhibits high encoding density, strong resistance to modeling attacks, and practical authentication ...
Jang‐Kyun Kwak +4 more
wiley +1 more source

