Results 51 to 60 of about 172,371 (266)
It has been shown that adversaries can craft example inputs to neural networks which are similar to legitimate inputs but have been created to purposely cause the neural network to misclassify the input.
Athalye, Anish +18 more
core +1 more source
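The crafting described in this snippet — perturbing a legitimate input just enough to flip a network's prediction — is most often illustrated by the fast gradient sign method (FGSM). Below is a minimal NumPy sketch against a toy linear classifier; the function name and the linear model are illustrative assumptions, not taken from the cited work:

```python
import numpy as np

def fgsm_perturb(x, grad, eps=0.1):
    """Fast gradient sign method: shift each input feature by eps
    in the direction that increases the loss."""
    return x + eps * np.sign(grad)

# Toy linear classifier: score = w @ x. Taking loss = -score for the
# true class, the loss gradient with respect to x is simply -w.
w = np.array([1.0, -2.0, 0.5])
x = np.array([0.2, 0.1, -0.4])
x_adv = fgsm_perturb(x, grad=-w, eps=0.1)

# The perturbation stays small (at most eps per feature) ...
assert np.max(np.abs(x_adv - x)) <= 0.1
# ... yet it strictly lowers the score of the true class.
assert w @ x_adv < w @ x
```

The per-feature bound is what makes the adversarial input "similar to legitimate inputs" in the sense the snippet describes: the change is imperceptibly small, but aligned with the loss gradient, so its effect on the output is maximal for that budget.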
Adversarial Example Decomposition
Research has shown that widely used deep neural networks are vulnerable to carefully crafted adversarial perturbations. Moreover, these adversarial perturbations often transfer across models. We hypothesize that adversarial weakness is composed of three sources of bias: architecture, dataset, and random initialization.
He, Horace +5 more
openaire +2 more sources
Active Learning‐Guided Accelerated Discovery of Ultra‐Efficient High‐Entropy Thermoelectrics
An active learning framework is introduced for the accelerated discovery of high‐entropy chalcogenides with superior thermoelectric performance. Only 80 targeted syntheses, selected from 16,206 possible combinations, led to three high‐performance compositions, demonstrating the remarkable efficiency of data‐driven guidance in experimental materials ...
Hanhwi Jang +8 more
wiley +1 more source
Organic Electrochemical Transistors for Neuromorphic Devices and Applications
Organic electrochemical transistors are emerging as promising platforms for neuromorphic devices that emulate neuronal and synaptic activities and can seamlessly integrate with biological systems. This review focuses on resultant organic artificial neurons, synapses, and integrated devices, with an emphasis on their ability to perform neuromorphic ...
Kexin Xiang +4 more
wiley +1 more source
Human-Producible Adversarial Examples
Submitted to ICLR ...
Khachaturov, David +5 more
openaire +2 more sources
Adversarial examples for models of code [PDF]
Neural models of code have shown impressive results when performing tasks such as predicting method names and identifying certain kinds of bugs. We show that these models are vulnerable to adversarial examples, and introduce a novel approach for attacking trained models of code using ...
Yefet, Noam, Alon, Uri, Yahav, Eran
openaire +2 more sources
This review highlights the role of self‐assembled monolayers (SAMs) in perovskite solar cells, covering molecular engineering, multifunctional interface regulation, machine learning (ML) accelerated discovery, advanced device architectures, and pathways toward scalable fabrication and commercialization for high‐efficiency and stable single‐junction and
Asmat Ullah, Ying Luo, Stefaan De Wolf
wiley +1 more source
Maxwell’s Demon in MLP-Mixer: towards transferable adversarial attacks
Models based on MLP-Mixer architecture are becoming popular, but they still suffer from adversarial examples. Although it has been shown that MLP-Mixer is more robust to adversarial attacks compared to convolutional neural networks (CNNs), there has been
Haoran Lyu +5 more
doaj +1 more source
Materials and System Design for Self‐Decision Bioelectronic Systems
This review highlights how self‐decision bioelectronic systems integrate sensing, computation, and therapy into autonomous, closed‐loop platforms that continuously monitor and treat diseases, marking a major step toward intelligent, self‐regulating healthcare technologies.
Qiankun Zeng +9 more
wiley +1 more source
Assessing Optimizer Impact on DNN Model Sensitivity to Adversarial Examples
Deep Neural Networks (DNNs) have achieved state-of-the-art results compared with many traditional Machine Learning (ML) models in diverse fields. However, adversarial examples challenge the further deployment and application of DNNs. Analysis has
Yixiang Wang +5 more
doaj +1 more source