
Dual-Mode Method for Generating Adversarial Examples to Attack Deep Neural Networks

open access: yes, IEEE Access
Deep neural networks yield desirable performance in text, image, and speech classification. However, these networks are vulnerable to adversarial examples. An adversarial example is a sample generated by inserting a small amount of noise into an original ...
Hyun Kwon, Sunghwan Kim
doaj
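
A minimal illustrative sketch (not from the paper) of the perturbation idea the snippet describes: an adversarial example is the original sample plus a small signed noise term. It assumes a PyTorch-style classifier that outputs logits; `model`, `x`, `y`, and `epsilon` are placeholder names, and the method shown is plain FGSM, not the paper's dual-mode scheme.

import torch
import torch.nn.functional as F

def fgsm_example(model, x, y, epsilon=0.03):
    # Perturb x by epsilon * sign(grad of the loss w.r.t. x): the classic
    # "small noise added to an original sample" construction (FGSM).
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0, 1).detach()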

Camouflaged Adversarial Example Generation Method for the Form of Motion Blur in Traffic Scenes [PDF]

open access: yes, Jisuanji gongcheng
In the domain of autonomous driving perception systems, the Convolutional Neural Network (CNN) plays a pivotal role as a fundamental technology in vehicle perception and decision making. However, adversarial attacks pose a substantial threat to the safety and ...
ZHANG Zhaoxin, HUANG Shize, ZHANG Bingjie, SHEN Tuo
doaj

Inverse Design of Alloys via Generative Algorithms: Optimization and Diffusion within Learned Latent Space

open access: yes, Advanced Intelligent Discovery, EarlyView.
This work presents a novel generative artificial intelligence (AI) framework for inverse alloy design through operations (optimization and diffusion) within a compact latent space learned by a variational autoencoder (VAE). The proposed work addresses the challenges of limited data, nonunique solutions, and high‐dimensional spaces.
Mohammad Abu‐Mualla   +4 more
wiley
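
A hedged sketch of the latent-space operation the snippet describes: optimize a latent vector so that a property predictor reaches a target value, then decode it into a candidate design. `decoder` and `predictor` stand in for pretrained VAE and surrogate modules; they are assumptions for illustration, not the paper's implementation.

import torch

def optimize_in_latent_space(decoder, predictor, z_dim=16, target=1.0, steps=200, lr=0.05):
    # Start from a sample of the VAE prior and move toward the target property.
    z = torch.randn(1, z_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        candidate = decoder(z)                      # latent vector -> candidate composition
        loss = (predictor(candidate) - target).pow(2).mean()
        loss.backward()
        opt.step()
    return decoder(z).detach()                      # proposed design for downstream validation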

Boosting adversarial robustness via feature refinement, suppression, and alignment

open access: yes, Complex & Intelligent Systems
Deep neural networks are vulnerable to adversarial attacks, posing a high risk to numerous security-critical applications. Existing adversarial defense algorithms primarily concentrate on optimizing adversarial training strategies to improve the ...
Yulun Wu   +6 more
doaj

Friend-Guard Textfooler Attack on Text Classification System

open access: yes, IEEE Access
Deep neural networks provide good performance for image classification, text classification, speech classification, and pattern analysis. However, such neural networks are vulnerable to adversarial examples.
Hyun Kwon
doaj

Artificial Intelligence for Bone: Theory, Methods, and Applications

open access: yes, Advanced Intelligent Discovery, EarlyView.
Advances in artificial intelligence (AI) offer the potential to improve bone research. The current review explores the contributions of AI to pathological study, biomarker discovery, drug design, and clinical diagnosis and prognosis of bone diseases. We envision that AI‐driven methodologies will enable the identification of novel targets for drug discovery. The ...
Dongfeng Yuan   +3 more
wiley

Deep Learning‐Assisted Coherent Raman Scattering Microscopy

open access: yes, Advanced Intelligent Discovery, EarlyView.
The analytical capabilities of coherent Raman scattering microscopy are augmented through deep learning integration. This synergistic paradigm improves fundamental performance via denoising, deconvolution, and hyperspectral unmixing. Concurrently, it enhances downstream image analysis including subcellular localization, virtual staining, and clinical ...
Jianlin Liu   +4 more
wiley

Deep Learning‐Assisted Design of Mechanical Metamaterials

open access: yes, Advanced Intelligent Discovery, EarlyView.
This review examines the role of data‐driven deep learning methodologies in advancing mechanical metamaterial design, focusing on the specific methodologies, applications, challenges, and outlooks of this field. Mechanical metamaterials (MMs), characterized by their extraordinary mechanical behaviors derived from architected microstructures, have ...
Zisheng Zong   +5 more
wiley

Robust Decision Trees Against Adversarial Examples

open access: yes, 2019
Although adversarial examples and model robustness have been extensively studied in the context of linear models and neural networks, research on this issue for tree-based models, and on how to make such models robust against adversarial examples, is still limited.
H. Chen, H. Zhang, D. Boning, C.-J. Hsieh
openaire

Large Language Model in Materials Science: Roles, Challenges, and Strategic Outlook

open access: yes, Advanced Intelligent Discovery, EarlyView.
Large language models (LLMs) are reshaping materials science. Acting as Oracle, Surrogate, Quant, and Arbiter, they now extract knowledge, predict properties, gauge risk, and steer decisions within a traceable loop. Overcoming data heterogeneity, hallucinations, and poor interpretability demands domain‐adapted models, cross‐modal data standards, and ...
Jinglan Zhang   +4 more
wiley
