Results 131 to 140 of about 5,561,446
Adversarial Sample Detection in Computer Vision: A Survey [PDF]
With the increase in data volume and improvement in hardware performance, deep learning (DL) has made significant progress in the field of computer vision. However, deep learning models are vulnerable to adversarial samples, causing significant changes in the ...
ZHANG Xin, ZHANG Han, NIU Manyu, JI Lixia
doaj +1 more source
CrossMatAgent is a multi‐agent framework that combines large language models and diffusion‐based generative AI to automate metamaterial design. By coordinating task‐specific agents—such as describer, architect, and builder—it transforms user‐provided image prompts into high‐fidelity, printable lattice patterns.
Jie Tian +12 more
wiley +1 more source
AdvGuard: Fortifying Deep Neural Networks Against Optimized Adversarial Example Attack
Deep neural networks (DNNs) provide excellent performance in image recognition, speech recognition, video recognition, and pattern analysis. However, they are vulnerable to adversarial example attacks.
Hyun Kwon, Jun Lee
doaj +1 more source
This work presents a novel generative artificial intelligence (AI) framework for inverse alloy design through operations (optimization and diffusion) within a compact latent space learned by a variational autoencoder (VAE). The proposed work addresses the challenges of limited data, nonunique solutions, and high‐dimensional spaces.
Mohammad Abu‐Mualla +4 more
wiley +1 more source
Dual-Mode Method for Generating Adversarial Examples to Attack Deep Neural Networks
Deep neural networks yield desirable performance in text, image, and speech classification. However, these networks are vulnerable to adversarial examples. An adversarial example is a sample generated by inserting a small amount of noise into an original ...
Hyun Kwon, Sunghwan Kim
doaj +1 more source
Camouflaged Adversarial Example Generation Method for the Form of Motion Blur in Traffic Scenes [PDF]
In the domain of autonomous driving perception systems, the Convolutional Neural Network (CNN) plays a pivotal role as a fundamental technology in vehicle perception and decision making. However, adversarial attacks pose a substantial threat to the safety and ...
ZHANG Zhaoxin, HUANG Shize, ZHANG Bingjie, SHEN Tuo
doaj +1 more source
Artificial Intelligence for Bone: Theory, Methods, and Applications
Advances in artificial intelligence (AI) offer the potential to improve bone research. The current review explores the contributions of AI to pathological study, biomarker discovery, drug design, and clinical diagnosis and prognosis of bone diseases. We envision that AI‐driven methodologies will enable identifying novel targets for drug discovery. The ...
Dongfeng Yuan +3 more
wiley +1 more source
Boosting adversarial robustness via feature refinement, suppression, and alignment
Deep neural networks are vulnerable to adversarial attacks, bringing high risk to numerous security-critical applications. Existing adversarial defense algorithms primarily concentrate on optimizing adversarial training strategies to improve the ...
Yulun Wu +6 more
doaj +1 more source
Friend-Guard Textfooler Attack on Text Classification System
Deep neural networks provide good performance for image classification, text classification, speech classification, and pattern analysis. However, such neural networks are vulnerable to adversarial examples.
Hyun Kwon
doaj +1 more source
Deep Learning‐Assisted Coherent Raman Scattering Microscopy
The analytical capabilities of coherent Raman scattering microscopy are augmented through deep learning integration. This synergistic paradigm improves fundamental performance via denoising, deconvolution, and hyperspectral unmixing. Concurrently, it enhances downstream image analysis including subcellular localization, virtual staining, and clinical ...
Jianlin Liu +4 more
wiley +1 more source