Results 71 to 80 of about 237,731 (274)
Adversarial Example Detection and Classification With Asymmetrical Adversarial Training
The vulnerabilities of deep neural networks against adversarial examples have become a significant concern for deploying these models in sensitive domains.
Kolouri, Soheil +2 more
core
Stable Imitation of Multigait and Bipedal Motions for Quadrupedal Robots Over Uneven Terrains
How are quadrupedal robots empowered to execute complex navigation tasks, including multigait and bipedal motions? Challenges in stability and real‐world adaptation persist, especially with uneven terrains and disturbances. This article presents an imitation learning framework that enhances adaptability and robustness by incorporating long short‐term ...
Erdong Xiao +3 more
wiley +1 more source
Aerial image semantic segmentation based on convolutional neural networks (CNNs) has made significant progress in recent years. Nevertheless, their vulnerability to adversarial example attacks cannot be neglected.
Zhen Wang +3 more
doaj +1 more source
Information Transmission Strategies for Self‐Organized Robotic Aggregation
In this review, we discuss how information transmission influences the neighbor‐based self‐organized aggregation of swarm robots. We focus specifically on local interactions regarding information transfer and categorize previous studies based on the functions of the information exchanged.
Shu Leng +5 more
wiley +1 more source
An Adversarial Attack via Penalty Method
Deep learning systems have achieved significant success across various machine learning tasks. However, they are highly vulnerable to attacks. For example, adversarial examples can fool deep learning systems easily by perturbing inputs with small ...
Jiyuan Sun, Haibo Yu, Jianjun Zhao
doaj +1 more source
EITGAN: A Transformation-based Network for recovering adversarial examples
Adversarial examples have been shown to easily mislead neural networks, and many strategies have been proposed to defend against them. To address the problem that most transformation-based defense strategies degrade the accuracy of clean images, we proposed
Junjie Zhao +4 more
doaj +1 more source
Continual Learning for Multimodal Data Fusion of a Soft Gripper
Models trained on a single data modality often struggle to generalize when exposed to a different modality. This work introduces a continual learning algorithm capable of incrementally learning different data modalities by leveraging both class‐incremental and domain‐incremental learning scenarios in an artificial environment where labeled data is ...
Nilay Kushawaha, Egidio Falotico
wiley +1 more source
Adversarial attack and defense in reinforcement learning-from AI security view
Reinforcement learning is a core technology for modern artificial intelligence, and it has become a workhorse for AI applications ranging from Atari games to Connected and Automated Vehicle Systems (CAV).
Tong Chen +5 more
doaj +1 more source
Foveation-based Mechanisms Alleviate Adversarial Examples [PDF]
We show that adversarial examples, i.e., the visually imperceptible perturbations that cause Convolutional Neural Networks (CNNs) to fail, can be alleviated with a mechanism based on foveations, that is, applying the CNN to different image regions. To see this,
Boix, Xavier +4 more
core +1 more source
CellPolaris decodes how transcription factors guide cell fate by building gene regulatory networks from transcriptomic data using transfer learning. It generates tissue‐ and cell‐type‐specific networks, identifies master regulators in cell state transitions, and simulates TF perturbations in developmental processes.
Guihai Feng +27 more
wiley +1 more source

