Results 61 to 70 of about 22,784,147 (354)
Distillation as a Defense to Adversarial Perturbations Against Deep Neural Networks [PDF]
Deep learning algorithms have been shown to perform extremely well on many classical machine learning problems. However, recent studies have shown that deep learning, like other machine learning techniques, is vulnerable to adversarial samples: inputs ...
Nicolas Papernot +4 more
semanticscholar +1 more source
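The distillation defense described above trains on softened class probabilities produced by a temperature-scaled softmax. A minimal sketch of that scaling step, with illustrative logits (the full defense also retrains a distilled network, which is omitted here):

```python
import numpy as np

def softmax_with_temperature(logits, T=20.0):
    """Temperature-scaled softmax: a higher T yields a softer probability
    distribution, which the distillation defense trains its student on."""
    z = np.asarray(logits, dtype=float) / T
    z -= z.max()                 # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

# With a high temperature the output is nearly uniform; at T=1 it is peaked.
probs = softmax_with_temperature([2.0, 1.0, 0.1], T=20.0)
```

At test time the distilled network is evaluated at T=1, which is what makes the gradients used by many attacks vanish.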
Information Transmission Strategies for Self‐Organized Robotic Aggregation
In this review, we discuss how information transmission influences the neighbor‐based self‐organized aggregation of swarm robots. We focus specifically on local interactions regarding information transfer and categorize previous studies based on the functions of the information exchanged.
Shu Leng +5 more
wiley +1 more source
Anomaly-Based Intrusion on IoT Networks Using AIGAN-a Generative Adversarial Network
Adversarial attacks have threatened the credibility of machine learning models and cast doubt on the integrity of data. These attacks have caused considerable harm in the fields of computer vision and natural language processing.
Zhipeng Liu +5 more
doaj +1 more source

Adversarial Image Translation: Unrestricted Adversarial Examples in Face Recognition Systems
Kazuya Kakizaki and Kosuke Yoshida share equal contributions. Accepted at AAAI Workshop on Artificial Intelligence Safety (2020)
Kakizaki, Kazuya, Yoshida, Kosuke
openaire +2 more sources
A Survey on Adversarial Recommender Systems
Latent-factor models (LFM) based on collaborative filtering (CF), such as matrix factorization (MF) and deep CF methods, are widely used in modern recommender systems (RS) due to their excellent performance and recommendation accuracy. However, this success has been accompanied by a major new challenge: Many applications ...
Deldjoo, Yashar +2 more
openaire +2 more sources
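The matrix-factorization models covered by this survey can be illustrated with a toy full-batch gradient-descent sketch over a small ratings matrix; the matrix, rank, and hyperparameters below are invented for illustration and are not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy user-item ratings matrix; 0 marks an unobserved entry.
R = np.array([[5, 3, 0],
              [4, 0, 1],
              [0, 1, 5]], dtype=float)
mask = R > 0

k, lr, reg = 2, 0.02, 0.01                        # rank, step size, L2 weight
P = rng.normal(scale=0.1, size=(R.shape[0], k))   # user factors
Q = rng.normal(scale=0.1, size=(R.shape[1], k))   # item factors

for _ in range(2000):
    E = mask * (R - P @ Q.T)          # reconstruction error, observed entries only
    P += lr * (E @ Q - reg * P)       # gradient step on user factors
    Q += lr * (E.T @ P - reg * Q)     # gradient step on item factors

pred = P @ Q.T                        # predicted ratings, including unobserved cells
```

The filled-in cells of `pred` are the recommendations; the adversarial work the survey reviews studies how small perturbations to such models can distort them.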
Continual Learning for Multimodal Data Fusion of a Soft Gripper
Models trained on a single data modality often struggle to generalize when exposed to a different modality. This work introduces a continual learning algorithm capable of incrementally learning different data modalities by leveraging both class‐incremental and domain‐incremental learning scenarios in an artificial environment where labeled data is ...
Nilay Kushawaha, Egidio Falotico
wiley +1 more source
Adversarial Robustness by One Bit Double Quantization for Visual Classification
In this paper, we propose a novel robust visual classification framework that uses double quantization (dquant) to defend against adversarial examples in a specific attack scenario called “subsequent adversarial examples” where test images ...
Maungmaung Aprilpyone +2 more
doaj +1 more source
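The paper's "dquant" scheme is more involved than plain binarization, but the general intuition that coarse quantization can absorb small adversarial perturbations can be sketched as follows (the threshold and pixel values are illustrative assumptions, not the paper's method):

```python
import numpy as np

def one_bit_quantize(x, threshold=0.5):
    """Binarize values in [0, 1] around a threshold. Small perturbations
    that do not push a value across the threshold are erased entirely."""
    return (np.asarray(x) >= threshold).astype(np.float32)

clean = np.array([0.1, 0.6, 0.9])
perturbed = clean + 0.03          # small adversarial-style noise
# one_bit_quantize maps both inputs to the same codes here,
# removing the perturbation before classification.
```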
Adversarial Examples Identification in an End-to-End System With Image Transformation and Filters
Deep learning has been receiving great attention in recent years because of its impressive performance in many tasks. However, the widespread adoption of deep learning has also become a major security risk for those systems, as recent research has pointed
Dang Duy Thang, Toshihiro Matsui
doaj +1 more source
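The detection idea, flagging an input whose prediction flips after a denoising transformation, can be sketched with a toy threshold classifier and a 3-tap median filter; all names here are illustrative assumptions, not the paper's actual pipeline:

```python
import numpy as np

def median3(x):
    """3-tap median filter over a 1-D signal (edges padded by replication)."""
    p = np.pad(x, 1, mode="edge")
    return np.median(np.stack([p[:-2], p[1:-1], p[2:]]), axis=0)

def looks_adversarial(predict, x):
    """Flag an input whose label changes once a smoothing filter is applied,
    the core intuition behind transformation-based detection. `predict` is
    any function returning a label."""
    return predict(x) != predict(median3(x))

# Toy classifier: label 1 if the mean activation exceeds 0.5.
predict = lambda x: int(x.mean() > 0.5)
spiky = np.array([0.6, 0.6, 0.0, 0.6, 0.6])   # filtering flips its label
smooth = np.full(5, 0.6)                       # filtering leaves it unchanged
```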
Adversarial Attacks Against Binary Similarity Systems
In recent years, binary analysis has gained traction as a fundamental approach to inspecting software and guaranteeing its security. Due to the exponential increase in devices running software, much research is now moving towards new autonomous solutions based on deep learning models, as they have shown state-of-the-art performance in solving binary ...
Capozzi, Gianluca +3 more
openaire +4 more sources
CellPolaris decodes how transcription factors guide cell fate by building gene regulatory networks from transcriptomic data using transfer learning. It generates tissue‐ and cell‐type‐specific networks, identifies master regulators in cell state transitions, and simulates TF perturbations in developmental processes.
Guihai Feng +27 more
wiley +1 more source