Results 31 to 40 of about 5,739,313 (302)

Towards Optimal Structured CNN Pruning via Generative Adversarial Learning [PDF]

open access: yes; Computer Vision and Pattern Recognition, 2019
Structured pruning of filters or neurons has received increasing attention as a way to compress convolutional neural networks. Most existing methods rely on multi-stage, layer-wise optimizations that iteratively prune and retrain, which may not be ...
Shaohui Lin   +7 more
semanticscholar   +1 more source

A Brute-Force Black-Box Method to Attack Machine Learning-Based Systems in Cybersecurity

open access: yes; IEEE Access, 2020
Machine learning algorithms are widely utilized in cybersecurity. However, recent studies show that these algorithms are vulnerable to adversarial examples (a generic query-based attack is sketched after this entry).
Sicong Zhang, Xiaoyao Xie, Yang Xu
doaj   +1 more source
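
The following is a minimal, illustrative sketch of a query-based ("brute-force") black-box attack in the spirit of the entry above; it is not the specific method of Zhang, Xie, and Xu. It perturbs an input with random noise and queries an arbitrary black-box classifier (here a hypothetical `predict` function) until the predicted label flips.

```python
# Illustrative sketch only: random-search black-box attack against a query-only classifier.
import numpy as np

def random_search_attack(predict, x, true_label, eps=0.1, max_queries=1000, seed=0):
    """Search for an adversarial example within an L-infinity ball of radius eps."""
    rng = np.random.default_rng(seed)
    for _ in range(max_queries):
        delta = rng.uniform(-eps, eps, size=x.shape)  # candidate perturbation
        x_adv = np.clip(x + delta, 0.0, 1.0)          # keep features in a valid range
        if predict(x_adv) != true_label:              # one black-box query per candidate
            return x_adv                              # success: the prediction flipped
    return None                                       # attack failed within the query budget

# Usage with a toy stand-in classifier (hypothetical, for illustration only):
predict = lambda v: int(v.mean() > 0.5)
x = np.full(8, 0.55)                                  # benign input classified as 1
adv = random_search_attack(predict, x, true_label=1)
```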

Survey on adversarial attacks and defense of face forgery and detection

open access: yes; 网络与信息安全学报, 2023
Face forgery and its detection have become a research hotspot. Face forgery methods can produce fake face images and videos. Some malicious videos, often targeting celebrities, are widely circulated on social networks, damaging the reputation of victims and ...
Shiyu HUANG, Feng YE, Tianqiang HUANG, Wei LI, Liqing HUANG, Haifeng LUO
doaj   +3 more sources

Virtual Adversarial Training: A Regularization Method for Supervised and Semi-Supervised Learning [PDF]

open access: yes; IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017
We propose a new regularization method based on virtual adversarial loss: a new measure of local smoothness of the conditional label distribution given input (the objective is sketched after this entry).
Takeru Miyato   +3 more
semanticscholar   +1 more source
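
As a minimal sketch of the virtual adversarial training (VAT) objective summarized above (notation after Miyato et al.): the regularizer measures how much the predictive distribution changes under the worst-case small perturbation of the input, and it is added to the supervised loss with a weight over both labeled and unlabeled examples.

```latex
% Sketch of the VAT objective; \hat{\theta} is the current parameter value,
% treated as a constant when computing the divergence.
\begin{align}
  r_{\mathrm{vadv}} &= \operatorname*{arg\,max}_{\|r\|_2 \le \epsilon}
    D_{\mathrm{KL}}\!\left[ p(y \mid x, \hat{\theta}) \,\middle\|\, p(y \mid x + r, \theta) \right], \\
  \mathrm{LDS}(x, \theta) &=
    D_{\mathrm{KL}}\!\left[ p(y \mid x, \hat{\theta}) \,\middle\|\, p(y \mid x + r_{\mathrm{vadv}}, \theta) \right].
\end{align}
```

The full training loss adds the average LDS over labeled and unlabeled inputs, scaled by a regularization coefficient, to the usual supervised loss.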

Generative Adversarial Learning for Intelligent Trust Management in 6G Wireless Networks [PDF]

open access: yes; IEEE Network, 2022
The emerging sixth generation (6G) integrates heterogeneous wireless networks to seamlessly support networking anywhere and anytime. However, 6G must also offer a high level of trust to meet mobile users' expectations.
Liu Yang   +5 more
semanticscholar   +1 more source

EIFDAA: Evaluation of an IDS with function-discarding adversarial attacks in the IIoT

open access: yes; Heliyon, 2023
The complexity of the Industrial Internet of Things (IIoT) places higher demands on intrusion detection systems (IDSs). Adversarial attacks threaten the security of machine-learning-based IDSs.
Shiming Li   +4 more
doaj   +1 more source

RecGURU: Adversarial Learning of Generalized User Representations for Cross-Domain Recommendation [PDF]

open access: yes; Web Search and Data Mining, 2021
Cross-domain recommendation can help alleviate the data sparsity issue in traditional sequential recommender systems. In this paper, we propose the RecGURU algorithm framework to generate a Generalized User Representation (GUR) incorporating user ...
Chenglin Li   +7 more
semanticscholar   +1 more source

Adversarial Metric Learning [PDF]

open access: yes; Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence, 2018
Over the past decades, intensive efforts have been devoted to designing loss functions and metric forms for the metric learning problem. These improvements show promising results when the test data are similar to the training data. However, the trained models often fail to produce reliable distances on ambiguous test pairs due to the different ...
Shuo Chen   +5 more
openaire   +2 more sources

Impact of adversarial examples on deep learning models for biomedical image segmentation [PDF]

open access: yes, 2019
Deep learning models, which are increasingly being used in the field of medical image analysis, come with a major security risk, namely, their vulnerability to adversarial examples.
C Pena-Betancor   +3 more
core   +4 more sources

Self-supervised Learning of Adversarial Example: Towards Good Generalizations for Deepfake Detection [PDF]

open access: yes; Computer Vision and Pattern Recognition, 2022
Recent studies in deepfake detection have yielded promising results when the training and testing face forgeries are from the same dataset. However, the problem remains challenging when one tries to generalize the detector to forgeries created by unseen ...
Liang Chen   +4 more
semanticscholar   +1 more source
