Results 31 to 40 of about 156,834

Are Accuracy and Robustness Correlated?

open access: yes, 2016
Machine learning models are vulnerable to adversarial examples formed by applying small carefully chosen perturbations to inputs that cause unexpected classification errors.
Boult, Terrance E.   +2 more
core   +1 more source

Data Driven Exploratory Attacks on Black Box Classifiers in Adversarial Domains

open access: yes, 2017
While modern-day web applications aim to create impact at the civilization level, they have become vulnerable to adversarial activity, where the next cyber-attack can take any shape and can originate from anywhere. The increasing scale and sophistication ...
Kantardzic, Mehmed, Sethi, Tegjyot Singh
core   +1 more source

Adversarial support vector machine learning [PDF]

open access: yes, Proceedings of the 18th ACM SIGKDD international conference on Knowledge discovery and data mining, 2012
Many learning tasks such as spam filtering and credit card fraud detection face an active adversary that tries to avoid detection. For learning problems that deal with an active adversary, it is important to model the adversary's attack strategy and develop robust learning models to mitigate the attack. These are the two objectives of this paper.
Yan Zhou   +3 more
openaire   +1 more source

Denial of Service (DoS) Defences against Adversarial Attacks in IoT Smart Home Networks using Machine Learning Methods

open access: yes, NUST Journal of Engineering Sciences, 2022
The availability, integrity, and confidentiality of information are important factors in information and communication system security. A DDoS (Distributed Denial of Service) attack generates many enormous packets to slow ...
Zahid Iqbal   +3 more
doaj   +1 more source

Adversarial attacks on medical machine learning

open access: yes, Science, 2019
Emerging vulnerabilities demand new ...
Finlayson, Samuel G.   +5 more
openaire   +4 more sources

Adversarial Machine Learning in Text Processing: A Literature Survey

open access: yes, IEEE Access, 2022
Machine learning algorithms represent the intelligence that controls many information systems and applications around us. As such, they are targeted by attackers to impact their decisions.
Izzat Alsmadi   +11 more
doaj   +1 more source

Adversarial Controls for Scientific Machine Learning

open access: yes, ACS Chemical Biology, 2018
New machine learning methods to analyze raw chemical and biological data are now widely accessible as open-source toolkits. This positions researchers to leverage powerful, predictive models in their own domains. We caution, however, that the application of machine learning to experimental research merits careful consideration.
Kangway V. Chuang, Michael J. Keiser
openaire   +4 more sources

Breaking Machine Learning Models with Adversarial Attacks and its Variants

open access: yes, Proceedings of the International Florida Artificial Intelligence Research Society Conference
Machine learning models can be broken by adversarial attacks: subtle, imperceptible perturbations to inputs that cause the model to produce erroneous outputs.
Pavan Reddy
doaj   +1 more source

Investigation of the impact effectiveness of adversarial data leakage attacks on the machine learning models [PDF]

open access: yes, ITM Web of Conferences
Machine learning solutions have been successfully applied in many areas, so it is now important to ensure the security of the machine learning models themselves and to develop appropriate solutions and approaches.
Parfenov Denis   +3 more
doaj   +1 more source

Hardening quantum machine learning against adversaries

open access: yes, New Journal of Physics, 2018
Security for machine learning has begun to become a serious issue for present day applications. An important question remaining is whether emerging quantum technologies will help or hinder the security of machine learning. Here we discuss a number of ways that quantum information can be used to help make quantum classifiers more secure or private.
Nathan Wiebe, Ram Shankar Siva Kumar
openaire   +3 more sources
