Results 41 to 50 of about 882,964 (360)
RAPPOR: Randomized Aggregatable Privacy-Preserving Ordinal Response [PDF]
Randomized Aggregatable Privacy-Preserving Ordinal Response, or RAPPOR, is a technology for crowdsourcing statistics from end-user client software, anonymously, with strong privacy guarantees.
Ú. Erlingsson, A. Korolova, Vasyl Pihur
semanticscholar +1 more source
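RAPPOR builds on the classic randomized-response idea: each client perturbs its true answer before reporting, and the aggregator corrects for the known noise rate. The following is a minimal single-bit sketch of that idea only; the actual RAPPOR system adds Bloom-filter encoding and two rounds of randomization (permanent and instantaneous), which are not shown here. The function names and the parameter `p` are illustrative, not from the paper.

```python
import random

def randomized_response(truth: bool, p: float = 0.75) -> bool:
    """Report the true bit with probability p, otherwise flip it.

    Toy version of the randomized-response mechanism underlying RAPPOR.
    """
    return truth if random.random() < p else not truth

def estimate_true_fraction(reports, p: float = 0.75) -> float:
    """Unbiased estimate of the true fraction from noisy reports.

    E[observed] = p*f + (1-p)*(1-f), so f = (observed - (1-p)) / (2p - 1).
    """
    observed = sum(reports) / len(reports)
    return (observed - (1 - p)) / (2 * p - 1)

# Simulate 100k clients, of whom ~30% hold the sensitive bit.
random.seed(0)
reports = [randomized_response(random.random() < 0.3) for _ in range(100_000)]
est = estimate_true_fraction(reports)
```

Each individual report is deniable (it is wrong with probability 1 - p), yet the population-level estimate converges to the true fraction as the number of clients grows.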
Privacy is a Janus-faced value. It enables us to shut the world out, but the forms it takes and the extent to which it is protected are fundamentally public matters. Not surprisingly, then, privacy and its protection are the object of some of our most intractable conflicts over the proper role of the state and the rights and duties of individuals. This
openaire +3 more sources
Doppelganger Obfuscation — Exploring the Defensive and Offensive Aspects of Hardware Camouflaging
Hardware obfuscation is widely used in practice to counteract reverse engineering. In recent years, low-level obfuscation via camouflaged gates has been increasingly discussed in the scientific community and industry.
Max Hoffmann, Christof Paar
doaj +3 more sources
Deep learning (DL) has exhibited exceptional performance in fields like intrusion detection. Various augmentation methods have been proposed to improve data quality and, ultimately, the performance of DL models.
Yixiang Wang +4 more
doaj +1 more source
The Privacy Pragmatic as Privacy Vulnerable [PDF]
Abstract: Alan Westin’s well-known and often-used privacy segmentation fails to describe privacy markets or consumer choices accurately. The segmentation divides survey respondents into “privacy fundamentalists,” “privacy pragmatists,” and the “privacy unconcerned.” It describes the average consumer as a “privacy pragmatist” who influences market ...
openaire +2 more sources
The problem of analyzing the effect of privacy concerns on the behavior of selfish utility-maximizing agents has received much attention lately. Privacy concerns are often modeled by altering the utility functions of agents to consider also their privacy loss [4, 14, 20, 28].
Or Sheffet, Salil Vadhan, Yiling Chen
openaire +3 more sources
Deep neural networks are susceptible to various inference attacks as they remember information about their training data. We design white-box inference attacks to perform a comprehensive privacy analysis of deep learning models.
Milad Nasr, R. Shokri, Amir Houmansadr
semanticscholar +1 more source
Certified Robustness to Adversarial Examples with Differential Privacy [PDF]
Adversarial examples that fool machine learning models, particularly deep neural networks, have been a topic of intense research interest, with attacks and defenses being developed in a tight back-and-forth.
Mathias Lécuyer +4 more
semanticscholar +1 more source
A Comprehensive Survey of Privacy-preserving Federated Learning
The past four years have witnessed the rapid development of federated learning (FL). However, new privacy concerns have also emerged during the aggregation of the distributed intermediate results.
Xuefei Yin, Yanming Zhu, Jiankun Hu
semanticscholar +1 more source
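The aggregation step this survey focuses on is, in its plainest form, a weighted average of the clients' locally trained parameters (the FedAvg pattern). The sketch below shows only that baseline step, under the assumption of plain list-of-floats parameter vectors; the privacy-preserving variants the survey covers would replace or wrap this step with secure aggregation or differential-privacy noise.

```python
def federated_average(client_weights, client_sizes):
    """Size-weighted average of client parameter vectors (FedAvg-style).

    client_weights: one parameter vector (list of floats) per client.
    client_sizes:   number of local training examples per client.
    Returns the aggregated global parameter vector.
    """
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]

# Two clients; the second trained on 3x more data, so it dominates.
global_model = federated_average([[1.0, 2.0], [3.0, 4.0]], [1, 3])
```

The privacy risk the survey highlights arises precisely because the server sees each `client_weights[i]` in the clear; secure-aggregation protocols let it learn only the weighted sum.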
End-to-end privacy preserving deep learning on multi-institutional medical imaging
Using large, multi-national datasets for high-performance medical imaging AI systems requires innovation in privacy-preserving machine learning so models can train on sensitive data without requiring data transfer.
Georgios Kaissis +13 more
semanticscholar +1 more source