
On the Use of Penalty MCMC for Differential Privacy [PDF]

open access: yes; arXiv, 2016
We view the penalty algorithm of Ceperley and Dewing (1999), a Markov chain Monte Carlo (MCMC) algorithm for Bayesian inference, in the context of data privacy. Specifically, we study differential privacy of the penalty algorithm and advocate its use for data privacy.
arxiv  
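
The penalty algorithm referenced in this abstract is a Metropolis-Hastings variant that accepts proposals using a noisy estimate of the log acceptance ratio and subtracts half the noise variance so the chain still targets the exact distribution; that injected noise is what connects the method to data privacy. A minimal sketch under assumed settings (a standard-normal target and hand-picked tuning constants, neither taken from the paper):

```python
# Illustrative sketch of a Ceperley-Dewing penalty Metropolis-Hastings step,
# assuming Gaussian noise of *known* variance sigma2 is added to the log
# acceptance ratio. The target density and constants are assumptions made
# for illustration only.
import numpy as np

rng = np.random.default_rng(0)

def log_target(theta):
    # Hypothetical target: standard normal log-density (up to a constant).
    return -0.5 * theta ** 2

def penalty_mh(n_iters=5000, step=1.0, sigma2=0.5, theta0=0.0):
    theta = theta0
    samples = []
    for _ in range(n_iters):
        prop = theta + step * rng.standard_normal()
        # Noisy log acceptance ratio: the exact ratio plus Gaussian noise.
        lam_hat = (log_target(prop) - log_target(theta)
                   + np.sqrt(sigma2) * rng.standard_normal())
        # Penalty correction: subtracting sigma2 / 2 keeps the noisy chain
        # targeting the exact distribution.
        if np.log(rng.uniform()) < lam_hat - sigma2 / 2.0:
            theta = prop
        samples.append(theta)
    return np.array(samples)

if __name__ == "__main__":
    draws = penalty_mh()
    print("mean ~ 0:", draws.mean(), "var ~ 1:", draws.var())
```

Because the sigma2 / 2 term exactly compensates for Gaussian noise of known variance, the sketch keeps the stationary distribution unchanged even though every acceptance decision sees only a perturbed ratio.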

Wormhole Whispers: Reflecting User Privacy Data Boundaries Through Algorithm Visualization

open access: yes; Applied Sciences
As user interactions on online social platforms increase, so does public concern over the exposure of users' private data. However, ordinary users often lack a clear, intuitive understanding of how their personal online information flows and how ...
Xiaoxiao Wang   +3 more
doaj   +1 more source

Translating Commercial Health Data Privacy Ethics into Change. [PDF]

open access: yes; Am J Bioeth, 2023
Spector-Bagdady K, Price WN.
europepmc   +1 more source

The Privacy Policy Permission Model: A Unified View of Privacy Policies [PDF]

open access: yes; Transactions on Data Privacy, volume 14, number 1, pages 1-36, 2021
Organizations use privacy policies to communicate their data collection practices to their clients. A privacy policy is a set of statements that specifies how an organization gathers, uses, discloses, and maintains a client's data. However, most privacy policies lack a clear, complete explanation of how data providers' information is used. We propose a
arxiv  
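
To make the idea of policy statements concrete, here is a hypothetical sketch that treats each statement as a structured permission record; the field names and helper below are illustrative assumptions, not the actual Privacy Policy Permission Model proposed in the paper:

```python
# Hypothetical representation of privacy-policy statements as structured
# permission records (actor, action, data element, purpose). These fields
# are assumptions for illustration and are not taken from the paper.
from dataclasses import dataclass
from typing import List

@dataclass(frozen=True)
class PolicyStatement:
    actor: str         # who handles the data, e.g. "organization", "third party"
    action: str        # "collect", "use", "disclose", or "retain"
    data_element: str  # the kind of client data the statement covers
    purpose: str       # why the action is performed

def statements_about(policy: List[PolicyStatement], data_element: str):
    """Return every statement in a policy that touches a given data element."""
    return [s for s in policy if s.data_element == data_element]

if __name__ == "__main__":
    policy = [
        PolicyStatement("organization", "collect", "email address", "account creation"),
        PolicyStatement("third party", "use", "browsing history", "advertising"),
    ]
    print(statements_about(policy, "email address"))
```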

Constructing Privacy Channels from Information Channels [PDF]

open access: yes; arXiv, 2019
Data privacy protection studies how to query a dataset while preserving the privacy of individuals whose sensitive information is contained in the dataset. The information privacy model protects the privacy of an individual by using a noisy channel, called privacy channel, to filter out most information of the individual from the query's output. This
arxiv  
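
A simple, standard example of a noisy channel used for privacy is binary randomized response; the sketch below is only an assumed illustration of the general idea, not the privacy-channel construction proposed in the paper:

```python
# Minimal sketch of a "noisy channel" for privacy, illustrated with binary
# randomized response. This is a standard textbook mechanism, used here as
# an assumed example rather than the paper's construction.
import math
import random

def randomized_response(bit: int, epsilon: float) -> int:
    """Pass a sensitive bit through a noisy channel: keep it with
    probability e^eps / (1 + e^eps), flip it otherwise. The output
    satisfies epsilon-differential privacy for that bit."""
    p_keep = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    return bit if random.random() < p_keep else 1 - bit

def estimate_proportion(noisy_bits, epsilon):
    """Debias the channel's output to estimate the true proportion of 1s."""
    p_keep = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    observed = sum(noisy_bits) / len(noisy_bits)
    return (observed - (1 - p_keep)) / (2 * p_keep - 1)

if __name__ == "__main__":
    true_bits = [1] * 300 + [0] * 700
    eps = 1.0
    noisy = [randomized_response(b, eps) for b in true_bits]
    print("true 0.30, estimated:", round(estimate_proportion(noisy, eps), 3))
```

Lowering epsilon makes the channel flip bits more often, filtering out more of the individual's information at the cost of noisier aggregate estimates.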

Multi-P$^2$A: A Multi-perspective Benchmark on Privacy Assessment for Large Vision-Language Models [PDF]

open access: yes; arXiv
Large Vision-Language Models (LVLMs) exhibit impressive potential across various tasks but also face significant privacy risks, limiting their practical applications. Current research on privacy assessment for LVLMs is limited in scope, with gaps in both assessment dimensions and privacy categories.
arxiv  
