Results 11 to 20 of about 2,849

Minimax Mixing Time of the Metropolis-Adjusted Langevin Algorithm for Log-Concave Sampling [PDF]

open access: green, 2021
We study the mixing time of the Metropolis-adjusted Langevin algorithm (MALA) for sampling from a log-smooth and strongly log-concave distribution. We establish its optimal minimax mixing time under a warm start. Our main contribution is two-fold. First, for a $d$-dimensional log-concave density with condition number $κ$, we show that MALA with a warm ...
Keru Wu, Scott C. Schmidler, Yuansi Chen
openalex   +4 more sources

Nonconvex sampling with the Metropolis-adjusted Langevin algorithm [PDF]

open access: green, 2019
The Langevin Markov chain algorithms are widely deployed methods to sample from distributions in challenging high-dimensional and non-convex statistics and machine learning applications. Despite this, current bounds for the Langevin algorithms are slower than those of competing algorithms in many important situations, for instance when sampling from ...
Oren Mangoubi, Nisheeth K. Vishnoi
openalex   +3 more sources

Metropolis-adjusted Subdifferential Langevin Algorithm [PDF]

open access: green
The Metropolis-Adjusted Langevin Algorithm (MALA) is a widely used Markov Chain Monte Carlo (MCMC) method for sampling from high-dimensional distributions. However, MALA relies on differentiability assumptions that restrict its applicability.
Ning Ning
openalex   +3 more sources

On the Computational Complexity of Metropolis-Adjusted Langevin Algorithms for Bayesian Posterior Sampling [PDF]

open access: green, 2022
In this paper, we examine the computational complexity of sampling from a Bayesian posterior (or pseudo-posterior) using the Metropolis-adjusted Langevin algorithm (MALA). MALA first employs a discrete-time Langevin SDE to propose a new state, and then adjusts the proposed state using Metropolis-Hastings rejection. Most existing theoretical analyses of
Rong Tang, Yun Yang
openalex   +3 more sources
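The entry above describes MALA's two-step mechanism: an Euler discretisation of the Langevin SDE proposes a new state, then a Metropolis–Hastings accept/reject step corrects the discretisation bias. A minimal NumPy sketch of that mechanism (the standard-Gaussian target, step size, and seed are illustrative choices, not from the paper):

```python
import numpy as np

def mala_step(x, log_pi, grad_log_pi, step, rng):
    """One MALA step: Langevin proposal, then Metropolis-Hastings correction."""
    # Proposal: Euler discretisation of the Langevin SDE
    mean_fwd = x + step * grad_log_pi(x)
    y = mean_fwd + np.sqrt(2.0 * step) * rng.standard_normal(x.shape)
    # Gaussian proposal log-densities q(y|x) and q(x|y), up to a shared constant
    mean_bwd = y + step * grad_log_pi(y)
    log_q_fwd = -np.sum((y - mean_fwd) ** 2) / (4.0 * step)
    log_q_bwd = -np.sum((x - mean_bwd) ** 2) / (4.0 * step)
    # Metropolis-Hastings accept/reject removes the discretisation bias
    log_alpha = log_pi(y) - log_pi(x) + log_q_bwd - log_q_fwd
    if np.log(rng.uniform()) < log_alpha:
        return y
    return x

# Illustrative target: a 2-d standard Gaussian (log-density up to a constant)
log_pi = lambda x: -0.5 * np.sum(x ** 2)
grad_log_pi = lambda x: -x

rng = np.random.default_rng(0)
x = np.zeros(2)
samples = []
for _ in range(5000):
    x = mala_step(x, log_pi, grad_log_pi, step=0.5, rng=rng)
    samples.append(x.copy())
samples = np.asarray(samples)
print(samples.mean(axis=0), samples.var(axis=0))
```

Because the MH filter leaves the target invariant exactly, the empirical mean and variance should approach 0 and 1 for this target; the step size trades off acceptance rate against mixing speed, which is what the complexity analyses in these entries quantify.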

Large deviations for Independent Metropolis Hastings and Metropolis-adjusted Langevin algorithm [PDF]

open access: green
In this paper, we prove large deviation principles for the empirical measures associated with the Independent Metropolis Hastings (IMH) sampler and the Metropolis-adjusted Langevin Algorithm (MALA). These are the first large deviation results for empirical measures of Markov chains arising from specific Metropolis-Hastings methods on a continuous state
Federica Milinanni, Pierre Nyquist
  +5 more sources

Fast sampling from constrained spaces using the Metropolis-adjusted Mirror Langevin algorithm [PDF]

open access: green, 2023
We propose a new method called the Metropolis-adjusted Mirror Langevin algorithm for approximate sampling from distributions whose support is a compact and convex set. This algorithm adds an accept-reject filter to the Markov chain induced by a single step of the Mirror Langevin algorithm (Zhang et al., 2020), which is a basic discretisation of the ...
Vishwak Srinivasan   +2 more
openalex   +3 more sources

Optimal scaling results for Moreau-Yosida Metropolis-adjusted Langevin algorithms [PDF]

open access: hybrid, Bernoulli
We consider a recently proposed class of MCMC methods which uses proximity maps instead of gradients to build proposal mechanisms which can be employed for both differentiable and non-differentiable targets. These methods have been shown to be stable for a wide class of targets, making them a valuable alternative to Metropolis-adjusted Langevin ...
Francesca R. Crucinio   +3 more
openalex   +4 more sources

Particle Metropolis adjusted Langevin algorithms for state space models [PDF]

open access: green, 2014
Particle MCMC is a class of algorithms that can be used to analyse state-space models. They use MCMC moves to update the parameters of the models, and particle filters to propose values for the path of the state-space model. Currently the default is to use random walk Metropolis to update the parameter values.
Christopher Nemeth, Paul Fearnhead
  +6 more sources

A Metropolis-Adjusted Langevin Algorithm for Sampling Jeffreys Prior [PDF]

open access: green
Inference and estimation are fundamental in statistics, system identification, and machine learning. When prior knowledge about the system is available, Bayesian analysis provides a natural framework for encoding it through a prior distribution. In practice, such knowledge is often too vague to specify a full prior distribution, motivating the use of ...
Yibo Shi   +2 more
  +5 more sources
