Results 21 to 30 of about 1,380,157 (288)
This article presents an approximation of discrete-time Markov decision processes with small noise on Borel spaces, with an infinite horizon and an expected total discounted cost, by the corresponding deterministic Markov process.
Portillo-Ramírez Gustavo +3 more
doaj +1 more source
Rate of Convergence for Cardy’s Formula [PDF]
We show that crossing probabilities in 2D critical site percolation on the triangular lattice in a piecewise analytic Jordan domain converge with power law rate in the mesh size to their limit given by the Cardy-Smirnov formula. We use this result to obtain new upper and lower bounds of exp(O(sqrt(log log R))) R^(-1/3) for the probability that the ...
Nachmias, Asaf +2 more
openaire +3 more sources
Convergence of the empirical spectral measure of unitary Brownian motion [PDF]
Let $\{U^N_t\}_{t\ge 0}$ be a standard Brownian motion on $\mathbb{U}(N)$. For fixed $N\in\mathbb{N}$ and $t>0$, we give explicit bounds on the $L_1$-Wasserstein distance of the empirical spectral measure of $U^N_t$ to both the ensemble-averaged spectral
Meckes, Elizabeth, Melcher, Tai
core +3 more sources
Rate of Convergence Towards Hartree Dynamics
We consider a system of N bosons interacting through a two-body potential with, possibly, Coulomb-type singularities. We show that the difference between the many-body Schrödinger evolution in the mean-field regime and the effective nonlinear Hartree ...
A. Elgart +14 more
core +1 more source
Convergence Rates for Markov Chains [PDF]
Summary: This is an expository paper that presents various ideas related to nonasymptotic rates of convergence for Markov chains. Such rates are of great importance for stochastic algorithms that are widely used in statistics and in computer science. They also have applications to analysis of card shuffling and other areas.
openaire +2 more sources
On Convergence Rate of MRetrace
Off-policy learning is a key setting for reinforcement learning algorithms. In recent years, the stability of off-policy learning for value-based reinforcement learning has been guaranteed even when combined with linear function approximation and bootstrapping ...
Xingguo Chen +4 more
doaj +1 more source
Tight Global Linear Convergence Rate Bounds for Douglas-Rachford Splitting
Recently, several authors have shown local and global convergence rate results for Douglas-Rachford splitting under strong monotonicity, Lipschitz continuity, and cocoercivity assumptions. Most of these focus on the convex optimization setting.
Giselsson, Pontus
core +1 more source
Convergence Rates for Generalized Descents [PDF]
d-descents are permutation statistics that generalize the notions of descents and inversions. It is known that the distribution of d-descents of permutations of length n satisfies a central limit theorem as n goes to infinity. We provide an explicit formula for the mean and variance of these statistics and obtain bounds on the rate of convergence using
openaire +2 more sources
Persistence in real exchange rate convergence [PDF]
In this paper we use a long memory framework to examine the validity of the Purchasing Power Parity (PPP) hypothesis using both monthly and quarterly data for a panel of 47 countries over a 50 year period (1957–2009). The analysis focuses on the long memory parameter d that allows us to obtain different convergence classifications depending on ...
Thanasis Stengos, M. Ege Yazgan
openaire +2 more sources
Discretizing the Heston Model: An Analysis of the Weak Convergence Rate
In this manuscript we analyze the weak convergence rate of a discretization scheme for the Heston model. Under mild assumptions on the smoothness of the payoff and on the Feller index of the volatility process, respectively, we establish a weak ...
Altmayer, Martin, Neuenkirch, Andreas
core +1 more source

