Results 261 to 270 of about 102,876
Some of the following articles may not be open access.

Spectral gap of nonreversible Markov chains

The Annals of Applied Probability, 2023
We define the spectral gap of a Markov chain on a finite state space as the second-smallest singular value of the generator of the chain, generalizing the usual definition of spectral gap for reversible chains.
Sourav Chatterjee
semanticscholar   +1 more source
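The definition in the abstract can be illustrated numerically. A minimal sketch, assuming the setting described above (the 3-state generator `Q` below is an arbitrary example, not taken from the paper):

```python
import numpy as np

# Generator of a small nonreversible 3-state chain (rows sum to zero);
# an arbitrary illustrative example, not from the paper.
Q = np.array([
    [-1.0,  0.7,  0.3],
    [ 0.2, -0.9,  0.7],
    [ 0.6,  0.4, -1.0],
])

# Singular values of Q in ascending order. The smallest is ~0, since
# Q annihilates the constant vector (Q @ 1 = 0); the second-smallest
# is the spectral gap in the sense of the abstract.
sigma = np.sort(np.linalg.svd(Q, compute_uv=False))
gap = sigma[1]
```

For a reversible chain this singular-value gap reduces to the usual eigenvalue gap, which is why the abstract calls it a generalization.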

Gain-Scheduled Finite-Time Synchronization for Reaction–Diffusion Memristive Neural Networks Subject to Inconsistent Markov Chains

IEEE Transactions on Neural Networks and Learning Systems, 2020
An innovative class of drive-response systems that are composed of Markovian reaction–diffusion memristive neural networks, where the drive and response systems follow inconsistent Markov chains, is proposed in this article.
Xiaona Song   +3 more
semanticscholar   +1 more source

Permanental sequences related to a Markov chain example of Kolmogorov

Stochastic Processes and their Applications, 2020
Michael B. Marcus, Jay Rosen
openaire   +1 more source

A probabilistic approach to convex (ϕ)-entropy decay for Markov chains

The Annals of Applied Probability, 2020
We study the exponential dissipation of entropic functionals for continuous time Markov chains and the associated convex Sobolev inequalities, including MLSI and Beckner inequalities.
Giovanni Conforti
semanticscholar   +1 more source

An instructive example of absorbing Markov chain

International Journal of Mathematical Education in Science and Technology, 1996
Absorbing Markov chains have been used for modelling various phenomena. Typical examples used when the subject is introduced to students include the gambler's ruin problem and the accounts receivable analysis. Once an absorbing Markov chain model is developed, however, students are provided with a very limited number of tools to analyse it, such as the ...
openaire   +1 more source
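For the gambler's ruin example mentioned in the abstract, the standard analysis tool is the fundamental matrix of the absorbing chain; a minimal sketch, with the stakes (states 0..4, fair coin) chosen arbitrarily for illustration:

```python
import numpy as np

# Gambler's ruin on states 0..4 with a fair coin; 0 and 4 absorb.
# Transient states are 1, 2, 3: Q is the transient-to-transient block
# of the transition matrix, R the transient-to-absorbing block.
Q = np.array([
    [0.0, 0.5, 0.0],
    [0.5, 0.0, 0.5],
    [0.0, 0.5, 0.0],
])
R = np.array([
    [0.5, 0.0],   # from state 1 to {0, 4}
    [0.0, 0.0],   # from state 2
    [0.0, 0.5],   # from state 3
])

# Fundamental matrix N = (I - Q)^{-1}: N[i, j] is the expected number
# of visits to transient state j from transient state i before absorption.
N = np.linalg.inv(np.eye(3) - Q)

# Expected number of steps to absorption from each transient state.
t = N @ np.ones(3)    # -> [3, 4, 3]

# Absorption probabilities B = N @ R: row i gives P(ruin), P(win).
B = N @ R             # from state 2 -> [0.5, 0.5]
```

This reproduces the familiar facts that the expected duration from fortune i is i(4 - i) and that the game from the midpoint is fair.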

A learning scheme for stationary probabilities of large Markov chains with examples

2008 46th Annual Allerton Conference on Communication, Control, and Computing, 2008
We describe a reinforcement learning based scheme to estimate the stationary distribution of subsets of states of large Markov chains. 'Split sampling' ensures that the algorithm need only encode the state transitions and does not need to know any other property of the Markov chain.
V. S. Borkar   +3 more
openaire   +1 more source
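The simulation-based idea behind such schemes can be sketched with a plain occupation-frequency estimator; this is not the paper's split-sampling algorithm, only a baseline that likewise uses sampled transitions rather than the transition matrix itself (the 3-state chain `P` is an arbitrary example):

```python
import numpy as np

rng = np.random.default_rng(0)

# A small ergodic chain; in the paper's setting only sampled
# transitions, not P itself, would be available to the algorithm.
P = np.array([
    [0.5, 0.5, 0.0],
    [0.1, 0.6, 0.3],
    [0.2, 0.2, 0.6],
])

# Estimate the stationary distribution from visit frequencies along a
# single long trajectory (ergodic theorem), using only transitions.
state = 0
counts = np.zeros(3)
for _ in range(100_000):
    counts[state] += 1
    state = rng.choice(3, p=P[state])
pi_hat = counts / counts.sum()

# Exact stationary distribution for comparison: the left eigenvector
# of P for eigenvalue 1, normalized to sum to one.
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmax(np.real(w))])
pi = pi / pi.sum()
```

The point of schemes like the one in the paper is to do better than this naive estimator when the chain is too large to visit most states.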

Bounding Mean First Passage Times in Population Continuous-Time Markov Chains

International Conference on Quantitative Evaluation of Systems, 2019
We consider the problem of bounding mean first passage times for a class of continuous-time Markov chains that captures stochastic interactions between groups of identical agents.
Michael Backenköhler   +2 more
semanticscholar   +1 more source
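For a small continuous-time chain the mean first passage times the abstract refers to can be computed exactly by a linear solve, which is the baseline that bounding methods replace when the state space is too large (the generator below is an arbitrary example, not a population model from the paper):

```python
import numpy as np

# Generator of a 3-state CTMC (rows sum to zero); target state = 2.
Q = np.array([
    [-2.0,  1.5,  0.5],
    [ 1.0, -3.0,  2.0],
    [ 0.5,  0.5, -1.0],
])

# Mean first passage times t_i to the target satisfy (Q t)_i = -1 for
# every non-target state i, with t_target = 0. Restricting Q to the
# non-target states therefore gives the linear system Q' t = -1.
others = [0, 1]
Qp = Q[np.ix_(others, others)]
t = np.linalg.solve(Qp, -np.ones(2))   # t[i] = E[hitting time of 2 from others[i]]
```

Here the solve gives t = [1, 2/3]; for the population models treated in the paper this system is intractably large, which motivates bounds instead of exact solutions.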

Examples of Markov Chains with Larger State Spaces

2010
In Chapter 6, we took advantage of the simplicity of 2-state chains to introduce fundamental ideas of Markov dependence and long-run behavior using only elementary mathematics. Markov chains taking more than two values are needed in many simulations of practical importance.
Eric A. Suess, Bruce E. Trumbo
openaire   +1 more source

Optimal Linear Responses for Markov Chains and Stochastically Perturbed Dynamical Systems

Journal of statistical physics, 2018
The linear response of a dynamical system refers to changes to properties of the system when small external perturbations are applied. We consider the little-studied question of selecting an optimal perturbation so as to (i) maximise the linear response ...
Fadi Antown   +2 more
semanticscholar   +1 more source
