Results 51 to 60 of about 2,698,128

Quantum Markov chains associated with open quantum random walks [PDF]

open access: yes, 2018
In this paper we construct (nonhomogeneous) quantum Markov chains associated with open quantum random walks. The quantum Markov chain, like the classical Markov chain, is a fundamental tool for investigating basic properties of the underlying dynamics, such as reducibility/irreducibility, recurrence/transience, accessibility, and ergodicity.
arxiv   +1 more source
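A note on the dynamics referenced above: an open quantum random walk evolves a family of positive matrices ρ_i (one per vertex) via transition operators B[i][j]. Below is a minimal numpy sketch of a single step in the standard formulation of Attal et al.; the vertex set, internal dimension, and operators are illustrative assumptions of this note, not the construction of quantum Markov chains made in the paper.

```python
import numpy as np

# Minimal sketch: one step of an open quantum random walk (OQRW) on two
# vertices with a 2-dimensional internal space.  Illustrative only; the
# operators below are assumptions, not taken from the paper.

d, V = 2, 2  # internal dimension, number of vertices

# Transition operators B[i][j]: "move from vertex j to vertex i".
# They must satisfy sum_i B[i][j]^dagger B[i][j] = I for every j.
p = 0.3
flip = np.array([[0, 1], [1, 0]])
B = np.zeros((V, V, d, d), dtype=complex)
B[0, 0] = np.sqrt(1 - p) * np.eye(d)   # stay at vertex 0
B[1, 0] = np.sqrt(p) * flip            # hop 0 -> 1, flip internal state
B[1, 1] = np.sqrt(1 - p) * np.eye(d)   # stay at vertex 1
B[0, 1] = np.sqrt(p) * flip            # hop 1 -> 0, flip internal state

# completeness check at each vertex j
for j in range(V):
    S = sum(B[i, j].conj().T @ B[i, j] for i in range(V))
    assert np.allclose(S, np.eye(d))

def oqrw_step(rho):
    """One OQRW step: rho_i' = sum_j B[i][j] rho_j B[i][j]^dagger."""
    return [sum(B[i, j] @ rho[j] @ B[i, j].conj().T for j in range(V))
            for i in range(V)]

# initial state: internal state |0><0| localised at vertex 0
rho = [np.diag([1.0 + 0j, 0.0]), np.zeros((d, d), dtype=complex)]
for _ in range(5):
    rho = oqrw_step(rho)
print("occupation probabilities:", [float(np.real(np.trace(r))) for r in rho])
```

The trace of ρ_i gives the probability of finding the walker at vertex i, which is where the classical-looking random-walk behaviour comes from.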

Double Coset Markov Chains [PDF]

open access: yes, arXiv, 2022
Let $G$ be a finite group. Let $H, K$ be subgroups of $G$ and $H \backslash G / K$ the double coset space. Let $Q$ be a probability on $G$ which is constant on conjugacy classes ($Q(s^{-1} t s) = Q(t)$). The random walk driven by $Q$ on $G$ projects to a Markov chain on $H \backslash G /K$.
arxiv  
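The projection claim in this abstract can be checked exhaustively on a tiny example. The sketch below is my own toy setup, not code from the paper: it takes G = S_3, a probability Q that is constant on conjugacy classes, and order-2 subgroups H and K, and verifies that the distribution of the next double coset depends only on the current double coset, which is exactly what makes the projected process a Markov chain.

```python
from fractions import Fraction
from itertools import permutations

def comp(a, b):
    """Compose permutations given as tuples: (a o b)[i] = a[b[i]]."""
    return tuple(a[b[i]] for i in range(len(b)))

G = list(permutations(range(3)))
e = (0, 1, 2)
transpositions = [(1, 0, 2), (0, 2, 1), (2, 1, 0)]

# Q is a class function: mass only on the identity and the transposition class.
Q = {g: Fraction(0) for g in G}
Q[e] = Fraction(2, 5)
for t in transpositions:
    Q[t] = Fraction(1, 5)

H = [e, (1, 0, 2)]   # subgroup generated by the transposition (0 1)
K = [e, (2, 1, 0)]   # subgroup generated by the transposition (0 2)

def double_coset(g):
    """HgK as a frozenset, used as a canonical label."""
    return frozenset(comp(comp(h, g), k) for h in H for k in K)

def next_coset_dist(g):
    """Distribution of the double coset of t*g when t ~ Q."""
    dist = {}
    for t, q in Q.items():
        if q:
            c = double_coset(comp(t, g))
            dist[c] = dist.get(c, Fraction(0)) + q
    return dist

# Group the elements of G by their double coset and check that the projected
# kernel is well defined, i.e. the next-coset distribution is the same for
# every representative of a given double coset.
by_coset = {}
for g in G:
    by_coset.setdefault(double_coset(g), []).append(g)

for coset, members in by_coset.items():
    dists = [next_coset_dist(g) for g in members]
    assert all(d == dists[0] for d in dists), "projection would not be Markov"

print(len(by_coset), "double cosets; next-coset distribution is constant on each")
```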

Observer-Based Controller Design for a Class of Nonlinear Networked Control Systems with Random Time-Delays Modeled by Markov Chains

open access: yes, Journal of Control Science and Engineering, 2017
This paper investigates the observer-based controller design problem for a class of nonlinear networked control systems with random time-delays. The nonlinearity is assumed to satisfy a global Lipschitz condition and two dependent Markov chains are ...
Yanfeng Wang   +3 more
doaj   +1 more source
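For orientation, the random time-delay model mentioned in this abstract can be sketched as two finite-state Markov chains driving the sensor-to-controller and controller-to-actuator delays. The snippet below samples two such chains independently with made-up state sets and transition matrices; the paper works with two (possibly dependent) chains and an observer-based controller design, neither of which is reproduced here.

```python
import numpy as np

# Sketch of the delay model only, with illustrative assumptions: two
# finite-state Markov chains governing the sensor-to-controller delay tau_k
# and the controller-to-actuator delay d_k (in sampling steps).

rng = np.random.default_rng(0)

tau_states = [0, 1, 2]              # possible sensor-to-controller delays
d_states = [0, 1]                   # possible controller-to-actuator delays

P_tau = np.array([[0.7, 0.2, 0.1],  # row-stochastic transition matrix for tau_k
                  [0.3, 0.5, 0.2],
                  [0.2, 0.3, 0.5]])
P_d = np.array([[0.8, 0.2],         # transition matrix for d_k
                [0.4, 0.6]])

def simulate_chain(P, n_steps, x0=0):
    """Sample a path of a finite Markov chain with transition matrix P."""
    path = [x0]
    for _ in range(n_steps - 1):
        path.append(rng.choice(len(P), p=P[path[-1]]))
    return path

tau_path = [tau_states[i] for i in simulate_chain(P_tau, 20)]
d_path = [d_states[i] for i in simulate_chain(P_d, 20)]
print("tau_k:", tau_path)
print("d_k:  ", d_path)
```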

On the rate of convergence for a class of Markovian queues with group services

open access: yes, Discrete and Continuous Models and Applied Computational Science, 2020
There are many queueing systems that accept single arrivals, accumulate them, and serve them only as a group. Examples of such systems arise in various areas of life, from transport traffic to the processing of requests in a computer network. Therefore, our ...
Anastasia L. Kryukova
doaj   +1 more source
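As a rough companion to this entry, the sketch below builds the generator of a small, homogeneous batch-service queue (single arrivals, services that remove up to k customers at once) truncated at N states and reads off a convergence rate from its spectral gap. This is only a numerical stand-in of mine; the paper derives analytic rate-of-convergence bounds and is not limited to a homogeneous, truncated chain.

```python
import numpy as np

# Toy model: single arrivals at rate lam (n -> n+1) and group services at rate
# mu that remove up to k customers (n -> n - min(n, k)), truncated at N.
# For this finite homogeneous chain the exponential rate of convergence to the
# stationary distribution is governed by the spectral gap of the generator.

lam, mu, k, N = 1.0, 0.6, 3, 30       # illustrative parameters

A = np.zeros((N + 1, N + 1))
for n in range(N + 1):
    if n < N:
        A[n, n + 1] += lam            # single arrival
    if n > 0:
        A[n, n - min(n, k)] += mu     # group service
    A[n, n] = -A[n].sum()             # generator rows sum to zero

eigs = np.linalg.eigvals(A)
gap = -sorted(e.real for e in eigs)[-2]   # largest real part is 0 (stationarity)
print(f"spectral gap (convergence rate) ~ {gap:.4f}")
```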

Infinite dimensional entangled Markov chains [PDF]

open access: yes, arXiv, 2004
We continue the analysis of nontrivial examples of quantum Markov processes. This is done by applying the construction of entangled Markov chains obtained from classical Markov chains with infinite state space. The formula giving the joint correlations arises from the corresponding classical formula by replacing the usual matrix multiplication by the ...
arxiv  

Dynamical Systems and Markov Chains [PDF]

open access: yes, arXiv, 2020
This project works through one example of a stochastic matrix to understand how Markov chains evolve and how to use them to make faster and better decisions by looking only at the present state of the system.
arxiv  
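The idea in this entry, iterating a stochastic matrix and reasoning only from the present state, is easy to demonstrate. The following sketch uses an arbitrary 3-state row-stochastic matrix of mine (not the one studied in the project) and compares the distribution after 50 steps with the stationary distribution.

```python
import numpy as np

# Iterate a row-stochastic matrix and watch the state distribution converge.
P = np.array([[0.9, 0.1, 0.0],
              [0.2, 0.7, 0.1],
              [0.1, 0.3, 0.6]])        # rows sum to 1

x = np.array([1.0, 0.0, 0.0])          # start in state 0 with certainty
for _ in range(50):
    x = x @ P                          # one step of the chain's distribution

# stationary distribution: left eigenvector of P for eigenvalue 1
w, v = np.linalg.eig(P.T)
pi = np.real(v[:, np.argmin(np.abs(w - 1))])
pi = pi / pi.sum()

print("distribution after 50 steps:", np.round(x, 4))
print("stationary distribution:    ", np.round(pi, 4))
```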

Bounds on the rate of convergence for one class of inhomogeneous Markovian queueing models with possible batch arrivals and services

open access: yes, International Journal of Applied Mathematics and Computer Science, 2018
In this paper we present a method for the computation of convergence bounds for four classes of multiserver queueing systems, described by inhomogeneous Markov chains.
Zeifman Alexander   +5 more
doaj   +1 more source
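To make the setting of this entry concrete, the sketch below integrates the forward Kolmogorov equations dp/dt = p A(t) for a truncated, time-inhomogeneous queue with batch arrivals from two different initial distributions and prints how quickly the two solutions approach each other in total variation. The rates, batch sizes, and truncation level are my own illustrative choices; the paper's contribution is analytic bounds on this kind of convergence, not a numerical experiment.

```python
import numpy as np

N = 40                                  # truncation level

def A(t):
    """Time-dependent generator: arrivals in batches of 1 or 2, single services."""
    lam1 = 1.0 + 0.5 * np.sin(t)        # rate of single arrivals
    lam2 = 0.3                          # rate of arrivals in batches of two
    mu = 2.0 + np.cos(t)                # service rate
    G = np.zeros((N + 1, N + 1))
    for n in range(N + 1):
        if n + 1 <= N:
            G[n, n + 1] += lam1
        if n + 2 <= N:
            G[n, n + 2] += lam2
        if n >= 1:
            G[n, n - 1] += mu
        G[n, n] = -G[n].sum()           # generator rows sum to zero
    return G

def integrate(p0, t_end, dt=1e-3):
    """Explicit Euler integration of dp/dt = p A(t)."""
    p, t = p0.copy(), 0.0
    while t < t_end:
        p = p + dt * (p @ A(t))
        t += dt
    return p

p_empty = np.zeros(N + 1); p_empty[0] = 1.0     # start with an empty queue
p_full = np.zeros(N + 1); p_full[N] = 1.0       # start with a full queue

for T in (1.0, 5.0, 10.0):
    d = 0.5 * np.abs(integrate(p_empty, T) - integrate(p_full, T)).sum()
    print(f"total variation distance at t = {T:4.1f}: {d:.4f}")
```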

SimInf: An R Package for Data-Driven Stochastic Disease Spread Simulations

open access: yes, Journal of Statistical Software, 2019
We present the R package SimInf, which provides an efficient and very flexible framework for data-driven epidemiological modeling in realistic, large-scale disease spread simulations.
Stefan Widgren   +3 more
doaj   +1 more source

Strong Stationary Duality for Möbius Monotone Markov Chains: Unreliable Networks [PDF]

open access: yes, arXiv, 2011
For Markov chains with a partially ordered finite state space we show strong stationary duality under the condition of Möbius monotonicity of the chain. We show relations of Möbius monotonicity to other definitions of monotone chains. We give examples of dual chains in this context which have transitions only upwards.
arxiv  
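For context, strong stationary duality in the Diaconis-Fill sense (which I assume is the notion used here) asks for a dual chain $(\nu^{*}, P^{*})$ and a stochastic link kernel $\Lambda$ satisfying

$$\nu = \nu^{*}\Lambda, \qquad \Lambda P = P^{*}\Lambda,$$

where $(\nu, P)$ is the initial distribution and transition kernel of the original chain; an absorption time of the dual chain is then a strong stationary time for the original one. The Möbius-monotonicity condition under which the paper constructs such duals is specific to the paper and not restated here.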

State-feedback stabilization of Markov jump linear systems with randomly observed Markov states [PDF]

open access: yes, 2014
In this paper we study the state-feedback stabilization of a discrete-time Markov jump linear system when the observation of the Markov chain of the system, called the Markov state, is time-randomized by another Markov chain. Embedding the Markov state into an extended Markov chain, we transform the given system with time-randomized observations to ...
arxiv   +1 more source
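As a reading aid (notation mine, not necessarily the paper's), the setting appears to be a standard discrete-time Markov jump linear system with mode-dependent state feedback,

$$x_{k+1} = A_{\theta_k} x_k + B_{\theta_k} u_k, \qquad u_k = K_{\hat{\theta}_k} x_k,$$

where $\theta_k$ is the Markov state and $\hat{\theta}_k$ is its most recently observed value, updated only at observation instants that are themselves randomized by a second Markov chain; the extended-chain embedding described in the abstract is what restores a standard jump-linear structure.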
