Results 11 to 20 of about 401,816
Background. In modern military conflicts, the problem of effectively controlling large groups of robots has become pressing. The large amount of data associated with the operation of robotic systems (RS) determines the existence of ...
S.V. Ivanov +3 more
doaj +1 more source
Line spectrum extraction method based on hidden Markov model
To solve the difficult problem of line spectrum detection with a low signal‐to‐noise ratio, a line spectrum extraction algorithm based on a hidden Markov model (HMM) is proposed.
Kai Ma +4 more
doaj +1 more source
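The HMM approach to line-spectrum extraction can be illustrated with a generic Viterbi decoder: the hidden state is the frequency bin occupied by the line at each time frame, transitions favour staying in or near the current bin, and emissions are the observed powers. This is a minimal sketch of that general idea, not the specific algorithm of the cited paper; the transition model and `stay_prob` parameter are assumptions.

```python
import numpy as np

def viterbi_line_track(spectrogram, stay_prob=0.8):
    """Most-likely frequency-bin track through a noisy spectrogram.

    Generic HMM/Viterbi sketch: hidden state = frequency bin of the line
    at each frame; the line may stay put or drift by one bin per frame.
    The transition model is an illustrative assumption.
    """
    T, F = spectrogram.shape
    move_prob = (1.0 - stay_prob) / 2.0
    log_emit = np.log(spectrogram + 1e-12)

    delta = log_emit[0].copy()          # best log-score ending in each bin
    back = np.zeros((T, F), dtype=int)  # backpointers

    for t in range(1, T):
        new_delta = np.full(F, -np.inf)
        for f in range(F):
            # Best predecessor among: stay, or step from a neighbouring bin.
            for df, p in ((-1, move_prob), (0, stay_prob), (1, move_prob)):
                src = f - df
                if 0 <= src < F:
                    score = delta[src] + np.log(p)
                    if score > new_delta[f]:
                        new_delta[f] = score
                        back[t, f] = src
        delta = new_delta + log_emit[t]

    # Backtrack the maximum-likelihood path.
    track = np.zeros(T, dtype=int)
    track[-1] = int(np.argmax(delta))
    for t in range(T - 1, 0, -1):
        track[t - 1] = back[t, track[t]]
    return track
```

On a synthetic spectrogram with a strong line in one bin and weak noise elsewhere, the decoded track locks onto the line even though each individual frame is noisy, which is the point of pooling evidence across frames with an HMM.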
Geometric branching reproduction Markov processes
We present a model of a continuous-time Markov branching process with the infinitesimal generating function defined by the geometric probability distribution.
Assen Tchorbadjieff, Penka Mayster
doaj +1 more source
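A continuous-time Markov branching process with geometric reproduction can be sketched by direct simulation: each particle lives an exponentially distributed lifetime and, on death, is replaced by a geometrically distributed number of offspring. The parameters `rate`, `p`, and the choice of geometric support on {0, 1, 2, ...} are illustrative assumptions, not values taken from the cited paper.

```python
import random

def simulate_geometric_branching(rate=1.0, p=0.5, t_max=5.0, z0=1, rng=None):
    """Simulate a continuous-time Markov branching process where each
    particle, after an Exp(rate) lifetime, is replaced by a Geometric(p)
    number of offspring (support 0, 1, 2, ...).

    Returns the embedded jump chain as a list of (time, population) pairs.
    Illustrative sketch only.
    """
    rng = rng or random.Random()
    t, z = 0.0, z0
    history = [(t, z)]
    while z > 0:
        # With z independent particles, the next death is Exp(z * rate).
        t += rng.expovariate(z * rate)
        if t > t_max:
            break
        # Geometric(p) on {0, 1, ...}: failures before the first success.
        offspring = 0
        while rng.random() > p:
            offspring += 1
        z += offspring - 1  # one particle dies, 'offspring' are born
        history.append((t, z))
    return history
```

With p = 0.5 the mean offspring number is (1 - p)/p = 1, so this sketch simulates the critical case; varying `p` moves the process between the subcritical and supercritical regimes.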
Controlled Discrete-Time Semi-Markov Random Evolutions and Their Applications
In this paper, we introduce controlled discrete-time semi-Markov random evolutions. These processes are random evolutions of discrete-time semi-Markov processes in which a control is applied to the values of the random evolution.
Anatoliy Swishchuk, Nikolaos Limnios
doaj +1 more source
Markov Processes in Data Center Networks
A data center network is an important infrastructure in various applications of modern information technologies. Data centers store files with useful information, but the lifetime of these data centers is limited.
Fan-Qi Ma, Rui-Na Fan
doaj +1 more source
Synchronizing Objectives for Markov Decision Processes [PDF]
We introduce synchronizing objectives for Markov decision processes (MDP). Intuitively, a synchronizing objective requires that eventually, at every step there is a state which concentrates almost all the probability mass.
Mahsa Shirmohammadi +2 more
doaj +1 more source
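The intuition behind a synchronizing objective can be made concrete for a fixed policy: an MDP under a fixed policy induces a Markov chain, and one can iterate the state distribution and record how much probability mass sits in the single most likely state at each step. This is a minimal sketch of that informal reading, not the formal objectives or algorithms of the cited paper.

```python
import numpy as np

def max_concentration(P, mu0, steps=50):
    """Track the peak probability mass of a Markov chain's distribution.

    P is a row-stochastic transition matrix (a fixed policy applied to an
    MDP induces such a chain); mu0 is the initial distribution. Returns
    max_s mu_t(s) for t = 0, ..., steps-1. A synchronizing behaviour
    shows up as this peak approaching 1.
    """
    mu = np.asarray(mu0, dtype=float)
    peaks = []
    for _ in range(steps):
        peaks.append(mu.max())
        mu = mu @ P  # one step of the chain: mu_{t+1} = mu_t P
    return peaks
```

For a chain with an absorbing state, the peak mass climbs toward 1 as all mass concentrates there, which matches the intuition that "eventually, at every step, one state holds almost all the probability mass."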
Feature Markov Decision Processes [PDF]
General purpose intelligent learning agents cycle through (complex, non-MDP) sequences of observations, actions, and rewards. On the other hand, reinforcement learning is well-developed for small finite state Markov Decision Processes (MDPs). So far it is ...
Hutter, Marcus
core +4 more sources
Estimation and control using sampling-based Bayesian reinforcement learning
Real-world autonomous systems operate under uncertainty about both their pose and dynamics. Autonomous control systems must simultaneously perform estimation and control tasks to maintain robustness to changing dynamics or modelling errors.
Patrick Slade +3 more
doaj +1 more source
Abstraction-Based Planning for Uncertainty-Aware Legged Navigation
This article addresses the problem of temporal-logic-based planning for bipedal robots in uncertain environments. We first propose an Interval Markov Decision Process abstraction of bipedal locomotion (IMDP-BL). Motion perturbations from multiple sources ...
Jesse Jiang, Samuel Coogan, Ye Zhao
doaj +1 more source
Rank-Driven Markov Processes [PDF]
32 pages, 2 colour ...
Grinfeld, Michael +2 more
openaire +5 more sources

