Continuous-Time Markov Decision Processes with Exponential Utility [PDF]
In this paper, we consider a continuous-time Markov decision process (CTMDP) in Borel spaces, where the certainty equivalent with respect to the exponential utility of the total undiscounted cost is to be minimized. The cost rate is nonnegative.
Zhang, Y
core +7 more sources
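The risk-sensitive criterion in this entry, the certainty equivalent under exponential utility, can be illustrated with a short sketch. The cost distribution and risk parameter below are hypothetical, not taken from the paper:

```python
# Minimal sketch (not the paper's CTMDP): the certainty equivalent of a
# random total cost C under exponential utility is
#   CE = (1/gamma) * log E[exp(gamma * C)],
# which penalizes variability when gamma > 0 (risk aversion).
import math

def certainty_equivalent(costs, probs, gamma):
    # E[exp(gamma * C)] over a finite cost distribution
    mgf = sum(p * math.exp(gamma * c) for c, p in zip(costs, probs))
    return math.log(mgf) / gamma

# Hypothetical two-outcome cost: 0 or 10, each with probability 1/2.
ce = certainty_equivalent([0.0, 10.0], [0.5, 0.5], gamma=0.2)
assert ce > 5.0  # risk-averse CE exceeds the expected cost of 5
```

As gamma tends to 0 the certainty equivalent recovers the ordinary expected cost, which is why this criterion generalizes the risk-neutral one.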
Average optimality for continuous-time Markov decision processes in Polish spaces
This paper is devoted to studying average optimality in continuous-time Markov decision processes with fairly general state and action spaces. The criterion to be maximized is the expected average reward.
Guo, Xianping, Rieder, Ulrich
core +3 more sources
Discounted continuous-time constrained Markov decision processes in Polish spaces
This paper is devoted to studying constrained continuous-time Markov decision processes (MDPs) in the class of randomized policies depending on state histories. The transition rates may be unbounded, and the rewards and costs are allowed to be unbounded from ...
Guo, Xianping, Song, Xinyuan
core +4 more sources
Multistate Markov model for functional recovery in stroke: probability of state transition and prognosis prediction [PDF]
Stroke rehabilitation involves complex transitions between different functional states. This study developed and validated a five-state Markov model to quantify state transition probabilities and predict functional outcomes in stroke patients, providing ...
Bowen Li, Chunxiao Wan
doaj +2 more sources
Transmission Power Rate Control for EHD With Temporal and Complete Deaths
This paper proposes a continuous-time communication model of an energy harvesting device (EHD) in a scenario where the EHD suffers temporal death, caused by energy depletion, and complete death, caused by the destruction of its hardware or ...
Yun Li, Shengda Tang, Zhicheng Tan
doaj +1 more source
Sufficiency of Markov Policies for Continuous-Time Jump Markov Decision Processes [PDF]
One of the basic facts known for discrete-time Markov decision processes is that, if the probability distribution of the initial state is fixed, then for every policy it is easy to construct a (randomized) Markov policy with the same marginal distributions of state-action pairs as the original policy. This equality of marginal distributions implies ...
Eugene A. Feinberg +2 more
openaire +2 more sources
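The discrete-time construction this entry starts from can be checked on a toy example: take the marginals of state-action pairs under a history-dependent policy and define the Markov policy sigma_t(a|s) = P(s_t=s, a_t=a) / P(s_t=s). A sketch with a hypothetical 2-state, 2-action MDP over horizon 2 (all numbers are illustrative):

```python
# Marginal-matching Markov policy on a toy discrete-time MDP.

# Transition kernel T[s][a] = distribution over next states (hypothetical).
T = {0: {0: [0.7, 0.3], 1: [0.2, 0.8]},
     1: {0: [0.5, 0.5], 1: [0.9, 0.1]}}

def pi(t, history, s):
    # History-dependent policy: returns the probability of action 1.
    if t == 0:
        return 0.3 if s == 0 else 0.6
    a0 = history[1]               # depend on the first action taken
    return 0.8 if a0 == 1 else 0.1

def marginals_history(s0=0, horizon=2):
    # Exact enumeration of all trajectories under the history policy.
    m = [dict() for _ in range(horizon)]
    def rec(t, history, s, prob):
        if t == horizon:
            return
        p1 = pi(t, history, s)
        for a, pa in ((0, 1 - p1), (1, p1)):
            m[t][(s, a)] = m[t].get((s, a), 0.0) + prob * pa
            for s2, ps in enumerate(T[s][a]):
                if ps > 0:
                    rec(t + 1, history + (a, s2), s2, prob * pa * ps)
    rec(0, (s0,), s0, 1.0)
    return m

def marginals_markov(m_hist, s0=0, horizon=2):
    # Build sigma_t(a|s) from the marginals, then propagate forward.
    dist = {s0: 1.0}
    m = []
    for t in range(horizon):
        ps = {}
        for (s, a), p in m_hist[t].items():
            ps[s] = ps.get(s, 0.0) + p
        step, nxt = {}, {}
        for s, p in dist.items():
            for a in (0, 1):
                sigma = m_hist[t].get((s, a), 0.0) / ps[s]
                step[(s, a)] = step.get((s, a), 0.0) + p * sigma
                for s2, pt in enumerate(T[s][a]):
                    nxt[s2] = nxt.get(s2, 0.0) + p * sigma * pt
        m.append(step)
        dist = nxt
    return m

mh = marginals_history()
mm = marginals_markov(mh)
for t in range(2):
    for key in mh[t]:
        assert abs(mh[t][key] - mm[t].get(key, 0.0)) < 1e-12
```

The final loop verifies the fact quoted in the abstract on this toy instance: the Markov policy reproduces the original policy's state-action marginals at every step.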
In the context of distributed defense, multi-sensor networks must carry out sound planning and scheduling to achieve continuous, accurate, and rapid target detection.
Zhen Zhang +3 more
doaj +1 more source
Optimality of a Network Monitoring Agent and Validation in a Real Probe
The evolution of commodity hardware makes it possible to use this type of equipment to implement traffic monitoring systems. A preliminary empirical evaluation of a network traffic probe based on Linux indicates that the system performance has ...
Luis Zabala, Josu Doncel, Armando Ferro
doaj +1 more source
Impulsive Control for Continuous-Time Markov Decision Processes [PDF]
In this paper our objective is to study continuous-time Markov decision processes on a general Borel state space with both impulsive and continuous controls for the infinite time horizon discounted cost. The continuous-time controlled process is shown to be nonexplosive under appropriate hypotheses.
Dufour, François, Piunovskiy, Alexei B.
openaire +5 more sources
Multiscale modelling and analysis of collective decision making in swarm robotics. [PDF]
We present a unified approach to describing certain types of collective decision making in swarm robotics that bridges from a microscopic individual-based description to aggregate properties.
Matthias Vigelius +2 more
doaj +1 more source