2009
Markov chains provide a useful modeling tool for determining expected profits or costs associated with certain types of systems. The key characteristic that allows for a Markov model is a probability law in which the future behavior of the system is independent of the past behavior given the present condition of the system. When this Markov property is …
Richard M. Feldman et al.
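The Markov property described in this entry — the future depending only on the present state — is what makes expected-cost calculations tractable. A minimal sketch, using a hypothetical two-state repair chain (the transition probabilities and costs below are made-up illustrations, not from the entry):

```python
# Two-state machine model: 0 = working, 1 = broken.
# Markov property: the next state depends only on the current state,
# so the chain is fully specified by a one-step transition matrix.
P = [[0.9, 0.1],   # from working: stay working / break down
     [0.6, 0.4]]   # from broken:  get repaired / stay broken
cost = [0.0, 50.0]  # per-step cost incurred in each state

# Long-run expected cost per step: push the state distribution to its
# stationary point, then average the per-state costs under it.
pi = [1.0, 0.0]
for _ in range(1000):
    pi = [sum(pi[i] * P[i][j] for i in range(2)) for j in range(2)]

long_run_cost = sum(pi[j] * cost[j] for j in range(2))  # ≈ 7.14 (= 50/7)
```

Iterating the distribution converges geometrically here (the chain's second eigenvalue is 0.3), so 1000 steps is far more than enough for the stationary distribution pi = (6/7, 1/7).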
UAV Path Planning in a Dynamic Environment via Partially Observable Markov Decision Process
IEEE Transactions on Aerospace and Electronic Systems, 2013
A path-planning algorithm to guide unmanned aerial vehicles (UAVs) for tracking multiple ground targets based on the theory of partially observable Markov decision processes (POMDPs) is presented. A variety of features of interest are shown to be easy to …
Shankarachary Ragi, E. Chong
2012
Markov chains are useful in describing many discrete event stochastic processes; however, they are not flexible enough to model situations where we have to make decisions to control the future trajectories of the system. For this reason, the theory of Markov decision processes (MDPs), also known as controlled Markov chains, has been developed.
Jerzy A. Filar et al.
HTTP-Based Adaptive Streaming for Mobile Clients using Markov Decision Process
International Packet Video Workshop, 2013
Due to its simplicity at the server side, HTTP-based adaptive streaming has become a popular choice for streaming online content to a wide range of user devices.
Ayub Bokani, Mahbub Hassan, S. Kanhere
2013
We provide a formal description of the discounted reward MDP framework in Chap. 1, including both the finite- and the infinite-horizon settings and summarizing the associated optimality equations. We then present the well-known exact solution algorithms, value iteration and policy iteration, and outline a framework of rolling-horizon control (also ...
Hyeong Soo Chang et al.
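Value iteration, one of the two exact solution algorithms this entry names, repeatedly applies the Bellman optimality operator until the values converge. A minimal sketch for a discounted MDP; the two-state, two-action model (states, actions, `P`, `R`, and the discount `gamma`) is a hypothetical illustration, not taken from the book:

```python
# Value iteration for a tiny discounted-reward MDP.
gamma = 0.9
states = [0, 1]
actions = ["stay", "move"]
# P[s][a]: list of (next_state, probability); R[s][a]: immediate reward.
P = {0: {"stay": [(0, 1.0)], "move": [(1, 0.8), (0, 0.2)]},
     1: {"stay": [(1, 1.0)], "move": [(0, 0.8), (1, 0.2)]}}
R = {0: {"stay": 0.0, "move": 1.0},
     1: {"stay": 2.0, "move": 0.0}}

# Bellman optimality update: V(s) <- max_a [ R(s,a) + gamma * E V(s') ].
V = {s: 0.0 for s in states}
for _ in range(500):
    V = {s: max(R[s][a] + gamma * sum(p * V[s2] for s2, p in P[s][a])
                for a in actions)
         for s in states}

# Greedy policy with respect to the converged values:
# here it moves in state 0 and stays in state 1.
policy = {s: max(actions,
                 key=lambda a: R[s][a]
                 + gamma * sum(p * V[s2] for s2, p in P[s][a]))
          for s in states}
```

Because the Bellman operator is a gamma-contraction, the value error shrinks by a factor of 0.9 per sweep, so 500 sweeps leaves it negligibly small; policy iteration would instead alternate policy evaluation with greedy improvement.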
Markov Decision Processes: Discrete Stochastic Dynamic Programming
1994
From the Publisher: The past decade has seen considerable theoretical and applied research on Markov decision processes, as well as the growing use of these models in ecology, economics, communications engineering, and other fields where outcomes are ...
M. Puterman
Residential Energy Management in Smart Grid: A Markov Decision Process-Based Approach
2013 IEEE International Conference on Green Computing and Communications and IEEE Internet of Things and IEEE Cyber, Physical and Social Computing, 2013
The deployment of advanced information and communication technologies has helped transform traditional power grids into smart grids by introducing demand-side management in residential areas.
S. Misra et al.
Balancing codification and personalization for knowledge reuse: a Markov decision process approach
Journal of Knowledge Management, 2013
Purpose – This paper aims to provide a systematic framework for organizations to analyze their knowledge reuse processes, and balance codification and personalization within their knowledge strategy according to cost/benefit analysis.
Hongmei Liu, K. Chai, James F. Nebus
Adaptive Maintenance Policies for Aging Devices Using a Markov Decision Process
IEEE Transactions on Power Systems, 2013
In competitive environments, most equipment is operated close to or at its limits, and as a result maintenance schedules may be affected by system conditions.
S. Abeygunawardane et al.
Privacy in stochastic control: A Markov Decision Process perspective
Allerton Conference on Communication, Control, and Computing, 2013
Cyber-physical systems, which rely on the joint functioning of information and physical systems, are vulnerable to information leakage through the actions of the controller.
P. Venkitasubramaniam