Results 1 to 10 of about 937,486

Underwater chemical plume tracing based on partially observable Markov decision process [PDF]

open access: gold | International Journal of Advanced Robotic Systems, 2019
Chemical plume tracing with an autonomous underwater vehicle uses chemical signals as guidance to navigate and search unknown environments. To address the key issue of tracing the plume and locating its source, this article proposes a path-planning strategy based ... (see the sketch after this entry)
Jiu Hai-Feng   +3 more
doaj   +3 more sources
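
The entry above frames plume tracing as a POMDP, so the core computational object is a belief over candidate source locations that is updated from chemical detections. The following is a minimal illustrative sketch, not the paper's algorithm: the grid size, the detection model, and all numbers are assumptions made for the example.

import numpy as np

GRID = (20, 20)                                      # candidate source cells
belief = np.full(GRID, 1.0 / (GRID[0] * GRID[1]))    # uniform prior over the source location

def detection_prob(source, vehicle, base=0.9, scale=5.0):
    """Assumed likelihood of a chemical detection at `vehicle` if the source sits at `source`."""
    d = np.hypot(source[0] - vehicle[0], source[1] - vehicle[1])
    return base * np.exp(-d / scale)

def update_belief(belief, vehicle, detected):
    """One Bayes step: reweight every candidate source cell by the observation likelihood."""
    lik = np.empty_like(belief)
    for i in range(belief.shape[0]):
        for j in range(belief.shape[1]):
            p = detection_prob((i, j), vehicle)
            lik[i, j] = p if detected else 1.0 - p
    post = belief * lik
    return post / post.sum()

belief = update_belief(belief, vehicle=(3, 4), detected=True)
print(np.unravel_index(belief.argmax(), GRID))       # currently most likely source cell

A POMDP path planner would then choose the next waypoint by trading off moving toward the belief's mode against sensing actions that shrink its uncertainty.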

Nonapproximability Results for Partially Observable Markov Decision Processes [PDF]

open access: diamond | Journal of Artificial Intelligence Research, Volume 14, pages 83-103, 2001
We show that for several variations of partially observable Markov decision processes, polynomial-time algorithms for finding control policies are unlikely to have, or simply do not have, guarantees of finding policies within a constant factor or a constant summand of optimal. Here "unlikely" means "unless some complexity classes collapse," where the collapses ...
Christopher Lusena   +2 more
arxiv   +8 more sources

On Anderson Acceleration for Partially Observable Markov Decision Processes [PDF]

open access: green | 2021 60th IEEE Conference on Decision and Control (CDC), 2021
This paper proposes an accelerated method for approximately solving partially observable Markov decision process (POMDP) problems offline. Our method carefully combines two existing tools: Anderson acceleration (AA) and the fast informed bound (FIB) method. (A sketch of the AA step follows this entry.)
Melike Ermis, Mingyu Park, Insoon Yang
semanticscholar   +6 more sources
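
For context on the method described above: Anderson acceleration treats an iterative solver as a fixed-point map g and extrapolates from the last few iterates. The sketch below applies AA to a toy linear contraction; in the paper the map would be the FIB backup operator, which is not reproduced here, and the memory size and test problem are assumptions.

import numpy as np

def anderson_step(X, G):
    """One AA update from stored iterates X = [x_{k-m}, ..., x_k] and images G = [g(x_{k-m}), ..., g(x_k)]."""
    F = G - X                                             # residuals f_i = g(x_i) - x_i
    dF = np.diff(F, axis=0)                               # successive residual differences
    dG = np.diff(G, axis=0)
    gamma, *_ = np.linalg.lstsq(dF.T, F[-1], rcond=None)  # least squares: min ||f_k - dF^T gamma||
    return G[-1] - dG.T @ gamma                           # extrapolated iterate

def fixed_point_aa(g, x0, m=3, iters=50):
    x = np.asarray(x0, dtype=float)
    X, G = [x], [g(x)]
    for _ in range(iters):
        if len(X) < 2:
            x_new = G[-1]                                 # plain fixed-point step until history exists
        else:
            x_new = anderson_step(np.array(X[-(m + 1):]), np.array(G[-(m + 1):]))
        X.append(x_new)
        G.append(g(x_new))
    return X[-1]

A = 0.5 * np.eye(3)
b = np.ones(3)
g = lambda x: A @ x + b                                   # toy contraction; fixed point solves (I - A) x = b
print(fixed_point_aa(g, np.zeros(3)))                     # approximately [2., 2., 2.]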

Guided Soft Actor Critic: A Guided Deep Reinforcement Learning Approach for Partially Observable Markov Decision Processes [PDF]

open access: gold | IEEE Access, 2021
Most real-world problems are essentially partially observable, and the environmental model is unknown. Therefore, there is a significant need for reinforcement learning approaches to solve them, where the agent perceives the state of the environment ...
Mehmet Haklidir, Hakan Temeltas
doaj   +3 more sources

Structural Estimation of Partially Observable Markov Decision Processes [PDF]

open access: green | arXiv, 2020
In many practical settings control decisions must be made under partial/imperfect information about the evolution of a relevant state variable. Partially Observable Markov Decision Processes (POMDPs) is a relatively well-developed framework for modeling and analyzing such problems.
Yanling Chang   +3 more
arxiv   +5 more sources

Partially Observable Markov Decision Processes in Robotics: A Survey [PDF]

open access: yes | IEEE Transactions on Robotics, 2023
Noisy sensing, imperfect control, and environment changes are defining characteristics of many real-world robot tasks. The partially observable Markov decision process (POMDP) provides a principled mathematical framework for modeling and solving robot decision and control tasks under uncertainty. (A sketch of the underlying belief update follows this entry.)
Mikko Lauri   +3 more
openaire   +5 more sources
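
The survey above builds on the standard discrete POMDP machinery, whose central step is the belief update b'(s') ∝ O(o | s', a) * sum_s T(s' | s, a) * b(s). Below is a minimal sketch with a made-up two-state model; the matrices are illustrative, not from the paper.

import numpy as np

T = np.array([[[0.9, 0.1],         # T[a, s, s']: transition model
               [0.2, 0.8]],
              [[0.6, 0.4],
               [0.5, 0.5]]])
O = np.array([[[0.8, 0.2],         # O[a, s', o]: observation model
               [0.3, 0.7]],
              [[0.7, 0.3],
               [0.4, 0.6]]])

def belief_update(b, a, o):
    """Bayes filter over hidden states after taking action a and observing o."""
    predicted = T[a].T @ b          # sum_s T(s' | s, a) * b(s)
    post = O[a, :, o] * predicted   # weight by the observation likelihood
    return post / post.sum()

b = np.array([0.5, 0.5])
print(belief_update(b, a=0, o=1))

Planning then means choosing actions as a function of this belief rather than of the unobserved state.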

Entropy-Regularized Partially Observed Markov Decision Processes [PDF]

open access: yes | IEEE Transactions on Automatic Control, 2021
We investigate partially observed Markov decision processes (POMDPs) with cost functions regularized by entropy terms describing state, observation, and control uncertainty. Standard POMDP techniques are shown to offer bounded-error solutions to these entropy-regularized POMDPs, with exact solutions possible when the regularization involves the joint ... (see the sketch after this entry)
Timothy L. Molloy, Girish N. Nair
arxiv   +3 more sources
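
The exact regularization used in the paper above is not spelled out in the snippet, so the following is only a hedged illustration of one flavor of the idea: adding a weighted belief (state-uncertainty) entropy to the expected stage cost. The weight beta and the toy numbers are assumptions.

import numpy as np

def entropy(p, eps=1e-12):
    """Shannon entropy of a probability vector (in nats)."""
    p = np.clip(p, eps, 1.0)
    return -np.sum(p * np.log(p))

def regularized_stage_cost(belief, cost_per_state, beta=0.1):
    """Expected stage cost under the belief plus beta times the belief entropy."""
    return belief @ cost_per_state + beta * entropy(belief)

b = np.array([0.7, 0.2, 0.1])
c = np.array([1.0, 4.0, 2.0])
print(regularized_stage_cost(b, c))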

Speeding Up the Convergence of Value Iteration in Partially Observable Markov Decision Processes [PDF]

open access: diamond | Journal of Artificial Intelligence Research, Volume 14, pages 29-51, 2001
Partially observable Markov decision processes (POMDPs) have recently become popular among many AI researchers because they serve as a natural model for planning under uncertainty. Value iteration is a well-known algorithm for finding optimal policies for POMDPs. It typically takes a large number of iterations to converge. (A sketch of the basic value-iteration loop follows this entry.)
Nevin L. Zhang, Weixiong Zhang
arxiv   +3 more sources
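
For orientation, the iterate-until-the-Bellman-residual-is-small structure whose slow convergence motivates the paper above is easiest to see on a plain MDP; the POMDP version replaces the vector V with sets of alpha vectors, and each backup becomes far more expensive. The toy model below is illustrative only.

import numpy as np

T = np.array([[[0.8, 0.2], [0.1, 0.9]],    # T[a, s, s']
              [[0.5, 0.5], [0.6, 0.4]]])
R = np.array([[1.0, 0.0],                  # R[a, s]
              [0.0, 2.0]])
discount = 0.95

V = np.zeros(2)
for it in range(10_000):
    Q = R + discount * (T @ V)             # Q[a, s] = R[a, s] + discount * sum_s' T[a, s, s'] * V[s']
    V_new = Q.max(axis=0)                  # greedy Bellman backup
    done = np.max(np.abs(V_new - V)) < 1e-8
    V = V_new
    if done:                               # stop once the Bellman residual is tiny
        break
print(it, V)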

Quantum partially observable Markov decision processes [PDF]

open access: green | Physical Review A, 2014
We present quantum observable Markov decision processes (QOMDPs), the quantum analogs of partially observable Markov decision processes (POMDPs). In a QOMDP, an agent is acting in a world where the state is represented as a quantum state and the agent can choose a superoperator to apply. This is similar to the POMDP belief state, which is a probability ... (see the sketch after this entry)
Jennifer Barry   +2 more
openalex   +5 more sources
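
A hedged sketch of the state update described above: where a POMDP tracks a belief vector, a QOMDP tracks a density matrix, an action applies a superoperator given in Kraus form, and outcome k occurs with probability tr(K_k rho K_k^dagger). The amplitude-damping-style Kraus pair below is a standard textbook example chosen for illustration, not taken from the paper.

import numpy as np

def apply_measurement(rho, kraus):
    """Sample an outcome and return (outcome index, post-measurement density matrix)."""
    probs = np.array([np.real(np.trace(K @ rho @ K.conj().T)) for K in kraus])
    k = np.random.choice(len(kraus), p=probs / probs.sum())
    post = kraus[k] @ rho @ kraus[k].conj().T
    return k, post / np.trace(post)

p = 0.3
K0 = np.array([[1, 0], [0, np.sqrt(1 - p)]], dtype=complex)   # together K0, K1 satisfy K0^dag K0 + K1^dag K1 = I
K1 = np.array([[0, np.sqrt(p)], [0, 0]], dtype=complex)

rho = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)       # the pure state |+><+|
outcome, rho_next = apply_measurement(rho, [K0, K1])
print(outcome, np.round(rho_next, 3))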

A Discrete Partially Observable Markov Decision Process Model for the Maintenance Optimization of Oil and Gas Pipelines [PDF]

open access: gold | Algorithms, 2023
Corrosion is one of the major causes of failure in pipelines for transporting oil and gas products. To mitigate the impact of this problem, organizations perform different maintenance operations, including detecting corrosion, determining corrosion ... (see the sketch after this entry)
Ezra Wari, Weihang Zhu, Gino Lim
doaj   +2 more sources
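
The kind of discrete maintenance POMDP the entry above describes pairs a hidden corrosion-severity chain with a noisy inspection channel. The matrices below are illustrative assumptions, not the paper's calibrated model.

import numpy as np

states = ["none", "minor", "severe"]           # hidden corrosion condition
# deterioration under "do nothing": the condition can only persist or worsen
T_wait = np.array([[0.90, 0.09, 0.01],
                   [0.00, 0.85, 0.15],
                   [0.00, 0.00, 1.00]])
# imperfect inline inspection: P(reported condition | true condition)
O_inspect = np.array([[0.80, 0.15, 0.05],
                      [0.10, 0.75, 0.15],
                      [0.02, 0.18, 0.80]])

def predict_then_update(b, o):
    """Deteriorate one period under 'do nothing', then condition on inspection report o."""
    b_pred = T_wait.T @ b
    b_post = O_inspect[:, o] * b_pred
    return b_post / b_post.sum()

b = np.array([1.0, 0.0, 0.0])                  # new pipeline, believed corrosion-free
print(dict(zip(states, np.round(predict_then_update(b, o=1), 3))))   # inspection reports "minor"

Maintenance optimization then chooses between waiting, inspecting, and repairing as a function of this belief.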
