Results 21 to 30 of about 30,788

Recent Advances in Deep Reinforcement Learning Applications for Solving Partially Observable Markov Decision Processes (POMDP) Problems Part 2—Applications in Transportation, Industries, Communications and Networking and More Topics

open access: yes, Machine Learning and Knowledge Extraction, 2021
The two-part series of papers provides a survey on recent advances in Deep Reinforcement Learning (DRL) for solving partially observable Markov decision processes (POMDP) problems.
Xuanchen Xiang, Simon Foo, Huanyu Zang
doaj   +1 more source

Experimental Design for Partially Observed Markov Decision Processes [PDF]

open access: yes, SIAM/ASA Journal on Uncertainty Quantification, 2018
39 pages, 3 ...
Leifur Thorbergsson, Giles Hooker
openaire   +3 more sources

Recent Advances in Deep Reinforcement Learning Applications for Solving Partially Observable Markov Decision Processes (POMDP) Problems: Part 1—Fundamentals and Applications in Games, Robotics and Natural Language Processing

open access: yes, Machine Learning and Knowledge Extraction, 2021
The first part of a two-part series of papers provides a survey on recent advances in Deep Reinforcement Learning (DRL) applications for solving partially observable Markov decision processes (POMDP) problems.
Xuanchen Xiang, Simon Foo
doaj   +1 more source

Human-in-the-Loop Synthesis for Partially Observable Markov Decision Processes [PDF]

open access: yes, 2018
We study planning problems where autonomous agents operate inside environments that are subject to uncertainties and not fully observable. Partially observable Markov decision processes (POMDPs) are a natural formal model to capture such problems ...
core   +6 more sources

Structural Estimation of Partially Observable Markov Decision Processes

open access: yes, IEEE Transactions on Automatic Control, 2023
In many practical settings, control decisions must be made under partial or imperfect information about the evolution of a relevant state variable. Partially Observable Markov Decision Processes (POMDPs) are a relatively well-developed framework for modeling and analyzing such problems.
Yanling Chang   +3 more
openaire   +2 more sources

Decentralized Control of Partially Observable Markov Decision Processes using Belief Space Macro-actions [PDF]

open access: yes, 2015
The focus of this paper is on solving multi-robot planning problems in continuous spaces with partial observability. Decentralized partially observable Markov decision processes (Dec-POMDPs) are general models for multi-robot coordination problems, but ...
Ali-akbar Agha-mohammadi   +3 more
core   +7 more sources

Decentralized Coordination of Multi-Agent Systems Based on POMDPs and Consensus for Active Perception

open access: yes, IEEE Access, 2023
This work presents a method based on Partially Observable Markov Decision Processes (POMDPs) and a consensus protocol. The main idea is to share beliefs and reach consensus on the belief state in order to improve local decision making.
Marijana Peti   +2 more
doaj   +1 more source

Entropy Maximization for Partially Observable Markov Decision Processes

open access: yes, IEEE Transactions on Automatic Control, 2022
14 pages, 10 figures.
Yagiz Savas   +4 more
openaire   +3 more sources

Learning State-Variable Relationships in POMCP: A Framework for Mobile Robots

open access: yes, Frontiers in Robotics and AI, 2022
We address the problem of learning relationships among state variables in Partially Observable Markov Decision Processes (POMDPs) to improve planning performance.
Maddalena Zuccotto   +4 more
doaj   +1 more source

A Deep Hierarchical Reinforcement Learning Algorithm in Partially Observable Markov Decision Processes

open access: yes, IEEE Access, 2018
In recent years, reinforcement learning (RL) has achieved remarkable success due to the growing adoption of deep learning techniques and the rapid growth of computing power.
Tuyen P. Le   +2 more
doaj   +1 more source
