Results 91 to 100 of about 13,931

Kalman Based Finite State Controller for Partially Observable Domains

open access: yes, International Journal of Advanced Robotic Systems, 2006
A real-world environment is often only partially observable by its agents, either because of noisy sensors or because of incomplete perception. Moreover, it has a continuous state space by nature, and agents must decide on an action for each point in internal continuous ...
Alp Sardag, H. Levent Akin
doaj   +1 more source
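The snippet above is cut off, but the idea the title points to, tracking a continuous belief state with a Kalman filter when sensing is noisy, can be illustrated with the standard linear-Gaussian belief update. This is a generic sketch, not the paper's controller; the matrices A, B, C, Q, R and the function name are assumptions for illustration.

```python
import numpy as np

def kalman_belief_update(mu, Sigma, u, z, A, B, C, Q, R):
    """One belief update for a linear-Gaussian partially observable system.

    The belief is a Gaussian N(mu, Sigma) over the continuous state:
        x' = A x + B u + w,  w ~ N(0, Q)   (state transition)
        z  = C x' + v,       v ~ N(0, R)   (noisy observation)
    """
    # Predict: push the belief through the transition model.
    mu_bar = A @ mu + B @ u
    Sigma_bar = A @ Sigma @ A.T + Q

    # Update: correct the prediction with the received observation z.
    S = C @ Sigma_bar @ C.T + R              # innovation covariance
    K = Sigma_bar @ C.T @ np.linalg.inv(S)   # Kalman gain
    mu_new = mu_bar + K @ (z - C @ mu_bar)
    Sigma_new = (np.eye(len(mu)) - K @ C) @ Sigma_bar
    return mu_new, Sigma_new
```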

Improving Automated Driving through Planning with Human Internal States

open access: yes, 2020
This work examines the hypothesis that partially observable Markov decision process (POMDP) planning with human driver internal states can significantly improve both safety and efficiency in autonomous freeway driving.
Kochenderfer, Mykel, Sunberg, Zachary
core  

Influence-Optimistic Local Values for Multiagent Planning --- Extended Version [PDF]

open access: yes, 2015
Recent years have seen the development of methods for multiagent planning under uncertainty that scale to tens or even hundreds of agents. However, most of these methods either make restrictive assumptions about the problem domain or provide approximate ...
Oliehoek, Frans A.   +2 more
core   +1 more source

Deterministic POMDPs Revisited

open access: yes, 2012
We study a subclass of POMDPs, called Deterministic POMDPs, that is characterized by deterministic actions and observations. These models do not provide the same generality as POMDPs, yet they capture a number of interesting and challenging problems and permit more efficient algorithms.
openaire   +2 more sources
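As a rough illustration of what "deterministic actions and observations" means here: instead of the transition and observation distributions of a general POMDP, each state-action pair maps to exactly one successor state and each state to exactly one observation, so uncertainty lives only in the initial state and beliefs stay finite sets. A minimal sketch under that reading; the class and field names are hypothetical, not taken from the paper.

```python
from dataclasses import dataclass
from typing import Callable, FrozenSet, Hashable

State = Hashable
Action = Hashable
Obs = Hashable

@dataclass
class DeterministicPOMDP:
    # Deterministic transition: exactly one successor per (state, action).
    step: Callable[[State, Action], State]
    # Deterministic observation: exactly one observation per state.
    observe: Callable[[State], Obs]
    # Uncertainty lives only in the (set-valued) initial belief.
    initial_belief: FrozenSet[State]

    def belief_update(self, belief: FrozenSet[State], a: Action, o: Obs) -> FrozenSet[State]:
        """Apply the action to every candidate state, keep those consistent with o."""
        return frozenset(s2 for s2 in (self.step(s, a) for s in belief)
                         if self.observe(s2) == o)
```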

Optimizing Expectation with Guarantees in POMDPs

open access: yes, Proceedings of the AAAI Conference on Artificial Intelligence, 2017
A standard objective in partially observable Markov decision processes (POMDPs) is to find a policy that maximizes the expected discounted-sum payoff. However, such policies may still permit unlikely but highly undesirable outcomes, which is especially problematic in safety-critical applications.
Chatterjee, Krishnendu   +4 more
openaire   +1 more source
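For reference, the standard objective mentioned in the snippet, and the kind of constrained variant the title suggests, can be written schematically as follows; the threshold symbol t and the almost-sure form of the guarantee are assumptions for illustration, not taken from the paper.

```latex
% Standard POMDP objective: expected discounted-sum payoff under policy \sigma
\max_{\sigma}\; \mathbb{E}^{\sigma}\!\left[\sum_{k=0}^{\infty} \gamma^{k}\, r(s_k, a_k)\right],
\qquad 0 < \gamma < 1.

% Expectation optimization with a guarantee: same maximization, restricted to
% policies whose discounted sum stays above a threshold t almost surely.
\max_{\sigma}\; \mathbb{E}^{\sigma}\!\left[\sum_{k=0}^{\infty} \gamma^{k}\, r(s_k, a_k)\right]
\quad \text{s.t.} \quad
\Pr^{\sigma}\!\left[\sum_{k=0}^{\infty} \gamma^{k}\, r(s_k, a_k) \ge t\right] = 1.
```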

A POMDP Framework for Coordinated Guidance of Autonomous UAVs for Multitarget Tracking

open access: yes, EURASIP Journal on Advances in Signal Processing, 2009
This paper discusses the application of the theory of partially observable Markov decision processes (POMDPs) to the design of guidance algorithms for controlling the motion of unmanned aerial vehicles (UAVs) with onboard sensors to improve tracking of ...
doaj   +1 more source

Joint Resource Scheduling of the Time Slot, Power, and Main Lobe Direction in Directional UAV Ad Hoc Networks: A Multi-Agent Deep Reinforcement Learning Approach

open access: yes, Drones
Directional unmanned aerial vehicle (UAV) ad hoc networks (DUANETs) are widely applied due to their high flexibility, strong anti-interference capability, and high transmission rates.
Shijie Liang   +5 more
doaj   +1 more source

Safe Policy Improvement for POMDPs via Finite-State Controllers [PDF]

open access: green, 2023
Thiago D. Simão   +2 more
openalex   +1 more source

Rao-Blackwellized POMDP Planning

open access: yes, 2025 IEEE International Conference on Robotics and Automation (ICRA)
Partially Observable Markov Decision Processes (POMDPs) provide a structured framework for decision-making under uncertainty, but their application requires efficient belief updates. Sequential Importance Resampling Particle Filters (SIRPF), also known as Bootstrap Particle Filters, are commonly used as belief updaters in large approximate POMDP ...
Lee, Jiho   +3 more
openaire   +2 more sources
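As background for the snippet above, a bootstrap (SIR) particle-filter belief update in its generic form looks roughly like this; the transition/observation model interface is an assumption for illustration, not the paper's API.

```python
import numpy as np

def sir_belief_update(particles, a, o, transition_sample, obs_likelihood, rng):
    """One bootstrap (SIR) particle-filter belief update.

    particles         : list of sampled states representing the current belief
    a, o              : action taken and observation received
    transition_sample : (s, a, rng) -> s'      samples the transition model
    obs_likelihood    : (o, s', a) -> float    observation probability density
    """
    # 1. Propagate every particle through the transition model (bootstrap proposal).
    propagated = [transition_sample(s, a, rng) for s in particles]

    # 2. Weight each propagated particle by how well it explains the observation.
    weights = np.array([obs_likelihood(o, sp, a) for sp in propagated], dtype=float)
    total = weights.sum()
    if total == 0.0:
        raise ValueError("Particle depletion: no particle explains the observation.")
    weights /= total

    # 3. Resample with replacement to return an equally weighted particle set.
    idx = rng.choice(len(propagated), size=len(propagated), p=weights)
    return [propagated[i] for i in idx]
```

In a Rao-Blackwellized variant, part of each particle's state is typically tracked analytically (for example with a conditional Kalman filter) instead of sampled; the plain SIR step above is the baseline belief updater the snippet refers to.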
