Results 91 to 100 of about 13,931
Kalman Based Finite State Controller for Partially Observable Domains
A real-world environment is often only partially observable to agents, either because of noisy sensors or because of incomplete perception. Moreover, such an environment typically has a continuous state space, and agents must decide on an action for each point in internal continuous ...
Alp Sardag, H. Levent Akin
doaj +1 more source
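As a hedged illustration of the Gaussian belief tracking that a Kalman-based controller for a continuous, partially observable domain builds on (the linear model, names, and numbers below are assumptions for illustration, not taken from the paper above):

```python
import numpy as np

# Hypothetical linear-Gaussian model: x' = A x + B u + w,  z = H x + v
A = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition
B = np.array([[0.5], [1.0]])             # control input
H = np.array([[1.0, 0.0]])               # observation model (noisy position sensor)
Q = 0.01 * np.eye(2)                     # process noise covariance
R = np.array([[0.1]])                    # observation noise covariance

def kalman_belief_update(mu, Sigma, u, z):
    """One predict/correct step: Gaussian belief (mu, Sigma) -> posterior belief."""
    # Predict: propagate the Gaussian belief through the dynamics.
    mu_bar = A @ mu + B @ u
    Sigma_bar = A @ Sigma @ A.T + Q
    # Correct: fold in the noisy observation z.
    S = H @ Sigma_bar @ H.T + R
    K = Sigma_bar @ H.T @ np.linalg.inv(S)          # Kalman gain
    mu_new = mu_bar + K @ (z - H @ mu_bar)
    Sigma_new = (np.eye(len(mu)) - K @ H) @ Sigma_bar
    return mu_new, Sigma_new

# Toy usage: start from an uninformed belief and fold in one action/observation pair.
mu0, Sigma0 = np.zeros(2), np.eye(2)
mu1, Sigma1 = kalman_belief_update(mu0, Sigma0, u=np.array([0.1]), z=np.array([0.3]))
```

A finite-state controller can then condition its action choice on this compact (mean, covariance) belief instead of a full distribution over the continuous state space.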
Risk Aware Adaptive Belief-dependent Probabilistically Constrained Continuous POMDP Planning [PDF]
Andrey Zhitnikov, Vadim Indelman
openalex +1 more source
Improving Automated Driving through Planning with Human Internal States
This work examines the hypothesis that partially observable Markov decision process (POMDP) planning with human driver internal states can significantly improve both safety and efficiency in autonomous freeway driving.
Kochenderfer, Mykel, Sunberg, Zachary
core
Influence-Optimistic Local Values for Multiagent Planning --- Extended Version [PDF]
Recent years have seen the development of methods for multiagent planning under uncertainty that scale to tens or even hundreds of agents. However, most of these methods either make restrictive assumptions about the problem domain or provide approximate ...
Oliehoek, Frans A. +2 more
core +1 more source
Deterministic POMDPs Revisited
We study a subclass of POMDPs, called Deterministic POMDPs, that is characterized by deterministic actions and observations. These models do not provide the same generality as POMDPs, yet they capture a number of interesting and challenging problems and permit more efficient algorithms.
openaire +2 more sources
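As a hedged aside on why determinism permits more efficient algorithms: with deterministic transitions and observations, the belief collapses from a probability distribution to a plain set of candidate states that can be filtered exactly. A minimal sketch under that assumption (the function names and the toy corridor model are hypothetical, not from the paper):

```python
def update_belief_set(belief, action, observation, transition, observe):
    """Deterministic-POMDP belief update: keep exactly the states consistent
    with the received observation after applying the deterministic action."""
    successors = {transition(s, action) for s in belief}
    return {s for s in successors if observe(s, action) == observation}

# Toy corridor example (illustrative): states are integer positions.
transition = lambda s, a: s + a          # move left/right deterministically
observe = lambda s, a: s % 2             # sensor reports parity only
belief = {0, 2, 4}
belief = update_belief_set(belief, +1, 1, transition, observe)   # -> {1, 3, 5}
```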
Optimizing Expectation with Guarantees in POMDPs
A standard objective in partially observable Markov decision processes (POMDPs) is to find a policy that maximizes the expected discounted-sum payoff. However, such policies may still permit unlikely but highly undesirable outcomes, which is especially problematic in safety-critical applications.
Chatterjee, Krishnendu +4 more
openaire +1 more source
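To make the objective the snippet refers to concrete, one common way to write the discounted-sum criterion with an added worst-case guarantee is the following (notation chosen here for illustration; the paper's exact formulation may differ):

\[
\max_{\pi}\ \mathbb{E}^{\pi}\!\left[\sum_{t=0}^{\infty} \gamma^{t} r_t\right]
\quad \text{subject to} \quad
\sum_{t=0}^{\infty} \gamma^{t} r_t \ \ge\ \tau \ \text{ on every run consistent with } \pi,
\]

where \(\gamma \in (0,1)\) is the discount factor, \(r_t\) the reward at step \(t\), and \(\tau\) a required payoff threshold.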
A POMDP Framework for Coordinated Guidance of Autonomous UAVs for Multitarget Tracking
This paper discusses the application of the theory of partially observable Markov decision processes (POMDPs) to the design of guidance algorithms for controlling the motion of unmanned aerial vehicles (UAVs) with onboard sensors to improve tracking of ...
doaj +1 more source
Directional unmanned aerial vehicle (UAV) ad hoc networks (DUANETs) are widely applied due to their high flexibility, strong anti-interference capability, and high transmission rates.
Shijie Liang +5 more
doaj +1 more source
Safe Policy Improvement for POMDPs via Finite-State Controllers [PDF]
Thiago D. Simão +2 more
openalex +1 more source
Rao-Blackwellized POMDP Planning
Partially Observable Markov Decision Processes (POMDPs) provide a structured framework for decision-making under uncertainty, but their application requires efficient belief updates. Sequential Importance Resampling Particle Filters (SIRPF), also known as Bootstrap Particle Filters, are commonly used as belief updaters in large approximate POMDP ...
Lee, Jiho +3 more
openaire +2 more sources
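A minimal sketch of the bootstrap (sequential importance resampling) particle-filter belief update mentioned in the snippet, written against a generic generative model; the interface (sample_next, obs_likelihood) is an assumption for illustration, not the paper's code:

```python
import numpy as np

def sirpf_update(particles, action, observation, sample_next, obs_likelihood, rng):
    """Bootstrap particle filter step: propagate particles through the transition
    model, weight them by observation likelihood, then resample with replacement."""
    # Propagate each particle through the (stochastic) transition model.
    propagated = np.array([sample_next(s, action, rng) for s in particles])
    # Weight by how well each propagated particle explains the observation.
    weights = np.array([obs_likelihood(observation, s, action) for s in propagated])
    weights /= weights.sum()             # assumes at least one particle has nonzero weight
    # Resample in proportion to the weights to form the new belief.
    idx = rng.choice(len(propagated), size=len(propagated), p=weights)
    return propagated[idx]
```

Rao-Blackwellization, as the title suggests, would replace part of this sampled state with an analytically tracked component, reducing the variance of the particle approximation.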

