Results 1 to 10 of about 50

Resilient Dynamic Programming [PDF]

open access: yes, Algorithmica, 2015
CAMINITI, SAVERIO   +3 more
openaire   +3 more sources

Empirical Dynamic Programming [PDF]

open access: yes, Mathematics of Operations Research, 2016
We propose empirical dynamic programming algorithms for Markov decision processes. In these algorithms, the exact expectation in the Bellman operator in classical value iteration is replaced by an empirical estimate to get “empirical value iteration” (EVI).
Haskell, William B.   +2 more
openaire   +2 more sources
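The idea described in the abstract can be illustrated with a minimal sketch: in each Bellman backup, the exact expectation over next states is replaced by a sample average. The small random MDP below (transition tensor `P`, rewards `R`, discount `gamma`) is a hypothetical example, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, gamma = 3, 2, 0.9
# Hypothetical MDP: P[s, a] is a distribution over next states, R[s, a] a reward.
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))
R = rng.uniform(0.0, 1.0, size=(n_states, n_actions))

def empirical_bellman(V, n_samples=200):
    """One 'empirical value iteration' step: E[V(s')] is replaced by a
    sample mean over simulated next states."""
    V_new = np.empty(n_states)
    for s in range(n_states):
        q = np.empty(n_actions)
        for a in range(n_actions):
            nxt = rng.choice(n_states, size=n_samples, p=P[s, a])
            q[a] = R[s, a] + gamma * V[nxt].mean()  # empirical estimate
        V_new[s] = q.max()
    return V_new

V = np.zeros(n_states)
for _ in range(100):
    V = empirical_bellman(V)
```

Because each backup averages samples rather than computing the exact expectation, the iterates are random; the paper's point is to analyze how such iterates behave relative to classical value iteration.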

Stochastic Integer Programming by Dynamic Programming [PDF]

open access: yes, Statistica Neerlandica, 1985
Stochastic integer programming is a suitable tool for modeling hierarchical decision situations with combinatorial features. In continuation of our work on the design and analysis of heuristics for such problems, we now try to find optimal solutions.
B.J. Lageweg   +4 more
openaire   +5 more sources

Linear programming and dynamics [PDF]

open access: yes, Ural Mathematical Journal, 2015
Summary: In a Hilbert space we consider the linear boundary value problem of optimal control based on linear dynamics and a terminal linear programming problem at the right end of the time interval. A saddle-point method for solving it is provided, and convergence of the method is proved.
Antipin, A. S., Khoroshilova, E. V.
openaire   +3 more sources

Dynamic Policy Programming [PDF]

open access: yes, 2010
Submitted to Journal of Machine Learning ...
Gheshlaghi Azar, M.   +2 more
openaire   +3 more sources

DISCRETE DYNAMIC PROGRAMMING [PDF]

open access: yes, The Annals of Mathematical Statistics, 1962
We consider a system with a finite number $S$ of states $s$, labeled by the integers $1, 2, \cdots, S$. Periodically, say once a day, we observe the current state of the system, and then choose an action $a$ from a finite set $A$ of possible actions. As a joint result of the current state $s$ and the chosen action $a$, two things happen: (1) we receive
openaire   +2 more sources
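The finite setting sketched in the abstract (observe state, choose action, receive a reward, move to a new state) is the classical finite MDP, which exact value iteration solves. The 2-state, 2-action numbers below are a hypothetical illustration.

```python
import numpy as np

# Hypothetical finite MDP: P[s, a, s'] is the transition probability,
# R[s, a] the immediate reward for action a in state s.
P = np.array([[[0.8, 0.2], [0.1, 0.9]],
              [[0.5, 0.5], [0.3, 0.7]]])
R = np.array([[1.0, 0.0],
              [0.0, 2.0]])
gamma = 0.95

V = np.zeros(2)
for _ in range(1000):
    Q = R + gamma * P @ V          # Q[s, a] = R[s, a] + gamma * E[V(s')]
    V_new = Q.max(axis=1)          # Bellman optimality backup
    if np.max(np.abs(V_new - V)) < 1e-10:
        break
    V = V_new
policy = Q.argmax(axis=1)          # greedy (stationary, deterministic) policy
```

The backup is a contraction with modulus `gamma`, so the loop converges geometrically to the optimal value function.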

A NONLINEAR PROGRAMMING METHOD FOR DYNAMIC PROGRAMMING [PDF]

open access: yes, Macroeconomic Dynamics, 2013
A nonlinear programming formulation is introduced to solve infinite-horizon dynamic programming problems. This extends the linear approach to dynamic programming by using ideas from approximation theory to approximate value functions. Our numerical results show that this nonlinear programming approach is efficient and accurate, and avoids inefficient ...
Cai, Yongyang   +4 more
openaire   +2 more sources
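A hedged sketch of the general idea of treating DP as an optimization problem: approximate the value function with a low-degree polynomial and choose its coefficients to reduce the squared Bellman residual at collocation points. The toy deterministic control problem and the Nelder-Mead solver below are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np
from scipy.optimize import minimize

gamma = 0.9
grid = np.linspace(0.0, 1.0, 11)        # collocation states (assumed)
actions = np.array([-0.1, 0.1])         # move left or right (assumed)

def reward(x, a):
    return -(x - 0.5) ** 2              # best to stay near x = 0.5

def step(x, a):
    return np.clip(x + a, 0.0, 1.0)     # deterministic transition

def V_hat(c, x):
    return np.polyval(c, x)             # polynomial value-function approximant

def bellman_residual(c):
    # squared gap between V_hat and its Bellman backup on the grid
    backups = np.max(
        [reward(grid, a) + gamma * V_hat(c, step(grid, a)) for a in actions],
        axis=0)
    return np.sum((V_hat(c, grid) - backups) ** 2)

res = minimize(bellman_residual, np.zeros(4), method="Nelder-Mead",
               options={"maxiter": 5000})
coeffs = res.x
```

Minimizing the residual over the polynomial coefficients is a small nonlinear program; richer bases and better solvers are where approximation-theoretic ideas enter.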

Robust Dynamic Programming [PDF]

open access: yes, Mathematics of Operations Research, 2005
In this paper we propose a robust formulation for discrete time dynamic programming (DP). The objective of the robust formulation is to systematically mitigate the sensitivity of the DP optimal policy to ambiguity in the underlying transition probabilities.
openaire   +2 more sources
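A minimal sketch of the robust backup the abstract describes: instead of a single known transition law, each state-action pair carries an ambiguity set of candidate transition kernels, and the backup takes the worst case over that set. The two candidate kernels below are hypothetical; the paper works with general ambiguity sets, not just finite ones.

```python
import numpy as np

gamma = 0.9
R = np.array([[1.0, 0.5],
              [0.0, 2.0]])                       # R[s, a] (assumed)
# Finite ambiguity set: two plausible transition kernels, models[k][s, a, s'].
models = [
    np.array([[[0.9, 0.1], [0.2, 0.8]],
              [[0.6, 0.4], [0.1, 0.9]]]),
    np.array([[[0.7, 0.3], [0.4, 0.6]],
              [[0.8, 0.2], [0.3, 0.7]]]),
]

def robust_backup(V):
    # Worst-case expected continuation value over the ambiguity set,
    # then the usual max over actions.
    worst = np.min([P @ V for P in models], axis=0)   # shape (S, A)
    return (R + gamma * worst).max(axis=1)

V = np.zeros(2)
for _ in range(500):
    V = robust_backup(V)
```

The robust operator is still a `gamma`-contraction, so iterating it converges to the robust value function, whose greedy policy hedges against the transition ambiguity.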

Dynamic IC and Dynamic Programming [PDF]

open access: yes, SSRN Electronic Journal, 2019
This paper develops a dynamic programming method for settings in which the one-stage deviation principle, in the sense of the mechanism design literature, does not hold. The commonly used dynamic programming method is valid only when this principle is satisfied, yet it fails in some models, and the one-stage ...
openaire   +2 more sources

Quantum Dynamic Programming

open access: yes, Physical Review Letters
32 pages; stronger results than v1, with significant rewriting of the main ...
Jeongrak Son   +3 more
openaire   +3 more sources
