Results 111 to 120 of about 8,699
Some of the following articles may not be open access.

Dynamic Programming and HJB Equations

1999
In this chapter we turn to study another powerful approach to solving optimal control problems, namely, the method of dynamic programming. Dynamic programming, originated by R. Bellman in the early 1950s, is a mathematical technique for making a sequence of interrelated decisions, which can be applied to many optimization problems (including optimal ...
Jiongmin Yong, Xun Yu Zhou

A power penalty method for discrete HJB equations

Optimization Letters, 2020
Kai Zhang, Xiaoqi Yang

Ergodic Control for Constrained Diffusions: Characterization Using HJB Equations

SIAM Journal on Control and Optimization, 2004
Summary: Recently in [A. Budhiraja, SIAM J. Control Optim. 42, No. 2, 532--558 (2003; Zbl 1037.93073)] an ergodic control problem for a class of diffusion processes, constrained to take values in a polyhedral cone, was considered. The main result of that paper was that under appropriate conditions on the model, there is a Markov control for which the ...
Borkar, Vivek, Budhiraja, Amarjit

A new iterative method for discrete HJB equations

Numerische Mathematik, 2008
The goal of this paper is to propose a successive relaxation iterative algorithm for the discrete Hamilton-Jacobi-Bellman equation: \((1)\ \max_{1\leq j\leq K} \{A^j U - F^j\} = 0\), where \(A^j \in \mathbb R^{n \times n}\), \(F^j \in \mathbb R^n\), \(j = 1, 2, \dots, K\). Equation (1) is a system of nonsmooth nonlinear equations. A successive iterative scheme, similar to the
Zhou, Shuzi, Zou, Zhanyong
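For context, systems of the form \(\max_{1\leq j\leq K}\{A^j U - F^j\} = 0\) discussed in this entry are commonly solved by classical policy iteration (Howard's algorithm), a standard baseline rather than the successive relaxation scheme proposed in the paper. A minimal numpy sketch, assuming each row-wise policy selection yields a nonsingular linear system; the test matrices below are invented for illustration and are not from the paper:

```python
import numpy as np

def policy_iteration(A_list, F_list, tol=1e-10, max_iter=100):
    """Howard's algorithm for max_j {A^j U - F^j} = 0 (illustrative sketch).

    A_list: K matrices of shape (n, n); F_list: K vectors of shape (n,).
    """
    K, n = len(A_list), A_list[0].shape[0]
    policy = np.zeros(n, dtype=int)  # row-wise choice of control index j
    U = np.zeros(n)
    for _ in range(max_iter):
        # Policy evaluation: assemble and solve the linear system where
        # row i is taken from the matrix/vector chosen by policy[i].
        A = np.stack([A_list[policy[i]][i] for i in range(n)])
        F = np.array([F_list[policy[i]][i] for i in range(n)])
        U = np.linalg.solve(A, F)
        # Policy improvement: row-wise argmax of the residual A^j U - F^j.
        residuals = np.stack([A_list[j] @ U - F_list[j] for j in range(K)])
        new_policy = residuals.argmax(axis=0)
        if np.array_equal(new_policy, policy) and residuals.max() < tol:
            return U  # max-residual is (numerically) zero: U solves (1)
        policy = new_policy
    return U
```

For diagonal example matrices the rows decouple, so the solution can be checked per row against \(\max_j(A^j U - F^j)_i = 0\) by hand.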

Hamiltonian systems, HJB equations, and stochastic controls

Proceedings of the 36th IEEE Conference on Decision and Control, 2002
Pontryagin's maximum principle (MP), involving the Hamiltonian system, and Bellman's dynamic programming (DP), involving the HJB equation, are the two most important approaches in modern optimal control theory. However, these two approaches have been developed separately in the literature, and it has been a long-standing, yet fundamentally important problem to ...

Modifications of the PCPT method for HJB equations

AIP Conference Proceedings, 2016
In this paper we revisit a modification of the piecewise constant policy timestepping (PCPT) method for solving Hamilton-Jacobi-Bellman (HJB) equations. This modification, called the piecewise predicted policy timestepping (PPPT) method, can be significantly faster when used properly.
I. Kossaczký, M. Ehrhardt, M. Günther

Pathwise Stochastic Control Problems and Stochastic HJB Equations

SIAM Journal on Control and Optimization, 2007
In this paper we study a class of pathwise stochastic control problems in which the optimality is allowed to depend on the paths of exogenous noise (or information). Such a phenomenon can be illustrated by considering a particular investor who wants to take advantage of certain extra information but in a completely legal manner.
Rainer Buckdahn, Jin Ma

HJB equation based learning scheme for neural networks

2017 International Joint Conference on Neural Networks (IJCNN), 2017
A control theoretic approach is presented in this paper for both batch and instantaneous updates of weights in feed-forward neural networks. The popular Hamilton-Jacobi-Bellman (HJB) equation is used to generate an optimal weight update law. The main contribution of this paper is that closed-form solutions for both the optimal cost and weight ...
Vipul Arora   +3 more

Viscosity Solutions for HJB Equations

2014
The theory of viscosity solutions was originated by M.G. Crandall and P.L. Lions in the early 1980s for Hamilton–Jacobi equations, and later P.L. Lions developed it for HJB equations (Lions, Commun. PDE 8:1101–1134, 1983; Acta Math. 16:243–278, 1988; Viscosity solutions of fully nonlinear second-order equations and optimal stochastic control in ...

Markov chain approximation methods on generalized HJB equation

2007 46th IEEE Conference on Decision and Control, 2007
This work is concerned with numerical methods for a class of stochastic control optimizations and stochastic differential games. Numerical procedures based on Markov chain approximation techniques are developed in a framework of generalized Hamilton-Jacobi-Bellman equations.
Xueping Li, Q. S. Song
