Results 281 to 290 of about 522,534
Some of the following articles may not be open access.
Stochastic optimal structural control: Stochastic optimal open-loop feedback control
Advances in Engineering Software, 2012
Pathwise Optimality in Stochastic Control
SIAM Journal on Control and Optimization, 2000
This paper deals with pathwise optimality for stochastic control problems over an infinite time horizon. For an admissible control \(u_t\) and its response \(x^u_t\), the running cost up to time \(T\) is \(J_T(u)=\int^T_0 c(x^u_t,u_t)\,dt\).
Dai Pra, Paolo et al.
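In this infinite-horizon setting, pathwise optimality is usually phrased in terms of the long-run average of the running cost \(J_T(u)\); the following is the standard formulation of that criterion, written here in generic notation rather than quoted from the paper:

```latex
% A control u^* is pathwise optimal if, almost surely,
\[
\limsup_{T \to \infty} \frac{1}{T} \int_0^T c\big(x^{u^*}_t, u^*_t\big)\,dt
\;\le\;
\limsup_{T \to \infty} \frac{1}{T} \int_0^T c\big(x^{u}_t, u_t\big)\,dt
\quad \text{for every admissible control } u.
\]
```

The point of the pathwise criterion is that the comparison holds along almost every sample path, not merely in expectation.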
1987
In the long history of mathematics, stochastic optimal control is a rather recent development. Using Bellman’s Principle of Optimality along with measure-theoretic and functional-analytic methods, several mathematicians such as H. Kushner, W. Fleming, R. Rishel, W.M. Wonham and J.M. …
1970
H. J. Kushner has obtained the differential equation satisfied by the optimal feedback control law for a stochastic control system in which the plant dynamics and observations are perturbed by independent additive Gaussian white noise processes.
2018
In previous chapters we assumed that the state variables of the system are known with certainty. When the variables are outcomes of a random phenomenon, the state of the system is modeled as a stochastic process. Specifically, we now face a stochastic optimal control problem where the state of the system is represented by a controlled stochastic ...
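The move from deterministic to stochastic state dynamics described above can be illustrated with a minimal Euler–Maruyama simulation of a controlled scalar SDE. This is an illustrative sketch, not code from the chapter: the dynamics dx = (a·x + b·u) dt + σ dW, the linear feedback law, the quadratic running cost, and all parameter values are assumptions chosen for the example.

```python
import random

def simulate(k, T=5.0, dt=0.01, a=0.5, b=1.0, sigma=0.2, x0=1.0, seed=0):
    """Euler-Maruyama simulation of the controlled SDE
       dx = (a*x + b*u) dt + sigma dW  with linear feedback u = -k*x.
       Returns the accumulated quadratic running cost
       J_T = integral of (x^2 + u^2) dt, approximated by a Riemann sum."""
    rng = random.Random(seed)
    x, cost = x0, 0.0
    for _ in range(int(T / dt)):
        u = -k * x                      # feedback control law
        cost += (x * x + u * u) * dt    # running-cost increment
        dW = rng.gauss(0.0, dt ** 0.5)  # Brownian increment ~ N(0, dt)
        x += (a * x + b * u) * dt + sigma * dW
    return cost

j_open = simulate(k=0.0)    # no control: drift a > 0 makes the state grow
j_closed = simulate(k=2.0)  # stabilizing feedback: closed-loop drift a - b*k < 0
print(j_open, j_closed)
```

With the unstable drift a > 0, the stabilizing feedback drives the state toward zero, so its accumulated cost stays far below that of the uncontrolled run.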
Automatica, 1969
It is indicated that optimal stochastic control is still in its infancy, and that at the present time it has little use in practice, although a wide class of problems can be precisely stated. A brief survey of the problems involved in attempting to formulate and to solve optimal stochastic control problems is given, along with the corresponding ...
Stochastic Optimal Control Subject to Ambiguity
IFAC Proceedings Volumes, 2011
The aim of this paper is to address optimality of control strategies for stochastic control systems subject to uncertainty and ambiguity. Uncertainty corresponds to the case when the true dynamics and the nominal dynamics are different but they are defined on the same state space.
Charalambous, Charalambos D. et al.
Optimal Control Problem of Stochastic Systems
Lobachevskii Journal of Mathematics, 2021
1998
Abstract This chapter gives a self-contained introduction to optimal control of stochastic differential equations. We derive the Hamilton-Jacobi-Bellman equation as well as a verification theorem. The general theory is then applied to optimal consumption and investment problems.
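The Hamilton-Jacobi-Bellman equation that such a chapter derives takes, in its standard finite-horizon form, the following shape; this is a sketch in generic notation (drift \(b\), diffusion \(\sigma\), running cost \(c\), terminal cost \(g\), control set \(U\)), assumed here rather than quoted from the chapter:

```latex
% For the controlled diffusion dX_t = b(X_t,u_t)dt + \sigma(X_t,u_t)dW_t,
% the value function V(t,x) solves
\[
\partial_t V(t,x)
  + \inf_{u \in U} \Big\{ b(x,u) \cdot \nabla_x V(t,x)
  + \tfrac{1}{2} \operatorname{tr}\!\big( \sigma(x,u)\,\sigma(x,u)^{\top} \nabla_x^2 V(t,x) \big)
  + c(x,u) \Big\} = 0,
\qquad V(T,x) = g(x).
\]
```

A verification theorem then confirms that a sufficiently smooth solution of this equation, together with the minimizing control, solves the original stochastic control problem.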

