Results 301 to 310 of about 10,716,551
Some of the following articles may not be open access.
Mathematics of Operations Research, 2018
We present a novel method for deriving tight Monte Carlo confidence intervals for solutions of stochastic dynamic programming equations. Taking some approximate solution to the equation as an input, we construct pathwise recursions with a known bias. Suitably coupling the recursions for lower and upper bounds ensures that the method is applicable even
Christian Bender +2 more
openaire +3 more sources
IEEE Transactions on Smart Grid, 2019
This paper focuses on the economical operation of a microgrid (MG) in real time. A novel dynamic energy management system is developed to incorporate efficient management of the energy storage system into MG real-time dispatch while considering power flow ...
P. Zeng, Hepeng Li, Haibo He, Shuhui Li
semanticscholar +1 more source
IEEE/CAA Journal of Automatica Sinica
Reinforcement learning (RL) has roots in dynamic programming and it is called adaptive/approximate dynamic programming (ADP) within the control community.
Ding Wang +4 more
semanticscholar +1 more source
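As a concrete reminder of the dynamic-programming roots mentioned in that abstract, the sketch below runs value iteration on a two-state Markov decision process. The transition table, rewards, and discount factor are illustrative assumptions, not taken from the cited survey:

```python
# Value iteration: the dynamic-programming backbone that RL/ADP builds on.
# Toy MDP: P[s][a] = list of (probability, next_state, reward) outcomes.
P = {
    0: {0: [(1.0, 0, 0.0)], 1: [(0.8, 1, 5.0), (0.2, 0, 0.0)]},
    1: {0: [(1.0, 0, 1.0)], 1: [(1.0, 1, 2.0)]},
}
gamma = 0.9  # discount factor

V = {0: 0.0, 1: 0.0}
for _ in range(200):  # Bellman optimality backups until (near) convergence
    V = {s: max(sum(p * (r + gamma * V[s2]) for p, s2, r in outcomes)
                for _a, outcomes in P[s].items())
         for s in P}
print(V)
```

Each sweep applies the Bellman optimality operator, a contraction with modulus `gamma`, so 200 sweeps drive the values to their fixed point for all practical purposes.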
Offline-Online Approximate Dynamic Programming for Dynamic Vehicle Routing with Stochastic Requests
Transportation Science, 2019
Although increasing amounts of transaction data make it possible to characterize uncertainties surrounding customer service requests, few methods integrate predictive tools with prescriptive optimization procedures to meet growing demand for small-volume ...
M. Ulmer +3 more
semanticscholar +1 more source
Robust Dual Dynamic Programming
Operations Research, 2019
In the paper "Robust Dual Dynamic Programming," Angelos Georghiou, Angelos Tsoukalas, and Wolfram Wiesemann propose a novel solution scheme for addressing planning problems with long horizons.
A. Georghiou, A. Tsoukalas, W. Wiesemann
semanticscholar +1 more source
Dynamic Programming Alignment Accuracy
Journal of Computational Biology, 1998
Algorithms for generating alignments of biological sequences have inherent statistical limitations when it comes to the accuracy of the alignments they produce. Using simulations, we measure the accuracy of the standard global dynamic programming method and show that it can be reasonably well modelled by an "edge wander" approximation to the ...
I. Holmes, R. Durbin
openaire +2 more sources
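The "standard global dynamic programming method" referred to in that abstract is the classic Needleman-Wunsch recursion. The sketch below computes only the optimal global alignment score; the scoring values (match +1, mismatch -1, gap -1) are illustrative assumptions, not parameters from the paper:

```python
# Global alignment score via dynamic programming (Needleman-Wunsch).
def nw_score(a, b, match=1, mismatch=-1, gap=-1):
    m, n = len(a), len(b)
    # dp[i][j] = best score aligning the prefix a[:i] with the prefix b[:j]
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        dp[i][0] = i * gap          # align a[:i] against all gaps
    for j in range(1, n + 1):
        dp[0][j] = j * gap          # align b[:j] against all gaps
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = match if a[i - 1] == b[j - 1] else mismatch
            dp[i][j] = max(dp[i - 1][j - 1] + sub,  # substitute/match
                           dp[i - 1][j] + gap,      # gap in b
                           dp[i][j - 1] + gap)      # gap in a
    return dp[m][n]

print(nw_score("GATTACA", "GCATGCU"))
```

A traceback over the same table would recover the alignment itself; the paper's accuracy question concerns how often that traceback matches the true homology.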
Iterative Dynamic Programming, 2019
From the Publisher: Dynamic programming is a powerful method for solving optimization problems, but has a number of drawbacks that limit its use to solving problems of very low dimension. To overcome these limitations, author Rein Luus suggested using it ...
R. Luus
semanticscholar +1 more source
Management Science, 1975
Numerically valued reward processes are found in most dynamic programming models. Mitten, however, recently formulated finite horizon sequential decision processes in which a real-valued reward need not be earned at each stage. Instead of the cardinality assumption implicit in past models, Mitten assumes that a decision maker has a preference order ...
openaire +2 more sources
Complexity of stochastic dual dynamic programming
Mathematical Programming, 2019
Stochastic dual dynamic programming is a cutting-plane-type algorithm for multi-stage stochastic optimization that originated about 30 years ago. In spite of its popularity in practice, there does not exist any analysis of the convergence rates of this method.
Guanghui Lan
semanticscholar +1 more source
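The cutting-plane mechanism at the heart of that abstract can be illustrated in one dimension: tangent cuts build a piecewise-linear lower model of a convex cost-to-go function, and each iteration minimizes the model and adds a cut there. The quadratic target, grid, and iteration count below are toy assumptions, not the multi-stage setting analyzed in the paper:

```python
# Kelley-style cutting-plane underestimation of a convex cost-to-go
# function -- the single-stage building block behind SDDP.
def f(x):            # "true" convex cost-to-go (queried only pointwise)
    return (x - 2.0) ** 2

def grad(x):         # subgradient of f at x
    return 2.0 * (x - 2.0)

cuts = []            # each cut (v, g, c): v + g * (y - c) <= f(y) for all y
x = -5.0
for _ in range(30):
    cuts.append((f(x), grad(x), x))     # add a tangent cut at the incumbent
    # minimize the piecewise-linear lower model over a grid of [-5, 5]
    grid = [i / 100.0 for i in range(-500, 501)]
    x = min(grid, key=lambda y: max(v + g * (y - c) for v, g, c in cuts))
print(x)
```

In SDDP the same idea is applied stage by stage, with cuts generated from dual solutions of sampled scenario subproblems rather than from a known gradient.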

