Volume 18, Number 4, October-December 2012
Pages: 1005 - 1026
Published online: 16 January 2012
Dynamic programming principle for stochastic recursive optimal control problem with delayed systems∗
1 Department of Mathematics, China University of Mining
2 School of Mathematics, Shandong University, Jinan 250100, P.R. China
Revised: 6 May 2011
In this paper, we study one kind of stochastic recursive optimal control problem for systems described by stochastic differential equations with delay (SDDEs). In our framework, both the dynamics of the system and the recursive utility depend on the past path segment of the state process in a general form. We establish the dynamic programming principle for this kind of optimal control problem and show that the value function is the viscosity solution of the corresponding infinite-dimensional Hamilton-Jacobi-Bellman partial differential equation.
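As a hedged sketch of the setting described in the abstract (the notation below is generic and assumed, not taken from the paper itself), such problems typically couple a controlled SDDE for the state with a backward stochastic differential equation defining the recursive utility:
\begin{align*}
dX(t) &= b\bigl(t, X_t, v(t)\bigr)\,dt + \sigma\bigl(t, X_t, v(t)\bigr)\,dW(t), \qquad t \in [0,T],\\
-\,dY(t) &= f\bigl(t, X_t, Y(t), Z(t), v(t)\bigr)\,dt - Z(t)\,dW(t), \qquad Y(T) = \Phi(X_T),
\end{align*}
where $X_t = \{X(t+\theta) : \theta \in [-\delta, 0]\}$ denotes the past path segment of the state over a delay window $\delta > 0$, $v(\cdot)$ is an admissible control, and the cost functional is $J(v(\cdot)) = Y(0)$. Because the state variable is the path segment $X_t$ rather than the point value $X(t)$, the associated value function lives on a path space, which is why the resulting Hamilton-Jacobi-Bellman equation is infinite dimensional.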
Mathematics Subject Classification: 49L20 / 60H10 / 93E20
Key words: Stochastic differential equation with delay / recursive optimal control problem / dynamic programming principle / Hamilton-Jacobi-Bellman equation
This work is partly supported by the Natural Science Foundation of P.R. China (10921101) and Shandong Province (JQ200801 and 2008BS01024), the National Basic Research Program of P.R. China (973 Program, No. 2007CB814904) and the Science Fund for Distinguished Young Scholars of Shandong University (2009JQ004), the Fundamental Research Funds for the Central Universities (2010QS05).
© EDP Sciences, SMAI, 2012