Dynamic programming principle for stochastic recursive optimal control problem with delayed systems∗
1 Department of Mathematics, China University of Mining
2 School of Mathematics, Shandong University, Jinan 250100, P.R. China
Revised: 6 May 2011
In this paper, we study a class of stochastic recursive optimal control problems for systems described by stochastic differential equations with delay (SDDEs). In our framework, not only the dynamics of the system but also the recursive utility depend on the past path segment of the state process in a general form. We establish the dynamic programming principle for this class of optimal control problems and show that the value function is a viscosity solution of the corresponding infinite-dimensional Hamilton-Jacobi-Bellman partial differential equation.
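For orientation, a generic formulation of such a delayed recursive control system reads as follows; the notation below is illustrative only and is not taken from the paper itself.

```latex
% Controlled SDDE: the coefficients depend on the past path segment
% X_t := \{X(t+\theta) : \theta \in [-\delta, 0]\} of the state process.
\begin{aligned}
dX(t) &= b\bigl(t, X_t, u(t)\bigr)\,dt
        + \sigma\bigl(t, X_t, u(t)\bigr)\,dW(t),
        \quad t \in [0, T],\\
X(\theta) &= \varphi(\theta), \quad \theta \in [-\delta, 0].
\end{aligned}
```

The recursive utility is then typically defined through a backward stochastic differential equation whose generator may likewise depend on the path segment:

```latex
% Recursive utility via a BSDE; the cost functional is its initial value.
\begin{aligned}
-\,dY(t) &= f\bigl(t, X_t, Y(t), Z(t), u(t)\bigr)\,dt - Z(t)\,dW(t),
           \quad t \in [0, T],\\
Y(T) &= \Phi(X_T), \qquad J\bigl(u(\cdot)\bigr) := Y(0).
\end{aligned}
```

Here $W$ is a Brownian motion, $u(\cdot)$ an admissible control, and the value function is obtained by optimizing $J$ over admissible controls.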
Mathematics Subject Classification: 49L20 / 60H10 / 93E20
Key words: Stochastic differential equation with delay / recursive optimal control problem / dynamic programming principle / Hamilton-Jacobi-Bellman equation
This work is partly supported by the Natural Science Foundation of P.R. China (10921101), the Natural Science Foundation of Shandong Province (JQ200801 and 2008BS01024), the National Basic Research Program of P.R. China (973 Program, No. 2007CB814904), the Science Fund for Distinguished Young Scholars of Shandong University (2009JQ004), and the Fundamental Research Funds for the Central Universities (2010QS05).
© EDP Sciences, SMAI, 2012