Deterministic state-constrained optimal control problems without controllability assumptions
1 Laboratoire Jacques-Louis Lions, Université Pierre et Marie Curie, 75252 Paris Cedex 05, France. email@example.com
2 UFR de Mathématiques, Site Chevaleret, Université Paris-Diderot, 75205 Paris Cedex, France.
3 CEREMADE, UMR CNRS 7534, Université Paris-Dauphine, Place de Lattre de Tassigny, 75775 Paris Cedex 16, France. firstname.lastname@example.org
4 Projet Commands, INRIA Saclay & ENSTA, 32 Bd Victor, 75739 Paris Cedex 15, France. Hasnaa.Zidani@ensta.fr
Revised: 14 January 2010
Revised: 19 March 2010
In the present paper, we consider nonlinear optimal control problems with constraints on the state of the system. We are interested in characterizing the value function without any controllability assumption. In the unconstrained case, the value function can be characterized by means of a Hamilton-Jacobi-Bellman (HJB) equation, which expresses the behavior of the value function along the trajectories arriving at, or starting from, any position x. In the constrained case, when no controllability assumption is made, the HJB equation may admit several solutions. Our first result identifies the precise information that must be added to the HJB equation in order to obtain a characterization of the value function. This result is very general and holds even when the dynamics is not continuous and the set of state constraints is not smooth. We also establish stability results for relaxed and penalized control problems.
Mathematics Subject Classification: 35B37 / 49J15 / 49Lxx / 49J45 / 90C39
Key words: Optimal control problem / state constraints / Hamilton-Jacobi equation
© EDP Sciences, SMAI, 2010