Approximate maximum principle for discrete approximations of optimal control systems with nonsmooth objectives and endpoint constraints∗
1 Department of Mathematics, Wayne State University, Detroit, MI, U.S.A.
2 Department of Mathematics and Computer Science, Pennsylvania State University Harrisburg, Middletown, PA 17110, U.S.A.
Revised: 17 August 2012
The paper studies discrete/finite-difference approximations of optimal control problems governed by continuous-time dynamical systems with endpoint constraints. Finite-difference systems, considered as parametric control problems with decreasing discretization step, occupy an intermediate position between continuous-time and discrete-time (fixed-step) control processes and play a significant role in both qualitative and numerical aspects of optimal control. In this paper we derive an enhanced version of the Approximate Maximum Principle for finite-difference control systems, which is new even for problems with smooth endpoint constraints on trajectories and appears to be the first result in the literature that holds for nonsmooth objectives and endpoint constraints. The results obtained establish necessary optimality conditions for constrained nonconvex finite-difference control systems and justify stability of the Pontryagin Maximum Principle for continuous-time systems under discrete approximations.
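To illustrate the kind of approximation the abstract refers to, the following sketch (not taken from the paper; the dynamics, control, and function names are hypothetical) shows the standard Euler finite-difference scheme x_{k+1} = x_k + h f(x_k, u_k) for a continuous-time control system x'(t) = f(x(t), u(t)) on [0, T], and how the endpoint of the discrete trajectory converges to the continuous-time endpoint as the step h = T/N decreases:

```python
import math

def euler_trajectory(f, x0, u, T, N):
    """Euler finite-difference approximation of x'(t) = f(x(t), u(t)).

    Returns the discrete trajectory [x_0, x_1, ..., x_N] generated by
    x_{k+1} = x_k + h * f(x_k, u(t_k)) with step h = T/N and t_k = k*h.
    """
    h = T / N
    x = x0
    traj = [x]
    for k in range(N):
        x = x + h * f(x, u(k * h))
        traj.append(x)
    return traj

# Illustrative example: x'(t) = -x(t) + u(t) with constant control u = 1
# and x(0) = 0, whose exact endpoint value is x(T) = 1 - exp(-T).
f = lambda x, u: -x + u
exact = 1 - math.exp(-1.0)
for N in (10, 100, 1000):
    xT = euler_trajectory(f, 0.0, lambda t: 1.0, 1.0, N)[-1]
    print(f"N = {N:5d}  endpoint error = {abs(xT - exact):.6f}")
```

The printed endpoint errors decrease as N grows, reflecting the intermediate role of finite-difference systems between discrete-time (fixed-step) and continuous-time processes: each discrete problem is a genuine discrete-time control problem, while the family as a whole approximates the continuous-time one.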
Mathematics Subject Classification: 49K15 / 49M25 / 49J52 / 49J53 / 93C55
Key words: Discrete and continuous control systems / discrete approximations / constrained optimal control / maximum principles
Research of this author was partially supported by the USA National Science Foundation under grant DMS-1007132, by the Australian Research Council under grant DP-12092508, by the European Regional Development Fund (FEDER), and by the following Portuguese agencies: Foundation for Science and Technology, Operational Program for Competitiveness Factors, and Strategic Reference Framework under grant PTDC/MAT/111809/2009.
© EDP Sciences, SMAI, 2013