Volume 17, Number 2, April-June 2011
Pages: 322 - 352
Published online: 31 March 2010
Discrete mechanics and optimal control: An analysis*
1 Department of Mathematics, Faculty of Electrical Engineering, Computer Science and Mathematics, University of Paderborn, 33098 Paderborn, Germany. firstname.lastname@example.org
2 Zentrum Mathematik, Technische Universität München, 85747 Garching, Germany. email@example.com
3 Control and Dynamical Systems, California Institute of Technology 107-81, Pasadena, CA 91125, USA. firstname.lastname@example.org
Revised: 17 September 2009
The optimal control of a mechanical system is of crucial importance in many application areas. Typical examples are the determination of a time-minimal path in vehicle dynamics, a minimal-energy trajectory in space mission design, or optimal motion sequences in robotics and biomechanics. In most cases, some sort of discretization of the original, infinite-dimensional optimization problem has to be performed in order to make the problem amenable to computations. The approach proposed in this paper is to directly discretize the variational description of the system's motion. The resulting optimization algorithm lets the discrete solution directly inherit characteristic structural properties of the continuous one, such as symmetries and integrals of the motion. We show that the DMOC (Discrete Mechanics and Optimal Control) approach is equivalent to a finite difference discretization of Hamilton's equations by a symplectic partitioned Runge-Kutta scheme and use this fact to give a proof of convergence. The numerical performance of DMOC and its relationship to other existing optimal control methods are investigated.
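To illustrate the variational discretization underlying DMOC (a minimal sketch, not the paper's implementation): for a Lagrangian L(q, q̇) one forms a discrete Lagrangian L_d(q_k, q_{k+1}) ≈ ∫ L dt over one time step and imposes the discrete Euler-Lagrange equations D2 L_d(q_{k-1}, q_k) + D1 L_d(q_k, q_{k+1}) = 0. The example below works this out for a 1D harmonic oscillator with a midpoint-quadrature L_d; the resulting update is a symplectic, second-order scheme. All function names here are illustrative choices, not from the paper.

```python
import math

# Harmonic oscillator: L(q, qdot) = qdot**2/2 - q**2/2 (unit mass and stiffness).
# Midpoint discrete Lagrangian: L_d(a, b) = h * L((a+b)/2, (b-a)/h).

def d1_Ld(a, b, h):
    # Partial derivative of L_d with respect to its first argument a.
    return -(b - a) / h - h * (a + b) / 4.0

def d2_Ld(a, b, h):
    # Partial derivative of L_d with respect to its second argument b.
    return (b - a) / h - h * (a + b) / 4.0

def del_step(q_prev, q_cur, h):
    # Solve the discrete Euler-Lagrange equation
    #   D2 L_d(q_prev, q_cur) + D1 L_d(q_cur, q_next) = 0
    # for q_next; for this quadratic L it is linear in q_next.
    rhs = d2_Ld(q_prev, q_cur, h) + q_cur / h - h * q_cur / 4.0
    return rhs / (1.0 / h + h / 4.0)

h = 0.01
T = 10.0
N = int(round(T / h))
q = [1.0, math.cos(h)]          # seed with the exact solution q(t) = cos(t)
for k in range(1, N):
    q.append(del_step(q[k - 1], q[k], h))

err = abs(q[N] - math.cos(T))   # endpoint error versus the exact solution
print(f"endpoint error: {err:.2e}, max |q|: {max(abs(x) for x in q):.4f}")
```

Because the discrete flow is symplectic, the numerical amplitude stays bounded near 1 over the whole interval instead of drifting, which is the structure-preservation property the abstract refers to.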
Mathematics Subject Classification: 49M25 / 49N99 / 65K10
Key words: Optimal control / discrete mechanics / discrete variational principle / convergence
© EDP Sciences, SMAI, 2010