ESAIM: Control, Optimisation and Calculus of Variations

Research Article

Discrete mechanics and optimal control: An analysis*

Ober-Blöbaum, Sina (a1), Junge, Oliver (a2) and Marsden, Jerrold E. (a3)

a1 Department of Mathematics, Faculty of Electrical Engineering, Computer Science and Mathematics, University of Paderborn, 33098 Paderborn, Germany.

a2 Zentrum Mathematik, Technische Universität München, 85747 Garching, Germany.

a3 Control and Dynamical Systems, California Institute of Technology 107-81, Pasadena, CA 91125, USA.


The optimal control of a mechanical system is of crucial importance in many application areas. Typical examples are the determination of a time-minimal path in vehicle dynamics, a minimal-energy trajectory in space mission design, or optimal motion sequences in robotics and biomechanics. In most cases, the original infinite-dimensional optimization problem must be discretized in some way to make it amenable to computation. The approach proposed in this paper is to directly discretize the variational description of the system's motion. The resulting optimization algorithm lets the discrete solution directly inherit characteristic structural properties of the continuous one, such as symmetries and integrals of the motion. We show that the DMOC (Discrete Mechanics and Optimal Control) approach is equivalent to a finite difference discretization of Hamilton's equations by a symplectic partitioned Runge-Kutta scheme and employ this fact to give a proof of convergence. The numerical performance of DMOC and its relationship to other existing optimal control methods are investigated.
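To make the phrase "directly discretize the variational description" concrete, the following sketch uses the standard discrete-mechanics notation (the symbols L_d, f_d^±, C and the time grid are conventional choices, not fixed by the abstract itself): the action integral and the virtual work of the control force are approximated on each time step, and the forced discrete Euler-Lagrange equations then act as equality constraints of a finite-dimensional optimization problem.

```latex
% Discrete Lagrangian approximating the action over one step of size h:
%   L_d(q_k, q_{k+1}) \approx \int_{t_k}^{t_{k+1}} L(q(t), \dot q(t))\, dt .
%
% Discrete control forces f_d^\pm approximate the virtual work of the
% control u; the forced discrete Euler-Lagrange equations
%   D_2 L_d(q_{k-1}, q_k) + D_1 L_d(q_k, q_{k+1})
%     + f_d^+(q_{k-1}, q_k, u_{k-1}) + f_d^-(q_k, q_{k+1}, u_k) = 0,
%   k = 1, \dots, N-1,
% constrain the discrete trajectory, while a discrete cost
%   J_d(q_d, u_d) \approx \int_0^T C(q(t), \dot q(t), u(t))\, dt
% is minimized over (q_0, \dots, q_N) and (u_0, \dots, u_{N-1}).
```

Because the constraints come from a discrete variational principle rather than from discretizing the equations of motion directly, the resulting scheme is symplectic, which is the structural link to the partitioned Runge-Kutta interpretation mentioned in the abstract.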

(Received October 8 2008)

(Revised September 17 2009)

(Online publication March 31 2010)

Key Words:

  • Optimal control;
  • discrete mechanics;
  • discrete variational principle;
  • convergence

Mathematics Subject Classification:

  • 49M25;
  • 49N99;
  • 65K10


*  Research partially supported by the University of Paderborn, Germany and AFOSR grant FA9550-08-1-0173.