TRANSFORMATION PRESERVING CONTROLLABILITY FOR NONLINEAR OPTIMAL CONTROL PROBLEMS WITH JOINT BOUNDARY CONDITIONS

Abstract. In this paper we develop a new approach for optimal control problems with general jointly varying state endpoints (also called coupled endpoints). We present a new transformation of a nonlinear optimal control problem with jointly varying state endpoints and pointwise equality control constraints into an equivalent optimal control problem of the same type but with separately varying state endpoints in double dimension. Our new transformation preserves, among other properties, the controllability (normality) of the considered optimal control problems. At the same time it is well suited even for calculus of variations problems with joint state endpoints, as well as for optimal control problems with free initial and/or final time. This work is motivated by the results on second order Sturm–Liouville eigenvalue problems with joint endpoints by Dwyer and Zettl (1994) and by the sensitivity result for nonlinear optimal control problems with separated state endpoints by the authors (2018).


Introduction
In this paper we develop a new approach to optimal control problems with general jointly varying state endpoints (also called coupled endpoints), including the special case with periodic or antiperiodic endpoints. Necessary or sufficient optimality conditions for such optimal control problems are usually derived by a direct analysis of the problem, in particular when dealing with the nonnegativity or coercivity of the second variation, the Jacobi system and the Riccati differential equation, or the Hamilton-Jacobi theory, see e.g. [1, 2, 4, 5, 19, 26-29, 35, 36].
In the literature one can also find an alternative approach based on transforming the problem with joint endpoints into an augmented problem with separated endpoints in double dimension. This method has been used in particular in the context of the Jacobi system (or the linear Hamiltonian system) and the corresponding Riccati equation, see e.g. [1, 14-16, 20, 36], and it consists of doubling the dimension of the original state and adjoint variables by suitable constant quantities. The utility of this method has been revived due to the development during the last two decades of the theory of linear Hamiltonian and symplectic systems without the controllability (normality) assumption. For instance, important results in the continuous and discrete oscillation and spectral theory for problems with joint endpoints were derived by means of the above transformation to the separated endpoints setting in double dimension, see e.g. [3,10,12,13,15,16,30,32,34].
On the other hand, for the nonlinear optimal control problems, meaningful second order necessary optimality conditions, or sufficient optimality conditions for non-isolated candidates, are derived in the literature under a certain controllability (normality) assumption imposed on the linearized system, see e.g. [17, 22, 23, 25-27, 33, 36, 39, 40]. In [36, pg. 886], the transformation described above was suggested as a tool to extend to the joint endpoints case the results on optimality conditions established therein for the separated endpoints problem and involving the second variation phrased in terms of the associated linear Hamiltonian system and the corresponding Riccati differential equation. The extended Riccati type sufficient condition for nonlinear periodic optimal control problems from [1, Theorem 2.1] is an example of the above approach, although the authors prove their result directly without an explicit usage of this transformation. Note that the latter condition is a special case of the solvability condition for the augmented Riccati differential equation presented in [14, Theorem 6.2(iii)].
In this paper we present a new transformation of a general nonlinear optimal control problem with jointly varying state endpoints and pointwise equality control constraints into an equivalent optimal control problem of the same type but with separately varying state endpoints in double dimension (see the results in Section 3). The advantage of our new transformation is that it preserves the controllability (normality) of the considered optimal control problems, as well as the normality of the associated Jacobi systems. This transformation is suitable and new even for the calculus of variations problems with jointly varying endpoints, as it preserves the structure of this type of problem. This work is motivated by the results on the second order Sturm–Liouville eigenvalue problems with joint endpoints in [8, 9] by Dwyer and Zettl. The differential equation considered in [9] represents the Jacobi equation (or the Euler–Lagrange equation) for a calculus of variations problem. There it is suggested to flip the interval [a, b] over its midpoint

    c := (a + b)/2    (1.1)

and to consider an augmented eigenvalue problem with separated endpoints in double dimension over the half-interval [a, c] or [c, b]. In this paper we extend this method to the general setting of nonlinear optimal control problems. The new transformation method allows us, for example, to extend the sensitivity results for optimal control problems with separated state endpoints in [33, Theorem 4.3] to arbitrary jointly varying endpoints (Theorem 4.1). Note that the result in [33] is an extension to separated endpoints of the results with fixed initial state in [22, 24, 25]. In other words, we demonstrate that this new transformation is a useful tool that serves to extend to jointly varying endpoints the results known for separated endpoints.
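The flip over the midpoint can be sketched numerically. The interval, the trajectory x, and the periodic endpoint condition below are invented illustrative choices, not data from the paper:

```python
# Illustrative sketch (data invented, not from the paper): flipping [a, b]
# over its midpoint c = (a + b)/2 pairs each time t in [c, b] with its
# mirror a + b - t, so a joint (here: periodic) endpoint condition turns
# into separated conditions at c and at b.
import math

a, b = 0.0, 2.0
c = (a + b) / 2.0                     # midpoint (1.1)

def x(t):
    # a sample periodic trajectory on [a, b], i.e. x(a) == x(b)
    return math.sin(math.pi * (t - a))

def x_tilde(t):
    """Augmented state on [c, b]: first entry runs forward on [c, b],
    second entry runs backward over the mirrored half [a, c]."""
    return (x(t), x(a + b - t))

# the joint condition x(a) == x(b) becomes a separated condition at t = b
# only, since x_tilde(b) = (x(b), x(a)):
assert abs(x_tilde(b)[0] - x_tilde(b)[1]) < 1e-12
# the two halves are glued by a separated matching condition at t = c:
assert abs(x_tilde(c)[0] - x_tilde(c)[1]) < 1e-12
```

The point of the sketch is that both conditions on the augmented state involve only one endpoint of [c, b] at a time, which is exactly the separated-endpoints structure.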
This transformation is also well suited for optimal control problems with free initial and/or final time, as it allows us to separate the initial free time and state constraints from the final free time and state constraints. The paper is organized as follows. In Section 2 we introduce the optimal control problem (C) on the interval [a, b] with joint state endpoints, including the notion of normality, the strong and weak Pontryagin principles, and the second variation. In Section 3 we develop the transformation of this problem into an equivalent augmented optimal control problem (C̃) with separated state endpoints. In Section 4 we illustrate the utility of this method by deriving a new sensitivity result for optimal control problems with jointly varying state endpoints. In Section 5 we discuss the transformation in the context of optimal control problems with free initial and/or final time. It is worth mentioning that the transformation developed in this paper is not restricted to the spaces of controls and states considered here, but it applies to any other spaces, e.g., to L∞ controls and absolutely continuous states.

Optimal control problem with joint state endpoints
Consider the nonlinear optimal control problem over the interval [a, b] with jointly varying state endpoints

    minimize J(x, u) := K(x(a), x(b)) + ∫_a^b L(t, x(t), u(t)) dt    (C)

subject to x(·) ∈ C^1_p([a, b], R^n) and u(·) ∈ C_p([a, b], R^m) such that

    ẋ(t) = f(t, x(t), u(t)),  t ∈ [a, b],    (2.1)
    ψ(t, u(t)) = 0,  t ∈ [a, b],    (2.2)
    ϕ(x(a), x(b)) = 0.    (2.3)

We use the notation C^1_p(I, V) and C_p(I, V) for the spaces of piecewise continuously differentiable functions and piecewise continuous functions on the interval I with values in the target space V. We assume that n, m, k, r ∈ N are given dimensions with k ≤ m and r ≤ 2n. Given that u(·) ∈ C_p([a, b], R^m), we require that equations (2.1) and (2.2) are satisfied at the left and right limits of its discontinuity points. The Hamiltonian corresponding to problem (C) is defined by

    H(t, x, u, p, λ, λ0) := p^T f(t, x, u) + λ0 L(t, x, u) + λ^T ψ(t, u).    (2.4)
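As a sanity check of the objects just introduced, the following toy instance evaluates the cost J for a feasible pair. All data are invented for illustration (n = m = 1, f(t, x, u) = u, no control constraint ψ, L = u²/2, K = 0, and the periodic joint endpoint map ϕ(x(a), x(b)) = x(b) − x(a)); they are not the paper's data:

```python
# Toy numerical instance of problem (C) (all data invented for illustration):
# n = m = 1, f(t, x, u) = u, no control constraint psi, L = u**2/2, K = 0,
# joint periodic endpoint map phi(x(a), x(b)) = x(b) - x(a).
import math

a, b = 0.0, 1.0

def x(t):
    return math.sin(2.0 * math.pi * t)                   # periodic: x(a) == x(b)

def u(t):
    return 2.0 * math.pi * math.cos(2.0 * math.pi * t)   # u = x', so (2.1) holds

def L(t, xv, uv):
    return 0.5 * uv * uv

def J(n=20000):
    # midpoint rule for the integral term; K == 0 contributes nothing
    dt = (b - a) / n
    ts = (a + (i + 0.5) * dt for i in range(n))
    return sum(L(t, x(t), u(t)) for t in ts) * dt

assert abs(x(b) - x(a)) < 1e-12           # joint endpoint constraint (2.3)
assert abs(J() - math.pi ** 2) < 1e-4     # exact value of the integral is pi**2
```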
A pair (x(·), u(·)) is feasible for problem (C) if x(·) ∈ C^1_p([a, b], R^n) and u(·) ∈ C_p([a, b], R^m) and it satisfies equations (2.1)-(2.3). A feasible pair (x̂(·), û(·)) is a strong local minimum, respectively a weak local minimum, for problem (C) if there exists δ > 0 such that J(x̂, û) ≤ J(x, u) for all feasible pairs (x(·), u(·)) that are δ-close to (x̂(·), û(·)) in the supremum norm of the state, respectively in the supremum norms of both the state and the control. If the inequality J(x̂, û) < J(x, u) holds for all such feasible pairs (x(·), u(·)) ≠ (x̂(·), û(·)), then (x̂(·), û(·)) is a strict strong local minimum, respectively a strict weak local minimum, for problem (C). It is clear that if a pair (x̂(·), û(·)) is a (strict) strong local minimum for problem (C), then it is also a (strict) weak local minimum for problem (C). We shall use the following definition from [38, Section 2]. A function h(t, ·, ·) is continuous at the feasible pair (x(·), u(·)) uniformly in t on [a, b] if for every ε > 0 there exists δ > 0 such that for all t ∈ [a, b] and all (x, u) ∈ B_δ(x(t), u(t)) we have |h(t, x, u) − h(t, x(t), u(t))| < ε. In order to simplify the presentation of this paper and concentrate on the transformation of the endpoint constraints, we use the following assumptions on the regularity of the data h(t, x, u) := (L(t, x, u), f(t, x, u), ψ(t, u)), K(x, y), and ϕ(x, y) of problem (C) near a reference pair (x(·), u(·)). We wish to keep in mind that the results of this paper remain valid when these regularity assumptions are weakened, for instance, to encompass the case of L∞ controls and absolutely continuous states. Let (x(·), u(·)) be a feasible pair and denote by B_δ(y) the open ball centered at y of radius δ. The variable in the subscript indicates the partial derivative with respect to that variable.
(A1) There exists δ1 > 0 such that for each t ∈ [a, b] the function h(t, ·, ·) is differentiable on B_δ1(x(t), u(t)) and the functions h and ∇_(x,u) h are continuous; the functions K(·, ·) and ϕ(·, ·) are continuously differentiable on B_δ1(x(a), x(b)); and the matrices ∇ϕ(x(a), x(b)) and ψ_u(t, u(t)) for t ∈ [a, b] have full rank.

(A2) There exists δ1 > 0 such that for each t ∈ [a, b] the function h(t, ·, ·) is twice differentiable on B_δ1(x(t), u(t)) and the functions h, ∇_(x,u) h, and ∇²_(x,u) h are continuous; the functions K(·, ·) and ϕ(·, ·) are twice continuously differentiable on B_δ1(x(a), x(b)); and the matrices ∇ϕ(x(a), x(b)) and ψ_u(t, u(t)) for t ∈ [a, b] have full rank.

Assume that (A1) holds at (x(·), u(·)). Then the linearization of constraints (2.1)-(2.3) at the feasible pair (x(·), u(·)) has the form

    η̇(t) = A(t) η(t) + B(t) v(t),  N(t) v(t) = 0,  t ∈ [a, b],  M (η^T(a), η^T(b))^T = 0,    (2.5)

where η(·) ∈ C^1_p([a, b], R^n) and v(·) ∈ C_p([a, b], R^m) represent the linearizations of the state x(·) and of the control u(·), and where the matrix functions are

    A(t) := f_x(t, x(t), u(t)),  B(t) := f_u(t, x(t), u(t)),  N(t) := ψ_u(t, u(t)),  M := ∇ϕ(x(a), x(b)).    (2.6)

In some cases we will split the full-rank matrix M into two blocks corresponding to the initial and final states. We also use Y(·) to denote the matrix function for which the columns of Y(t) form an orthonormal basis for the kernel of N(t), i.e., N(t) Y(t) = 0 and Y^T(t) Y(t) = I on [a, b].

Definition 2.1. Assume that (A0) is satisfied at (x(·), u(·)). We say that the feasible pair (x(·), u(·)) satisfies the strong Pontryagin principle for problem (C) if there exist a constant λ0 ≥ 0, a vector γ ∈ R^r, and a function p(·) ∈ C^1_p([a, b], R^n) satisfying (i) the nondegeneracy condition (2.7), the adjoint equation (2.8), the transversality condition (2.9), and (iv)_s the minimality condition (2.10), i.e., u(t) minimizes the Hamiltonian over all u ∈ R^m with ψ(t, u) = 0, where the matrices A(t) and M are defined in (2.6).
Due to the full rank property of M, the nondegeneracy condition (2.7) is equivalent to the seemingly stronger condition λ0 + ‖p(·)‖_Cp[a,b] + |γ| ≠ 0. Note that the subscript "s" in condition (iv)_s above refers to the strong Pontryagin principle, while the subscript "w" in condition (iv)_w below refers to the weak Pontryagin principle.
The adjoint equation (2.8) combined with the stationarity condition (2.11) can be written in terms of the Hamiltonian H in (2.4). Also, due to the full rank property of M and N(t) on [a, b], the nondegeneracy condition (2.7) is now equivalent to the corresponding condition involving all the multipliers λ0, p(·), γ, and λ(·).

Remark 2.3. If (A1) holds at (x(·), u(·)) and if the minimality condition (2.10) of Definition 2.1 is satisfied, then the stationarity condition (2.11) holds. In fact, by the regularity of the control constraints at u(t) (i.e., N(t) is of full rank) and by the Kuhn–Tucker condition (see e.g. [21]), there exists λ(t) satisfying the stationarity condition (iv)_w. The fact that λ(·) ∈ C_p([a, b], R^k) is again a consequence of the regularity of the constraints.
In the following we define the notion of normality of problem (C).
Remark 2.5. (i) When the normality in (C) holds for either the weak or the strong Pontryagin principle, the value of λ0 can be taken to be 1. In this case, the remaining multipliers are unique due to the full rank assumptions on N(t) for all t ∈ [a, b] and/or on M.
(ii) It follows from Remark 2.3 that when assumption (A1) holds, the normality of (C) with respect to the weak Pontryagin principle implies the normality of (C) with respect to the strong Pontryagin principle.
Remark 2.6. The normality of a feasible pair (x(·), u(·)) satisfying the weak Pontryagin principle is shown in [41, Theorem 2.1] or in [17, Proposition 4.5] to be equivalent to the M-controllability of the linear system (2.5). That is, for any vector d ∈ R^r there exist a vector α ∈ R^n and a function v(·) ∈ C_p([a, b], R^m) with N(t) v(t) = 0 on [a, b] such that the corresponding solution η(·) of system (2.5) with η(a) = α satisfies M (η^T(a), η^T(b))^T = d. Furthermore, by [17, Proposition 4.6], the M-controllability of system (2.5) is equivalent to a rank condition on the matrices E(·) ∈ C_p([a, b], R^{r×(m−k)}) and D ∈ R^{r×n}, which are defined in terms of the data in (2.6) and the fundamental matrix Φ(·) ∈ C^1_p([a, b], R^{n×n}) of the system Φ̇ = A(t) Φ for t ∈ [a, b] with Φ(a) = I. Note that (Φ^T)^{-1}(·) is the fundamental matrix of the homogeneous part of the adjoint equation (2.8).
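M-controllability can be made concrete on a minimal example. The data below (scalar dynamics η′ = v, a periodic-type endpoint map, no control constraints) are an illustrative assumption, not the paper's general setting:

```python
# Illustrative sketch (simplified invented data, not the paper's formulas):
# M-controllability of the linearized system means every target value d of
# the linearized joint endpoint map can be reached.  Toy case: n = m = 1,
# dynamics eta' = v (so A = 0, B = 1), no control constraints, and the
# periodic endpoint map M(eta(a), eta(b)) = eta(b) - eta(a).

a, b = 0.0, 2.0

def reach(d, n_steps=1000):
    """Pick the constant control v = d/(b-a) and Euler-integrate eta' = v
    from an arbitrary initial value alpha; return eta(b) - eta(a)."""
    alpha, v = 0.7, d / (b - a)       # alpha is arbitrary here
    eta, dt = alpha, (b - a) / n_steps
    for _ in range(n_steps):
        eta += v * dt
    return eta - alpha

for d in (-1.0, 0.0, 3.5):
    assert abs(reach(d) - d) < 1e-9   # every target value d is reachable
```

In this toy case the reachable set of the endpoint map is all of R, so the system is M-controllable; an uncontrollable example would be obtained by removing the control (v ≡ 0), where only d = 0 is reachable.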
With problem (C) and a normal feasible pair (x(·), u(·)) we associate the Jacobi system (or the linear Hamiltonian system) (2.21), where η(·), ξ(·) ∈ C^1_p([a, b], R^n) and the coefficients A(·), B(·), C(·) ∈ C_p([a, b], R^{n×n}) are given on the interval [a, b] by the formulas in (2.6) and (2.17). In these formulas we assume that the matrix Y^T(t) R(t) Y(t) is invertible on [a, b]. This property is a consequence of the strengthened Legendre condition (2.20). Note that the adjoint variable ξ(t) is taken to be the negative of the corresponding adjoint variable obtained from the weak Pontryagin principle, when it is applied to the second variation J′′(η, v).
In particular, the normality of problem (C) means that the only solution ξ(·) of system (2.24) whose endpoint values are determined by M^T ω for some ω ∈ R^r is the trivial solution ξ(t) ≡ 0 on [a, b]. The matrices A(t) and B(t) appearing in (2.24) are defined by (2.22). This means that, under the assumption that the matrix Y^T(t) R(t) Y(t) is invertible on [a, b], the normality of problem (C) is equivalent to the normality of the corresponding accessory problem (i.e., of the problem of minimizing the second variation J′′(η, v)).

Transformation to separated state endpoints
Following (1.1) we denote the midpoint of the interval [a, b] by c. Together with problem (C) we consider the augmented nonlinear optimal control problem (C̃) over the interval [c, b] with separated endpoints. Given the data (L, f, ψ, K, ϕ) of problem (C) we define the corresponding data of problem (C̃) by the functions in (3.4). We split the augmented variables x̃, ũ, p̃, λ̃, γ̃ and the augmented functions f̃(·, ·, ·) and ψ̃(·, ·) into two entries, whose dimensions are compatible with problem (C), as in (3.5). For an easy comparison with the original problem (C) we also define the corresponding functions K̃ and ϕ̃. Notice that the ranges of x̃(·) and ũ(·) and of the functions f̃(·, ·, ·) and ψ̃(·, ·) in problem (C̃) have double the dimensions of the corresponding data for (C). Furthermore, the time interval [c, b] has half the length of the original interval [a, b]. The definitions of a feasible pair (x̃(·), ũ(·)) and of a (strict) strong or weak local minimum for problem (C̃) are analogous to those for problem (C), with the distinction that we now use the norm ‖z(·)‖_Cp[c,b]. When considering for a feasible pair (x̃(·), ũ(·)) of problem (C̃) the notions of the strong and weak Pontryagin principles stated in Definitions 2.1 and 2.2, we obtain the augmented multipliers partitioned according to the standard partitions from (3.5). The gradients of the functions in (3.4) are computed from the gradients of the data of problem (C). Given the data and a feasible pair (x(·), u(·)) for problem (C), we define in accordance with notation (3.5) for t ∈ [c, b] the augmented pair (x̃(·), ũ(·)) by (3.10). Conversely, given the data of problem (C̃) in (3.4) and a feasible pair (x̃(·), ũ(·)) for (C̃), we define for t ∈ [a, b] the pair (x(·), u(·)) by (3.11). In the next two results we make precise the relationship regarding the feasibility and the local (strong or weak) optimality between the pairs (x(·), u(·)) and (x̃(·), ũ(·)) in their respective problems (C) and (C̃). Here the data of (C̃) are obtained via those of (C) through equations (3.4) and (3.6).
Conversely, assume that (x̃(·), ũ(·)) is a feasible pair for problem (C̃) and define (x(·), u(·)) by (3.11). Then we have u(·) ∈ C_p([a, b], R^m). Moreover, using the first equality in (3.3) and the second to last equality in (3.4), one shows that constraint (2.2) holds. The validity of constraint (2.3) is verified by a direct calculation. Thus, we conclude that the pair (x(·), u(·)) is feasible for problem (C). Finally, we obtain the equality of the costs by substituting s := a + b − t in the first integral of the second equation.
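The substitution s := a + b − t used at the end of the proof rests on the identity ∫_a^b g(t) dt = ∫_c^b [g(t) + g(a + b − t)] dt, which the following sketch checks numerically for a sample integrand g (a stand-in for t ↦ L(t, x(t), u(t)); all data invented):

```python
# Sketch of the substitution s := a + b - t: the integral of a continuous
# integrand over [a, b] equals the integral over the half-interval [c, b]
# of the integrand plus its flipped copy.  Checked with the midpoint rule.
import math

a, b = 0.0, 2.0
c = (a + b) / 2.0

def g(t):
    # sample integrand, standing in for t -> L(t, x(t), u(t))
    return math.exp(-t) * math.cos(t)

def midpoint_rule(h, lo, hi, n=20000):
    dt = (hi - lo) / n
    return sum(h(lo + (i + 0.5) * dt) for i in range(n)) * dt

full = midpoint_rule(g, a, b)
half = midpoint_rule(lambda t: g(t) + g(a + b - t), c, b)
assert abs(full - half) < 1e-6        # the two integrals agree
```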
Proof. The result follows from Proposition 3.1, since the values of J(x, u) andJ(x,ũ) are the same under the definitions in (3.10) and (3.11).
The following result indicates that our transformation preserves the normality (controllability) for either the strong or the weak Pontryagin principle.
Corollary 3.5 (Normality for the strong and weak Pontryagin principles). If the problem (C) is normal at a feasible pair (x(·), u(·)) satisfying the strong, respectively the weak, Pontryagin principle, then the problem (C̃) is normal at the pair (x̃(·), ũ(·)) defined by (3.10), which satisfies the strong, respectively the weak, Pontryagin principle. Conversely, if the problem (C̃) is normal at a feasible pair (x̃(·), ũ(·)) satisfying the strong, respectively the weak, Pontryagin principle, then the problem (C) is normal at the pair (x(·), u(·)) defined by (3.11), which satisfies the strong, respectively the weak, Pontryagin principle.
Proof. The results follow respectively from Propositions 3.3 and 3.4, since by (3.12) and (3.20) the multipliers λ0 and λ̃0 are transformed into each other.
Proof. The statement follows by applying Proposition 3.1 to the second variation J′′ over the constraints (2.5) and to the second variation J̃′′ over the constraints (3.29).
Proposition 3.8 (Second variation). The second variation J′′ is coercive (nonnegative) if and only if the second variation J̃′′ is coercive (nonnegative).
According to (2.19) and (2.20), the Legendre–Clebsch condition for problem (C̃) is stated in (3.38), and the strengthened Legendre–Clebsch condition for problem (C̃) in (3.39).

Remark 3.9. The matrix function Ỹ(·) in (3.38) and (3.39) can be taken to be the block diagonal matrix function Ỹ(·) defined in (3.19). The reason for this is that the columns of each of these two matrix functions form an orthonormal basis for the kernel of Ñ(t) on [c, b], and hence they differ only by a right multiplicative factor Z(t) for all t ∈ [c, b]. Moreover, the matrix Z(t) is unique and Z(·) ∈ C_p([c, b], R^{2(m−k)×2(m−k)}).
Proof. The statement follows by splitting the Legendre–Clebsch condition (2.19) into the two corresponding blocks.

Next we comment on the preservation of the controllability (normality) of the Jacobi systems for problems (C) and (C̃).
In the last part of this section we discuss the transformation of problem (C), which is announced in [36, pg. 886] in the context of the Jacobi system (2.21), and compare it with the transformation presented above. We note that the outcome of this transformation was essentially used in [1, Theorem 2.1] for deriving an extended Riccati type sufficient condition for a periodic nonlinear optimal control problem. In the latter reference, the authors use a cascade system of three differential equations, which consists of the Riccati matrix equation associated with the Jacobi system (2.21) together with a linear differential equation and an integrator. This cascade system is equivalent to the augmented Riccati matrix equation associated with the corresponding Jacobi system (3.54) displayed below.
Remark 3.13. Together with the basic problem (C) we consider the augmented nonlinear optimal control problem (C̄) over the interval [a, b] with separated endpoints. Note that the state x̄(·) is augmented into the double dimension 2n, while the control ū(·) remains in the original dimension m. Given the data (L, f, ψ, K, ϕ) of problem (C) we define the corresponding data of problem (C̄), in particular ψ̄(t, ū) := ψ(t, ū). In the spirit of (3.10) and (3.11), the feasible pairs (x̄(·), ū(·)) for problem (C̄) and (x(·), u(·)) for problem (C) are related by the formulas (3.48). It follows that the feasibility and the (strict) strong/weak local minimality of the pairs (x(·), u(·)) for problem (C) and (x̄(·), ū(·)) for problem (C̄) connected by equations (3.48) correspond to each other, with the associated multipliers transformed accordingly. Note that if problem (C) is a calculus of variations problem (i.e., m = n, f(t, x, u) = u, and ψ(t, u) ≡ 0), then the corresponding augmented problem (C̄) does not preserve this structure. The coefficients of the associated second variation J̄′′(η̄, v̄) of problem (C̄) at a feasible pair (x̄(·), ū(·)) are given according to (2.17) by the formulas in (3.53). Finally, the corresponding augmented Jacobi system (3.54) has the form of a linear Hamiltonian system in η̄(·), ξ̄(·) ∈ C^1_p([a, b], R^{2n}), where the coefficients Ā(·), B̄(·), C̄(·) ∈ C_p([a, b], R^{2n×2n}) are given, in view of (2.22) and (3.53), in terms of the matrices A(t), B(t), C(t) defined in (2.22). Therefore, we conclude that the transformation of problem (C) into the augmented problem (C̄) enjoys the same properties (regarding the feasibility, the strong/weak local optimality, the strong/weak Pontryagin principle, the normality of these problems, including the normality of their Jacobi systems, as well as the nonnegativity and coercivity of the second variation and the validity of the Legendre condition) as the transformation into the augmented problem (C̃), with the exception that problem (C̄) is not suitable when the calculus of variations structure is required to be preserved.
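A minimal sketch of the constant-doubling idea behind this alternative augmentation, under the assumption (consistent with the "doubling by suitable constant quantities" described in the introduction, though the remark's exact formulas are not reproduced here) that the second state component is frozen:

```python
# Hedged sketch (invented data): the state is doubled to (x(t), y(t)) on all
# of [a, b] with the second component frozen, y' = 0, while the control keeps
# its dimension.  The joint condition phi(x(a), x(b)) = 0 then splits into
# the separated conditions y(a) - x(a) = 0 at a and phi(y(b), x(b)) = 0 at b.

def phi(x_a, x_b):
    return x_b - 2.0 * x_a            # sample joint endpoint map

a, b = 0.0, 1.0

def x(t):
    return 3.0 * (1.0 + t)            # sample trajectory: phi(x(a), x(b)) = 0

def x_bar(t):
    return (x(t), x(a))               # second component constant (y' = 0)

# separated condition at the initial time a:
assert x_bar(a)[1] - x_bar(a)[0] == 0.0
# separated condition at the final time b, using only values at t = b,
# because the frozen component carries x(a) forward to t = b:
assert phi(x_bar(b)[1], x_bar(b)[0]) == 0.0
```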
Remark 3.14. Following the preceding remark, the transformation into the augmented problem (C̄) is suitable for nonlinear optimal control problems on time scales, such as in [6, 7, 17, 31, 33]. In this context the sensitivity result in [33, Theorem 4.3] extends (in the spirit of Theorem 4.1 below) to nonlinear optimal control problems on time scales with jointly varying endpoints.

Application of transformation to sensitivity analysis
As an application of the results established in Section 3 we present a generalization of the sensitivity analysis result known in [33, Theorem 4.3] for nonlinear optimal control problems with separated endpoints to the case of jointly varying state endpoint constraints. Consider the following parametric nonlinear optimal control problem associated with problem (C) of Section 2:

    minimize J(x, u, ω) := K(x(a), x(b), ω) + ∫_a^b L(t, x(t), u(t), ω) dt    (C_ω)

subject to x(·) ∈ C^1_p([a, b], R^n) and u(·) ∈ C_p([a, b], R^m) such that

    ẋ(t) = f(t, x(t), u(t), ω),  t ∈ [a, b],
    ψ(t, u(t), ω) = 0,  t ∈ [a, b],
    ϕ(x(a), x(b), ω) = 0.

Here the data of problem (C_ω) are similar to those of problem (C) with the special distinction that they now depend also on the parameter ω ∈ R^d. The Hamiltonian corresponding to problem (C_ω), when it is normal, is defined by

    H(t, x, u, p, λ, ω) := p^T f(t, x, u, ω) + L(t, x, u, ω) + λ^T ψ(t, u, ω).

For the remaining part of this section we fix a value ω̂ ∈ R^d and a pair (x̂(·), û(·)) feasible for problem (C_ω̂). We will assume the following hypothesis (Â2) to hold on the data near (x̂(·), û(·), ω̂), where we set h(t, x, u, ω) := (L(t, x, u, ω), f(t, x, u, ω), ψ(t, u, ω)).
All the notation used earlier for problem (C) carries over to problem (C_ω), and the "hat" placed over the functions indicates the evaluation at the "hat" functions. For instance, Ŷ(·) denotes the matrix function in C_p([a, b], R^{m×(m−k)}) such that for all t ∈ [a, b] the columns of Ŷ(t) form an orthonormal basis for the kernel of N̂(t). Now we are in a position to extend the result in [33, Theorem 4.3], specialized to the continuous time setting, from the separated state endpoints to the jointly varying state endpoints.
Theorem 4.1 (Sensitivity analysis). Let the assumption (Â2) hold at (x̂(·), û(·), ω̂) and let the problem (C_ω̂) be normal at the feasible pair (x̂(·), û(·)) satisfying the weak Pontryagin principle. Let (λ̂0 = 1, γ̂, p̂(·), λ̂(·)) be the corresponding multipliers from Definition 2.2 such that for some α > 0 the second variation of J(x, u, ω̂) at (x̂(·), û(·)) is α-coercive. Then there exists ε > 0 such that for all ω ∈ R^d with |ω − ω̂| < ε the problem (C_ω) has a strict weak local minimum (x(·, ω), u(·, ω)) with the associated multipliers λ0(ω) = 1, γ(ω) ∈ R^r, p(·, ω) ∈ C^1_p([a, b], R^n), λ(·, ω) ∈ C_p([a, b], R^k) that are C^1 functions in the argument ω and satisfy the weak Pontryagin principle corresponding to problem (C_ω).

Proof. The proof is based on applying the transformation introduced in Section 3 to the problem (C_ω) in order to obtain an augmented problem (C̃_ω) on the interval [c, b] with separated state endpoints. With the help of the results developed in Section 3, we verify that all the hypotheses of [33, Theorem 4.3] are satisfied by the transformed problem and hence, the sensitivity analysis result of that theorem is valid for problem (C̃_ω). Then we employ the inverse of this transformation to establish the statement of this theorem for our problem (C_ω).
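A finite-dimensional analogue may help to visualize the statement: for a parametric minimization problem with a coercive second derivative, the minimizer depends smoothly (C¹) on the parameter. The function F below is an invented toy, not the paper's functional:

```python
# Finite-dimensional analogue of the sensitivity statement (a sketch, not
# the theorem itself): minimize F(x, w) = (x - w)**2 + x**2 over x for each
# parameter w.  The exact minimizer is x(w) = w/2, a C^1 function of w with
# derivative 1/2, which we recover numerically.

def argmin(w, lr=0.1, n=500):
    x = 0.0
    for _ in range(n):
        x -= lr * (2.0 * (x - w) + 2.0 * x)   # gradient step on F(., w)
    return x

w0, h = 1.3, 1e-4
deriv = (argmin(w0 + h) - argmin(w0 - h)) / (2.0 * h)   # finite difference
assert abs(argmin(w0) - w0 / 2.0) < 1e-8                 # minimizer x(w) = w/2
assert abs(deriv - 0.5) < 1e-6                           # smooth dependence
```

The coercivity of the second derivative (here F_xx = 4 > 0) is what makes the minimizer isolated and its parameter dependence differentiable, mirroring the role of the α-coercivity assumption in the theorem.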

Optimal control problem with free time
In this section we discuss the transformation from Section 3 for optimal control problems with free initial and final times. In particular, we show that this transformation preserves the corresponding strong or weak Pontryagin principle. To simplify the presentation, we focus on the autonomous case. However, the nonautonomous case can also be handled with the techniques of this section, and the difference between the results for the autonomous and nonautonomous settings is elaborated in Remarks 5.2, 5.3, 5.5, and 5.7. Thus, we consider the nonlinear optimal control problem

    minimize J(a, b, x, u) := K(a, b, x(a), x(b)) + ∫_a^b L(x(t), u(t)) dt    (P)

subject to x(·) ∈ C^1_p([a, b], R^n) and u(·) ∈ C_p([a, b], R^m) such that

    ẋ(t) = f(x(t), u(t)),  ψ(u(t)) = 0,  t ∈ [a, b],
    ϕ(a, b, x(a), x(b)) = 0.
The functions L, f , ψ have the same dimensions as in the problem (C) from Section 2 with the distinction that they do not explicitly depend on the variable t. On the other hand, the functions K and ϕ depend on the varying initial and final times a and b. The Hamiltonian H is also defined via (2.4) with the data of problem (P).
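One common device for free final time, shown here as a hedged sketch (this is a standard reparametrization to a fixed horizon, not the specific transformation developed in this section), is to rescale time and carry the unknown duration as an extra constant state:

```python
# A standard reduction (invented toy data): a free final time b can be
# handled by rescaling time, t = a + s*(b - a) with s in [0, 1], and carrying
# the unknown duration T = b - a as an extra constant state, T' = 0.  The
# toy dynamics x' = u then becomes dx/ds = T*u on the fixed interval [0, 1].

a = 0.0

def integrate(T, u=1.0, n=1000):
    """Euler-integrate dx/ds = T*u on s in [0, 1] from x(a) = 0."""
    x, ds = 0.0, 1.0 / n
    for _ in range(n):
        x += T * u * ds
    return x

# reaching the target x(b) = 3 with the control u = 1 forces the duration
# T = 3, i.e. the free final time b = a + 3:
assert abs(integrate(3.0) - 3.0) < 1e-9
assert abs(integrate(1.5) - 1.5) < 1e-9
```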
(ii) Conditions (5.5)-(5.6) show that if both functions K and ϕ are independent of t1 or independent of t2, then in the strong and weak Pontryagin principle for problem (P) we obtain condition (5.8). In particular, condition (5.8) is satisfied in the strong and weak Pontryagin principle for problem (P) if the initial time is fixed and the final time is free, or if the initial time is free and the final time is fixed.
Based on the transformation presented in Section 3 we consider the augmented optimal control problem (P̃) with separated state endpoints and separated initial and final times

    minimize J̃(c, b, τ̃, x̃, ũ) := K̃_c(c, τ̃(c), x̃(c)) + K̃_b(b, τ̃(b), x̃(b)) + ∫_c^b L̃(x̃(t), ũ(t)) dt.

Given the data (L, f, ψ, K, ϕ) of problem (P) we define the corresponding data of problem (P̃) according to (3.4) and (3.5), where the Hamiltonian of problem (P̃) is

    H̃(x̃, ũ, p̃, λ̃, λ̃0) := p̃^T f̃(x̃, ũ) + λ̃0 L̃(x̃, ũ) + λ̃^T ψ̃(ũ).    (5.15)

Remark 5.3. The functions L̃, f̃, ψ̃, and H̃ in (5.13) and (5.15) are autonomous, as they do not depend explicitly on the variable t ∈ [c, b], and hence they are independent of the state variable τ̃. When the data L, f, and H (but not ψ) of the original problem (P) are t-dependent, and so are the corresponding data for problem (P̃), this case can also be addressed in a similar manner. However, as we shall see in Remarks 5.5 and 5.7, the functions L̃, f̃, and H̃ then depend also on the new state variable τ̃. Furthermore, the adjoint function q̃ corresponding to the state variable τ̃, and the Hamiltonian H̃ evaluated at the corresponding optimal solution, are not necessarily constant.