NECESSARY OPTIMALITY CONDITION FOR THE MINIMAL TIME CRISIS RELAXING TRANSVERSE CONDITION VIA REGULARIZATION

We derive necessary optimality conditions for the time of crisis problem under a more general hypothesis than the usual one encountered in the hybrid setting, which requires that any optimal solution should cross the boundary of the constraint set transversely. To this end, we apply the Pontryagin Maximum Principle to a sequence of regular optimal control problems whose integral costs approximate the time of crisis. Optimality conditions are derived by passing to the limit in the Hamiltonian system (without the use of the hybrid maximum principle). This convergence result essentially relies on the boundedness of the sequence of adjoint vectors in L^∞. Our main contribution is to relate this property to the boundedness in L^1 of a suitable sequence, which allows us to avoid the use of the transverse hypothesis on optimal paths. An example with non-transverse trajectories, for which necessary conditions are derived, highlights the use of this new condition.


Introduction
This paper proposes a novel approach to derive optimality conditions for the so-called time of crisis problem [7], as well as (new) sufficient conditions ensuring the well-posedness of this method. Such conditions slightly differ from those available in the literature, which involve the behavior of an (a priori unknown) optimal path.
Originally, the time of crisis problem was introduced in [15], and it consists in minimizing the total time spent by a solution of a control system outside a given constraint set. It is of particular interest whenever it is not possible to maintain the system in such a set. In that case, an alternative strategy consists in finding a control policy such that the associated solution spends the least amount of time outside the constraint set. The time of crisis arises in the context of viability theory [2, 3], see, e.g., a case study in ecology in [9], and more generally whenever one is unable to maintain a controlled dynamics within a prescribed constraint set over a time window.
From a theoretical point of view, the formulation of the time of crisis involves a function that is discontinuous w.r.t. the state, namely the indicator function of the constraint set. The integrand is then equal to 0 or 1 depending on whether the state of the system lies inside or outside the constraint set (hence, it can be viewed as a particular instance of a hybrid optimal control problem). In particular, the Pontryagin Maximum Principle (PMP), see [18], cannot be applied straightforwardly to derive necessary conditions. Various approaches have been proposed in the literature to study this issue, and we now wish to give an overview of the available methods in order to highlight the differences with our approach.
In the first paper about the time of crisis (see [15]), the optimal control problem was tackled via a dynamic programming approach only. The question of necessary optimality conditions has been investigated more recently in [7] using the so-called hybrid maximum principle (HMP), which is an extension of the PMP adapted to hybrid systems (see [10, 16, 17]). In [7], the authors provide necessary conditions by a direct application of this principle, which requires a so-called transverse hypothesis on optimal trajectories: any optimal solution crosses the boundary of the constraint set transversely. As in [17], this hypothesis is crucial for the derivation of necessary conditions (in particular, for an accurate definition of the jump of the covector at a crossing time). Thanks to this hypothesis, it is also shown in [7, 8] that extremals of a regularized optimal control problem converge, up to a sub-sequence, to an extremal of the time of crisis problem. The methodology is in line with [17], using properties of variation vectors. Note that first- and second-order conditions have also been derived in [6] using the PMP on a reformulation of the time of crisis problem obtained via an augmentation technique (in the spirit of [12-14]). Let us also point out [1, 20], in which an approximation technique is introduced in order to obtain necessary conditions. The study of convergence of regularized extremals relies on a transversality assumption on optimal paths similar to the one used in the framework of the HMP. In addition, the approximated optimal control problem involves the (unknown) optimal solution. It is worth noticing that this approach is of interest for obtaining optimality conditions via the use of the PMP on a smooth problem; however, it is not usable from a numerical point of view since the sequence of approximated problems itself involves the optimal solution.
Let us now describe the content of this paper in more detail. As we have recalled, the time of crisis can be viewed as an application of the HMP to a problem that is discontinuous w.r.t. the state. The HMP available in the literature is a powerful tool, but its application requires an optimal solution to satisfy a transverse assumption related to the boundary of the constraint set (see [17]). In practice, this condition is hardly possible to check if one does not know the optimal solution in advance or estimate it with enough accuracy. This is why one may wonder whether or not it is possible to derive optimality conditions for integrands that are discontinuous w.r.t. the state without the use of this hypothesis. To this end, we introduce a sequence of regular optimal control problems whose integral costs approximate the time of crisis (this is made possible using mollifiers). Our contribution is twofold:
- We propose, in the context of the time of crisis problem, a sufficient condition for the derivation of necessary optimality conditions (see our Thm. 4.6). This condition relies on the data of the problem and on a sequence of approximated solutions, and it can be easily checked.
- Necessary conditions for the time of crisis are derived under this condition, which also covers the transverse case (recovering optimality conditions that can be alternatively obtained using the HMP in this case).
It is worth noticing that our alternative condition (see Thm. 4.6) does not involve the knowledge of the velocity of the optimal solution at a crossing time. As a byproduct of this approach (and in contrast with [1, 20] for instance), the sequence of approximated optimal control problems makes it possible to approximate an optimal solution of the original problem by a solution of a regular optimal control problem, which can be solved numerically with existing efficient methods.
The paper is organized as follows. In Section 2, we introduce the time of crisis problem and the regularization scheme, and we also apply the PMP to the regularized optimal control problem. In Section 3, we study properties of a sequence of approximated solutions (namely, the integral of the mollifier computed along the approximated solution). This sequence will be crucial to introduce an auxiliary hypothesis in Section 5 that will allow us to derive optimality conditions for the time of crisis problem. Section 4 provides optimality conditions for the time of crisis under the hypothesis that the suitable sequence is bounded (and thus, without a transverse hypothesis, which is the novelty here). A sufficient condition involving this sequence is presented in Section 5. The last section is devoted to a complete study of an illustrative example that highlights the convergence of the sequence of adjoint vectors to a discontinuous covector having a jump at each crossing time. This phenomenon is depicted not only in a transverse situation, but also when a trajectory hits the boundary of the constraint set tangentially (and leaves the set non-tangentially).

Notations and main hypotheses
Throughout the paper, m, n ≥ 1 are integers and T > 0. Let us introduce the following notations:
- For x, y ∈ R^n, |x| and x • y denote the Euclidean norm of x and the scalar product between x and y. If A is a square matrix of dimension n, ‖A‖ stands for the norm of A and its transpose is written A^T.
- Given a mapping f : R^n × R^m → R^n, we denote respectively by D_x f(x, u), D_u f(x, u) the Jacobian matrices of f w.r.t. the variables x and u at some point (x, u). Similarly, ∇ϕ(x), ∂_{x_i}ϕ(x), and D²ϕ(x) denote the gradient, the partial derivative w.r.t. x_i, and the Hessian of a function ϕ : R^n → R at some point x ∈ R^n.
- Given two integers k, p ≥ 1 and a function w : [0, T] → R^k, the notation ‖w‖_{L^p(I;R^k)} will stand for the L^p-norm of w over some time interval I ⊂ [0, T].
- Given a function g : [0, T] → R^k and t ∈ (0, T), we denote by g(t^±) the right and left limits (when they exist) of g at point t. In the same way, we shall denote by ġ(t^±) the right and left derivatives of a scalar function g (when they exist).
In the sequel, we consider two non-empty subsets U and K of R^m and R^n respectively, as well as two mappings ϕ : R^n → R and f : R^n × R^m → R^n (the dynamics). Throughout the paper, we suppose that the following hypotheses are satisfied:
(H1) The set U is compact and f is of class C^1 with linear growth, i.e., there is c ≥ 0 such that for every (x, u) ∈ R^n × U, one has |f(x, u)| ≤ c(|x| + 1).
(H2) For every (x, p) ∈ R^{2n}, the set {p • f(x, u) ; u ∈ U} is convex.
(H3) The set K is closed with non-empty interior and is the 0-sub-level set of ϕ, i.e., K = {x ∈ R^n ; ϕ(x) ≤ 0}.
(H4) For every x ∈ ∂K (the boundary of K), one has ϕ(x) = 0 and ∇ϕ(x) ≠ 0.
Note that (H2) is fulfilled whenever the dynamics f is affine w.r.t. the control u and U is convex.

The time of crisis
Throughout the paper, we consider the admissible control set U := {u : [0, T] → U ; u measurable}. (The compactness property actually follows from the continuity of f and D_x f and the compactness of U.)
Given an initial condition x_0 ∈ R^n, the minimal time crisis problem (over [0, T]) is defined as
inf_{u ∈ U} J(u) := ∫_0^T 1_{K^c}(x_u(t)) dt, (2.1)
where x_u(•) denotes the unique solution of the controlled dynamics
ẋ(t) = f(x(t), u(t)) a.e. t ∈ [0, T], x(0) = x_0, (2.2)
and 1_{K^c} denotes the characteristic function of the complement of K, i.e., 1_{K^c}(x) = 1 if x ∉ K and 1_{K^c}(x) = 0 otherwise. Under the previous assumptions, the existence of an optimal control for (2.1) follows easily from the upper semi-continuity of 1_{K^c} (see, e.g., [7], Prop. 2.1 for more details). An important feature for studying optimality conditions of (2.1) is the notion of crossing time that we recall below.
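Although the analysis here is theoretical, the time-of-crisis cost is easy to evaluate numerically on a sampled trajectory. The following sketch is our illustration (the function names and the toy path are not from the paper): it approximates the time spent outside K = {x : ϕ(x) ≤ 0} by a Riemann sum.

```python
import numpy as np

def time_of_crisis(ts, xs, phi):
    """Riemann-sum approximation of the time spent outside K = {phi <= 0}."""
    outside = (phi(xs) > 0.0).astype(float)      # 1_{K^c}(x(t))
    return float(np.sum(outside) * (ts[1] - ts[0]))

# Toy data: x(t) = t - 1 on [0, 2] and phi(x) = x (so K = R_-);
# the state lies outside K exactly on (1, 2].
ts = np.linspace(0.0, 2.0, 200001)
xs = ts - 1.0
J = time_of_crisis(ts, xs, lambda x: x)          # exact value of the cost is 1
```

For this toy path, the exact time of crisis is 1, and the quadrature recovers it up to the grid resolution.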
Definition 2.1. Given a solution x(•) of (2.2), let us define the absolutely continuous function ρ as ρ(t) := ϕ(x(t)), t ∈ [0, T]. (i) We shall say that τ ∈ (0, T) is a crossing time of x(•) if ρ(τ) = 0 and ρ changes sign at τ. We shall say that a crossing time is "outward" if x crosses ∂K from K to K^c, and "inward" if it crosses from K^c to K.
(ii) A crossing time τ is transverse if moreover ρ̇ admits non-null left and right limits at τ, i.e., ρ̇(τ^−) ≠ 0 and ρ̇(τ^+) ≠ 0 (both positive for an outward crossing time, negative for an inward crossing time).
Remark 2.2. Definition 2.1 (i) is equivalent to saying that τ is an isolated root of ρ such that the map t → ρ(t)(t − τ) is locally of constant sign (positive for a crossing from K to K^c, negative for a crossing from K^c to K).
Throughout the paper, we suppose that the following assumption holds true:
Assumption 2.3. Any optimal solution x* of (2.1) has a finite number r ≥ 1 of (alternated) crossing times 0 < τ_1 < · · · < τ_r < T.
In particular, we will not consider optimal trajectories that may cross the boundary of K an infinite number of times over [0, T] (such as chattering solutions [23]).

Regularization scheme
The approach developed in the present paper is an approximation procedure of Problem (2.1) by a sequence of regular problems that can be solved with standard optimality conditions (such as the PMP) or with existing numerical tools. It will allow us to recover the conditions obtained, for instance, in [7] using the HMP [17]. As mentioned in the introduction, other authors have considered regularizations of problems similar to (2.1), but requiring an a priori knowledge of an optimal control [1, 20], which therefore cannot be used as a practical approximation, in contrast with our approach. In addition, we shall see that the sufficient condition for the derivation of optimality conditions that we obtain in Section 5 does not involve the assumption that each crossing time of an optimal solution is transverse, as is required in the HMP. Instead, this condition relies only on the boundedness in L^1 of a suitable sequence related to the regularized problem, which can be tested numerically.
Let us now introduce a regularized scheme associated with (2.1). To this end, we consider approximations of the Heaviside step function, denoted here G, by a sequence of functions that fulfill the following properties.
(H5) For any n ∈ N, the map G_n : R → [0, 1] is C^2 and non-decreasing, and the sequence (G_n)_n converges pointwise to G on R \ {0}. Moreover, there exist two sequences of numbers (a_n)_n ⊂ R_− and (b_n)_n ⊂ R_+, resp. increasing and decreasing and of limit 0 when n → +∞, such that G_n(σ) = 0 for every σ ≤ a_n and G_n(σ) = 1 for every σ ≥ b_n.
In view of its definition, each function G_n is Lipschitz continuous, and its Lipschitz constant L_n is such that L_n → +∞ whenever n → +∞. Note also that the sequence (h_n)_n defined as h_n := G_n′ is a C^1-mollifier, i.e., for every n ∈ N, the support of h_n is contained in [a_n, b_n] and one has ∫_R h_n(σ) dσ = 1. In addition, one has sup_{σ∈R} |h_n(σ)| → +∞ whenever n → +∞. A typical example of such a sequence of functions is depicted in Fig. 1. In the sequel, we consider the following sequence of optimal control problems
inf_{u ∈ U} J_n(u) := ∫_0^T G_n(ϕ(x_u(t))) dt, (2.6)
where x_u(•) is the unique solution to (2.2). The existence of an optimal control of (2.6) is straightforward using Filippov's existence theorem [22] under our assumptions. Hereafter, we denote by (x_n, u_n) an optimal pair of (2.6). Following [5], we can show that, up to a sub-sequence, the sequence (x_n, u_n)_n converges strongly-weakly to an optimal pair (x*, u*) of (2.1). Let us stress that in general, (u_n)_n may not converge point-wise to u*. However, more can be said about this point whenever the system is affine w.r.t. the control and under additional assumptions (such as the absence of singular arcs), see e.g. [8], Remark 4.3 or [19].
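For concreteness, here is one admissible family (G_n)_n for (H5); the quintic "smoothstep" transition below is our own choice, not necessarily the one of Fig. 1. It is C^2, non-decreasing, equal to 0 below a_n = −1/n and to 1 above b_n = 1/n, and its derivative (the mollifier h_n = G_n′) integrates to 1 while its sup norm grows with n.

```python
import numpy as np

def G(sigma, n):
    """C^2 smoothed Heaviside: 0 below a_n = -1/n, 1 above b_n = 1/n."""
    a, b = -1.0 / n, 1.0 / n
    s = np.clip((sigma - a) / (b - a), 0.0, 1.0)
    return 6 * s**5 - 15 * s**4 + 10 * s**3      # non-decreasing, C^2 at a and b

def h(sigma, n, eps=1e-7):
    """Mollifier h_n = G_n' (central finite difference)."""
    return (G(sigma + eps, n) - G(sigma - eps, n)) / (2.0 * eps)

sig = np.linspace(-1.0, 1.0, 400001)
mass = float(np.sum(h(sig, 5)) * (sig[1] - sig[0]))   # ∫ h_n ≈ 1
peak5, peak50 = h(0.0, 5), h(0.0, 50)                 # sup |h_n| grows with n
```

Here `mass ≈ 1` confirms the mollifier normalization, and `peak50 > peak5` illustrates that sup |h_n| → +∞ as n → +∞.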

Optimality conditions for the regularized problem
We are now in a position to apply the Pontryagin Maximum Principle to Problem (2.6). Let H_n : R^n × R^n × R^m → R be the Hamiltonian associated with (2.6), defined as:
H_n(x, p, u) := p • f(x, u) − G_n(ϕ(x)).
Since (x_n, u_n) is optimal for (2.6), there is an absolutely continuous map p_n : [0, T] → R^n satisfying the adjoint equation ṗ_n = −∂H_n/∂x almost everywhere and p_n(T) = 0, that is,
ṗ_n(t) = −D_x f(x_n(t), u_n(t))^T p_n(t) + h_n(ϕ(x_n(t))) ∇ϕ(x_n(t)), (2.7)
as well as the Hamiltonian condition, which can be written
H_n(x_n(t), p_n(t), u_n(t)) = max_{ω ∈ U} H_n(x_n(t), p_n(t), ω) a.e. t ∈ [0, T]. (2.8)
The triple (x_n, p_n, u_n) is called an extremal (recall that only normal extremals occur here as there is no terminal condition). Let us observe that the problem is autonomous; therefore, the Hamiltonian is conserved along any extremal: for every n ∈ N, there is H̄_n ∈ R such that H_n(x_n(t), p_n(t), u_n(t)) = H̄_n for every t ∈ [0, T].
Remark 2.4. Since (x_n)_n strongly-weakly converges to x*, which has r crossing times, it can be observed that t → h_n(ϕ(x_n(t))) takes arbitrarily large values in (2.7). Hence, we can expect the sequence (ṗ_n)_n to be unbounded in L^∞([0, T]; R^n). We shall see that whenever every crossing time of x* is transverse, then (p_n)_n is indeed bounded in L^∞([0, T]; R^n) even though this is not the case for (ṗ_n)_n. This is actually the main difficulty for studying the behavior of (2.7) and for passing to the limit when n → +∞.
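The phenomenon of Remark 2.4 can be seen on a toy example of ours (assuming the standard PMP adjoint for this running cost): for scalar dynamics ẋ = 1, x(0) = −1 and ϕ(x) = x, the adjoint equation reduces to ṗ(t) = h_n(x(t)) with p(T) = 0, so p stays bounded (|p| ≤ ∫ h_n = 1) while sup |ṗ| = sup h_n = n blows up.

```python
import numpy as np

def h(sigma, n):
    # triangular mollifier supported on [-1/n, 1/n], integral 1, peak n
    b = 1.0 / n
    return np.maximum(0.0, (b - np.abs(sigma)) / b**2)

def adjoint(n, m=400001):
    ts = np.linspace(0.0, 2.0, m)
    dt = ts[1] - ts[0]
    pdot = h(ts - 1.0, n)                 # h_n(phi(x(t))) with x(t) = t - 1
    # p(t) = -∫_t^T h_n(x(s)) ds, via cumulative sums
    p = -(np.sum(pdot) * dt - np.cumsum(pdot) * dt)
    return p, pdot

p, pdot = adjoint(50)
p_max = float(np.max(np.abs(p)))          # stays ≈ 1 for every n
pdot_max = float(np.max(pdot))            # grows like n
```

This mirrors the dichotomy of Remark 2.4: (p_n)_n bounded in L^∞ while (ṗ_n)_n is not.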
The boundedness of the sequence (p_n)_n is related to the behavior of the sequence (I_n)_n defined as
I_n := ∫_0^T h_n(ϕ(x_n(t))) dt. (2.9)
As this integral will play a crucial role in the establishment of optimality conditions, when passing to the limit in (2.7) as n → +∞, we devote the next section to the analysis of its properties.
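To fix ideas, the integral I_n = ∫_0^T h_n(ϕ(x_n(t))) dt is easy to compute numerically. In the sketch below (our toy setup), the path x(t) = t − 1 crosses ∂K transversely with slope 1, so the substitution s = ϕ(x(t)) gives I_n = ∫ h_n(s) ds = 1 for every n: the sequence stays bounded, in line with the transverse case. The triangular kernel used here is not C^1 as (H5) requires, but it suffices for this quadrature illustration.

```python
import numpy as np

def h(sigma, n):
    b = 1.0 / n
    return np.maximum(0.0, (b - np.abs(sigma)) / b**2)   # ∫ h = 1, peak = n

ts = np.linspace(0.0, 2.0, 400001)
xs = ts - 1.0                        # transverse crossing at tau = 1, slope 1
dt = ts[1] - ts[0]
I_vals = [float(np.sum(h(xs, n)) * dt) for n in (5, 20, 80)]   # all ≈ 1
```

More generally, for a transverse crossing with slope ρ̇(τ) the same substitution gives a contribution 1/|ρ̇(τ)| per crossing, which is why transversality keeps (I_n)_n bounded.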

Properties of the sequence of integrals (I_n)_n
We start with some notation. Recall that the limiting path x* has a finite number of crossing times (τ_i)_{1≤i≤r} (recall Assumption 2.3). Set δ := (1/2) min_{0≤i≤r} (τ_{i+1} − τ_i) with the convention τ_0 := 0, τ_{r+1} := T, and define the subsets J_η := {t ∈ [0, T] ; |t − τ_i| ≥ η for every 1 ≤ i ≤ r}, η ∈ (0, δ). The following property will be used at several places.
Property 3.1. For all η ∈ (0, δ), there is N ∈ N such that for all n ≥ N and t ∈ J_η, one has h_n(ϕ(x_n(t))) = 0.
Proof. Take η ∈ (0, δ). Since (x_n)_n uniformly converges to x* and as ϕ(x*(t)) = 0 if and only if t ∈ {τ_1, . . ., τ_r}, there are N_1 ∈ N and γ > 0 such that |ϕ(x_n(t))| ≥ γ for every n ≥ N_1 and t ∈ J_η. Now, recall that both sequences (a_n)_n, (b_n)_n defining the support of h_n converge to zero, whence the result.
Next, we have the following equivalence between the boundedness of (p_n)_n and that of (I_n)_n.
Proposition 3.2. The sequence (p_n)_n is bounded in L^∞([0, T]; R^n) if and only if the sequence (I_n)_n is bounded.
Proof. Since p_n(T) = 0 for every n ∈ N, the boundedness of (p_n)_n easily follows from the boundedness of (I_n)_n using Gronwall's Lemma and the fact that (x_n)_n is uniformly bounded. Let us now assume that (p_n)_n is bounded in L^∞([0, T]; R^n). From (H4) and using the uniform convergence of (x_n)_n to x*, there are η ∈ (0, δ), N_1 ∈ N, and γ > 0 such that |∇ϕ(x_n(t))| ≥ γ for every n ≥ N_1 and every t ∈ [0, T] \ J_η. From Property 3.1, there is N ∈ N such that for every n ≥ N and every t ∈ J_η, one has h_n(ϕ(x_n(t))) = 0. It follows that for every n ≥ N_2 := max(N, N_1), one has: Now, from the hypotheses on f and ϕ, there is C ≥ 0 such that: By an integration by parts, using the terminal condition p_n(T) = 0 for every n ∈ N, we obtain: As the sequences (x_n)_n, (ẋ_n)_n, and (p_n)_n are uniformly bounded, we deduce that there is C′ ≥ 0 bounding the integral of h_n(ϕ(x_n(•))) over [0, T] \ J_η uniformly in n. Since h_n(ϕ(x_n(t))) = 0 for every t ∈ J_η and every n ≥ N, the result follows.
This proposition will be used later on to obtain optimality conditions for (2.1) by letting n → +∞ in (2.7) and by reasoning by contradiction, supposing that (p_n)_n is unbounded in L^∞([0, T]; R^n). We now aim at studying convergence properties of the sequence (I_n)_n. For 1 ≤ i ≤ r, n ∈ N, and η ∈ (0, δ], define the integrals
Ĩ_{i,n}(η) := ∫_{τ_i−η}^{τ_i+η} h_n(ϕ(x_n(t))) dt.
Lemma 3.3. Suppose that (I_n)_n is bounded. Then, for any i ∈ {1, . . ., r}, there exists ℓ_i ∈ R_+ such that for every η ∈ (0, δ], Ĩ_{i,n}(η) → ℓ_i whenever n → +∞.
Proof. Fix i ∈ {1, . . ., r}. Since (I_n)_n is bounded, so is (Ĩ_{i,n}(η))_n for η ∈ (0, δ]; hence, we may assume that there exists ℓ_i ∈ R_+ such that (up to a sub-sequence) Ĩ_{i,n}(δ) → ℓ_i when n → +∞. We can then write: Recall that (x_n)_n uniformly converges to x* when n → +∞. Thus, there exists N ∈ N such that: Now, both sequences (a_n)_n and (b_n)_n converge to zero; hence there exists N′ ≥ N such that: Because the support of h_n is contained in [a_n, b_n], this proves that Ĩ_{i,n}(η) → ℓ_i when n → +∞. Since η ∈ (0, δ] is arbitrary, the result follows.
Thanks to this lemma, we can now show the following result, which provides a kernel-type property of (I_n)_n and will be crucial hereafter to study the convergence of (p_n)_n.
In addition, η i goes to zero as ε ↓ 0.
Proof. For η ∈ (0, δ], one can write: Let then ε > 0 be fixed and set M := sup_n I_n. By continuity of x* at t = τ_i, there exists η_i ∈ (0, δ] such that the desired estimate holds. Without any loss of generality, one can choose η_i so that it goes to zero as ε tends to 0. Now, the sequence (x_n)_n uniformly converges to x* over [0, T]. Hence, there exists N ∈ N such that for all n ≥ N, one has |g(x_n(t)) − g(x*(t))| ≤ ε/(3M) for every t ∈ [0, T]. This gives us the following property: The last step is to apply Lemma 3.3 for η = η_i, which provides the next property:
Let us underline that for an arbitrary sequence (x_n)_n satisfying (2.2) which (up to a sub-sequence) converges strongly-weakly to a solution x̄ of (2.2), the associated sequence of integrals (I_n)_n is not necessarily bounded, even if the limiting trajectory x̄ has a transverse crossing time (see the example below).
Example 3.5. Consider the scalar dynamics ẋ = u, where u(t) ∈ [0, 1], together with the set K := R_− (associated with ϕ(x) := x). As nominal path, we consider x̄(t) := t − 1; thus τ = 1 is a transverse crossing time. For this example, we suppose for convenience that the mollifier (h_n)_n is such that a_n = 0 and b_n > 0 for every n ∈ N. Let us denote by c_n ∈ (a_n, b_n) the unique point at which h_n achieves its maximum (one has h_n(c_n) → +∞ whenever n → +∞). Next, we consider a sequence of absolutely continuous functions (x_n)_n, defined piecewise around the crossing, where (ζ_n)_n converges to 0 when n → +∞, (ξ_n)_n and (ξ′_n)_n take values in (0, 1), and the compatibility condition (3.10) holds. Equality (3.10) guarantees the continuity of x_n at t = 1 + ξ′_n for all n ∈ N. The sequences (ξ_n)_n and (ξ′_n)_n will be made more precise hereafter, and (3.10) allows to uniquely define the sequence (ζ_n)_n. Then, we can easily see that I_n → +∞. Indeed, I_n can be written as a sum of three integrals I_n^1, I_n^2, I_n^3. Recall that ϕ(x) = x; thus, by changing the integration variable t into s = (1 + c_n)/(1 − ξ_n) t − 1, resp. s_n, we obtain that for every n ∈ N, one has I_n^1 ≤ (1 − ξ_n)/(1 + c_n) and I_n^3 ≤ 1. We deduce that (I_n^3)_n is bounded and so is (I_n^1)_n because (c_n)_n and (ξ_n)_n converge to zero. Now, for all n ∈ N, a lower bound on I_n^2 shows that I_n → +∞. In addition, we easily check that the sequence (x_n)_n strongly-weakly converges to x̄. First, one has sup_{t∈[0,2]} |x_n(t) − x̄(t)| → 0 whenever n → +∞. Second, one can verify that (ẋ_n)_n converges a.e. to ẋ̄ (actually for every t ∈ [0, 2] \ {1}), which is enough to ensure the weak convergence of (ẋ_n)_n to ẋ̄ in L^2([0, 2]; R).
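The blow-up mechanism of Example 3.5 can be reproduced numerically with a simplified construction of ours (not the exact sequences ξ_n, ξ′_n, ζ_n of the example): trajectories x_n that converge uniformly to the transverse path x̄(t) = t − 1 but dwell on the boundary ϕ = 0 for a time of order 1/√n, so that I_n grows like √n.

```python
import numpy as np

def h(sigma, n):
    b = 1.0 / n
    return np.maximum(0.0, (b - np.abs(sigma)) / b**2)   # ∫ h = 1, peak = n

def I(n, m=800001):
    ts = np.linspace(0.0, 2.0, m)
    zeta = 1.0 / np.sqrt(n)                      # dwell duration at the boundary
    shift = np.abs(ts - 1.0) - zeta / 2.0
    xs = np.sign(ts - 1.0) * np.maximum(0.0, shift)   # plateau at phi = 0 near t = 1
    return float(np.sum(h(xs, n)) * (ts[1] - ts[0]))

I16, I256 = I(16), I(256)    # ≈ sqrt(16) + 1 = 5 and sqrt(256) + 1 = 17
```

Here the plateau contributes n · ζ_n = √n to I_n while sup_t |x_n(t) − x̄(t)| ≤ ζ_n/2 → 0, so (x_n)_n still converges uniformly to a path with a transverse crossing, yet (I_n)_n is unbounded.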
Nevertheless, we shall see later on that this phenomenon does not occur whenever (x_n)_n is a minimizing sequence obtained from (2.6). This is due to the application of Pontryagin's Principle, which provides additional properties on the extremals (x_n, p_n, u_n) preventing (I_n)_n from blowing up whenever every crossing time of the optimal path x* is transverse.

Optimality conditions for the time of crisis problem
In this section, we give necessary optimality conditions without the HMP, i.e., by passing to the limit in the state-adjoint system satisfied by the extremal (x_n, p_n, u_n). Let us start by giving a definition of a covector associated with Problem (2.1). To do so, we define the Hamiltonian H : R^n × R^n × R^m → R associated with (2.1) as H(x, p, u) := p • f(x, u).
Definition 4.1. Given a solution (x*, u*) of (2.1) with r crossing times, we say that a piecewise absolutely continuous function p : [0, T] → R^n is a covector associated to x* if p is absolutely continuous on [0, T] \ {τ_1, . . ., τ_r} and satisfies the following conditions.
Remark 4.2. Equality (4.4) amounts to saying that p has a jump at t = τ_i in the normal cone to the set K (meaning here that the jump is in the direction of ∇ϕ(x*(τ_i)), it being assumed that K has a smooth boundary).
To establish optimality conditions for (2.1), we start by proving the convergence of (p_n)_n, proceeding step by step.
Then, up to a sub-sequence, the pair (x_n, p_n) strongly-weakly converges over every compact subset of [0, T] \ {τ_1, . . ., τ_r}.
Proof. In view of Property 3.1, for n large enough, the pair (x_n, p_n) satisfies the system (4.6) on J_η. Since (p_n(t_0))_n is bounded, we may assume that (p_n(t_0))_n converges (up to a sub-sequence). Because (x_n)_n uniformly converges to x*, we deduce that x_n(t_0) → x*(t_0). Using (H1)-(H2), the compactness result for solutions of a control system (see, e.g., [11], Thm. 1.11) yields the result.
Proof. Let η ∈ (0, δ). By using the previous lemma, for every 1 ≤ i ≤ r, we obtain the existence of an absolutely continuous function p_η defined over J_η and satisfying (4.6) over J_η. We now argue that p_η does not depend on η by considering a sequence of positive numbers (η_k)_k such that η_k ↓ 0, which allows us to define p_{η_k} for every k ∈ N, as previously (over J_{η_k}). We then obtain p_{η_{k+1}}(t) = p_{η_k}(t) for every t ∈ J_{η_k} because J_{η_k} ⊂ J_{η_{k+1}} for every k ∈ N. This shows that we can define a function p : [0, T] \ {τ_1, . . ., τ_r} → R^n without any ambiguity by the equality p = p_η on every set J_η, η ∈ (0, δ]. By construction, p does not depend on η, as wanted.
Let us now address the question of the constancy of the Hamiltonian along (x*, p, u*) and the Hamiltonian condition (4.2).
Proof. Recall that x* has r crossing times τ_i, i = 1, . . ., r. Let 0 ≤ i ≤ r and t_0 ∈ (τ_i, τ_{i+1}) be a Lebesgue point of the measurable function t → p(t) • ẋ*(t). From (2.8), we obtain for any u ∈ U and ν > 0 (small enough): Letting ν ↓ 0 then gives: Since u ∈ U is arbitrary and almost every point of [0, T] is a Lebesgue point of t → p(t) • ẋ*(t), we obtain (4.2).
Thanks to the previous lemma, we can now give the main result of this section, namely that (x*, p, u*) is an extremal of Problem (2.1).
Theorem 4.6. Suppose that the sequence of integrals (I_n)_n is bounded or that the sequence (p_n)_n is bounded in L^∞([0, T]; R^n). Then, there exists a non-null covector p : [0, T] → R^n associated with x* in the sense of Definition 4.1. In addition, the sequence (p_n)_n strongly-weakly converges to p over every compact subset of [0, T] \ {τ_1, . . ., τ_r}.

Let us now show (4.3). Since (p_n)_n is bounded in L^∞([0, T]; R^n), there is R ≥ 0 such that |p(t)| ≤ R for every t ∈ [0, T] \ {τ_1, . . ., τ_r}.
Given t_1, t_2 ∈ (τ_i, τ_{i+1}), we can thus write: where A := R × sup_{t∈[0,T]} |D_x f(x*(t), u*(t))|. This inequality shows that p(•) satisfies the Cauchy criterion at t = τ_i^+, which proves that the right limit p(τ_i^+) exists. Similarly, one obtains the existence of the left limit p(τ_i^−). We can repeat this argument at every crossing time τ_i, which gives (4.3).
Let us now prove the jump formula (4.4). To do so, consider a sequence (ε_k)_k such that ε_k ↓ 0, and let us apply Proposition 3.4 with ∇ϕ in place of g (ϕ being of class C^2). For every k ∈ N, there exist η_k ∈ (0, δ] and N_k ∈ N such that for every n ≥ N_k, one has: First, we let n go to infinity (k being fixed), which gives: which is the desired property. The last step is to show that for every 1 ≤ i ≤ r, one has ℓ_i ≠ 0. Consider the map h := t → p(t) • ẋ*(t), which is continuous on each time interval (τ_{i−1}, τ_i). As p admits left and right limits at each τ_i, so does h. Consider i ∈ {1, . . ., r} such that x* crosses ∂K from K to K^c at t = τ_i. One then has

and again a contradiction with (4.5).
Let us stress that this result does not involve Assumption (H') (to be found below in Sect. 5), which is about the transversality of x*. Using the constancy of the Hamiltonian along an extremal, the jump formula can also be written as follows (see also [17]).
Corollary 4.7. Assume that the sequence of integrals (I_n)_n is bounded. If x* admits left and right derivatives ẋ*(τ_i^±) at a crossing time τ_i, i ∈ {1, . . ., r}, such that ∇ϕ(x*(τ_i)) • ẋ*(τ_i^−) ≠ 0 or ∇ϕ(x*(τ_i)) • ẋ*(τ_i^+) ≠ 0, then the jump of the covector p at τ_i can be written as: where δ_i = +1, resp. δ_i = −1, if the crossing time τ_i is outward, resp. inward.
Proof. Let us write the conservation of the Hamiltonian in right and left neighborhoods of τ_i. If τ_i is an outward crossing time, one then has:

Combining this with (4.9) gives the equivalent expression. If τ_i is an inward crossing time, one can easily check that this amounts to replacing +1 by −1 in (4.9).
Remark 4.8. When every crossing time τ_i of the optimal solution x* is transverse, one can use expression (4.8) to determine explicitly the jumps of p, recursively from the terminal time T, in a univocal way. Therefore, for any choice of sequences (a_n)_n, (b_n)_n such that (I_n)_n is bounded and (x_n)_n converges to x* (up to a sub-sequence), one obtains the same numbers ℓ_i. If not, the jumps are determined only implicitly from conditions (4.7) and (4.8).

Sufficient conditions for the boundedness of the sequence (I_n)_n
The aim of this section is to give sufficient conditions on the system that ensure the boundedness of (I_n)_n. In that case, necessary optimality conditions for an optimal path are guaranteed by Theorem 4.6. Given an optimal solution (x*, u*) of Problem (2.1), let us introduce the following hypothesis (in the spirit of the Hybrid Maximum Principle, which also requires a transverse assumption [17]).
(H') Every crossing time of x* is transverse.
This hypothesis excludes the cases where the optimal solution x* hits the boundary of K tangentially at a crossing time τ (see (5.1)). Actually, several cases could appear depending on whether both scalar products in (5.1) are zero or only one. As far as we know, the derivation of necessary conditions in this case has been very little considered in the literature (except in [4]) and remains an open question. We shall see in Section 6 an example of optimal paths that possess non-transverse crossing times.

The transverse case
We start with the following result, which covers the case where every crossing time of the optimal solution x* is transverse.
Proposition 5.1. Under Hypothesis (H'), the sequence (I_n)_n is bounded.
Proof. Suppose by contradiction that (I_n)_n is unbounded. Extracting a sub-sequence if necessary, we may assume that I_n → +∞ whenever n → +∞. Observe that the function q_n := p_n / I_n satisfies the differential system obtained from (2.7) with h̃_n(σ) := h_n(σ)/I_n, σ ∈ R, in place of h_n. By this change of variable, one obviously has ∫_0^T h̃_n(ϕ(x_n(t))) dt = 1, so Proposition 3.2 implies that the sequence (q_n)_n is uniformly bounded in L^∞([0, T]; R^n). One can then repeat the same argument as in the proof of Theorem 4.6 on the sequence (q_n)_n, except for the last point about the value of the Hamiltonian. Indeed, as the covector p_n has been renormalized, the value of the Hamiltonian is no longer equal to −G_n(ϕ(x_n(T))). However, we obtain that there exists a piecewise absolutely continuous function q : [0, T] \ {τ_1, . . ., τ_r} → R^n satisfying the following properties:
- Up to a sub-sequence, (q_n)_n converges to q on every compact set of [0, T] \ {τ_1, . . ., τ_r}.
- The function q satisfies the limiting adjoint system on [0, T] \ {τ_1, . . ., τ_r}.
- The Hamiltonian condition (5.3) is fulfilled almost everywhere.
- For every 1 ≤ i ≤ r, q admits limits at τ_i^±, i.e., lim_{t→τ_i^±} q(t) exists.
- At every crossing time τ_i, 1 ≤ i ≤ r, there exists ℓ̃_i ∈ R such that (5.4) holds.
Here, we can no longer guarantee that each scalar ℓ̃_i is non-null. Nevertheless, Proposition 3.4 implies that for every 1 ≤ i ≤ r and every η ∈ (0, δ], one has: Observe that for all n ∈ N and every t ∈ [0, T], one has H̄_n/I_n → 0 and −G_n(ϕ(x_n(t)))/I_n → 0 when n → ∞, because H̄_n ∈ [−1, 0] and G_n(ϕ(x_n(t))) ∈ [0, 1] for every n ∈ N and every t ∈ [0, T]. We then obtain that the covector q satisfies (5.6), by considering Lebesgue points of the map t → q(t) • f(x*(t), u*(t)) and repeating exactly the same argument as in the proof of Lemma 4.5.
We claim now that q ≢ 0, i.e., q(•) is non-null over [0, T]. To show this property, it is enough to prove that there exists 1 ≤ i ≤ r such that ℓ̃_i ≠ 0. Suppose then by contradiction that for every 1 ≤ i ≤ r, one has ℓ̃_i = 0. By using Lemma 3.3 with h̃_n in place of h_n, we obtain that for all i ∈ {1, . . ., r} and all η ∈ (0, δ], the corresponding integrals converge to ℓ̃_i = 0. Combining (5.7) and (5.8) gives us a contradiction with (5.2); thus, we have obtained that there is 1 ≤ i ≤ r such that ℓ̃_i ≠ 0.
To conclude the proof of the proposition, we finally exhibit a contradiction involving the optimality conditions (5.4) and (5.6) satisfied by the covector q. Fix 1 ≤ i ≤ r such that ℓ̃_i ≠ 0. First, using (5.3)-(5.6) at t = τ_i^−, one has: Using (5.4), we get: It follows that (5.9) holds. We now proceed with the same reasoning using (5.3)-(5.6) at t = τ_i^+. We obtain: Again, thanks to (5.4), one has: It follows that: Using again that ℓ̃_i ≠ 0 and ∇ϕ(x*(τ_i)) • ẋ*(τ_i^+) ≠ 0, we deduce that (5.10) holds. We claim that (5.9)-(5.10) yield a contradiction. Indeed, because ℓ̃_i is non-zero, (5.9)-(5.10) imply that the scalar products ∇ϕ(x*(τ_i)) • ẋ*(τ_i^−) and ∇ϕ(x*(τ_i)) • ẋ*(τ_i^+) are of opposite signs. But because at t = τ_i the trajectory crosses the boundary of K, these scalar products are necessarily of the same sign (positive if the crossing time is outward, negative if it is inward). This contradiction completes the proof of the proposition and shows that (I_n)_n is necessarily a bounded sequence.
As a consequence, we recover the HMP of the literature in the transverse case, as stated below.
Corollary 5.2.Under Hypothesis (H ), there exists a non-null covector p : [0, T ] → R n associated with x * in the sense of Definition 4.1.
Proof. Under Hypothesis (H'), the sequence (I_n)_n is bounded thanks to the previous proposition, and so is (p_n)_n in L^∞([0, T]; R^n) by Proposition 3.2. The result then follows from Theorem 4.6.

A sufficient condition on the approximating sequence
Note that condition (H') is expressed on the optimal solution x*, which is usually not known a priori. Instead, we now give conditions on the sequence (x_n)_n, and not on the limiting solution x*, that ensure the boundedness of the sequence of integrals (I_n)_n. For n ∈ N, define the absolutely continuous function ρ_n as ρ_n(t) := ϕ(x_n(t)), t ∈ [0, T]. We know that for each n ∈ N, x_n is differentiable almost everywhere over [0, T], and thus ρ_n as well, so that ρ̇_n(t) = ∇ϕ(x_n(t)) • ẋ_n(t) for a.e. t ∈ [0, T]. In addition, (ρ̇_n)_n is uniformly bounded in L^∞([0, T]; R) thanks to (H1) and (H3). Therefore, for i ∈ {1, . . ., r} and n ∈ N, we can define the numbers l^±_{i,n} as one-sided essential bounds of ρ̇_n(t) in left and right neighborhoods of τ_i.
Remark 5.3. In many optimal control problems, optimal solutions x_n are piecewise C^1, and thus the function ρ_n admits left and right derivatives at any t ∈ (0, T). Then, the definition of the numbers l^±_{i,n} simply involves the maximum and minimum of ρ̇_n(τ_i^±).
Proposition 5.4. If for every 1 ≤ i ≤ r condition (5.11) holds, then the sequence (I_n)_n is bounded.
Proof. For i = 1, . . ., r, set and define the numbers η_i^-, η_i^+ (i = 1, . . ., r) as together with η_1^- := τ_1 and η_r^+ := T − τ_r. Also, for i ∈ {1, . . ., r}, let us define the integrals: One clearly has We now show that for any i ∈ {1, . . ., r} such that τ_i is an outward crossing time, the sequence (Ĩ_{i,n}^-(η_i^-))_n is bounded. Observe first that one has: (5.12) Let ε ∈ (0, l_i). We claim that (5.13) holds for n large enough. Indeed, by definition of l_i, there exists N ∈ N such that for every n ≥ N one has We also have for every h > 0 By definition of l_{i,n}^-, we deduce that there exists and thus we obtain (5.13). One can then write (5.14). Observe now that the scalar µ defined as is positive. From the uniform convergence of the sequence (x_n)_n to x* over [0, T] and the continuity of ϕ, there exists N' ≥ N such that From the convergence of the sequence (a_n)_n to 0 (recall (H5)), there exists N'' ≥ N' such that a_n > −µ/2 for any n > N''. Since the support of h_n is contained in [a_n, b_n], we deduce (5.15). Finally, from (5.12), (5.14) and (5.15), one obtains This shows that the sequence (Ĩ_{i,n}^-(η_i^-))_n is bounded. For i ∈ {1, . . ., r} such that τ_i is an inward crossing time, we proceed in the same way, considering the number l_i < 0. The proof of the boundedness of the sequences (Ĩ_{i,n}^+(η_i^+))_n is analogous.

From a practical point of view, x* and its crossing times τ_i are known only approximately. Whenever ρ_n admits left and right derivatives, condition (5.11) is guaranteed in particular whenever (ρ̇_n(t^±))_n is bounded from below by a positive number, or bounded from above by a negative number, locally at each crossing time τ_i, 1 ≤ i ≤ r.

A reciprocal property
We have seen previously that the sequence (I_n)_n is bounded under (H'), as well as under condition (5.11), which is a sufficient condition expressed on the approximating sequence (x_n)_n. Thanks to these conditions, we obtained optimality conditions for an optimal solution x* (under the assumption that it has a finite number of crossing times). We would now like to address the converse question, namely: what can be said about x* whenever the sequence (I_n)_n is bounded? In that case, we can prove the following result.

Proposition 5.5. Suppose that the sequence (I_n)_n is bounded and let τ_i be a crossing time of x* such that ẋ*(τ_i^±) exist. Then, if τ_i is an outward, resp. inward, crossing time, one has ẋ*(τ_i^+) • ∇ϕ(x*(τ_i)) ≠ 0, resp. ẋ*(τ_i^-) • ∇ϕ(x*(τ_i)) ≠ 0.
Proof. Since (I_n)_n is bounded, there exists a (non-null) covector p as in Definition 4.1 (see Thm. 4.6). Consider first an outward crossing time τ_i. The conditions satisfied by the extremal (x*, p, u*) imply a jump of p at t = τ_i and the constancy of the Hamiltonian as in Definition 4.1. The Hamiltonian condition at t = τ_i also gives us the following inequalities for every u ∈ U. Suppose by contradiction that ẋ*(τ_i^+) • ∇ϕ(x*(τ_i)) = 0. Using the Hamiltonian condition, we then obtain a contradiction. It follows that ẋ*(τ_i^+) • ∇ϕ(x*(τ_i)) ≠ 0, as was to be proved. In the case where τ_i is an inward crossing time, we proceed in the same way, supposing by contradiction that ẋ*(τ_i^-) • ∇ϕ(x*(τ_i)) = 0. In that case, the constancy of the Hamiltonian, combined with a similar computation using that ẋ*(τ_i^-) • ∇ϕ(x*(τ_i)) = 0, again yields a contradiction, which ends the proof.
This proposition shows that if (I_n)_n is bounded, then, at every crossing time τ_i of x* for which ẋ*(τ_i^±) exist, the trajectory is always transverse "at the exterior" of K, i.e., we can consider the following cases:
- case 1: if τ_i is an outward crossing time, then ẋ*(τ_i^+) • ∇ϕ(x*(τ_i)) > 0;
- case 2: if τ_i is an inward crossing time, then ẋ*(τ_i^-) • ∇ϕ(x*(τ_i)) < 0.
In both cases, we can only say that ẋ*(τ_i^-) • ∇ϕ(x*(τ_i)) ≥ 0 (case 1) or ẋ*(τ_i^+) • ∇ϕ(x*(τ_i)) ≤ 0 (case 2). If this scalar product is null, we shall say that the crossing time is one-sided transverse. In other words, if τ_i is one-sided transverse, then one has ẋ*(τ_i^-) • ∇ϕ(x*(τ_i)) = 0 and ẋ*(τ_i^+) • ∇ϕ(x*(τ_i)) > 0, (5.16) in case 1, and ẋ*(τ_i^-) • ∇ϕ(x*(τ_i)) < 0 and ẋ*(τ_i^+) • ∇ϕ(x*(τ_i)) = 0, (5.17) in case 2. Case 1 will be illustrated in Section 6. The previous proposition also allows us to derive an interesting property of the sequence (p_n)_n whenever there is i ∈ {1, . . ., r} such that ẋ*(τ_i^±) exist and both scalar products ẋ*(τ_i^±) • ∇ϕ(x*(τ_i)) vanish. In that case, we shall say that the crossing time τ_i is two-sided non-transverse (i.e., the trajectory x* both enters and leaves K tangentially at τ_i).
Remark 5.7. When any optimal solution x* fulfills Hypothesis (H'), the sequence (I_n)_n is bounded for any choice of the sequences (a_n)_n, (b_n)_n, according to Proposition 5.1. If not, one cannot guarantee a priori that the sequence (I_n)_n is bounded. However, we show in the example of Section 6 that, in the one-sided transverse case, a suitable choice of (a_n)_n, (b_n)_n allows one to obtain the boundedness of (I_n)_n and thus the necessary optimality conditions.

Example of an optimal path with a non-transverse crossing time
In this section, we provide an example in which an optimal solution of the time of crisis problem is non-transverse. Nevertheless, optimality conditions can still be written for this solution using Theorem 4.6, even though the optimal path possesses a non-transverse crossing time.
We consider the time crisis problem (6.1) over a finite interval [0, T] for a controlled dynamics x(•) in the plane, with the constraint set K = {x ∈ R^2 ; ϕ(x_2) ≤ 0}, where ϕ(y) := −(y − 1)(y − 5), y ∈ R.
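As a quick sanity check, the set K defined by ϕ can be probed numerically; the following sketch (an illustration, not part of the paper's argument) confirms that the complement K^c corresponds to 1 < x_2 < 5.

```python
def phi(y: float) -> float:
    """Constraint function of Section 6: phi(y) = -(y - 1)(y - 5)."""
    return -(y - 1.0) * (y - 5.0)

# K = {x : phi(x_2) <= 0}, hence K^c = {x : 1 < x_2 < 5}:
assert phi(1.0) == 0.0 and phi(5.0) == 0.0   # boundary of K
assert phi(3.0) > 0.0                        # x_2 = 3 lies outside K
assert phi(0.0) < 0.0 and phi(6.0) < 0.0     # x_2 = 0 and x_2 = 6 lie in K
```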
Let us define the myopic feedback strategy as One can straightforwardly check that the solution x generated by the myopic strategy is given by the following expression with τ_1 = 1/2 and τ_2 = 3/2. Note that x(t) ∈ K^c for t ∈ (τ_1, τ_2) and that τ_2 − τ_1 = 1 (see Fig. 2). Also, we set: We shall next prove that u is optimal for any T > 0. Before proving this fact, we give a useful property of the controlled dynamics (6.2).
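Along the myopic trajectory, the value of ϕ(x_2(t)) on [1/2, 3/2] is given by −(4t − 2)(4t − 6) (this is the change of variable z used later in this section), and the trajectory lies in K outside (τ_1, τ_2). A short numerical sketch, given purely as an illustration, then recovers the crisis time τ_2 − τ_1 = 1.

```python
def phi_along(t: float) -> float:
    """phi(x_2(t)) along the myopic trajectory: equal to
    z(t) = -(4t - 2)(4t - 6) on [1/2, 3/2]; outside (tau_1, tau_2)
    the trajectory lies in K (phi <= 0), encoded here simply as 0."""
    if 0.5 <= t <= 1.5:
        return -(4.0 * t - 2.0) * (4.0 * t - 6.0)
    return 0.0  # in K: phi(x_2(t)) <= 0 there

# Midpoint-rule approximation of the time spent outside K on [0, 3]:
N, T = 1_000_000, 3.0
dt = T / N
crisis_time = sum(dt for k in range(N) if phi_along((k + 0.5) * dt) > 0.0)
assert abs(crisis_time - 1.0) < 1e-3   # matches tau_2 - tau_1 = 1
```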
Lemma 6.1. For any fixed t_f > 0, the control u(•) = 0 is optimal for the auxiliary optimal control problem min_{u(•)} x_2(t_f), where x(•) denotes the unique solution of (6.2) associated with an admissible control u(•).

Figure 2. Trajectory x generated by the myopic strategy starting from (0, 0) (the set K is depicted in gray). The first crossing time is non-transverse (one-sided transverse of case 1, see (5.16)), whereas the second one is transverse.
Proof. We are in a position to apply the PMP to this Mayer optimal control problem. The Hamiltonian associated with this problem is and the adjoint equations are Notice that x_1 is always increasing. The switching function is (since p_2 is constant): and a straightforward computation gives φ(t) = −2π sin(π x_1(t)).
Since x_1 is non-constant with bounded derivative, we deduce that no singular arc occurs along an optimal path and that the number of switching times is at most finite. In addition, if u = 1 a.e. on some time interval [t_f − δ, t_f] were optimal, one should have p_1 = 0 and thus φ < 0 a.e. on this interval, which contradicts the maximization of the Hamiltonian w.r.t. u. Thus, integrating ẋ_1, ṗ_1 backward from t_f with the control u = 0, we obtain the desired comparison, with equality whenever x = x. We thus conclude that x is also optimal for the regularized problem over [0, T] (with T > 5/2) for every n ∈ N*.

We now continue the study of this example, showing that Theorem 4.6 can be applied. The next step consists in showing that the sequence of integrals (I_n)_n is bounded. We proceed with a change of variable in the integral I_n. Consider the function t ↦ z(t) = −(4t − 2)(4t − 6), which is increasing from 0 to 4 for t ∈ [1/2, 1] with ż(t) = 8√(4 − z(t)), and decreasing from 4 to 0 for t ∈ [1, 3/2] with ż(t) = −8√(4 − z(t)). One can then write Note that for any n ∈ N, one has G_n(z) = 0 for z ≥ 1, which gives We deduce that the sequence (I_n)_n is bounded, as wanted, so Theorem 4.6 does apply to x even if τ_1 is non-transverse.

Let us next verify the existence of a covector associated with (x, u) as in Definition 4.1 (recall that its existence is provided by Thm. 4.6). Recall that u is given by (6.5) and that x has two crossing times τ_1, τ_2 with 0 < τ_1 < τ_2 < T. Also, let H be the Hamiltonian associated with (6.1): The adjoint system writes and the jumps of p at the crossing times τ_i must satisfy with l_i > 0.
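The claimed properties of the change of variable z (monotonicity on each half-interval and the identity ż = ±8√(4 − z), which follows from 4 − z(t) = 16(t − 1)^2) can be checked directly; the sketch below is only a numerical illustration.

```python
import math

def z(t: float) -> float:
    """Change of variable of Section 6: z(t) = -(4t - 2)(4t - 6)."""
    return -(4.0 * t - 2.0) * (4.0 * t - 6.0)

def z_prime(t: float) -> float:
    """Exact derivative of z: z'(t) = 32(1 - t)."""
    return 32.0 * (1.0 - t)

# z increases from 0 to 4 on [1/2, 1] and decreases from 4 to 0 on [1, 3/2]:
assert abs(z(0.5)) < 1e-12 and abs(z(1.0) - 4.0) < 1e-12 and abs(z(1.5)) < 1e-12

# Increasing branch: z' = 8*sqrt(4 - z); decreasing branch: z' = -8*sqrt(4 - z):
for t in (0.55, 0.7, 0.9):
    assert abs(z_prime(t) - 8.0 * math.sqrt(4.0 - z(t))) < 1e-9
for t in (1.1, 1.3, 1.45):
    assert abs(z_prime(t) + 8.0 * math.sqrt(4.0 - z(t))) < 1e-9
```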
(6.9)
From (6.8), we first get p = 0 over (τ_2, T]. Note that the Hamiltonian (6.7) is equal to 0 on this interval. At the crossing time τ_2, the adjoint p_2 jumps from p_2(τ_2^-) > 0 to p_2(τ_2^+) = 0 according to (6.9). Over the interval (τ_1, τ_2), p_2 is constant, equal to p_2(τ_2^-), and p_1 = 0 because u = 1 on this interval. The value p_2(τ_2^-) = 1/4 is obtained from the conservation of the Hamiltonian (6.7), which must also vanish on (τ_1, τ_2).
We can then conclude that the covector p is given over [0, T] by the expression Finally, one can straightforwardly check that the control u verifies the maximization of the Hamiltonian (6.7) almost everywhere over [0, T]. Consequently, the triple (x, p, u) satisfies the necessary optimality conditions given by Theorem 4.6.

We conclude this section by giving an alternative approach showing that the sequence (p_n)_n of covectors associated with the regularized problems P_n (whose optimal solution is x) is bounded. To do so, fix n ∈ N. The Hamiltonian associated with P_n becomes Finally, over the interval [0, τ_1], one has again G_n(ϕ(x_2)) ≡ 0, and p_{n,2} is thus constant over this interval, equal to 0. We deduce that p_n ≡ 0 over [0, τ_1]. We can then conclude that p_n is given by: A simple consequence of this computation is that (p_n)_n is bounded in L∞([0, T]; R^2) and converges pointwise to the piecewise constant function p given in (6.10) (see Fig. 4).

Conclusion
In this work, we have developed an approach based on a sequence of approximate optimal control problems. The proof of boundedness of the sequence of covectors proposed here presents some analogy with the material of [17]. However, we have treated the boundedness question via a suitable sequence of integrals (I_n)_n, and without the use of the Hybrid Maximum Principle (which we indeed obtain as a byproduct of this approach). We also did not need to introduce needle-like perturbations and variation vectors to derive necessary optimality conditions. The main feature is that we obtain necessary optimality conditions for the time crisis problem under a more general condition than requiring transverse crossing times for the optimal trajectory. This condition is related to the sequence of integrals (I_n)_n and involves neither the optimal path x* nor its velocity. Besides, it allows one to deal with optimal paths that are non-transverse at the boundary of the constraint set, as we saw in Section 6.
Our methodology is also of interest from a numerical point of view, since we introduced a regularized optimal control problem in place of a discontinuous one (which is more delicate to handle numerically). In addition, we introduced an auxiliary condition guaranteeing necessary conditions for an optimal solution x*. This condition involves an approximating sequence, which can be tested numerically, and not a solution x* of the initial optimal control problem (which is unknown a priori). Once this condition is checked (i.e., the boundedness of the sequence (I_n)_n), one can write the necessary optimality conditions, which provide an exact characterization of the limit x*. Some interesting issues that are out of the scope of this paper could be the subject of future work. In particular, one would like to better grasp the link between the boundedness of the sequence (I_n)_n and the behavior of x* at a crossing time τ (i.e., to find a characterization of the boundedness of (I_n)_n in terms of geometric properties of the optimal path x* at the crossing times), and to deal with optimal paths that cross the constraint set tangentially on both sides. The methodology developed in this paper could also be used to extend necessary optimality conditions to more general hybrid problems whenever an optimal path has a so-called one-sided transverse crossing time, i.e., is tangent to the constraint set on one side only.

Figure 4. Illustration of the convergence of (p_n)_n to the piecewise constant covector p.