Hamilton-Jacobi equations for optimal control on networks with entry or exit costs

We consider an optimal control problem on networks in the spirit of the works of Achdou et al. (2013) and Imbert et al. (2013). The main new feature is that there are entry (or exit) costs at the edges of the network, leading to a possibly discontinuous value function. We characterize the value function as the unique viscosity solution of a new Hamilton-Jacobi system. The uniqueness is a consequence of a comparison principle, for which we give two different proofs: one with arguments from the theory of optimal control inspired by Achdou et al. (2014), and one based on partial differential equation techniques inspired by a recent work of Lions and Souganidis (2016).


Introduction
A network (or a graph) is a set of items, referred to as vertices or nodes, which are connected by edges (see Fig. 1 for example). Recently, several research projects have been devoted to dynamical systems and differential equations on networks, in general or more particularly in connection with problems of data transmission or traffic management (see for example Garavello and Piccoli [14] and Engel et al. [12]).
An optimal control problem is an optimization problem in which an agent tries to minimize a cost which depends on the solution of a controlled ordinary differential equation (ODE). The ODE is controlled in the sense that it depends on a function called the control. The goal is to find the best control in order to minimize the given cost. In many situations, the optimal value of the problem as a function of the initial state (and possibly of the initial time when the horizon of the problem is finite) is a viscosity solution of a Hamilton-Jacobi-Bellman partial differential equation (HJB equation). Under appropriate conditions, the HJB equation has a unique viscosity solution, thereby characterizing the value function. Moreover, the optimal control may be recovered from the solution of the HJB equation, at least if the latter is smooth enough.
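As a toy illustration of the last point, the sketch below approximates the value function of a discounted infinite-horizon problem on a single interval by value iteration. The dynamics f(x, a) = a, the running cost x² + 0.1|a|, the discount rate λ = 1 and the grid are all hypothetical choices made for the example; they are not taken from the paper.

```python
import numpy as np

# Hypothetical 1D example on [0, 1]: dynamics f(x, a) = a, running cost
# l(x, a) = x^2 + 0.1*|a|, discount rate lam = 1, time step dt.
lam, dt = 1.0, 0.05
xs = np.linspace(0.0, 1.0, 101)          # state grid
controls = np.array([-1.0, 0.0, 1.0])    # finite control set

def value_iteration(n_iter=2000):
    """Fixed-point iteration of the discrete dynamic programming operator."""
    v = np.zeros_like(xs)
    for _ in range(n_iter):
        best = np.full_like(xs, np.inf)
        for a in controls:
            # one Euler step of the controlled ODE, clipped to the state space
            x_next = np.clip(xs + a * dt, 0.0, 1.0)
            cost = (xs**2 + 0.1 * abs(a)) * dt \
                   + np.exp(-lam * dt) * np.interp(x_next, xs, v)
            best = np.minimum(best, cost)
        v = best
    return v

v = value_iteration()
```

With this cost it is optimal to drive the state toward 0 and then stay, so the computed value function vanishes at the origin and increases with the distance to it.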
The first articles about optimal control problems in which the set of admissible states is a network (the state variable therefore being a continuous one) appeared in 2012: in [2], Achdou et al. derived the HJB equation associated with an infinite horizon optimal control problem on a network and proposed a suitable notion of viscosity solution. Obviously, the main difficulties arise at the vertices, where the network does not have a regular differential structure. To cope with this, they introduced a notion of viscosity solution based on admissible test-functions whose restriction to each edge is C^1. Independently and at the same time, Imbert et al. [17] proposed an equivalent notion of viscosity solution for studying a Hamilton-Jacobi approach to junction problems and traffic flows. Both [2] and [17] contain first results on comparison principles which were improved later. It is also worth mentioning the work by Schieborn and Camilli [22], in which the authors focus on eikonal equations on networks and on a less general notion of viscosity solution. In the particular case of eikonal equations, Camilli and Marchi established in [10] the equivalence between the definitions given in [2,17,22].
Since 2012, several proofs of comparison principles for HJB equations on networks, giving uniqueness of the solution, have been proposed.
1. In [3], Achdou et al. give a proof of a comparison principle for a stationary HJB equation arising from an optimal control problem with infinite horizon (therefore the Hamiltonian is convex) by mixing arguments from the theory of optimal control and PDE techniques. Such a proof was much inspired by works of Barles et al. [6,7] on regional optimal control problems in R^d (with discontinuous dynamics and costs).
2. A different and more general proof, using only arguments from the theory of PDEs, was obtained by Imbert and Monneau in [16]. The proof works for quasi-convex Hamiltonians, and for stationary and time-dependent HJB equations. It relies on the construction of suitable vertex test functions.
3. A very simple and elegant proof, working for non-convex Hamiltonians, has been given very recently by Lions and Souganidis [19,20].
The goal of this paper is to consider an optimal control problem on a network in which there are entry (or exit) costs at each edge of the network and to study the related HJB equations. The effect of the entry/exit costs is to make the value function of the problem discontinuous. Discontinuous solutions of Hamilton-Jacobi equation have been studied by various authors, see for example Barles [4], Frankowska and Mazzola [13], and in particular Graber et al. [15] for different HJB equations on networks with discontinuous solutions.
To simplify the problem, we will first study the case of a junction, i.e., a network of the form G = ∪_{i=1}^N Γ_i with N edges Γ_i (Γ_i is the closed half-line R_+ e_i) and only one vertex O, where {O} = ∩_{i=1}^N Γ_i. Later, we will generalize our analysis to networks with an arbitrary number of vertices. In the case of the junction described above, our assumptions about the dynamics and the running costs are similar to those made in [3], except that additional costs c_i for entering the edge Γ_i at O, or d_i for exiting Γ_i at O, are added in the cost functional. Accordingly, the value function is continuous on G \ {O}, but is in general discontinuous at the vertex O. Hence, instead of considering the value function v, we split it into the collection (v_i)_{1≤i≤N}, where v_i is a continuous function defined on the edge Γ_i: v_i coincides with v on Γ_i \ {O} and v_i(O) = lim_{δ→0+} v(δe_i). Our approach is therefore reminiscent of optimal switching problems (impulsional control): in the present case the switches can only occur at the vertex O. Note that our assumptions will ensure that v|_{Γ_i\{O}} is Lipschitz continuous near O and that lim_{δ→0+} v(δe_i) does exist. In the case of entry costs, for example, our first main result will be to find the relation between v(O) and the limits v_i(O). This will show that the functions (v_i)_{1≤i≤N} are (suitably defined) viscosity solutions of a Hamilton-Jacobi system of the form (1.1) below. Here H_i is the Hamiltonian corresponding to the edge Γ_i. At the vertex O, the definition of the Hamiltonian has to be particular, in order to take into account all the possibilities when x is close to O. More specifically, if x is close to O and belongs to Γ_i, then:
• The term min_{j≠i} {u_j(O) + c_j} accounts for situations in which the trajectory enters another edge Γ_j, j ≠ i.
• The term H_i^+ accounts for situations in which the trajectory does not leave Γ_i.
• The term H_O^T accounts for situations in which the trajectory stays at O.
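The coupling expressed by these terms can be illustrated with a deliberately crude, hypothetical discrete model: an agent sitting at O on edge i either stays there (paying, in this toy model, a stand-alone value L_i/λ) or pays an entry cost to switch to another edge. The fixed point of the resulting system already shows why the limits u_i(O) may differ from edge to edge, i.e., why the value function may be discontinuous at O. All numerical values below are ours, not the paper's.

```python
# Crude toy model of the coupling at the vertex O (all numbers hypothetical):
# staying on edge i forever from O costs L[i] / lam; entering edge j costs c[j].
lam = 1.0
L = [2.0, 0.0]                # stand-alone running costs on edges 1 and 2
c = [0.5, 0.5]                # positive entry costs
N = len(L)

u = [Li / lam for Li in L]    # start from the "no switching" values
for _ in range(50):           # fixed-point iteration of the coupled system at O
    u = [min(L[i] / lam,
             min(u[j] + c[j] for j in range(N) if j != i))
         for i in range(N)]
```

Here edge 2 is cheap (L = 0), so from edge 1 it is optimal to pay the entry cost c[1] and switch, giving u[0] = 0.5 while u[1] = 0: the limiting values at O depend on the incoming edge.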
The most important part of the paper will be devoted to two different proofs of a comparison principle leading to the well-posedness of (1.1): the first one uses arguments from optimal control theory coming from Barles et al. [6,7] and Achdou et al. [3]; the second one is inspired by Lions and Souganidis [19] and uses arguments from the theory of PDEs.
The paper is organized as follows: Section 2 deals with the optimal control problems with entry and exit costs: we give a simple example in which the value function is discontinuous at the vertex O, and also prove results on the structure of the value function near O. In Section 3, the new system (1.1) is defined and a suitable notion of viscosity solution is proposed. In Section 4, we prove that our value functions are viscosity solutions of the above-mentioned system. In Section 5, some properties of viscosity sub- and super-solutions are given and used to obtain the comparison principle. Finally, optimal control problems with entry costs which may be zero, and the related HJB equations, are considered in Section 6.

The geometry
We consider the model case of the junction in R^d with N semi-infinite straight edges, N > 1. The edges are denoted by (Γ_i)_{i=1,N}, where Γ_i is the closed half-line R_+ e_i. The vectors e_i are two by two distinct unit vectors in R^d. The half-lines Γ_i are glued at the vertex O to form the junction G = ∪_{i=1}^N Γ_i. The geodesic distance d(x, y) between two points x, y of G is ||x| − |y|| if x, y belong to the same edge Γ_i, and |x| + |y| if x, y belong to different edges Γ_i and Γ_j.
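The geodesic distance above is straightforward to implement. The sketch below parameterizes a point of G by its edge index and its distance to O; the function name and this representation are ours, introduced only for illustration.

```python
def geodesic_distance(i, xi, j, xj):
    """Geodesic distance on the junction G between the point at distance xi
    from O on edge i and the point at distance xj from O on edge j."""
    if xi < 0 or xj < 0:
        raise ValueError("distances to the vertex O must be non-negative")
    if i == j:
        return abs(xi - xj)   # same edge: travel along the half-line
    return xi + xj            # different edges: every path passes through O
```

Note that the two branches agree when one of the points is the vertex itself (xj = 0 gives xi + 0 = |xi − 0|), so the representation of O by any edge index is harmless.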

The optimal control problem
We consider infinite horizon optimal control problems which have a different dynamics and running cost on each edge. For i = 1, N:
• the set of controls on Γ_i is denoted by A_i;
• the system is driven by a dynamics f_i;
• there is a running cost ℓ_i.
Our main assumptions, referred to as [H] hereafter, are as follows: [H0] (Control sets) Let A be a metric space (one can take A = R d ). For i = 1, N , A i is a nonempty compact subset of A and the sets A i are disjoint.
[H1] (Dynamics) For i = 1, N, the function f_i : Γ_i × A_i → R is continuous and bounded by M. Moreover, there exists L > 0 such that |f_i(x, a) − f_i(y, a)| ≤ L|x − y| for any x, y ∈ Γ_i and a ∈ A_i. Hereafter, we will use the notation F_i(x) for the set {f_i(x, a) e_i : a ∈ A_i}.
[H2] (Running costs) For i = 1, N, the function ℓ_i : Γ_i × A_i → R is a continuous function bounded by M > 0.
There exists a modulus of continuity ω such that |ℓ_i(x, a) − ℓ_i(y, a)| ≤ ω(|x − y|) for any x, y ∈ Γ_i and a ∈ A_i.
[H3] (Convexity of dynamics and costs) For x ∈ Γ_i, the following set is non-empty, closed and convex.
[H4] (Strong controllability) There exists a real number δ > 0 such that [−δ e_i, δ e_i] ⊂ F_i(O) for any i = 1, N.
Remark 2.1. The assumption that the sets A_i are disjoint is not restrictive. Indeed, if the sets A_i are not disjoint, one may replace each A_i by the disjoint copy A_i × {i}, relabeling the dynamics and running costs accordingly. Then M is closed. We also define the function f on M by f(x, a) = f_i(x, a) if x ∈ Γ_i and a ∈ A_i; the function f is continuous on M since the sets A_i are disjoint.
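The relabeling of Remark 2.1 can be sketched concretely: tag each control with the index of its edge, so the tagged control sets are pairwise disjoint while the dynamics (and, identically, the costs) are unchanged. The toy dynamics and control names below are hypothetical.

```python
def f(i, x, a):
    # toy dynamics on edge i; the values are placeholders for illustration
    speeds = {"slow": 0.5, "fast": 2.0, "stop": 0.0}
    return speeds[a]

A = [{"slow", "fast"}, {"slow", "stop"}]   # A_1 and A_2 share the control "slow"

# Remark 2.1's trick: replace A_i by the disjoint copy A_i x {i}
A_tagged = [{(a, i) for a in Ai} for i, Ai in enumerate(A)]

def f_tagged(i, x, tagged):
    a, edge = tagged
    assert edge == i          # a tagged control is admissible on its own edge only
    return f(i, x, a)         # delegate to the original dynamics
```

The tagged sets are disjoint even though the underlying control values coincide, which is exactly what makes the glued function f continuous on M.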
Definition 2.2 (The speed set and the admissible control set). The set F̃(x), which contains all the "possible speeds" at x, is defined by
For x ∈ G, the set of admissible trajectories starting from x is
According to Theorem 1.2 in [3], a solution y_x can be associated with several control laws. We introduce the set of admissible controlled trajectories starting from x,
Notice that if (y_x, α) ∈ T_x, then y_x ∈ Y_x. Hereafter, we will denote y_x by y_{x,α} if (y_x, α) ∈ T_x. For any y_{x,α}, we can define the closed set T_O = {t ∈ R_+ : y_{x,α}(t) = O} and the open sets T_i in R_+ = [0, +∞) by
where K_i = {1, . . . , n} if the trajectory y_{x,α} enters Γ_i n times and K_i = N if the trajectory y_{x,α} enters Γ_i infinitely many times.
In the sequel, we define two different cost functionals (the first one corresponds to the case when there is a cost for entering the edges and the second one to the case when there is a cost for exiting the edges):
Definition 2.4 (The cost functionals and value functions with entry/exit costs). The costs associated with a trajectory (y_{x,α}, α) ∈ T_x are defined by
where the running cost is
Hereafter, to simplify the notation, we will use J(x, α) and J̃(x, α) instead of J(x; (y_{x,α}, α)) and J̃(x; (y_{x,α}, α)), respectively. The value functions of the infinite horizon optimal control problem are defined by
v(x) = inf_{(y_{x,α},α)∈T_x} J(x; (y_{x,α}, α)) (value function with entry costs), and
ṽ(x) = inf_{(y_{x,α},α)∈T_x} J̃(x; (y_{x,α}, α)) (value function with exit costs).
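For orientation, a plausible form of the entry-cost functional, consistent with the discounted terms e^{−λ t_{i_k}} c_{i_k} used in Remark 2.5, is the following (our reconstruction under the stated assumptions, not a verbatim quote of Definition 2.4):

```latex
J\big(x;(y_{x,\alpha},\alpha)\big) \;=\;
\int_0^{+\infty} e^{-\lambda t}\,\ell\big(y_{x,\alpha}(t),\alpha(t)\big)\,dt
\;+\; \sum_{k} e^{-\lambda t_{i_k}}\, c_{i_k},
```

where the t_{i_k} are the times at which the trajectory enters an edge Γ_{i_k}; the exit-cost functional would similarly charge d_{i_k} at exit times.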
Remark 2.5. By the definition of the value function, we are mainly interested in control laws α such that J(x, α) < +∞. In such a case, if |K_i| = +∞, then we can order {t_{ik}, η_{ik} : k ∈ N} such that
Indeed, if lim_{k→∞} t_{ik} = t̄ < +∞, then
in contradiction with J(x, α) < +∞. This means that the state cannot switch edges infinitely many times in finite time; otherwise the cost functional is obviously infinite.
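The final assertion of Remark 2.5 can be made quantitative with a one-line estimate (a sketch, assuming every entry cost is at least c_min := min_j c_j > 0): if the switching times t_{i_k} all stayed below some finite t̄, the discount factors would be bounded below and the accumulated entry costs would diverge,

```latex
J(x,\alpha)\;\ge\;\sum_{k\in\mathbb{N}} e^{-\lambda t_{i_k}}\,c_{i_k}
\;\ge\; c_{\min}\,e^{-\lambda \bar t}\sum_{k\in\mathbb{N}} 1\;=\;+\infty
\qquad\text{if } t_{i_k}\le \bar t<+\infty\ \text{for all } k,
```

contradicting J(x, α) < +∞.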
The following example shows that the value function with entry costs is possibly discontinuous (the same holds for the value function with exit costs).
Summarizing, we have the two following cases.
The graph of the value function with entry cost c_2 ≥ 1/λ = 1 is plotted in Figure 2a.
The graph of the value function with entry cost c_2 = 1/2 is plotted in Figure 2b.
Lemma 2.7. There exist two positive numbers r_0 and C such that, for all
Proof of Lemma 2.7. This proof is classical. It is sufficient to consider the case when x_1 and x_2 belong to the same edge Γ_i, since in the other cases we can use O as a connecting point between x_1 and x_2. According to [H4], there exist a control law α and τ_{x1,x2} > 0 such that α(t) = a if 0 ≤ t ≤ τ_{x1,x2} and
If |x_1| > |x_2|, the proof is achieved by replacing a ∈ A_i by a control ā ∈ A_i such that f_i(O, ā) = −δ and applying the same argument as above.
Proof of Lemma 2.8. The proof of continuity inside the edge is classical using [H4]; see [1] for more details. The Lipschitz continuity is a consequence of Lemma 2.7. Indeed, for x, y belonging to Γ_i ∩ B(O, ε), by Lemma 2.7 and the definition of the value function, we have

Some properties of value function at the vertex
The last inequality follows from Lemma 2.7. The inequality v_i(z) − v_i(x) ≤ C|x − z| is obtained in a similar way. This completes the proof.

Let us define the tangential Hamiltonian H_O^T at the vertex O by
A characterization of the value functions at O involving H_O^T will be given in the next theorem. Hereafter, the proofs of the results will be supplied only for the value function with entry costs v; the proofs concerning the value function with exit costs ṽ are entirely similar.
Remark 2.10. Theorem 2.9 gives us the characterization of the value function at vertex O.
The proof of Theorem 2.9 makes use of Lemmas 2.11 and 2.12.
Proof of Lemma 2.11. We divide the proof into two parts.
Fix any i ∈ {1, . . . , N} and any control law ᾱ such that (y_{O,ᾱ}, ᾱ) ∈ T_O. Let x ∈ Γ_i \ {O} be such that |x| is small. From Lemma 2.7, there exists a control law α_{x,O} connecting x and O, and we consider
This means that the trajectory goes from x to O with the control law α_{x,O} and then proceeds with the control law ᾱ. Therefore
Since ᾱ is chosen arbitrarily and ℓ_i is bounded by M, we get
Since the above inequality holds for i = 1, N, we obtain
Next, consider x ∈ Γ_i \ {O} with |x| small enough and any control law ᾱ_x such that (y_{x,ᾱ_x}, ᾱ_x) ∈ T_x. From Lemma 2.7, there exists a control law α_{O,x} connecting O and x, and we consider
This means that the trajectory goes from O to x using the control law α_{O,x} and then proceeds with the control law ᾱ_x. Since ᾱ_x is chosen arbitrarily and ℓ_i is bounded by M,
We are now ready to prove Theorem 2.9.
Proof of Theorem 2.9. According to Lemmas 2.11 and 2.12,
On the other hand, there exists an ε_n-optimal control α_n, i.e., v(O) + ε_n > J(O, α_n). Let us define t_n, the first time at which the trajectory y_{O,α_n} leaves O,
where T_i^n is the set of times t for which y_{O,α_n}(t) belongs to Γ_i \ {O}. Notice that t_n is possibly +∞, in which case y_{O,α_n}(s) = O for all s ∈ [0, +∞). Extracting a subsequence if necessary, we may assume that t_n tends to t̄ ∈ [0, +∞] when ε_n tends to 0.
If there exists a subsequence of {t_n}_{n∈N} (still denoted {t_n}_{n∈N}) such that t_n = +∞ for all n ∈ N, then for a.e. s ∈ [0, +∞),
In this case,
By letting n tend to ∞,
Let us now assume that 0 ≤ t_n < +∞ for all n large enough. Then, for a fixed n and for any positive δ ≤ δ_n with δ_n small enough, y_{O,α_n}(s) still belongs to some Γ_{i(n)} \ {O} for all s ∈ (t_n, t_n + δ]. We have
Choose a subsequence {ε_{n_k}}_{k∈N} of {ε_n}_{n∈N} such that, for some i_0 ∈ {1, . . . , N}, c_{i(n_k)} = c_{i_0} for all k. By letting k tend to ∞, recalling that lim_{k→∞} t_{n_k} = t̄, we have three possible cases; in each of them we obtain a contradiction by Lemma 2.12.
Definition 3.1 (Admissible test-functions). A function ϕ = (ϕ_1, . . . , ϕ_N) with ϕ_i ∈ C^1(Γ_i) for i = 1, N, that is, ϕ(x_1, . . . , x_N) = (ϕ_1(x_1), . . . , ϕ_N(x_N)), is called an admissible test-function. The set of admissible test-functions is denoted by R(G).

Definition of viscosity solution
Definition 3.2 (Hamiltonian). We define the Hamiltonian H_i : Γ_i × R → R by H_i(x, p) = max_{a∈A_i} {−p f_i(x, a) − ℓ_i(x, a)}. Recall that the tangential Hamiltonian at O, H_O^T, has been defined in (2.1).
We now introduce the Hamilton-Jacobi system (3.1) for the case with entry costs, for all i = 1, N, and the Hamilton-Jacobi system (3.2) for the case with exit costs, for all i = 1, N, together with their notions of viscosity solutions.

Definition 3.3 (Viscosity solution with entry costs).
• A function u := (u_1, . . . , u_N), where u_i ∈ USC(Γ_i; R) for all i = 1, N, is called a viscosity sub-solution of (3.1) if for any (ϕ_1, . . . , ϕ_N) ∈ R(G), any i = 1, N and any x_i ∈ Γ_i such that u_i − ϕ_i has a local maximum point on Γ_i at x_i, then
• A function u := (u_1, . . . , u_N), where u_i ∈ LSC(Γ_i; R) for all i = 1, N, is called a viscosity super-solution of (3.1) if for any (ϕ_1, . . . , ϕ_N) ∈ R(G), any i = 1, N and any x_i ∈ Γ_i such that u_i − ϕ_i has a local minimum point on Γ_i at x_i, then
• A function u := (u_1, . . . , u_N), where u_i ∈ C(Γ_i; R) for all i = 1, N, is called a viscosity solution of (3.1) if it is both a viscosity sub-solution and a viscosity super-solution of (3.1).

Definition 3.4 (Viscosity solution with exit costs).
• A function ũ := (ũ_1, . . . , ũ_N), where ũ_i ∈ USC(Γ_i; R) for all i = 1, N, is called a viscosity sub-solution of (3.2) if for any (ψ_1, . . . , ψ_N) ∈ R(G), any i = 1, N and any y_i ∈ Γ_i such that ũ_i − ψ_i has a local maximum point on Γ_i at y_i, then
• A function ũ := (ũ_1, . . . , ũ_N), where ũ_i ∈ LSC(Γ_i; R) for all i = 1, N, is called a viscosity super-solution of (3.2) if for any (ψ_1, . . . , ψ_N) ∈ R(G), any i = 1, N and any y_i ∈ Γ_i such that ũ_i − ψ_i has a local minimum point on Γ_i at y_i, then
• A function ũ := (ũ_1, . . . , ũ_N), where ũ_i ∈ C(Γ_i; R) for all i = 1, N, is called a viscosity solution of (3.2) if it is both a viscosity sub-solution and a viscosity super-solution of (3.2).
Remark 3.5. This notion of viscosity solution is consistent with the one of [3]. It can be seen in Section 6 that, when all the switching costs are zero, our definition and the one of [3] coincide.

Connections between the value functions and the Hamilton-Jacobi systems
Let v be the value function of the optimal control problem with entry costs and ṽ be the value function of the optimal control problem with exit costs. Recall that
We wish to prove that v := (v_1, v_2, . . . , v_N) and ṽ := (ṽ_1, . . . , ṽ_N) are respectively viscosity solutions of (3.1) and (3.2). In fact, since G \ {O} is a finite union of open intervals in which the classical theory can be applied, we obtain that v_i and ṽ_i are viscosity solutions of
Therefore, we can restrict ourselves to proving the following theorem.
in the viscosity sense. The function ṽ_i satisfies
in the viscosity sense.
The proof of Theorem 4.1 follows from Lemmas 4.2 and 4.5. We focus on v_i, since the proof for ṽ_i is similar.
It is thus sufficient to prove that
in the viscosity sense. Let a_i ∈ A_i be such that f_i(O, a_i) > 0. Setting α(t) ≡ a_i, we have (y_{x,α}, α) ∈ T_x for all x ∈ Γ_i. Moreover, for all x ∈ Γ_i \ {O}, y_{x,α}(t) ∈ Γ_i \ {O} (the trajectory cannot approach O since the speed pushes it away from O for y_{x,α} ∈ Γ_i ∩ B(O, r)). Note that it is not sufficient to choose a_i ∈ A_i such that f_i(O, a_i) = 0, since this can lead to f_i(x, a_i) < 0 for all x ∈ Γ_i \ {O}. Next, for τ > 0 fixed and any x ∈ Γ_i, if we choose
Since this holds for any α (α_x is arbitrary for t > τ), we deduce that for t ∈ [0, τ],
yielding that y_{x,α_x}(t) tends to y_{O,α_O}(t) when x tends to O. Hence, from (4.2), by letting
Let ϕ be an admissible test-function such that
By letting τ tend to 0, we obtain that
Hence,
The proof is complete.
which is a contradiction with (4.5).
Proof of Lemma 4.5. We adapt the proof of Oudet [21] and start by assuming that
We need to prove that
in the viscosity sense. Let ϕ ∈ C^1(Γ_i) be such that
and let {x_ε} ⊂ Γ_i \ {O} be any sequence such that x_ε tends to O as ε tends to 0. From the dynamic programming principle and Lemma 4.3, there exists τ̄ such that, for any ε > 0, there exists (y_ε, α_ε) := (y_{x_ε,α_ε}, α_ε) ∈ T_{x_ε} such that y_ε(τ) ∈ Γ_i \ {O} for any τ ∈ [0, τ̄] and
Then, according to (4.6),
where the notation o_ε(1) is used for a quantity which is independent of τ and tends to 0 as ε tends to 0. For k ∈ N, the notation o(τ^k) is used for a quantity that is independent of ε and such that o(τ^k)/τ^k → 0 as τ → 0.
Finally, O(τ^k) stands for a quantity independent of ε such that O(τ^k)/τ^k remains bounded as τ → 0. From (4.7), we obtain that
Since y_ε(τ) ∈ Γ_i for all ε, one has
Hence, from (4.8),
Let ε_n → 0 as n → ∞ and τ_m → 0 as m → ∞ be such that
Since |y_{ε_n}(τ_m)| > 0,
Letting ε_n tend to 0 and then τ_m tend to 0, one gets f_i(O, a) ≥ 0, so a ∈ A_i^+. Hence, from (4.10) and (4.11), replacing ε by ε_n and τ by τ_m, and letting ε_n tend to 0 and then τ_m tend to 0, we finally obtain

Comparison principle and uniqueness
Inspired by [6,7], we begin by proving some properties of sub and super viscosity solutions of (3.1). The following three lemmas are reminiscent of Lemma 3.4, Theorem 3.1 and Lemma 3.5 in [3].
Lemma 5.1. Let w = (w_1, . . . , w_N) be a viscosity super-solution of (3.1). Let x ∈ Γ_i \ {O} and assume that
Then for all t > 0,
Hence, we can apply the result of [3, Lemma 3.4]. We refer to [6] for a detailed proof. The main point of that proof uses the results of Blanc [8,9] on minimal super-solutions of exit time control problems.

Lemma 5.2 (Super-optimality). Under assumption [H]
, let w = (w_1, . . . , w_N) be a viscosity super-solution of (3.1) that satisfies (5.1); then there exist a sequence {η_k}_{k∈N} of strictly positive real numbers with lim_{k→∞} η_k = η > 0 and a sequence x_k ∈ Γ_i \ {O} with lim_{k→∞} x_k = O and lim_{k→∞} w_i(x_k) = w_i(O), such that for each k there exists a control law α_i^k whose corresponding trajectory satisfies y_{x_k}(s) ∈ Γ_i for all s ∈ [0, η_k] and
Proof of Lemma 5.2.
Proof of Lemma 5.3. Since u is a viscosity sub-solution of (3.1), u_i is a viscosity sub-solution of (5.2). Recall that H_i(x, ·) is coercive for any x ∈ Γ_i ∩ B(O, r); hence we can apply the proof of Lemma 3.2 in [3], which is based on arguments due to Ishii contained in [18].
Proof of Lemma 5.4. Since u is a viscosity sub-solution of (3.1), u_i is a viscosity sub-solution of (5.2) and
Hence, we can apply the proof of Lemma 3.5 in [3].
Theorem 5.6 (Comparison Principle). Under assumption [H], let u be a bounded viscosity sub-solution of (3.1) and w be a bounded viscosity super-solution of (3.1); then u ≤ w in G, componentwise. This theorem also holds for viscosity sub- and super-solutions ũ and w̃, respectively, of the exit cost problem (3.2).
We give two proofs of Theorem 5.6. The first one is inspired by [3] and uses the previously stated lemmas. The second one uses the elegant arguments proposed in [19].
Proof of Theorem 5.6 inspired by [3]. We focus on u and w; the arguments used for the comparison of ũ and w̃ are entirely similar. Suppose by contradiction that there exists x ∈ Γ_i such that u_i(x) − w_i(x) > 0. By classical comparison arguments for the boundary value problem (see [5]),

By definition of viscosity sub-solution
This implies λw_i(O) + H_O^T < 0. We now consider the two following cases.
Case 1: If w_i(O) < min_{j≠i} {w_j(O) + c_j}, then from Lemma 5.2 (using the same notations),
Moreover, according to Lemma 5.4, we also have
This yields
By letting k tend to ∞, one gets
This implies that u_i(O) − w_i(O) ≤ 0 and leads to a contradiction.
The comparison principle can also be obtained alternatively, using the arguments which were very recently proposed by Lions and Souganidis in [19]. This new proof is self-contained and its arguments do not rely at all on optimal control theory, but are deeply connected to the ideas used by Soner [23,24] and by Capuzzo-Dolcetta and Lions [11] for proving comparison principles for state-constrained Hamilton-Jacobi equations.
Proof of Theorem 5.6 inspired by [19]. We start as in the first proof. We argue by contradiction, assuming without loss of generality that there exists i such that
We now consider the two following cases.
Therefore the claim is proved. It follows that we can apply the viscosity inequality for u_i at x_{ε,γ}. Moreover, notice that the viscosity super-solution inequality (5.2) holds also for y_{ε,γ} = 0, since H_i(O, p) ≤ H_i^+(O, p) for any p. Therefore
Subtracting the two inequalities,
Using [H1] and [H2], it is easy to see that there exists M_i > 0 such that for any x, y ∈ Γ_i and p, q ∈ R,
It yields
Applying (5.9) and letting ε tend to 0 and then γ tend to 0, we obtain that
Using the same arguments as in Case 2 of the first proof, we get
Similarly, if ṽ is the value function with exit costs, then ṽ is the unique bounded viscosity solution of (3.2).
Remark 5.8. From Corollary 5.7, we see that in order to characterize the original value function with entry costs, we first need to solve the Hamilton-Jacobi system (3.1) and find the unique viscosity solution (v_1, . . . , v_N). The original value function v with entry costs satisfies
The characterization of v(O) follows from Theorem 2.9. The characterization of the original value function with exit costs ṽ is similar.
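In formulas, the splitting recalled in the Introduction and in Section 2 reads (restated here for convenience):

```latex
v(x) = v_i(x)\quad\text{for } x\in\Gamma_i\setminus\{O\},\qquad
v_i(O) = \lim_{\delta\to 0^+} v(\delta e_i),
```

while the value v(O) at the vertex itself is given by Theorem 2.9.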

A more general optimal control problem
In what follows, we generalize the control problem studied in the previous sections by allowing some of the entry (or exit) costs to be zero. The situation can be viewed as intermediate between the one studied in [3], where all the entry (or exit) costs were zero, and the one studied above, where all the entry (or exit) costs were positive. Accordingly, every result presented below will mainly be obtained by combining the arguments proposed above with those used in [3]. Hence, we will present the results and omit the proofs.
To be more specific, we consider the optimal control problem with non-negative entry costs C = {c_1, . . . , c_m, c_{m+1}, . . . , c_N}, where c_i = 0 if i ≤ m and c_i > 0 if i > m, keeping all the assumptions and definitions of Section 2 unchanged. The value function associated with C will be denoted by V. Similarly to Lemma 2.8, V|_{Γ_i\{O}} is continuous and Lipschitz continuous near O; therefore, it is possible to extend V|_{Γ_i\{O}} to O. This extension will be denoted V_i. Moreover, one can check that the restriction of V to ∪_{i≤m} Γ_i is a continuous function, which will be denoted V_c hereafter.
Combining the arguments in [3] and in Section 2 leads us to the following theorem.
Remark 6.2. In the case when c i = 0 for i = 1, N , V is continuous on G and it is exactly the value function of the problem studied in [3].
We now define a set of admissible test-functions and the Hamilton-Jacobi equation that will characterize V.
Definition 6.3. A function ϕ : (∪_{i=1}^m Γ_i) × Γ_{m+1} × · · · × Γ_N → R^{N−m+1} of the form ϕ(x_c, x_{m+1}, . . . , x_N) = (ϕ_c(x_c), ϕ_{m+1}(x_{m+1}), . . . , ϕ_N(x_N)) is an admissible test-function if
• ϕ_c is continuous and, for i ≤ m, ϕ_c|_{Γ_i} belongs to C^1(Γ_i);
• for i > m, ϕ_i belongs to C^1(Γ_i).
The space of admissible test-functions is denoted by R(G).
A function U = (U_c, U_{m+1}, . . . , U_N), where U_c ∈ C(∪_{j≤m} Γ_j; R) and U_i ∈ C(Γ_i; R) for all i > m, is called a viscosity solution of the Hamilton-Jacobi system if it is both a viscosity sub-solution and a viscosity super-solution of the Hamilton-Jacobi system. For j = 1, N, we consider the function Ψ_{j,ε,γ} : Γ_j × Γ_j → R,
Subtracting the two inequalities and using the properties of the Hamiltonian H_{j_0}, letting ε tend to 0 and then γ tend to 0, we obtain that U_c(O) − W_c(O) ≤ 0, which is a contradiction.