NEW RESULTS CONCERNING THE HIERARCHICAL CONTROL OF LINEAR AND SEMILINEAR PARABOLIC EQUATIONS

This paper deals with the application of multiple strategies to control some parabolic PDEs. We assume that we can act on the system through a hierarchy of distributed controls: with a first control (a follower), we drive the state exactly to zero; then, with an additional control (the leader), we minimize a prescribed cost functional. This inverts the roles played by leaders and followers in the recent literature. We study linear and semilinear problems. More precisely, we prove the existence (and uniqueness in the linear case) of a leader-follower couple. Then, we deduce an appropriate optimality system that must be satisfied by the controls and the corresponding state and adjoint states. We also indicate some generalizations to other controls, PDEs and systems. In particular, we establish similar existence and optimality results for hierarchical-biobjective (Pareto-Stackelberg) control problems, where there are two cost functionals, two independent leader controls whose main task is to find an associated Pareto equilibrium, and one common follower in charge of null controllability.


Introduction
The aim of this paper is to get "simultaneously" optimal control and controllability results for linear and semilinear heat equations with multiple (and hierarchical) strategies. We assume that we can act on the system through a hierarchy of distributed controls and, after fixing the cost functional and the desired controllability property, we look for existence, uniqueness and characterization (i.e. optimality) results. The state systems are

$$\begin{cases} y_t - \Delta y + a(x,t)\,y = f\,1_O + v\,1_\omega & \text{in } Q,\\ y = 0 & \text{on } \Sigma,\\ y(\cdot\,,0) = y^0 & \text{in } \Omega, \end{cases} \tag{1.1}$$

and

$$\begin{cases} y_t - \Delta y + F(y) = f\,1_O + v\,1_\omega & \text{in } Q,\\ y = 0 & \text{on } \Sigma,\\ y(\cdot\,,0) = y^0 & \text{in } \Omega, \end{cases} \tag{1.2}$$

where f and v are the controls, y is the state, a ∈ L∞(Q), F is a C¹ globally Lipschitz-continuous function with F(0) = 0 and y0 ∈ L²(Ω) is prescribed. In (1.1) and (1.2), the set ω ⊂ Ω is the main control domain and O ⊂ Ω is the secondary control domain (both are supposed to be small); in order to avoid ambiguity, we will assume that O and ω are disjoint; 1_O and 1_ω are the characteristic functions of O and ω, respectively; f is the follower and v is the leader. Let us describe the considered hierarchic problem in the case of (1.1).
Let ω_d ⊂ Ω be a non-empty open set, representing an observation domain for the leader. We will consider the secondary functional S, given by (1.3), where ρ and ρ0 are appropriate weights that blow up as t → T⁻; see (1.10)-(1.12) for the precise definitions. Note that, if S(v; f) < +∞, we necessarily have y(x, T) ≡ 0. We will also consider the main functional P, given by (1.4), where α and µ are positive constants with α + µ = 1 and y_d = y_d(x, t) is a given function (a desired observation).
The following spaces will be used: U, F and Y, defined in (1.5). The natural norms in U and F will be respectively denoted by ‖·‖_U and ‖·‖_F. The control process can be described as follows:
1. We associate to each leader v ∈ U the unique solution f[v] to the extremal problem (1.6). In view of the behavior of ρ near t = T, the state y associated to v and f[v] must necessarily satisfy the null controllability property

$$y(\cdot\,,T) = 0 \ \text{ in } \Omega. \tag{1.7}$$

2. Then, we look for admissible controls v̂ ∈ U satisfying

$$P(\hat v; f[\hat v]) \le P(v; f[v]) \quad \forall v \in \mathcal{U}. \tag{1.8}$$

There are several reasons to introduce ρ and ρ0. First, note that they ensure the null controllability constraint y(x, T) ≡ 0 automatically. Secondly, they prevent v from presenting undesirable oscillations as t → T⁻; indeed, it is well known that this can happen if, for instance, we search for minimal L²-norm null controls. A third reason is that, as indicated in Proposition 2.3, a good choice of these weights reduces the search of the follower f[v] and the associated state y to the solution of a well-posed Lax-Milgram problem; see (2.3) and (2.4).
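In compressed form, the control process above is a Stackelberg-type bilevel problem; schematically (our own condensed notation, with S and P the functionals referenced as (1.3) and (1.4)):

```latex
\begin{aligned}
&\text{(follower, null controllability):}\quad
  f[v] := \arg\min_{f \in \mathcal{F}} S(v; f)
  \quad\Longrightarrow\quad y(\cdot\,,T) = 0 \ \text{in } \Omega,\\[2pt]
&\text{(leader, optimal control):}\quad
  \hat v := \arg\min_{v \in \mathcal{U}} P(v; f[v]).
\end{aligned}
```

The key structural point is that the inner (follower) problem enforces the controllability constraint, so the outer (leader) minimization is unconstrained with respect to the terminal condition.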
There are other ways to find a follower corresponding to a prescribed leader v; see [6] to this regard. However, to our knowledge, in order to ensure a good behavior of the mapping v → f [v], the best strategy is to solve secondary extremal problems of the kind (1.6).
Observe that, if the function v → P(v; f[v]) is differentiable in the space U of admissible leader controls, then (1.8) implies that its derivative vanishes at v̂. This property will be crucial for the characterization of the optimal control v̂ and the associated f[v̂]. Note also that, after a very simple change of variable, we can also consider a hierarchic problem in which, instead of (1.7), we require y(·, T) = ȳ(·, T) in Ω, where ȳ is an uncontrolled solution to (1.1).

Consequently, it is also meaningful to look for optimal leaders and associated followers that drive the solution to (1.1) exactly to a prescribed trajectory t → ȳ(·, t) at time T.
In the case of the semilinear system (1.2), we can consider hierarchic control problems of the same kind. However, their formulation is more complicated and will be postponed to the following section. Indeed, in that case, (1.6) possesses in general not one but possibly several solutions and (1.8) needs a reformulation.
Several motivations can be found for these control problems:
- If y = y(x, t) is viewed as a temperature distribution in a body, we can interpret that our intention is to drive y to a desired ȳ at time T by heating and cooling (acting only on the small subdomains O and ω), trying at the same time to keep reasonable temperatures in ω_d during the whole time interval (0, T). For instance, let us take a ≡ 0 and ȳ(x, t) ≡ y_T (a nonnegative constant). Then, we can think of a two-piece apartment with an air conditioner in the bedroom and another one in the living room, respectively playing the roles of the follower and leader controls. The aims are to get a temperature close to a desired target in the living room (for example) along (0, T) and ensure that, at the end of the day (at t = T), one has y(x, T) ≡ y_T everywhere.
- The same control strategy makes sense in the context of fluid mechanics. Thus, we can replace (1.1) by similar Stokes or Navier-Stokes systems and take into consideration similar hierarchic problems. We can interpret that we act on the system through mechanical forces applied on O and ω and the goal is to reach ȳ at time T keeping the velocity field not too far from y_d in ω_d × (0, T).
- In the framework of mathematical finance, this can also be of interest. For instance, it is well known that the price of a European call option is governed by a backwards-in-time PDE close to (1.2). Now, the independent variable x must be interpreted as a vector indicating the prices of the stocks and t is in fact the reverse of time (we fix a situation at t = T and we want to know what to do in order to arrive at this situation from a well chosen state). In this regard, it is natural and can be interesting to control the solution to the system with the composed action of several agents, each of them corresponding to a different range of values of x.
For example, with N = 1 (only one stock) and Ω = (0, L), we can take ω d = Ω, O = (0, L 0 ) and ω = (L 1 , L) for some L 0 and L 1 with 0 < L 0 ≤ L 1 < L. This means that the "important" decisions are taken when the stock price is large and this determines what to do when the price is low. This way, we intend to get an option price close to a desired target along (0, T ), subject to the controllability requirement at t = 0. For further information on the modeling and control of these phenomena, see for instance [7,21,22].

Main results
Before stating our main results, let us specify once and for all the weight functions ρ and ρ0. We will see later that their definitions are motivated by well-known controllability results for (1.1) in suitable spaces.
Let η0 = η0(x) be a function satisfying the usual properties required by the Carleman approach (positivity in Ω, vanishing on ∂Ω and a nonvanishing gradient outside O). With our assumptions on Ω, such a function η0 always exists (see Lem. 1.1, p. 4 in [11]). Then, let us introduce the weight functions ρ and ρ0 given by (1.10)-(1.12), where λ, s > 0 are large enough. In fact, the required values of λ and s will be fixed in accordance with Theorem 2.1 below in the linear case and so that (3.1) holds for semilinear systems. At first sight, the choice of ρ and ρ0 may seem too artificial. However, we do need weights like these, since they appear in the Carleman inequality (2.1) used in the proofs of the main results.
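For orientation, weights of this kind (we sketch a standard Fursikov-Imanuvilov choice here as an assumption; the authoritative definitions are those referenced as (1.10)-(1.12)) typically look as follows:

```latex
\xi(x,t) := \frac{e^{\lambda\,\eta^0(x)}}{t\,(T-t)},\qquad
\alpha(x,t) := \frac{e^{2\lambda\,\|\eta^0\|_\infty} - e^{\lambda\,\eta^0(x)}}{t\,(T-t)},\qquad
\rho := e^{s\,\alpha}.
```

With such a choice, ρ(·, t) blows up exponentially as t → T⁻, so the finiteness of S(v; f) forces ρy ∈ L²(Q) and hence y(·, T) = 0, which is the mechanism invoked in the text.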
In the case of (1.1), the following holds:

Theorem 1.1. Let us consider the linear system (1.1), where a ∈ L∞(Q) and y0 ∈ L²(Ω). There exists exactly one minimizer v̂ of v → P(v; f[v]) in U and an associated follower f[v̂] such that the corresponding state satisfies (1.7).

We will see in Section 2 that the minimizer v̂ satisfies, together with the corresponding f[v̂], the associated state ŷ and some additional (adjoint) variables, an appropriate optimality system; the details are given in Theorem 2.6. On the other hand, in Section 5.4, we will see that results similar to Theorem 1.1 (and Thm. 2.6) can also be obtained for many other linear systems, including the well known Stokes equations; see Theorem 5.3.
In the semilinear case, with F being a Lipschitz-continuous function, we can consider similar controllability questions. However, it is important to note that, now, we lose the convexity of the functionals S and P and this introduces several nontrivial difficulties.
Thus, for each v ∈ U, we can consider the extremal problem (1.6), where S is again given by (1.3) but, now, y is the unique solution to (1.2). We will denote by Φ[v] the family of solutions to (1.6). In this case, we will look for a leader v̂ and an associated follower f̂ such that, instead of (1.8), one has (1.13). Our second main result, Theorem 1.2, asserts that this problem possesses at least one solution (v̂, f̂).
In this paper, we also analyze whether a result like Theorem 1.1 holds true when the leader is constrained to belong to an appropriate convex set U_ad ⊂ L²(ω × (0, T)). Thus, let I be a non-empty closed interval with 0 ∈ I, let U_ad be defined accordingly and let us suppose that the minimization in (1.8) is subject to the restriction v ∈ U_ad. The control result is then the following:

Theorem 1.3. Let us consider the linear system (1.1), where a ∈ L∞(Q) and y0 ∈ L²(Ω). There exists exactly one minimizer v̂ of J in U_ad and an associated follower f[v̂] such that the corresponding state satisfies (1.7).
In Section 4, we will consider a situation where adequate constraints are imposed not only to the leaders but also to the associated followers; see Remark 4.2.

Plan of the paper
As mentioned above, the main novelty of this paper is that the choice of the follower (resp. the leader) is determined by a controllability (resp. an optimal control) requirement. The analysis and results also hold, after appropriate modifications, when several main cost functionals (and several leader controls) appear and, instead of an extremal problem, we look for related equilibria. All this will be explained below.
The rest of the paper is organized as follows.
In Section 2, we prove Theorem 1.1, which concerns the linear case. This result will be used extensively in the remaining sections. We will also establish a characterization result for the optimal leader-follower-state triplet (see Thm. 2.6). In Section 3, we prove Theorem 1.2; we also deduce an optimality system that must be satisfied by any solution to (1.13). Section 4 is devoted to proving Theorem 1.3 and to some additional considerations on hierarchical control with constraints. Finally, we present some additional comments and questions in Section 5.

The linear case
In this section we prove Theorem 1.1. Thus, let us consider the linear system (1.1), let us introduce the notation L_a y := y_t − ∆y + ay, L*_a p := −p_t − ∆p + ap and let P0 be a suitable space of smooth functions (as in [11]). We will need the symmetric bilinear forms m(a; ·, ·) on P0 associated to the coefficients a ∈ L∞(Q). From the unique continuation property satisfied by the solutions to homogeneous heat equations, we know that all these bilinear forms are in fact scalar products (actually, it will be seen below that they are equivalent).
In the sequel, we will denote by P the completion of P0 with respect to the norm associated to m(0; ·, ·). It is known that the functions in P, their first and second spatial derivatives and their first time derivatives are locally square-integrable in Ω × (0, T − δ) for all small δ > 0. More precisely, we have the following Carleman inequality:

Theorem 2.1. There exist positive constants λ0, s0 and C0, only depending on Ω, O and T, such that, if we take λ = λ0 and s = s0 in (1.11), any p ∈ P satisfies (2.1). Furthermore, λ0 and s0 can be found arbitrarily large.
The proof of this result is given in [11]; see also [10] for more details on the constants. A consequence of (2.1) is that the (adjoint of the) heat equation in (1.1) is observable. In other words, there exists a constant C, only depending on Ω, O, T and a, such that the solutions ϕ to the adjoint system satisfy an observability estimate. These estimates ensure the null controllability property of the heat equation with controls in L²(O × (0, T)); see for instance [10].
In the remainder of this section, it will be assumed that λ = λ0 and s = s0. A straightforward consequence of Theorem 2.1 is the following:

Proposition 2.2. There exist positive constants K0 and K1, only depending on Ω, O, T and ‖a‖_{L∞(Q)}, such that (2.2) holds.

In the following result, we recall that, for any admissible v, the associated follower is well defined:

Proposition 2.3. Let v be given in U. Then, there exists exactly one solution f[v] ∈ F to the null controllability problem (1.6), where S is given by (1.3) and y is the solution to (1.1). Furthermore, one has (2.3), where p is the unique solution to the linear problem (2.4).

Proof. We use here the nowadays well-known Fursikov-Imanuvilov approach to null controllability, see [11]. First, remark that the Lax-Milgram Theorem can be applied to (2.4). Indeed, m(a; ·, ·) is obviously continuous and coercive in P. On the other hand, in view of (2.1), the functions p ∈ P satisfy appropriate weighted estimates and, consequently, belong to suitable spaces with continuous embeddings. Therefore, the linear mapping p → p(·, 0) is well defined and continuous from P into H¹0(Ω); this shows that the right-hand side in (2.4) is a bounded linear form on P. Let f[v] and y be given by (2.3). Then, it is not difficult to check that f[v] ∈ F, y is the state associated to v and belongs to Y and f[v] solves (1.6). The function f → S(v; f) is coercive, lower semi-continuous, convex and proper in F. It is also quadratic and strictly convex in its domain. Consequently, we also have that f[v] is the unique solution to (1.6).
This ends the proof.
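The Lax-Milgram/weighted-least-squares mechanism behind Proposition 2.3 can be illustrated numerically. The sketch below is our own toy construction (not the paper's scheme): it computes an approximate null control for a discretized 1D heat equation by minimizing (ε/2)‖f‖² + (1/2)‖y(T)‖², a Tikhonov relaxation of the exact constraint y(·, T) = 0; all grid sizes, the control window and the variable names are illustrative assumptions.

```python
import numpy as np

# Toy penalized null control for y_t = y_xx + 1_O f on (0, 1), Dirichlet BCs.
N, K, T = 15, 40, 0.5                       # interior points, time steps, horizon
h, dt = 1.0 / (N + 1), T / K
x = np.linspace(h, 1.0 - h, N)

# Dirichlet Laplacian and implicit-Euler propagator M = (I - dt*L)^(-1)
L = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
     + np.diag(np.ones(N - 1), -1)) / h**2
M = np.linalg.inv(np.eye(N) - dt * L)

idx = np.nonzero((x > 0.2) & (x < 0.4))[0]  # control region O = (0.2, 0.4)
m = len(idx)

# y_{k+1} = M (y_k + dt * 1_O f_k); B maps the stacked control to y(T)
B = np.zeros((N, K * m))
for k in range(K):
    Ek = np.zeros((N, m))
    Ek[idx, np.arange(m)] = dt
    B[:, k * m:(k + 1) * m] = np.linalg.matrix_power(M, K - k) @ Ek

y0 = np.sin(np.pi * x) + 0.5 * np.sin(3.0 * np.pi * x)
c = np.linalg.matrix_power(M, K) @ y0       # free (uncontrolled) state at t = T

eps = 1e-8                                  # Tikhonov penalty parameter
f = -np.linalg.solve(eps * np.eye(K * m) + B.T @ B, B.T @ c)
yT = B @ f + c                              # controlled terminal state

print(np.linalg.norm(yT) / np.linalg.norm(c))   # ratio << 1: near-null control
```

As ε → 0 this converges (for this controllable toy model) to the minimal-norm null control; the weighted norms ρ, ρ0 of the paper play the role of a much finer regularization than the plain ε used here.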
Remark 2.4. The previous argument yields the estimates (2.5) for some C only depending on Ω, O, T and ‖a‖_{L∞(Q)}. This leads to corresponding estimates of p, f[v] and the associated state y. Actually, (2.4) may be viewed as a boundary-value problem for a PDE that is fourth-order in space and second-order in time; specifically, p solves (2.4) if and only if p ∈ P and the associated equations hold.

Clearly, in order to prove the existence of a solution to (1.8), it is convenient to analyze the behavior of the function v → P(v; f[v]) and, more precisely, its convexity and differentiability properties. This is the objective of Proposition 2.5, whose proof is elementary. To end this section, let us establish a characterization result:

Theorem 2.6. The unique solution v̂ to (1.8) satisfies, together with the associated ŷ, p̂, φ̂ and ψ̂, the optimality system (2.7)-(2.12).

Proof. Let v, w ∈ U and ε > 0 be given and let us introduce the solutions z, φ, k and η to suitable linearized problems. Then, after some computations, one obtains an identity that holds for all v, w ∈ U. In particular, with v = v̂, denoting by ŷ, φ̂ and ψ̂ the associated state and adjoint states and taking w arbitrary in U, we see that φ̂ + ψ̂ + µρ0²v̂ = 0 a.e. in ω × (0, T), whence the assertion follows.

The semilinear case
This section is mainly devoted to prove Theorem 1.2. We will use arguments similar to those above that lead to existence results for (1.6) and (1.13).
We will also find a necessary condition for optimality, similar to (2.7)-(2.12), that has to be satisfied by any solution to the control problem.
Let us introduce R := sup_{r∈ℝ} |F′(r)| (the Lipschitz constant of F). In the sequel, it will be assumed that the weights ρ and ρ0 are given by (1.11) with λ = λ0 and s = s0, where λ0 and s0 are furnished by Theorem 2.1 and satisfy (3.1). Obviously, there exist constants K0 and K1, only depending on Ω, O, T and R, such that (2.2) holds for all a ∈ L∞(Q) with ‖a‖_{L∞(Q)} ≤ R.

Proof of Theorem 1.2
Let us first prove that any admissible leader v possesses at least one follower in F:

Proposition 3.1. Let v be given in U. Then, there exists at least one solution in F to the extremal problem (1.6), where S is given by (1.3) and y is the solution to (1.2). Furthermore, any solution f to (1.6) satisfies, together with the associated state y and an additional variable p ∈ P, the semilinear system (1.2), the identities (3.2) and the estimates (3.3), for some C only depending on Ω, O, T and R.
Proof. Let us first see that there exist controls f ∈ F such that S(v; f) < +∞. Indeed, let us denote by F0 the function given by F0(s) := F(s)/s for s ≠ 0 and F0(0) := F′(0). Obviously, F0(ξ) is uniformly bounded in ℝ (by the Lipschitz constant of F, since F(0) = 0).
For each z ∈ L²(Q), we will denote by Λ(z) the unique solution y_z to the linear problem obtained by replacing the nonlinear term F(y) by F0(z)y in (1.2), where f is the unique solution to (1.6) with a = F0(z). Let us denote this control by f_z. The existence and uniqueness of f_z is a consequence of the arguments in Section 2 (see Prop. 2.3). Furthermore, since the F0(z) are uniformly bounded in L∞(Q), the controls f_z are uniformly bounded in F and the associated states y_z belong (among other things) to a fixed compact set in L²(Q).
Thus, the mapping z → Λ(z) is well defined, continuous and compact in L²(Q) and maps the whole space into a ball. In view of Schauder's Theorem, Λ possesses at least one fixed point ỹ. If we set f̃ := f_ỹ, then we obviously have S(v; f̃) < +∞. Now, let {f_n} be a minimizing sequence for (1.6). It is clear that the f_n (resp. y_n) are uniformly bounded in F (resp. Y). Consequently, it can be assumed that they converge weakly in F to some f and the corresponding states y_n converge strongly in L²(Q) to the associated y. From the weak lower semicontinuity of the functionals, we easily deduce that f minimizes (1.6). Hence, there exists at least one solution to this extremal problem. Let us prove that any solution to (1.6) satisfies (3.2) for some p ∈ P.
Thus, let f ∈ F be a solution to (1.6) and let us denote by y the corresponding solution to (1.2). Let us introduce the solution ȳ to an appropriate auxiliary linear problem. Then (1.6) can be rewritten in the form (3.5). It is easy to check that M is C¹ in Y × F. Moreover, since H0 and H0* are compact, as a consequence of the Fredholm Alternative, the range R(M′(y, f)*) is closed.
At this point, it is possible to apply the following result, usually known as the Dubovitskii-Milyutin formalism for extremal problems in Hilbert spaces (see [1, 12]): let ĥ be a solution to (3.5); then, there exist λ ∈ ℝ₊ and ζ ∈ N(S′(ĥ))⊥, not both zero, such that −λI′(ĥ) + ζ = 0. Accordingly, by duality, the algebraic sum of the associated conjugate sets contains the origin and this is precisely (3.6).
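The frozen-coefficient device F0(z) = F(z)/z used in the existence proof can be tried out on a toy problem. The sketch below is an assumption-laden illustration, not the paper's construction: it performs one implicit Euler step of y_t − y_xx + F(y) = 0 with F(s) = s/(1 + s²) by iterating z ↦ solution of the linear problem with bounded coefficient F0(z); at a fixed point, F0(z)z = F(z), so the nonlinear equation is solved exactly.

```python
import numpy as np

# One implicit Euler step of y_t - y_xx + F(y) = 0 via frozen coefficients.
N = 30
h, dt = 1.0 / (N + 1), 0.01
x = np.linspace(h, 1.0 - h, N)
L = (np.diag(-2.0 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
     + np.diag(np.ones(N - 1), -1)) / h**2

F = lambda s: s / (1.0 + s**2)             # globally Lipschitz, F(0) = 0
F0 = lambda s: 1.0 / (1.0 + s**2)          # F0(s) = F(s)/s, uniformly bounded

y_old = np.sin(np.pi * x)
z = y_old.copy()
for _ in range(50):
    # linear solve with the coefficient frozen at the previous iterate
    A = np.eye(N) - dt * L + dt * np.diag(F0(z))
    z_new = np.linalg.solve(A, y_old)
    done = np.linalg.norm(z_new - z) < 1e-12
    z = z_new
    if done:
        break

# at the fixed point, F0(z) z = F(z): residual of the nonlinear equation
res = np.linalg.norm((np.eye(N) - dt * L) @ z + dt * F(z) - y_old)
```

For small dt the iteration is a contraction and converges in a handful of steps; in the paper, compactness (Schauder) replaces this contraction argument, since no smallness of the data is assumed.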

Proposition 3.4. Let us set G := {(v, f) : v ∈ U, f ∈ Φ[v]}, where, for each leader v ∈ U, Φ[v] denotes the set of the corresponding followers, i.e. the family of solutions to (1.6). Then G is weakly closed in U × F and the function (v, f) → P(v; f) is coercive and sequentially weakly lower semicontinuous.
Proof. Let {(v n , f n )} be a sequence in G, let the y n be the associated states and let us assume that (v n , f n ) converges weakly in U × F to some (v, f ). Then, it can be assumed that the y n converge strongly in L 2 (Q) to the state y corresponding to (v, f ).
Let us check that f solves (1.6). This will prove that (v, f) ∈ G and, accordingly, that G is weakly closed. Indeed, if f were not a solution to (1.6), there would exist f̂ ∈ F with S(v; f̂) < S(v; f), where the state corresponding to (v, f̂) is denoted by ŷ. Consequently, for n large enough, we would also have S(v_n; f̂) < S(v_n; f_n), where ŷ_n is the state corresponding to (v_n, f̂). But this contradicts that (v_n, f_n) ∈ G.
That (v, f) → P(v; f) is sequentially weakly lower semicontinuous is obvious. Let us finally see that it is coercive. Thus, let us assume that (v_n, f_n) ∈ G for all n and ‖f_n‖_F → +∞. In view of Proposition 3.1, the couples (v_n, f_n) must satisfy the second estimates in (3.3), whence ‖v_n‖_U → +∞ and, also, P(v_n; f_n) → +∞. This ends the proof.
Let us now complete the proof of Theorem 1.2. Again, the argument is classical. Let {(v_n, f_n)} be a minimizing sequence for (1.13). Then, the (v_n, f_n) are obviously uniformly bounded in U × F. Therefore, it can be assumed that (v_n, f_n) converges weakly in this space to some (v̂, f̂) ∈ G and the corresponding states y_n converge strongly in L²(Q) to the associated ŷ.
In view of Proposition 3.1, we can reformulate (1.13) in the form

Minimize P(v; f), subject to (y, v, f, p) ∈ X, K(y, v, f, p) = (0, 0, 0),   (3.14)

where we have used appropriate notation: among other things, we have introduced a linear compact operator H0 and the function ȳ, the unique solution to the uncontrolled problem

$$\begin{cases} \bar y_t - \Delta \bar y = 0 &\text{in } Q,\\ \bar y = 0 &\text{on } \Sigma,\\ \bar y(\cdot\,,0) = y^0 &\text{in } \Omega. \end{cases}$$
This ends the proof.
For example, the assumptions in Theorem 3.5 are satisfied by any F of the form F = p/q, where p and q are polynomial functions respectively of degree n and m with n ≤ m and q(s) ≠ 0 for all s ∈ ℝ.
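As a concrete instance (our own example), take p(s) = s and q(s) = 1 + s². Then

```latex
F(s) = \frac{s}{1+s^{2}},\qquad F(0)=0,\qquad
F'(s) = \frac{1-s^{2}}{(1+s^{2})^{2}},\qquad
|F'(s)| \le 1 \quad \forall s \in \mathbb{R},
```

so F is C¹ and globally Lipschitz with constant 1, n = 1 ≤ m = 2 and q never vanishes on ℝ.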

Leaders with constraints
Let us begin this section by sketching the proof of Theorem 1.3. In fact, it is not very different from the proof of Theorem 1.1. It is again a consequence of Propositions 2.3 and 2.5. Indeed, the properties of the mapping v → f [v] and the functional in (1.4) guarantee that J possesses exactly one minimizer in U ad .
We also have the following:

Theorem 4.1. The unique minimizer v̂ of J in U_ad and the associated p̂, ŷ, φ̂ and ψ̂ satisfy (2.7)-(2.11), together with a projection identity (4.1) for v̂, where P_ad : U → U_ad is the usual orthogonal projector.
Again, the proof is similar to the proof of Theorem 2.6. It suffices to notice that the unique minimizer v̂ in (1.8) subject to the constraint v ∈ U_ad must satisfy the associated variational inequality. Taking into account the definitions of φ̂ and ψ̂, we readily see that this is equivalent to (4.1).
Remark 4.2. It makes sense to try to prove hierarchic control results when we impose a priori constraints on the leaders and the followers. Thus, let α, β > 0 be given and let us introduce the sets U_α := {v ∈ U : ‖v‖_U ≤ α} and F_β := {f ∈ F : ‖f‖_F ≤ β}. For every v ∈ U_α, there exists a unique associated follower f[v] ∈ F. Furthermore, in view of (2.5), one has ‖f[v]‖_F ≤ C(‖v‖_U + ‖y0‖), where C only depends on Ω, O, ω, T and a. Consequently, if β > Cα and the initial state satisfies ‖y0‖ ≤ β/C − α, we can find leaders v ∈ U_α and associated followers f[v] ∈ F_β such that the corresponding states satisfy (1.7).

Some additional comments and open questions
This section is devoted to discussing some extensions and variants of the problems analyzed above. We will consider only states governed by linear PDEs. Of course, similar nonlinear problems are interesting and deserve attention, but their study unfortunately requires some technicalities that are outside the scope of this work. They will be treated in a forthcoming paper.

Multi-objective hierarchical problems and Pareto equilibria leaders
Let ω1, ω2 and O be three non-empty mutually disjoint open subsets of Ω and let us consider the controlled system

$$\begin{cases} y_t - \Delta y + a(x,t)\,y = f\,1_O + v_1\,1_{\omega_1} + v_2\,1_{\omega_2} &\text{in } Q,\\ y = 0 &\text{on } \Sigma,\\ y(\cdot\,,0) = y^0 &\text{in } \Omega, \end{cases} \tag{5.1}$$

where again a ∈ L∞(Q) and y0 ∈ L²(Ω). We will use the spaces Y and F defined in (1.5) and also the leader spaces U1 and U2. Let the sets ω_{d,i} ⊂ Ω be non-empty and open and let the functions y_{d,i} ∈ L²(ω_{d,i} × (0, T)) be given. We will consider the secondary functional (5.2) and the main functionals (5.3), where the α_i, µ_i > 0 and α_i + µ_i = 1 for i = 1, 2. The Pareto hierarchical control process for (5.1)-(5.3) is the following:
1. We associate to each leader couple (v1, v2) ∈ U1 × U2 the unique solution f[v1, v2] to the extremal problem (5.4). Observe that the corresponding state y must necessarily satisfy y(·, T) = 0. In the sequel, we denote by G_i the reduced functionals (v1, v2) → P_i(v1, v2; f[v1, v2]).
2. Then, we look for a Pareto equilibrium (v̂1, v̂2) in U1 × U2 for the functionals G1 and G2. By definition, this means that no couple (v1, v2) ∈ U1 × U2 satisfies G_i(v1, v2) ≤ G_i(v̂1, v̂2) for i = 1, 2 with strict inequality for at least one i; see [18].
Arguing as in the proof of Proposition 2.3, it is not difficult to check that, for each (v1, v2) ∈ U1 × U2, there exists exactly one solution f[v1, v2] to (5.4), which furthermore satisfies an identity analogous to (2.3), where p is the unique solution to a linear problem analogous to (2.4). The functionals G_i : U1 × U2 → ℝ are well defined, strictly convex and C¹. Consequently, it can be deduced from the Lagrange Multipliers Theorem that, if (v̂1, v̂2) is an associated Pareto equilibrium, there exists λ ∈ [0, 1] such that (5.6) holds. Also, if (5.6) is satisfied for some λ ∈ [0, 1], then (v̂1, v̂2) is necessarily the unique minimizer of λG1 + (1 − λ)G2 and, consequently, (v̂1, v̂2) is a Pareto equilibrium for G1 and G2.
Arguing as in the proof of Theorem 2.6, it is possible to deduce that, for each λ, the couple (v_{1,λ}, v_{2,λ}) must solve, together with the associated state y_λ and some p_λ, φ_λ and ψ_λ, the optimality system (5.8)-(5.12). The computation of the Pareto equilibria (v_{1,λ}, v_{2,λ}) can be carried out by solving this optimality system. To this purpose, for any fixed λ ∈ (0, 1), a natural fixed-point algorithm is the following: choose an initial guess (v⁰1, v⁰2); then, for any given n ≥ 0 and (vⁿ1, vⁿ2), compute pⁿ, fⁿ and yⁿ from (5.9) and (5.8), compute φⁿ and ψⁿ from (5.11) and (5.12) and finally compute (vⁿ⁺¹1, vⁿ⁺¹2) from (5.10).
It is not difficult to check that, at least when α 1 /µ 1 and α 2 /µ 2 are small enough, these iterates converge strongly in U 1 × U 2 to a Pareto equilibrium.
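The scalarization underlying (5.6) can be illustrated on a toy strictly convex biobjective problem (a finite-dimensional example of our own; the costs G1, G2 and the targets a, b are assumptions): each minimizer of λG1 + (1 − λ)G2 is a Pareto equilibrium, and sweeping λ over [0, 1] traces the Pareto front.

```python
import numpy as np

# Toy finite-dimensional analogue of the lambda-scalarization of (G1, G2).
a = np.array([1.0, 0.0])
b = np.array([0.0, 2.0])
G1 = lambda v: float(np.sum((v - a)**2))   # strictly convex quadratic costs
G2 = lambda v: float(np.sum((v - b)**2))

def pareto_point(lam):
    # closed-form minimizer of lam*G1 + (1 - lam)*G2: set the gradient
    # 2*lam*(v - a) + 2*(1 - lam)*(v - b) to zero
    return lam * a + (1.0 - lam) * b

# sample the Pareto front: G1 decreases and G2 increases along it
front = [(G1(pareto_point(l)), G2(pareto_point(l))) for l in np.linspace(0, 1, 11)]
```

In the PDE setting the same sweep over λ is performed through the fixed-point algorithm described above, since the minimizers of λG1 + (1 − λ)G2 are no longer available in closed form.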

Boundary followers
In this section, we will deal with a hierarchical control problem where the follower acts on a part of the boundary.
Thus, let ω ⊂ Ω be a non-empty open set, let γ be a relatively open subset of the boundary ∂Ω and let us consider the state system

$$\begin{cases} y_t - \Delta y + a(x,t)\,y = v\,1_\omega &\text{in } Q,\\ y = f\,1_\gamma &\text{on } \Sigma,\\ y(\cdot\,,0) = y^0 &\text{in } \Omega, \end{cases} \tag{5.13}$$

where (again) a ∈ L∞(Q) and y0 ∈ L²(Ω). Let the function η̃0 be chosen as before, now adapted to γ. As before, to each leader v ∈ U* we associate the unique solution f[v] to the extremal problem

Minimize S*(v; f), subject to f ∈ F*.   (5.14)

Then, we consider the functional v → P*(v; f[v]) and we try to find v̂ satisfying (5.15). In view of the unique continuation property, b(a; ·, ·) is a norm on P0. Let us denote by B the corresponding Hilbert completion. Then, we can use a new Carleman inequality involving the values on the boundary of the normal derivatives. It is given in the following result:

Theorem 5.2. There exist positive constants λ1, s1 and C1, only depending on Ω, γ and T, such that, if we take λ = λ1 and s = s1, any p ∈ B satisfies the corresponding boundary Carleman estimate. Furthermore, λ1 and s1 can be found arbitrarily large.
In the remainder of this section, we take λ = λ1 and s = s1. Then, as in Section 2, we can find positive constants K0 and K1, only depending on Ω, γ, T and ‖a‖_{L∞(Q)}, such that the analogue of (2.2) holds. As before, for each v ∈ U*, there exists exactly one solution f[v] to (5.14). Furthermore, the follower f[v] is given by a formula analogous to (2.3). As in Section 2, it can be deduced that there exists a unique leader v̂ satisfying (5.15). We also have that v̂ satisfies, together with the associated state ŷ and some p̂, φ̂ and ψ̂, the corresponding optimality system.

Boundary leaders and followers
Let γ and σ be disjoint, relatively open subsets of the boundary ∂Ω and, again, let ω_d ⊂ Ω be a non-empty open set where an objective function y_d is defined. Let us consider the state system

$$\begin{cases} y_t - \Delta y + a(x,t)\,y = 0 &\text{in } Q,\\ y = f\,1_\gamma + v\,1_\sigma &\text{on } \Sigma,\\ y(\cdot\,,0) = y^0 &\text{in } \Omega, \end{cases}$$

where we find a boundary follower f and a boundary leader v, respectively acting on γ × (0, T) and σ × (0, T). For the analysis of this problem, we need the weight functions defined in Section 5.2 together with an additional x-independent weight function ζ. This way, we can use the secondary functional S*, a main functional P̃, the space Ũ := {v : ζv ∈ L²(σ × (0, T))} and the spaces F* and Y*, and we can prove results similar to those above. More precisely, to each leader v ∈ Ũ, we can associate the follower f[v] ∈ F*, the unique solution to the secondary extremal problem (5.14); one has a formula analogous to (2.3), where p is the solution to the corresponding adjoint problem. On the other hand, the functional v → P̃(v; f[v]) possesses exactly one minimizer v̂ in Ũ. It satisfies, together with the associated state ŷ and some p̂, φ̂ and ψ̂, an optimality system whose first equations read

$$\begin{cases} \hat y_t - \Delta \hat y + a(x,t)\,\hat y = 0 &\text{in } Q,\\ \hat y = f[\hat v]\,1_\gamma + \hat v\,1_\sigma &\text{on } \Sigma,\\ \hat y(\cdot\,,0) = y^0 &\text{in } \Omega. \end{cases}$$