STATE-CONSTRAINED CONTROLLABILITY OF LINEAR REACTION-DIFFUSION SYSTEMS

Abstract. We study the controllability of a coupled system of linear parabolic equations, with nonnegativity constraint on the state. We establish two results of controllability to trajectories in large time: one for diagonal diffusion matrices with an "approximate" nonnegativity constraint, and another, stronger one, with an "exact" nonnegativity constraint, when all the diffusion coefficients are equal and the eigenvalues of the coupling matrix have nonnegative real part. The proofs are based on a "staircase" method. Finally, we show that state-constrained controllability admits a positive minimal time, even with a weaker unilateral constraint on the state.


Introduction
In the following, N and N* denote the sets of nonnegative and positive integers, respectively. Let d ∈ N*.
Let Ω be a bounded open connected set of R^d with C∞ boundary ∂Ω, and ω an open subset of Ω. Let T > 0, Ω_T = (0, T) × Ω and ω_T = (0, T) × ω. Let n, m ∈ N*, with n ≥ 2. Let n⃗ be the outward unit normal vector on ∂Ω.
We consider the controlled linear reaction-diffusion system

∂_t Y = D∆Y + AY + BU 1_ω in Ω_T,    ∂Y/∂n⃗ = 0 on (0, T) × ∂Ω,    Y(0, ·) = Y_0 in Ω. (1.1)

In (1.1), A = (a_ij)_{1≤i,j≤n} and D = (d_ij)_{1≤i,j≤n} are square matrices in M_n(R) and B is a matrix in M_{n,m}(R). Assume that D satisfies the ellipticity condition

∃α > 0, ∀ξ ∈ R^n, ⟨Dξ, ξ⟩ ≥ α|ξ|². (1.2)

The spaces in which the initial condition Y_0 and the control U lie will be made more precise later on. Notice that m represents the number of controls. Notably, we may have m < n, which means that we can have an underactuated system.
The controllability to trajectories of system (1.1) in arbitrary time has been established under a Kalman-type condition in [1]. The question we address in this paper is the following: is it possible to ensure that the state remains nonnegative while controlling (1.1) from a nonnegative initial state towards a nonnegative trajectory? This question is relevant because reaction-diffusion systems like (1.1) frequently model phenomena in which the state is nonnegative (e.g. concentrations of chemicals). In these cases, a controlled trajectory that does not remain nonnegative would have no interest for applications.
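For coupled parabolic systems, Kalman-type conditions of the kind established in [1] amount to finite-dimensional rank computations. The sketch below checks a rank condition of the form rank[−λD + A | B] = n for a few sampled eigenvalues λ; the matrices, the sampled values and the precise form of the test are illustrative assumptions, not the exact statement of [1]:

```python
def kalman_matrix(A, B):
    """Build the Kalman matrix [B | AB | ... | A^{n-1}B] as a list of rows."""
    n = len(A)
    cols = [list(col) for col in zip(*B)]  # columns of B
    blocks = []
    current = cols
    for _ in range(n):
        blocks.extend(current)
        # multiply each current column by A
        current = [[sum(A[i][k] * c[k] for k in range(n)) for i in range(n)]
                   for c in current]
    return [[blk[i] for blk in blocks] for i in range(n)]

def rank(M, tol=1e-9):
    """Numerical rank by Gaussian elimination with partial pivoting."""
    M = [row[:] for row in M]
    rows, cols = len(M), len(M[0])
    r = 0
    for c in range(cols):
        pivot = max(range(r, rows), key=lambda i: abs(M[i][c]), default=None)
        if pivot is None or abs(M[pivot][c]) < tol:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        for i in range(rows):
            if i != r:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
        if r == rows:
            break
    return r
```

For instance, with the (hypothetical) data A = [[0, 1], [0, −1]], D = diag(1, 2), B = (0, 1)^T, the rank of the Kalman matrix of (−λD + A, B) equals 2 for every sampled λ, so the underactuated pair passes the test.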
State-constrained controllability is a challenging subject that has gained popularity in the last few years, notably at the instigation of Jérôme Lohéac, Emmanuel Trélat and Enrique Zuazua in the seminal paper [2], in which some controllability results with positivity constraints on the state or the control are proved for the linear heat equation, under a minimal time condition which turns out to be necessary. This question has given rise to an increasing number of articles in different frameworks, many of them coauthored by Enrique Zuazua: ODE systems [3,4], semilinear and quasilinear heat equations [5,6], monostable and bistable reaction-diffusion equations [7,8], the fractional one-dimensional heat equation [9,10], wave equations [11], and age-structured systems [12]. The spirit of most of these results can be summarized this way: when the considered system is controllable in the classical sense, and sometimes under assumptions on the initial and target states or on the system properties, controllability with a constraint on the state is possible with a positive minimal time.
Our goal in this paper is to state similar results for coupled parabolic systems of the form (1.1) with internal control. This framework raises several difficulties. Indeed, for boundary control or equations satisfying a maximum principle, there is an equivalence between nonnegativity of the state and nonnegativity of the control, which is useful as control-constrained problems are better understood in general. This equivalence does not hold anymore for System (1.1): the state might remain positive even if the control is negative. Moreover, due to the coupling terms, the asymptotic behaviour of the trajectories is difficult to know precisely. This means that we cannot rely on dissipativity or stabilization to a steady state to obtain controllability, with the notable exception of the case where the diffusion matrix D is equal to the identity matrix.
One of the ideas we present in the following to bypass this difficulty is an original version of the "staircase" method, where we drive the system along a path of non-constant trajectories, while this approach is usually employed to follow a path of constant steady states [2,5]. Adapting it to non-constant trajectories requires supplementary arguments to make sure that these trajectories do not go too far away from each other.
In the particular case where D = I_n and the eigenvalues of A have nonnegative real part, we establish controllability to trajectories in large time with nonnegative state (Theorem 2.6). When D is only assumed to be diagonal, we show that (1.1) is also controllable in large time, but with the state remaining "approximately" nonnegative, i.e. greater than −ε for any fixed ε > 0 (Theorem 2.8). Additionally, we show that there exists a positive minimal controllability time as soon as the initial state and the target trajectory are different, even if we allow the state to be greater than a negative constant instead of being nonnegative (Theorem 2.12). The article is structured as follows: the main results are stated in Section 2, the proofs of the results on state-constrained controllability and minimal time are respectively developed in Sections 3 and 4, and we provide some perspectives for future research in Section 5.

Main results
In the following, for Y a vector in R^n and α ∈ R, we write Y ≥ α and Y > α if all of the n components of Y are respectively greater than or equal to α and greater than α. Moreover, |Y| is the usual Euclidean norm of Y on R^n and max Y refers to the greatest component of Y. Finally, for r ∈ Z and Q some open subset of R^q with q ∈ N*, H^r(Q) denotes the Sobolev space W^{r,2}(Q).

In what follows, we will also consider the free evolution system, obtained by taking U = 0 in (1.1):

∂_t Ỹ = D∆Ỹ + AỸ in Ω_T,    ∂Ỹ/∂n⃗ = 0 on (0, T) × ∂Ω,    Ỹ(0, ·) = Ỹ_0 in Ω. (2.1)

It is well-known that, for every Y_0 ∈ L²(Ω)^n and U ∈ L²(ω_T)^m, the Cauchy problem given by System (1.1) is well-posed. If, in addition, Y_0 ∈ L∞(Ω)^n and U ∈ L∞(ω_T)^m, we also get a standard well-posedness L∞ estimation:

Proposition 2.1. There exists C(T) > 0 such that, for any Y_0 ∈ L∞(Ω)^n and any U ∈ L∞(ω_T)^m, the solution of (1.1) with initial condition Y_0 and control U satisfies

‖Y − Ỹ‖_{L∞(Ω_T)^n} ≤ C(T) ‖U‖_{L∞(ω_T)^m}, (2.2)

where Ỹ is the solution of the free System (2.1) with same initial condition Y_0.
Proof. Let T, Y and Ỹ be defined as in Proposition 2.1, and let Z = Y − Ỹ; Z then satisfies the equation

∂_t Z = D∆Z + AZ + BU 1_ω in Ω_T,    ∂Z/∂n⃗ = 0 on (0, T) × ∂Ω,    Z(0, ·) = 0 in Ω. (2.3)

Equation (2.3) is linear, so the map U ↦ Z is of course linear. Moreover, by virtue of [13, Chapter VII, Theorem 2.1], we have an estimation of ‖Z‖_{L∞(Ω_T)^n}, where the constants C_1 and C_2 appearing in it depend only on n, Ω, T and A. Taking U such that ‖U‖_{L∞(ω_T)^m} = 1, we obtain ‖Z‖_{L∞(Ω_T)^n} ≤ C_1 + C_2, so the linear map U ↦ Z is also continuous, which yields estimation (2.2) for all Y_0 ∈ L∞(Ω)^n and U ∈ L∞(ω_T)^m, with C(T) = C_1 + C_2.
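The structure of this proof, namely that Z = Y − Ỹ depends linearly on U and not on the initial condition, can be observed on a crude discretization. The following sketch (scalar case n = 1, explicit finite differences, all parameters illustrative) checks that Y − Ỹ is unchanged when the initial condition varies:

```python
def heat_step(y, dx, dt, a, src):
    """One explicit Euler step of y_t = y_xx + a*y + src, Neumann BCs (mirror)."""
    n = len(y)
    out = []
    for i in range(n):
        left = y[i - 1] if i > 0 else y[1]
        right = y[i + 1] if i < n - 1 else y[n - 2]
        out.append(y[i] + dt * ((left - 2 * y[i] + right) / dx ** 2
                                + a * y[i] + src[i]))
    return out

def solve(y0, controls, dx, dt, a):
    """March the scheme; `controls` is a list of source vectors, one per step."""
    y = y0[:]
    for src in controls:
        y = heat_step(y, dx, dt, a, src)
    return y
```

Running the scheme with the same control but two different initial data, and subtracting the corresponding free (zero-control) solutions, yields the same Z up to rounding error, illustrating that Z is a function of U alone.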

Results on state-constrained controllability
Before stating our main results, let us state a few preliminary properties satisfied by System (1.1). To preserve the nonnegativity of the trajectories of the uncontrolled System (2.1), we assume that the diffusion matrix D is diagonal,

D = diag(d_1, . . . , d_n) with d_i > 0 for all i ∈ {1, . . . , n}, (2.5)

and that the coupling matrix A is quasipositive, i.e. its off-diagonal entries are nonnegative:

∀i ≠ j, a_ij ≥ 0. (2.6)

Our goal is to control System (1.1) with a positivity constraint on the state Y. Therefore, it seems reasonable to assume that free trajectories naturally stay nonnegative, hence assumptions (2.5) and (2.6).
We recall the classical notion of controllability to (free) trajectories: System (1.1) is said to be controllable to trajectories in time T if, for all Y_0 ∈ L²(Ω)^n and for all solution Ỹ of the free system (2.1) associated to another initial condition Ỹ_0 ∈ L²(Ω)^n, there exists a control U in L²(ω_T)^m such that the solution Y of (1.1) with initial condition Y_0 and control U satisfies Y(T, ·) = Ỹ(T, ·). (2.7)

The Laplace operator −∆ with Neumann boundary conditions admits a sequence of eigenvalues, repeated with their multiplicity, (λ_p)_{p∈N} such that 0 = λ_0 < λ_1 ≤ λ_2 ≤ · · · and λ_p → +∞. To each eigenvalue λ_p, we associate a corresponding normalized eigenvector e_p ∈ H¹(Ω) in such a way that {e_p}_{p∈N} forms a Hilbert basis of L²(Ω). Notice that e_p ∈ C∞(Ω) by elliptic regularity, since ∂Ω is smooth. Given two matrices Ã in M_n(R) and B̃ in M_{n,m}(R), we use the following notation for the Kalman matrix associated to Ã and B̃:

[Ã | B̃] = (B̃ | ÃB̃ | · · · | Ã^{n−1}B̃).

Our first two main results establish state-constrained controllability of System (1.1) under the aforementioned quasipositivity and controllability assumptions.

Theorem 2.6 (Case D = I_n). Assume that D = I_n and that A and B satisfy (2.6) and (2.10). Assume moreover that the eigenvalues of A all have a nonnegative real part. Let Y_0 and Y_{f,0} in L∞(Ω)^n satisfy (2.11), and assume that none of the components of Y_0 and Y_{f,0} is a.e. zero on Ω.
Then, there exists T > 0 and U ∈ C∞_0(ω_T)^m such that the solution Y of (1.1) with initial condition Y_0 and control U satisfies

Y(T, ·) = Y_f(T, ·) and, for all (t, x) ∈ Ω_T, Y(t, x) ≥ 0,

where Y_f is the free trajectory of (2.1) starting at Y_{f,0}.

Theorem 2.8 (Diagonal case). Assume that A, B and D satisfy (2.5), (2.6) and (2.9). Let Y_0 ∈ L∞(Ω)^n with Y_0 ≥ 0, and let Y_f be a trajectory of (2.1) associated to an initial condition Y_{f,0} ≥ 0. Then, for all ε > 0, there exists T > 0 and U ∈ C∞_0(ω_T)^m such that the solution Y of (1.1) with initial condition Y_0 and control U satisfies

Y(T, ·) = Y_f(T, ·) and, for all (t, x) ∈ Ω_T, Y(t, x) ≥ −ε. (2.15)
Remark 2.9. Theorem 2.8 can be indifferently stated with Neumann, Dirichlet or even Robin boundary conditions in (1.1). By contrast, Neumann conditions are an important assumption for Theorem 2.6, to ensure that the free trajectories of (1.1), after a well-chosen change of variables, converge to strictly positive (constant) steady states. Hence, Robin boundary conditions in Theorem 2.6 could also be considered, as long as they make the free trajectories converge to positive steady states (possibly non-constant).
On the other hand, since they prevent the solutions from being strictly positive, Dirichlet conditions appear to make the problem of controllability with nonnegative state more difficult and perhaps less relevant; the notion of approximate nonnegative controllability used in Theorem 2.8 might then be better fitted to deal with such boundary conditions.

Remark 2.11. Let us show on a simple example that exact nonnegative controllability may fail. Take Ω = (0, 1), ω an open subset of (0, 1), n = 2, m = 1, D = I_2, and a control acting on the second equation only, which leads to the following system:

∂_t y_1 = ∂_xx y_1 + y_2 in Ω_T,    ∂_t y_2 = ∂_xx y_2 − y_2 + u 1_ω in Ω_T, (2.16)

with homogeneous Neumann boundary conditions. It is easy to see that System (2.16) satisfies (2.6) and (2.9), so it is controllable with approximately nonnegative state by virtue of Theorem 2.8. Let us now attempt to require that the state remains exactly nonnegative. First, note that if u = 0, z = y_1 + y_2 satisfies the heat equation

∂_t z = ∂_xx z in Ω_T,    ∂_x z(t, 0) = ∂_x z(t, 1) = 0, (2.17)

so that the integral of z on [0, 1] is conserved over time. Now, we consider constant initial conditions Y_0 = (3, 1)^T and Y_{f,0} = (1, 1)^T. The free trajectory (y_{f,1}, y_{f,2}) starting at Y_{f,0} verifies thanks to (2.17) that, for all t ≥ 0, ∫_0^1 (y_{f,1} + y_{f,2})(t, x)dx = 2. Moreover, since A is quasipositive in the sense of (2.6), the free trajectory remains nonnegative, so that ∫_0^1 y_{f,1}(t, x)dx ≤ 2 for all t ≥ 0. On the other hand, since y_2 is nonnegative, the integral of y_1 on [0, 1] is nondecreasing over time, so the controlled trajectory (y_1, y_2) satisfies ∫_0^1 y_1(t, x)dx ≥ 3 for all t ≥ 0, whichever the control. Therefore, it is impossible to reach the target trajectory while keeping the state nonnegative. This simple example highlights the fact that an actual gap exists between the notions of controllability with nonnegative state and approximately nonnegative state for coupled systems. In particular, Theorem 2.6 deals with a favorable case for which exact nonnegative controllability holds: when D = I_n and the eigenvalues of A have nonnegative real part.
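The obstruction in this remark only involves the spatial means of the two components. Under the (assumed, illustrative) coupling where the uncontrolled part of the first equation reads ∂_t y_1 = ∂_xx y_1 + y_2, the Neumann boundary conditions reduce the argument to a planar ODE for the means m_i(t) = ∫_0^1 y_i(t, x)dx; a small numerical sketch:

```python
import math

def target_mean_y1(t):
    """Mean of y1 along the free trajectory from (1, 1): m2' = -m2, m1' = m2."""
    return 2.0 - math.exp(-t)

def controlled_mean_y1(t_grid, u_mean):
    """Explicit Euler for m1' = m2, m2' = -m2 + u, started from (3, 1).

    As long as the constraint keeps y2 (hence its mean m2) nonnegative,
    m1 cannot decrease, so it stays >= 3 while the target mean stays < 2."""
    m1, m2 = 3.0, 1.0
    out = [m1]
    for k in range(1, len(t_grid)):
        dt = t_grid[k] - t_grid[k - 1]
        m1 += dt * m2
        m2 += dt * (-m2 + u_mean[k - 1])
        out.append(m1)
    return out
```

Whatever admissible control is fed in, the controlled mean of y_1 never drops below 3, while the target mean stays strictly below 2, which is exactly the gap exploited in the remark.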
Let us additionally describe another situation in which exact nonnegative controllability holds between two trajectories, even with D ≠ I_n. Assume that Ỹ and Y_f are globally bounded (if they are not, one can perform a change of variable Y → e^{−λt} Y with λ > 0 sufficiently large and apply the following to the new system). Then, let ζ be the infimum over R_+ × Ω of all the components of Ỹ and Y_f. If ζ > 0, then one can replace −ε with ζ − ε in (2.15) and conclude that (1.1) is controllable between Ỹ and Y_f with nonnegative state, for ε small enough. Thus, if Ỹ and Y_f are globally bounded and bounded from below by a positive constant, we recover nonnegative controllability. The proof of this (somewhat anecdotal) result closely follows the proof of Theorem 2.8, with ζ being added in the relevant inequalities.
The proofs of Theorems 2.6 and 2.8, presented in Section 3, are based on a "staircase" strategy, which has proven its efficiency for the study of state-constrained or control-constrained controllability [3,5,8,11]. The idea is to make small steps towards the target, following a path of trajectories such that the controlled trajectory always stays close to a nonnegative free trajectory, and is therefore almost nonnegative (see Figures 1 and 2). In the aforementioned references, the steps of the staircase are restricted to a connected path of steady states. The proof of Theorem 2.6 features a change of variables that decouples the equations and allows a similar use of constant steady states. On the other hand, in the proof of Theorem 2.8, we relax this steady-state assumption and follow a path of non-constant free trajectories.

Minimal time
In this section, we do not assume anymore that A, B and D satisfy (2.5), (2.6) and (2.9). Our main result is the following:

Theorem 2.12. Assume that assumption (1.2) holds and that Ω \ ω contains a nonempty open ball. Let M > 0, Y_0 ∈ L²(Ω)^n, and Y_f a trajectory of (1.1) such that Y_f(0, ·)|_{Ω\ω} ≠ Y_0|_{Ω\ω}. Then, there exists T̄ > 0 such that, for all T < T̄, there is no control U ∈ L²(ω_T)^m for which the solution Y of (1.1) with initial condition Y_0 and control U satisfies Y(T, ·) = Y_f(T, ·) and Y ≥ −M on Ω_T.

The assumption Y_f(0, ·)|_{Ω\ω} ≠ Y_0|_{Ω\ω} corresponds to the "interesting" case where the control needs to act on regions over which it is not supported in order to reach the target trajectory. Therefore, we leave out the case where Y_f(0, ·) and Y_0 differ only on ω, which however does not seem entirely trivial (notably, the strategy proposed in [2, Remark 16] does not work). The positivity of the minimal time in that latter case may depend on whether Y_f is a constant steady state or a space-varying trajectory.
Theorem 2.12 shows that relaxing the constraint Y ≥ 0 to allow the controlled trajectory to be negative still implies the existence of a minimal controllability time. This is not surprising: it has been numerically observed that, when there is no state constraint and as the time T allowed to control the equation goes to zero, the control and the state tend to become highly oscillating [2,15] and therefore reach very high absolute values. Hence, it is intuitively understandable that setting a unilateral constraint on the state restricts this behaviour and implies that T̄ > 0. Theorem 2.12 extends to general linear parabolic systems like (1.1) the result stated in [2, Theorem 4], which establishes the existence of a positive minimal controllability time for the scalar heat equation. The proof, presented in Section 4, relies on similar arguments as in [2].

Proofs of Theorems 2.6 and 2.8
Before proving our results on state-constrained controllability, let us state a useful estimation on the cost of the control.
Proposition 3.1. Assume that A, B and D satisfy (2.9). Then, for all T > 0, System (1.1) is controllable to trajectories in time T with a control U ∈ C∞_0(ω_T)^m, satisfying moreover: for any s ∈ N, there exists C_s > 0 such that

‖U‖_{H^s(ω_T)^m} ≤ C_s ‖Y_0 − Y_{f,0}‖_{L²(Ω)^n}. (3.1)

Proposition 3.1 is classical but is not an immediate consequence of the results in [1]. For the sake of completeness, we provide a short proof, based on the strategy given in [16, Theorem 4].
Proof. We consider the adjoint equation

∂_t Z = D*∆Z + A*Z in Ω_T,    ∂Z/∂n⃗ = 0 on (0, T) × ∂Ω,    Z(0, ·) = Z_0 in Ω. (3.2)

First of all, we decompose the initial condition Z_0 in the Hilbert basis defined before (2.10): Z_0 = Σ_{k∈N} a_k e_k, with a_k ∈ R^n. We can then decompose the solution Z of (3.2) as Z(t, ·) = Σ_{k∈N} Z_k(t) e_k, where Z_k is the unique solution of the ordinary differential system

Z_k′ = (−λ_k D* + A*) Z_k,    Z_k(0) = a_k. (3.3)

Let us recall the spectral inequality for eigenfunctions of the Laplace operator as obtained in the seminal paper by Gilles Lebeau and Enrique Zuazua [17] (see also [18]): for any non-empty open subset ω̃ of Ω, there exists C > 0 such that for any J ∈ N* and any (a_1, . . . , a_J) ∈ R^J, we have

‖Σ_{j=1}^J a_j e_j‖_{L²(Ω)} ≤ C e^{C√λ_J} ‖Σ_{j=1}^J a_j e_j‖_{L²(ω̃)}. (3.4)

Writing (3.4) for each component of B*Z and summing over the components, we obtain that there exists C > 0 such that the inequality (3.5) holds for all t ∈ (0, T). Assume that ω̃ is strongly included in ω and let ϕ ∈ C∞_0(ω) be such that ϕ = 1 on ω̃. We can deduce from (3.5) the inequality (3.6). Integrating (3.6) between T/4 and 3T/4, we obtain (3.7). Now, we consider the system of ODEs (3.3). Moreover, it is proved in [16, Appendix] that there exist p_1, p_2 ∈ N (depending on n but independent of k) such that (3.8) holds. Since −λ_k D* + A* is dissipative for k large enough, there exists C > 0 independent of k and T such that the estimation (3.10) holds. Hence, restricting to the case T ≤ 1 and combining (3.10) together with (3.8) and (3.6), we deduce the inequality (3.11), for another constant C > 0. Let ψ ∈ C∞_0(0, T) be such that ψ = 1 on (T/4, 3T/4). We deduce from (3.11) the inequality (3.12). Inequality (3.12) is a low-frequency observability inequality for the solutions of (3.2). It is well-known that it is equivalent to a partial controllability result for the solutions of (1.1). More precisely, we consider as an initial condition Ỹ_0 = Y_0 − Y_{f,0}. Then, we deduce that there exists U_J ∈ L²(ω_T)^m such that the corresponding solution Y of (1.1) with initial condition Ỹ_0 satisfies ⟨Y(T, ·), e_j⟩ = 0 for any j ∈ N with j ≤ J.
Moreover, following [17, Proof of Proposition 2], it is possible to prove that one can choose U_J in the smooth class C∞_0(ω_T)^m, in such a way that the estimation (3.13) holds for any s ∈ N. Hence, by applying the Lebeau-Robbiano strategy as described in [19], we can create a control U ∈ C∞_0(ω_T)^m (that is, alternating phases of C∞ controls with compact support and phases of dissipation) satisfying the estimation (3.1) and such that the corresponding solution Ŷ of (1.1) with initial condition Ỹ_0 and control U satisfies Ŷ(T, ·) = 0. Then, by linearity, Y = Y_f + Ŷ is a solution of (1.1) (associated to the control U) with initial condition Y_0 and satisfies Y(T, ·) = Y_f(T, ·).
Combining Propositions 2.1 and 3.1, with s large enough so that H^s(ω_T) ↪ L∞(ω_T), which is possible by Sobolev embedding, yields the following result, which features an estimation of the L∞ distance between the free trajectory and the controlled trajectory by the L² distance between the initial states of the free and the target trajectories.

Proposition 3.2.
Let Y_0 ∈ L∞(Ω)^n, Ỹ the corresponding solution of (2.1), and Y_f a trajectory of (2.1) with an initial condition Y_{f,0} ∈ L∞(Ω)^n. Then, for all T > 0, there exists a control U ∈ C∞_0(ω_T)^m and a constant C(T) such that the solution Y of (1.1) with initial condition Y_0 and control U satisfies Y(T, ·) = Y_f(T, ·) and

‖Y − Ỹ‖_{L∞(Ω_T)^n} ≤ C(T) ‖Y_0 − Y_{f,0}‖_{L²(Ω)^n}. (3.14)

Proposition 3.2 is a key ingredient in the proofs of Theorems 2.6 and 2.8.
Proof of Theorem 2.6. For t ≥ 0 and Y ∈ R^n, define

Z(t, ·) = e^{−tA} Y(t, ·). (3.15)

Notice that Y is a solution of (1.1) if and only if Z is a solution of the following nonautonomous system:

∂_t Z = ∆Z + e^{−tA} BU 1_ω in Ω_T,    ∂Z/∂n⃗ = 0 on (0, T) × ∂Ω,    Z(0, ·) = Y_0 in Ω. (3.16)
It is obvious that System (3.16) is controllable if and only if System (1.1) is controllable. Moreover, we can state an estimation similar to Proposition 3.2 for the trajectories of (3.16), which takes into account its time dependency.

Lemma 3.3. Let T_0 ≥ 0 and let Z̃ be the solution of the time-shifted system

∂_t Z = ∆Z + e^{−tA} BU 1_ω in (T_0, +∞) × Ω,    ∂Z/∂n⃗ = 0,    Z(T_0, ·) = Z_0 in Ω, (3.17)

with initial condition Z_0 and U = 0. Let Z_f be a trajectory of (3.17) with another initial condition Z_{f,0} and U = 0. Then, for all T > 0, there exists a control U in C∞_0(ω_T)^m and C(T) independent of T_0 such that the solution Z of the system (3.17) satisfies Z(T_0 + T, ·) = Z_f(T_0 + T, ·) and

‖Z − Z̃‖_{L∞((T_0, T_0+T)×Ω)^n} ≤ C(T) ‖Z_0 − Z_{f,0}‖_{L²(Ω)^n}. (3.18)

Proof. Since System (1.1) is autonomous, combining Proposition 3.2 and the definition of Z gives the existence of a constant C_1(T) independent of T_0 such that

‖e^{tA}(Z(t, ·) − Z̃(t, ·))‖_{L∞(Ω)^n} ≤ C_1(T) ‖Z_0 − Z_{f,0}‖_{L²(Ω)^n} for all t ∈ [T_0, T_0 + T]. (3.19)

Moreover, since the eigenvalues of A all have nonnegative real part, there exists K independent of t such that ‖e^{−tA}‖ ≤ K for all t ≥ 0, which yields (3.18) with C(T) = K C_1(T).
Let us highlight the fact that the estimation given by this lemma necessarily requires the assumption made on the eigenvalues of A. If we relax this assumption, we lose the independence of C(T) with respect to T_0, which is a key point of the following proof, for we will control System (3.16) on a number of consecutive time intervals that depends itself on the value of C(T).
Now, let Z_f be the solution of (3.16) with initial condition Z_{f,0} = Y_{f,0} and no control. We are going to show the existence of T > 0 and a control U such that the solution Z of (3.16) with initial condition Z_0 = Y_0 and control U satisfies

Z(T, ·) = Z_f(T, ·) (3.20)

and, for all (t, x) ∈ Ω_T,

Z(t, x) ≥ 0. (3.21)

As already noted above, it is clear that such a control U is such that the solution Y of (1.1) with initial condition Y_0 and control U satisfies

Y(T, ·) = Y_f(T, ·). (3.22)

Moreover, for all (t, x) ∈ Ω_T,

Y(t, x) = e^{tA} Z(t, x) ≥ 0, (3.23)

because of (3.21) and the fact that the exponential of a quasipositive matrix has only nonnegative entries (if A is quasipositive, write A = P + αI_n with α ∈ R such that P has only nonnegative entries; then it is clear that e^P is nonnegative and so is e^A = e^{αI_n} e^P = e^α e^P [20]). When U = 0, (3.16) becomes a system of n decoupled parabolic equations. Then, using a spectral expansion, we immediately have that the solutions Z̃ and Z_f starting respectively at Z_0 and Z_{f,0} converge in the L∞ and L² norms (and any other L^p norm) to the constant steady states

Z̄ = (1/|Ω|) ∫_Ω Z_0(x)dx and Z̄_f = (1/|Ω|) ∫_Ω Z_{f,0}(x)dx, (3.24)

with |Ω| = ∫_Ω 1 dx. Assumption (2.11) in Theorem 2.6 ensures that all the components of Z̄ and Z̄_f are positive. Therefore, there exists ζ > 0 such that Z̄ ≥ ζ and Z̄_f ≥ ζ.
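The fact used above, that the exponential of a quasipositive matrix has only nonnegative entries, is easy to test numerically. A minimal sketch (truncated power series, with an arbitrary quasipositive matrix that is not taken from the paper):

```python
def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def expm(A, terms=60):
    """Matrix exponential by truncated power series (adequate for small matrices)."""
    n = len(A)
    result = [[float(i == j) for j in range(n)] for i in range(n)]  # identity
    term = [row[:] for row in result]
    for k in range(1, terms):
        term = [[x / k for x in row] for row in mat_mul(term, A)]
        result = [[r + t for r, t in zip(rr, tt)] for rr, tt in zip(result, term)]
    return result
```

For instance, with A = [[-2, 1], [3, -1]] (off-diagonal entries nonnegative), every entry of expm(A) is nonnegative, and the decomposition A = P + αI_2 with α = −2 and P = [[0, 1], [3, 1]] recovers expm(A) = e^{−2} expm(P) entrywise, exactly as in the argument above.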
We are now in a position to build the trajectory Z going from Z_0 to Z_f. This will take several steps, which are summarized in Figure 1. Let δ > 0 and τ > 0.
(1) On the time interval [T_0, T_0 + τ], we define a control V in C∞_0((T_0, T_0 + τ) × ω)^m such that the solution Z of (3.16) satisfies Z(T_0 + τ) = Z̄. Lemma 3.3 ensures that V can be taken such that, for all t ∈ [T_0, T_0 + τ], the estimation (3.26) holds, and therefore, using (3.24), for all (t, x) ∈ [T_0, T_0 + τ] × Ω, Z(t, x) remains δ-close to the componentwise positive steady state Z̄.

(2) We introduce the intermediate steady states Z̄_k = (1 − k/N) Z̄ + (k/N) Z̄_f for k ∈ {0, . . . , N}, which all satisfy Z̄_k ≥ ζ, with N chosen large enough so that two consecutive ones are δ-close.

(3) For k ∈ {0, . . . , N − 1}, on the time interval I_k = [T_0 + (k + 1)τ, T_0 + (k + 2)τ], we define a control U_k such that the solution Z satisfies Z(T_0 + (k + 2)τ) = Z̄_{k+1}. According to Lemma 3.3, the control U_k is such that one has, for all t ∈ I_k, the estimation (3.30). At the end of this step, we have reached the steady state Z̄_N = Z̄_f.

(4) On the time interval [T_0 + (N + 1)τ, T_0 + (N + 2)τ], we define a control W in C∞_0((T_0 + (N + 1)τ, T_0 + (N + 2)τ) × ω)^m such that the solution Z satisfies Z(T_0 + (N + 2)τ, ·) = Z_f(T_0 + (N + 2)τ, ·). Lemma 3.3 ensures one more time that W can be taken such that, for all t ∈ [T_0 + (N + 1)τ, T_0 + (N + 2)τ], the corresponding estimation holds, which gives, using (3.24), a lower bound on Z for all (t, x) ∈ [T_0 + (N + 1)τ, T_0 + (N + 2)τ] × Ω.

Overall, taking T = T_0 + (N + 2)τ, which is possible since τ and ζ do not depend on δ, we have found a control U in C∞_0(ω_T)^m such that Z satisfies (3.20) and (3.21).

The proof of Theorem 2.8 is again based on building a "staircase", made this time of non-constant trajectories. Note that, to ensure that these trajectories do not go too far away from each other, we start the proof with a simple change of variable that makes the trajectories globally bounded. As for the change of variable (3.15) in the proof of Theorem 2.6, this change of variable preserves the quasipositivity of the coupling matrix and the nonnegativity of the solutions.
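The bookkeeping behind the staircase of steady states can be sketched as follows: given an L² cost constant C(τ) as in Lemma 3.3, it suffices to choose the number of steps N so that the deviation allowed at each step stays below δ, while every intermediate target, being a componentwise convex combination of two states bounded below by ζ, stays above ζ. The constants below are hypothetical placeholders:

```python
import math

def staircase_targets(z_start, z_end, delta, cost_const):
    """Intermediate constant targets Z_k = (1 - k/N) z_start + (k/N) z_end.

    N is chosen so that cost_const * |z_start - z_end| / N <= delta, mimicking
    the requirement that each controlled step stays delta-close to a
    nonnegative steady state."""
    dist = math.sqrt(sum((a - b) ** 2 for a, b in zip(z_start, z_end)))
    N = max(1, math.ceil(cost_const * dist / delta))
    return [[(1 - k / N) * a + (k / N) * b for a, b in zip(z_start, z_end)]
            for k in range(N + 1)]
```

For instance, with z_start = (2, 5) and z_end = (4, 1) (both componentwise above ζ = 1), C(τ) = 10 and δ = 0.5, the path has 91 points, all of them componentwise above ζ, and each step satisfies the cost bound.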
Proof of Theorem 2.8. Let τ > 0, ε > 0, Y_0 ∈ L∞(Ω)^n such that Y_0 ≥ 0, and Y_f in L∞(R_+ × Ω)^n a trajectory of (1.1) associated to the initial condition Y_{f,0} ≥ 0. By means of a change of variable Y → e^{−λt} Y with λ > 0 sufficiently large (which is equivalent to changing A into A − λI_n, and does not affect the quasipositivity of the coupling matrix A), we can always assume that A satisfies the following condition:

∀ξ ∈ R^n, ⟨Aξ, ξ⟩ ≤ 0. (3.33)
We deduce from (3.33) that the solution Y of (2.1) starting at any Y_0 is globally bounded. Indeed, taking the scalar product of (2.1) with Y and integrating over Ω, we get

(1/2) d/dt ∫_Ω |Y|² dx = −∫_Ω ⟨D∇Y, ∇Y⟩ dx + ∫_Ω ⟨AY, Y⟩ dx ≤ 0.

Notice that, for all k ∈ {0, . . . , N}, t ≥ 0 and x ∈ Ω, the free trajectory of (2.1) starting from the convex combination (1 − k/N) Y_0 + (k/N) Y_{f,0} is nonnegative, by quasipositivity of A, and globally bounded by the above estimate. We now build the controlled trajectory Y using the staircase strategy. The steps of the construction of Y are represented in Figure 2.
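The energy computation above can be checked on a discretization: with a coupling matrix whose symmetric part is negative semidefinite (so that ⟨Aξ, ξ⟩ ≤ 0 as in (3.33)), the discrete L² norm of the free solution is nonincreasing. The matrix, grid and time step below are illustrative choices:

```python
def free_step(y1, y2, dx, dt, A):
    """Explicit Euler step of the free two-component system, Neumann BCs (mirror)."""
    n = len(y1)
    def lap(y, i):
        left = y[i - 1] if i > 0 else y[1]
        right = y[i + 1] if i < n - 1 else y[n - 2]
        return (left - 2 * y[i] + right) / dx ** 2
    new1 = [y1[i] + dt * (lap(y1, i) + A[0][0] * y1[i] + A[0][1] * y2[i])
            for i in range(n)]
    new2 = [y2[i] + dt * (lap(y2, i) + A[1][0] * y1[i] + A[1][1] * y2[i])
            for i in range(n)]
    return new1, new2

def l2_sq(y1, y2, dx):
    """Discrete (trapezoid-rule) squared L2 norm of the pair (y1, y2)."""
    n = len(y1)
    w = [0.5 if i in (0, n - 1) else 1.0 for i in range(n)]
    return dx * sum(w[i] * (y1[i] ** 2 + y2[i] ** 2) for i in range(n))
```

With A = [[-3, 1], [1, -3]] (quasipositive, with symmetric negative definite part), the computed energies decrease monotonically along the free evolution, matching the sign of the right-hand side above.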

Proof of Theorem 2.12
Our proof of the existence of a positive minimal time for controllability of System (1.1) relies on proving the existence of such a minimal time for a scalar heat equation with a potential, a source term and boundary control. The arguments are inspired by those presented in the proof of [2, Theorem 4.1], which proves the same result for the standard heat equation.
Proof. Let M > 0 and Y_0 ∈ L²(Ω)^n, and let Y_f ∈ L²(Ω_T)^n be a trajectory of System (1.1). Assume that there exist T > 0 and U ∈ L²(ω_T)^m such that the solution of (1.1) starting at Y_0 with control U satisfies Y(T, ·) = Y_f(T, ·) and

∀(t, x) ∈ Ω_T, Y(t, x) ≥ −M. (4.1)

Let B be a nonempty open ball included in Ω \ ω. The first component y_1 of Y then satisfies on B the equation

∂_t y_1 = d_11 ∆y_1 + a_11 y_1 + f(t, x) in (0, T) × B,    y_1 = v(t, x) on (0, T) × ∂B. (4.2)
In (4.2), f(t, x) = Σ_{j=2}^n (a_1j + d_1j ∆) y_j contains the coupling terms from System (1.1), and v(t, x) is the trace of the solution Y of (1.1) with control U. Due to interior parabolic regularity results inside the domain Ω, the solution Y restricted to B belongs to L²((0, T), H²(B)^n), which notably ensures that f ∈ L²((0, T) × B). Thus, (4.2) can be seen as a scalar heat equation with a linear potential, a source term, and Dirichlet control on the whole boundary. Moreover, Assumption (4.1) requires that the control v in (4.2) satisfies v ≥ −M at all times.
Since we will only be considering System (4.2) from now on, let us rename y 1 by y and a 11 by a to lighten notations.
Let y_0 be the restriction to B of the first component of Y_0, and y_f the restriction to (0, T) × B of the first component of Y_f. Let also y_{f,0} = y_f(0, ·). We define as above the minimal controllability time T̄(y_0, y_f) for (4.2). Notice that, by construction of (4.2), we have T̄(y_0, y_f) ≤ T̄(Y_0, Y_f). Therefore, if (4.2) has a positive minimal controllability time, then so does (1.1).
Assume that T̄(y_0, y_f) = 0. Following the ideas used in [2] for proving the existence of a minimal time for the heat equation, we will study a spectral decomposition of the solution y of (4.2). Consider the sequence of eigenvalues (λ_n)_{n∈N*} and the associated sequence of eigenvectors (p_n) of the Sturm-Liouville problem (4.4) on [0, 1]. In (4.4), ω_{d−1} = ∫_{∂B} dΓ_x. It is well-known that the sequence (λ_n) is increasing and that lim_{n→+∞} λ_n = +∞. We define ϕ_n(x) = p_n(|x|) for x ∈ B, and α_n = p_n′(1), so that ϕ_n satisfies the adjoint problem

∆ϕ_n + aϕ_n = −λ_n ϕ_n in B,    ϕ_n(x) = 0 and ∇ϕ_n · n⃗(x) = α_n on ∂B. (4.5)

Notice that, thanks to the requirement made on p_n(0) in (4.4), we have ‖ϕ_n‖_{L²(B)} = 1 for all n in N*. Moreover, straightforward computations (see [2, Equation (18)]) give the identity (4.6) that will be useful in the following. Let T > 0 and a control v_T ∈ L²((0, T) × ∂B) be such that the solution y of (4.2) starting at y_0 with control v_T reaches y_f in time T. For n ∈ N* and t ∈ (0, T), define y_n(t) = ∫_B y(t, x)ϕ_n(x)dx. Then, we compute ẏ_n and integrate between 0 and T to obtain (4.11). Now, since v_T + M ≥ 0, we have upper and lower bounds on the right-hand side of (4.11), depending on the sign of λ_n: (4.12) if λ_n ≤ 0, and (4.13) if λ_n > 0. Now, we want to take the limit when T goes to 0 in (4.12) and (4.13). It is obvious that (4.14) holds. Moreover, using the Cauchy-Schwarz inequality and the fact that ‖ϕ_n‖_{L²(B)} = 1, we have the bound (4.16). Using (4.14) and (4.16) in (4.12) and (4.13) when taking the limit yields (4.17). Since the left-hand side of (4.17) does not depend on n, it means that there exists γ ∈ R such that, for all n ∈ N*,

y⁰_n = y^{f,0}_n + α_n γ. (4.18)
The next step is to show that γ = 0. Since y_0 ∈ L²(B), we know that the series Σ_{n∈N*} (y⁰_n)² converges. By (4.18),

Σ_{n∈N*} (y⁰_n)² = Σ_{n∈N*} (y^{f,0}_n)² + Σ_{n∈N*} γα_n (2y^{f,0}_n + α_n γ).

The first sum on the right-hand side converges because y_{f,0} ∈ L²(B). Therefore, the last sum is also finite, so

lim_{n→∞} γα_n (2y^{f,0}_n + α_n γ) = 0. (4.20)

Then, notice that (4.21) holds, and use the identity (4.6) to obtain a bound which does not depend on n. Since lim_{n→+∞} α_n = +∞, we deduce that (4.20) holds only if γ = 0. This means that, for all n ∈ N*, y⁰_n = y^{f,0}_n. Therefore y_0 = y_{f,0}. By contraposition, this proves the theorem.

Discussion and open problems
We have studied the problem of nonnegative controllability for coupled reaction-diffusion systems. Our results show that one can control such a system to trajectories in large time, with approximately nonnegative state, using the staircase method. Moreover, in the particular case where D = I_n and A only has eigenvalues with nonnegative real part, controllability in large time with nonnegative state holds. In a broader framework (fewer assumptions on A, B and D), we also proved the existence of a positive minimal controllability time for any constraint of the type Y ≥ −M with M ≥ 0. We list a few remarks and open questions below.

Regularity of the control for minimal time.
In [2], the authors show that the heat equation is controllable under a nonnegativity constraint on the state, with a positive minimal time. Moreover, by considering a sequence of controls weakly converging in L¹, they show that controllability in exactly the minimal time can be achieved with a Radon measure control. This result easily transposes to System (1.1). As stated in [2], the question of whether the control in the minimal time can be more regular is still open.
Controllability with nonnegative state in the general case. Remark 2.11 displays an example of a system for which exact nonnegative controllability does not hold. Therefore, an interesting extension of Theorem 2.8 would be to further discuss the restrictions to be made on the initial condition and target state that could help recover exact nonnegative controllability.
Non-autonomous systems. Linear systems like (1.1) with time-dependent matrices A, B and D (nonautonomous systems) are also commonly considered and the question of their state-constrained controllability would be relevant. Controllability without a state constraint for such systems has been established in [21] under a Silverman-Meadows-type condition.
When adding a state constraint, our study suggests that estimations like (3.22) are crucial to establish controllability. As discussed after Lemma 3.3, caution is required to guarantee that these estimations are uniform in time when the system has time-dependent coefficients. This is clearly not the case for all non-autonomous systems; hence, finding conditions on A, B and D that allow controllability with nonnegative state calls for further investigation.
Boundary control. Boundary control for coupled systems of parabolic equations is a difficult problem, and controllability even without a state constraint is not resolved as of today in the general case. The case d = 1 and some particular cases when d > 1 have been dealt with; we refer the reader to the survey paper [22] and more recent advances made in [23,24]. A study of state-constrained controllability for these cases, potentially through straightforward adaptation of the staircase argument, would be an interesting continuation of this work.
Nonlinear case. A natural extension of this work would be the generalization of our results to semilinear parabolic systems. Let us briefly review the state of the art for controllability and state-constrained controllability of such systems and give some perspectives on future research. Consider a scalar semilinear heat equation (5.1), with a nonlinearity f and a boundary control u acting on a part Γ of the boundary. As for nonnegative-state controllability, first results have been stated in [6] in two particular cases:

(1) (Controllability between steady states). Assume that y_0 and y_1 are two nonnegative steady states of (5.1). Then, there exists a time T > 0 and a control u ∈ L∞((0, T) × Γ) such that the solution y of (5.1) with initial condition y_0 satisfies y(T) = y_1 and, for all t in (0, T), y(t) ≥ 0.
(2) (Controllability in the dissipative case). In the dissipative case (sf(s) ≤ 0 for all s ∈ R), System (5.1) is controllable to trajectories in large time with nonnegative state.
In both cases, there exists a positive minimal time. Let us also mention [5], which shows similar results for a semilinear equation with boundary control, and also that state-constrained controllability fails outside of these two particular cases.
To extend the results of Theorem 5.1 to more general nonlinearity f or arbitrary initial and target data, the main challenge in the semilinear case compared to the linear case is that, even in a favourable case where the nonlinearity f is globally Lipschitz, the staircase method does not work anymore, because the trajectories might move away from each other exponentially in time. Therefore, the small fixed-size steps of the staircase do not ensure that the controlled trajectory will eventually reach the target trajectory. We even conjecture that this type of behaviour might make state-constrained controllability fail, and are conducting research to find a counterexample.
Finally, for a semilinear system of coupled equations of the form

∂_t Y = D∆Y + F(Y) + BU 1_ω in Ω, (5.4)

with Y ∈ R^n and a nonlinearity F : R^n → R^n, little is known about global controllability to trajectories. Moreover, for scalar equations, most state-constrained controllability results rely to some extent on the maximum principle, which does not hold for coupled systems like (5.4). Hence the question of state-constrained controllability for these systems remains largely open.