A PENALTY APPROACH TO THE INFINITE HORIZON LQR OPTIMAL CONTROL PROBLEM FOR THE LINEARIZED BOUSSINESQ SYSTEM

In this paper, we consider the infinite time horizon LQR optimal control problem for the linearized Boussinesq system. The goal is to justify the approximation by penalization of the free divergence condition in this context. We establish convergence results for optimal controls, optimal solutions and Riccati operators when the penalization parameter goes to zero. These results are obtained under two different assumptions. The first one treats the linearization around a sufficiently small stationary state and an arbitrary control operator (possibly of finite rank), while the second one no longer requires the smallness of the stationary state but requires controls distributed in a subdomain and depending on the space variable.

Mathematics Subject Classification. 49N10, 93B05.

Received November 29, 2020. Accepted January 13, 2021.

Dedicated to Enrique Zuazua for his 60th birthday.

of the free divergence condition. Within this approach, denoting the velocity field of the fluid by v and the pressure field by p, the free divergence condition div v = 0 is replaced by div v + εp = 0. As described in the next section, this procedure yields a standard well-posed linear system (governed by a PDE of parabolic type with distributed control). For simulation purposes this approach has been widely used for Stokes, Navier-Stokes or Boussinesq type systems. We refer to Temam [31] for the first analysis of the method (for the Navier-Stokes equations) and to Hebeker et al. [17] and Shen [27,28] for error estimates in the case of non-stationary Stokes and Navier-Stokes systems. From an optimal control perspective, the penalty method has been applied in Badra, Buchot and Thevenet [2] to the finite horizon LQR problem, where the authors prove convergence results for the penalized Oseen system with boundary control.
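In finite dimensions, the penalization has a transparent linear-algebra analogue that may help fix ideas. The sketch below (illustrative matrices only, not taken from the paper) replaces a discrete Stokes-type saddle-point system by the penalized velocity-only problem and checks that the penalized velocity converges as ε → 0:

```python
import numpy as np

# Finite-dimensional analogue of the penalty method (illustrative only).
# Constrained ("Stokes-like") system:  A v - B.T p = f,  B v = 0,
# where B plays the role of the discrete divergence (so -B.T is the
# discrete gradient).  The penalty method replaces the constraint by
# B v + eps * p = 0, i.e. p_eps = -(1/eps) B v_eps, which leads to the
# velocity-only problem  (A + (1/eps) B.T B) v_eps = f.
rng = np.random.default_rng(0)
n, m = 8, 3
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)            # symmetric positive definite block
B = rng.standard_normal((m, n))        # full row rank with probability one
f = rng.standard_normal(n)

# Reference solution of the constrained saddle-point system.
K = np.block([[A, -B.T], [B, np.zeros((m, m))]])
vp = np.linalg.solve(K, np.concatenate([f, np.zeros(m)]))
v_exact = vp[:n]                       # satisfies B v_exact = 0

def penalized_velocity(eps):
    return np.linalg.solve(A + (1.0 / eps) * (B.T @ B), f)

errs = [np.linalg.norm(penalized_velocity(e) - v_exact) for e in (1e-1, 1e-2, 1e-3)]
print(errs)  # decreasing with eps
```

Solving the same system for decreasing ε mirrors, at a purely algebraic level, the convergence statements (1.9).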
The main results of this work assert that, under appropriate assumptions, for every (v_0, y_0) ∈ H we have

lim_{ε→0} u_{opt,ε} = u_{opt} and lim_{ε→0} (v_{opt,ε}, y_{opt,ε}) = (v_{opt}, y_{opt}), (1.9)

in the sense which will be made precise in Theorems 3.1 and 3.2.
Control problems with an approximation by penalization of the free divergence condition have been studied in different ways. From a controllability point of view, the null-controllability of the Stokes system, respectively of the Navier-Stokes system, is established in [19], respectively in [1], with as many controls as equations. More recently, [6] proves the null-controllability of the Stokes system in 2D with only one control, a result which has been extended in [7] to the (nonlinear) Boussinesq system in 2D and 3D for cylindrical geometries. For LQR optimal control problems, to the best of our knowledge, the only article in the literature seems to be [2], where the authors prove results in the spirit of (1.9) for the Oseen system, with boundary control, and for a finite time horizon. We emphasize that the error estimates provided in [2] explode when the horizon time goes to +∞.
The main novelty brought in by this work consists in obtaining convergence results for infinite time horizon LQR problems. An essential ingredient to accomplish this goal is a version of the strategy proposed in Banks and Kunisch [5] for Galerkin approximations, see also Banks and Ito [4]. More precisely, the method essentially consists in the three main steps described below.
1. A question which is common to the various situations we consider is proving that the Riccati operator associated to (1.8) is bounded, uniformly with respect to ε, and that the optimal trajectories decay exponentially, uniformly with respect to ε. This will be shown to hold under two different assumptions. The first one considers an arbitrary bounded control operator B (possibly of finite rank) and assumes a smallness condition for the stationary state. This is first done for the case of a vanishing stationary state and then by a perturbation argument. The second situation considered in this work treats the case of a control distributed in a subdomain O ⊂ Ω. In this case the control space is infinite dimensional, which enables us to derive uniform null-controllability results for (1.4) and (1.6). This task is accomplished by combining Carleman estimates for the penalized Stokes system due to [1] and Carleman estimates for the heat equation due to [14].
2. Secondly, we prove (1.9) for finite time horizon problems. In order to do this, we first establish convergence results for solutions of (1.6) to solutions of (1.4), in the limit ε → 0 and on compact time intervals. This is obtained through an adaptation of the Trotter-Kato theorem and the use of Lax-Phillips semigroups. Then, using the well-known formulas for optimal controls and optimal trajectories, given for instance in [15], we can pass to the limit to deduce (1.9).
3. Finally, by using the uniform bounds and decays established in the first step of the strategy, we can approximate the infinite time horizon problem on the interval [0, ∞) by finite time horizon problems on intervals [−s, 0] with s > 0. By gathering this approximation and the fact that (1.9) holds on [−s, 0], we obtain the expected convergence results (1.9) for the infinite time horizon problem.
The paper is organized as follows. In Section 2, we introduce a functional analytic framework to establish the well-posedness of the control systems (1.4) and (1.6) and we recall some basic facts about LQR problems in infinite horizon. In Section 3 we state the main results of the paper, which basically formalize (1.9) under appropriate assumptions. Section 4 mainly relies on convergence results (in the limit ε → 0 + ) and uniform bounds (in ε > 0) for semigroups associated to penalized problems. In Section 5, we prove uniform stabilizability results and null-controllability results for penalized systems. With all these preliminary results at hand, Section 6 is dedicated to the proof of the main results of the paper. Finally, Section 7 contains some final comments and open questions.

Semigroup formulation and background on LQR problems
For the remaining part of this work we assume that the stationary solution

Well-posedness of the controlled systems
We introduce below a semigroup formulation of (1.4) and (1.6). To this aim we introduce the spaces where P is the Leray projector. Equivalently, A 0 can be defined by duality, by setting: We will need the result below, for which we sketch the proof, for the sake of completeness, with no claim of originality.
Proposition 2.1. The operator A_0 defined above generates an analytic semigroup on H, denoted by T^0 = (T^0_t)_{t≥0}.
Proof. By integration by parts and Young's inequality, it is not difficult to see that there exists a positive constant c, depending on ‖(v_s, y_s)‖_{W^{1,∞}(Ω)^{d+1}}, such that for every (ϕ, ψ) ∈ D(A_0), where we recall the notation X = L^2(Ω)^{d+1}. Thus the sesquilinear form associated to −A_0 + cI (this form is simply defined by the right hand side of (2.5)) is coercive on H. We also readily check that this form is densely defined, closed and continuous on H. By Theorem 1.52 of [21], A_0 − cI generates an analytic semigroup on H and finally, by a perturbation argument (see, for instance, [22], Sect. 3.2, Cor. 2.2), A_0 generates an analytic semigroup on H.
For every ε > 0, consider the operator A ε : D(A ε ) → X defined by Alternatively, A ε can be defined by We have the following result.
Proposition 2.2. For every ε > 0 the operator A_ε defined above generates an analytic semigroup on X = L^2(Ω)^{d+1}, denoted by T^ε = (T^ε_t)_{t≥0}. Moreover, there exist M ≥ 1 and ω ∈ R such that

Proof. By integration by parts and Young's inequality, it is not difficult to see that there exists a positive constant c, depending on ‖(v_s, y_s)‖_{W^{1,∞}(Ω)^{d+1}} (but independent of ε > 0), such that for every Thus the sesquilinear form associated to −A_ε + cI is coercive on X. We also readily check that this form is densely defined, closed and continuous on X. By Proposition 5.1 of [21] and Theorem 1.52 of [21], A_ε − cI generates an analytic semigroup T^ε satisfying Since the constant c is independent of ε, we can apply ([22], Sect. 3.2, Cor. 2.2) to obtain that for every ε > 0 the operator A_ε generates an analytic semigroup on X and that (2.9) holds.
Let us introduce the projector P H ∈ L(X; H) defined by where P is the Leray projector. Let U be a Hilbert space (the space of controls). We introduce the control operators B ∈ L(U ; X) and B 0 := P H B ∈ L(U ; H). (2.11) We have the following well-posedness result for controlled systems.

admits a unique mild solution (v, y), defined by

Some background on LQR optimal control problems
We recall below some basic facts on LQR problems in an infinite dimensional context. Let A be the generator of a strongly continuous semigroup on a Hilbert space X and let B ∈ L(U; X) be a bounded control operator, where U is a Hilbert space. The LQR theory in infinite horizon focuses on trajectories of the linear control system ẋ = Ax + Bu, x(0) = x_0, which minimize the quadratic cost functional We recall below the well-known notion of stabilizability.
Definition 2.4. The pair (A, B) is stabilizable if there exists K ∈ L(X, U) such that A − BK generates an exponentially stable semigroup (F^K_t)_{t≥0} on X, i.e. there exist M > 0 and ω > 0 such that

One of the main results of the LQR theory states that the optimal control associated to (2.16) is given by a feedback law, see, for instance, Theorem 2.1 of [5] and Theorem 4.2 of [15].
Theorem 2.5. If the pair (A, B) is stabilizable then for every x_0 ∈ X, J(·; x_0) admits a unique minimum u_{opt} given by where P ∈ L(X; X) is the unique nonnegative self-adjoint solution of the Riccati equation

and F = (F_t)_{t≥0} is the exponentially stable semigroup on X generated by A − BB*P. Moreover, for every We recall that P ∈ L(X; X) is a solution to the algebraic Riccati equation (2.18) if P maps D(A) to D(A*) and (2.18) is satisfied when the left hand side is applied to an arbitrary x ∈ D(A).
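Theorem 2.5 has a classical finite-dimensional counterpart which can be checked numerically. In the sketch below the matrices are chosen purely for illustration and the cost is assumed to be of the form ∫₀^∞ (‖x‖² + ‖u‖²) dt; we compute P from the algebraic Riccati equation and verify that A − BB*P is exponentially stable:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Finite-dimensional illustration of Theorem 2.5 (illustrative data).
# For a stabilizable pair (A, B) and the assumed cost
# J(u; x0) = int_0^oo (|x|^2 + |u|^2) dt, the optimal control is the
# feedback u_opt = -B^T P x, where P solves the algebraic Riccati
# equation  A^T P + P A - P B B^T P + I = 0.
A = np.array([[0.0, 1.0], [2.0, -1.0]])   # unstable drift (eigenvalues 1, -2)
B = np.array([[0.0], [1.0]])              # (A, B) is controllable

P = solve_continuous_are(A, B, np.eye(2), np.eye(1))

# The Riccati residual should vanish ...
residual = A.T @ P + P @ A - P @ B @ B.T @ P + np.eye(2)
# ... and the closed-loop generator A - B B^T P must be stable.
closed_loop = A - B @ B.T @ P
print(np.max(np.abs(residual)))                     # ~ 0
print(np.max(np.linalg.eigvals(closed_loop).real))  # < 0
```

The stability of the closed-loop generator is exactly the finite-dimensional analogue of the exponential stability of the semigroup F in the theorem.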

Main results
In this section we continue to use the notation introduced in Section 2 for the spaces H, X, U and for the operators (A_ε)_{ε≥0}, B_0 and B (we recall, in particular, from (2.11) that B_0 = P_H B). Moreover, we introduce some new notation and we state the main results of the paper.
Assume that (A_0, B_0), with A_0 defined in (2.4) and B_0 defined in (2.11), is stabilizable. By Theorem 2.5, we denote by Π_0 ∈ L(H) the unique nonnegative self-adjoint solution of In the same way, assume that (A_ε, B) is stabilizable, with A_ε defined in (2.7) and B defined in (2.11). We then denote by Π_ε ∈ L(X) the unique nonnegative self-adjoint solution of In (3.2), I_X denotes the identity operator on X. The closed loop semigroup, i.e. the one generated by A_ε − BB*Π_ε, is denoted by (F^ε_t)_{t≥0}, and the optimal control associated to an initial datum (v_0, y_0) ∈ X is given by u_{opt,ε}. The main results of the article focus on asymptotic properties of Π_ε, (F^ε_t)_{t≥0} and u_{opt,ε} in the limit ε → 0.
The first main result of the paper treats the case of a stationary state (v_s, y_s) which is "sufficiently small", with an arbitrary control space U and an arbitrary bounded control operator (possibly of finite rank).
then A 0 is exponentially stable and A ε is uniformly (with respect to ε) exponentially stable. Moreover, there exist M > 0 and ω > 0, independent of ε such that the solution

In addition, for every (v_0, y_0) ∈ H, the convergences (3.6)-(3.8) hold, and the last two convergences are uniform with respect to t on compact intervals.
Our second main result no longer requires the smallness of (v_s, y_s), considering instead a particular class of control operators.

Preliminary results on the (penalized) Boussinesq semigroup
In this section and the following ones we continue to use the notation H := V 0 (Ω) × L 2 (Ω) and X = L 2 (Ω) d+1 for the state space of the system described by the Boussinesq equations and their penalized version, respectively. We recall that the operator A 0 : D(A 0 ) → H has been defined in (2.4), whereas for every ε > 0 the operator A ε has been defined in (2.7). As in the previous sections, the analytic semigroup on H generated by A 0 is denoted by T 0 , whereas, for every ε > 0, T ε stands for the analytic semigroup on X generated by A ε .

Convergence results for the semigroups describing the free dynamics
In this subsection we prove that, as ε → 0+, the family of semigroups (T^ε)_{ε>0} converges strongly to T^0 (and similarly for the adjoint semigroups). The proof relies on an adaptation of the Trotter-Kato theorem, described below.
We begin by noticing that the adjoint In the same way, for every ε > 0 the adjoint In (4.1), (4.2), the notation (∇ϕ) tr means the transpose of the matrix ∇ϕ.
We first prove the convergence of the resolvents of A_ε (respectively A*_ε) towards the resolvents of A_0 (respectively A*_0).

Proposition 4.1. There exists λ_0 > 0 such that for every λ ∈ C with Re λ ≥ λ_0, we have

Proof. We only prove (4.3), since the proof of (4.4) is fully similar.
where the constant c is the one appearing in (2.10). Then, for λ ∈ C with Re λ ≥ λ_0, by setting and by using (2.8) we have Taking (ψ, η) = (ϕ_ε, ζ_ε) in (4.6) and using (2.10), we obtain where c > 0 is another constant. From this and the Poincaré inequality it follows that there exists (another) The above estimate implies that there exists For (ψ, η) ∈ D(A_0), noting that div ψ = 0, we can pass to the limit in (4.6) to obtain that then using

Note that, arguing as in Propositions 2.1 and 2.2, the operator A*_0 generates an analytic semigroup on H, denoted by (T^0)* = ((T^0_t)*)_{t≥0}, and for every ε > 0 the operator A*_ε generates an analytic semigroup on X, denoted by (T^ε)* = ((T^ε_t)*)_{t≥0}. Moreover, there exist M ≥ 1 and ω ∈ R such that

An important consequence of Proposition 4.1 is the following result: uniformly with respect to t on compact intervals.
Proof. We only prove (4.10). The proof of (4.11) follows the same scheme, using (4.4) instead of (4.3). We follow Section 3.4, Theorem 4.2 of [22]. We take λ ≥ λ_0, where λ_0 is defined in Proposition 4.1, and we note By using the fact that the resolvent commutes with the associated semigroup, see Section 1.2, Theorem 2.4, c) of [22], we have It follows from (4.3) and (2.9) that D^ε_1 For the term D^ε_2 z_0, we use the fact, proved in Section 3.4, Lemma 4.1 of [22], that The integrand in the right hand side of the above estimate is bounded by a function of s independent of ε and tends to 0 as ε → 0 by (4.3). So, by Lebesgue's dominated convergence theorem, we have and the limit is uniform on [0, T]. Hence D^ε_2 R^0_λ z_0 → 0 as ε → 0 uniformly on [0, T] for every z_0 ∈ H. Moreover, every z_0 ∈ D(A_0) can be written as z_0 = R^0_λ z_1 with z_1 ∈ H, so we deduce that D^ε_2 z_0 → 0 as ε → 0 uniformly on [0, T] for every z_0 ∈ D(A_0). We have thus proved that
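The mechanism of this Trotter-Kato type argument (resolvent convergence plus uniform bounds implies semigroup convergence, uniformly on compact time intervals) can be observed directly in finite dimensions. The matrices below are illustrative only and are not taken from the paper:

```python
import numpy as np
from scipy.linalg import expm

# Finite-dimensional sketch of the Trotter-Kato argument: if the
# resolvents (lambda I - A_eps)^{-1} converge to (lambda I - A_0)^{-1},
# then e^{t A_eps} z0 -> e^{t A_0} z0 uniformly on compact intervals.
A0 = np.array([[-1.0, 0.5], [0.0, -2.0]])

def A(eps):
    # perturbed generator, playing the role of the penalized operator
    return A0 + eps * np.array([[0.3, -0.1], [0.2, 0.4]])

lam, I = 5.0, np.eye(2)
res_gap = [np.linalg.norm(np.linalg.inv(lam * I - A(e)) - np.linalg.inv(lam * I - A0))
           for e in (1e-1, 1e-2, 1e-3)]

z0 = np.array([1.0, -1.0])
def sup_gap(eps, T=2.0):
    # sup over [0, T] of |(e^{tA_eps} - e^{tA_0}) z0| on a time grid
    ts = np.linspace(0.0, T, 50)
    return max(np.linalg.norm((expm(t * A(eps)) - expm(t * A0)) @ z0) for t in ts)

print(res_gap)                                    # resolvent convergence
print([sup_gap(e) for e in (1e-1, 1e-2, 1e-3)])   # uniform-in-t convergence
```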

Lax-Phillips semigroups
In this section we recall, following Staffans and Weiss [30], the concept of Lax-Phillips semigroups associated to a control system which is then used, combined with the Trotter-Kato theorem, to prove the convergence of the penalized Boussinesq control systems towards the usual one.
Let A : D(A) → X be the generator of a strongly continuous semigroup F = (F_t)_{t≥0}. We denote by ω_F the growth bound of F and we fix ω > ω_F. Let B ∈ L(U, X) be a bounded control operator. We denote where (e_ω v)(t) = e^{ωt} v(t) and ‖e_ω v‖_{L²_ω} = ‖v‖_{L²}. For later use, we also introduce the notation It is known (see Grabowski and Callier [16] for the first derivation of this result and Staffans and Weiss [30] for a presentation of these results in a much more general context) that the operator C : D(A) → X × U_ω defined by generates a strongly continuous semigroup Ξ on X × U_ω. More precisely, we have where (Φ_t)_{t≥0} are the output maps defined by and S*_t is the left shift by t on L²_{loc}([0, ∞); U), i.e.,

Remark 4.3. Note that, as mentioned in [30], the growth bound of Ξ equals ω.
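The displays elided above can be summarized as follows; this is our hedged transcription of the standard Lax-Phillips construction in Staffans and Weiss [30], and the paper's exact formulas may differ in notation:

```latex
\Xi_t \begin{pmatrix} x \\ u \end{pmatrix}
  = \begin{pmatrix} F_t x + \Phi_t u \\ S_t^{*} u \end{pmatrix},
\qquad
\Phi_t u = \int_0^t F_{t-\sigma}\, B\, u(\sigma)\, \mathrm{d}\sigma,
\qquad
(S_t^{*} u)(\sigma) = u(t + \sigma) \quad (\sigma \ge 0).
```

In words: the first component evolves the state under the semigroup and feeds in the control through the convolution maps Φ_t, while the second component shifts the remaining control to the left.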
Let us now return to our (penalized) Boussinesq context, with the notation for the operators A_ε and semigroups T^ε recalled at the beginning of this section. We know from Propositions 2.1 and 2.2 that there exist M ≥ 1 and ω̃ ∈ R such that the corresponding semigroups satisfy For ω > ω̃, we consider the space U_ω introduced in (4.14), taking U = U.
We can introduce the Lax-Phillips semigroup Ξ^0 = (Ξ^0_t)_{t≥0} on H × U_ω, associated to the pair (A_0, B_0 = P_H B) and, for every ε > 0, the Lax-Phillips semigroup Ξ^ε = (Ξ^ε_t)_{t≥0} on X × U_ω, associated to the pair (A_ε, B). The main result of this section is stated just below; it will be used several times in the remaining part of this work.

Proposition 4.5. With the above notation, we have

Moreover, the convergence above is uniform for t in compact intervals.

Uniform constants in Datko's theorem
The goal of this subsection is to provide a version of a well-known theorem of Datko, in which we make explicit the fact that the exponential decay rate of the semigroup depends only on the constants involved in assumptions (4.35) and (4.36) below.
Proposition 4.7. Let A be the generator of a strongly continuous semigroup (F_t)_{t≥0} on a Hilbert space X. Assume that there exists a positive constant C > 0 such that Then there exist M > 0 and ω > 0, depending only on C, such that

Proof. We closely follow the methodology used, for instance, in Tucsnak and Weiss ([32], Sect. 6.1) to prove Datko's theorem. First, from (4.35), we have that for every τ > 0 there exists m_τ ≥ 1 (depending also on C) such that ‖F_t‖_{L(X;X)} ≤ m_τ for all t ∈ [0, τ]. Then By combining (4.38) and (4.36) it follows that where k_τ = √τ m_τ. In particular, we see from the above that ‖F_{kτ}‖_{L(X;X)} ≤ c_0 κ_τ for every τ > 0 and every k ∈ N (and this holds also for k = 0). Hence, for every τ > 0, every n ∈ N and every h ∈ X, By (4.39) we get that Finally, combining the above estimate with a classical result (see, for instance, Tucsnak and Weiss [32], formula (2.1.3)), it follows that which ends the proof.
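In finite dimensions the quantitative link in Proposition 4.7 between the L²-in-time bound and exponential decay can be checked through a Lyapunov equation: the integral ∫₀^∞ ‖e^{tA} x₀‖² dt equals x₀ᵀ P x₀, where Aᵀ P + P A = −I. The matrix below is illustrative only:

```python
import numpy as np
from scipy.integrate import quad
from scipy.linalg import expm, solve_continuous_lyapunov

# Numerical illustration of the Datko-type estimate: for a stable
# generator A, the L^2-in-time quantity int_0^oo |e^{tA} x0|^2 dt is
# finite and equals x0^T P x0, where P solves A^T P + P A = -I.
A = np.array([[-1.0, 3.0], [0.0, -2.0]])          # stable, non-normal generator
P = solve_continuous_lyapunov(A.T, -np.eye(2))    # A^T P + P A = -I

x0 = np.array([1.0, 1.0])
integral, _ = quad(lambda t: np.linalg.norm(expm(t * A) @ x0) ** 2, 0.0, 50.0)

print(integral)      # numerical value of int_0^oo |e^{tA} x0|^2 dt
print(x0 @ P @ x0)   # Lyapunov value: the two should agree
```

For this particular A and x₀ both quantities equal 2.5, and the uniform constant C of the proposition corresponds to a uniform bound on ‖P‖.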

Uniform stabilizability and null-controllability results
In the first part of this section we show that, under a smallness assumption on the stationary state (v_s, y_s) (see Prop. 5.1), the semigroups T^ε generated by the operators A_ε defined in (2.7) are uniformly (with respect to ε) exponentially stable. This will be a basic ingredient of the proof of Theorem 3.1. In the second part of this section we derive the corresponding ingredient of the proof of Theorem 3.2, which is a uniform (again with respect to ε) null-controllability result for the pair (A_ε, B), where B is the operator defined in (3.9).

Small stationary states and arbitrary control operator
In this subsection we show that if the stationary state around which we linearize the problem is small (in an appropriate sense), then the semigroups (T^ε)_{ε>0} are uniformly exponentially stable. More precisely, we have:

Proposition 5.1. There exists δ > 0 such that if (3.3) holds then A_ε generates a uniformly stable semigroup on X, i.e. there exist M ≥ 1 and ω > 0, independent of ε, such that

Proof. We split the proof into three steps.
Step 1. For every ε > 0, consider the operator Ã_ε : In other words, in view of (2.7), we have To show that the resolvent set of Ã_ε contains the half plane C_{−β} := {z ∈ C ; Re z > −β} for some β > 0, we note that the resolvent equation for Ã_ε, that is can, according to (2.8) and (5.2), be equivalently written as This proves that the resolvent set of Ã_ε indeed contains C_{−β} for β = min(β_1, β_2) > 0.
Step 2. We remark that there exists σ ∈ (0, β) such that σI + Ã_ε generates an exponentially stable analytic semigroup. Indeed, this is a direct application of Theorem 4.3, page 118 of [22], using Step 1 and Proposition 2.2.
Step 3. We show that the operator E, with domain D(E) = H^1_0(Ω)^{d+1}, defined by is (σI + Ã_ε)-bounded in the usual sense, recalled below. To this end, let (ϕ, ψ) ∈ D(E). We know from Step 1 that (ϕ, ψ) = (−σI − Ã_ε)^{−1}(f, g) for some (f, g) ∈ X, and there exists a constant C > 0 such that Assuming that (3.3) holds, we have by using (5.7) that Taking δ > 0 sufficiently small and using Corollary 2.3, page 81 of [22], we obtain that Ã_ε + σI + E = A_ε + σI generates a bounded analytic semigroup on X, so A_ε generates a uniformly stable semigroup on X (note that σ is independent of ε > 0). This concludes the proof.

Possibly large stationary states and controls distributed in a subdomain
In the general case of a stationary state (v_s, y_s) ∈ W^{1,∞}(Ω)^{d+1}, with no smallness assumption, we cannot ensure that the operator A_ε generates a stable semigroup, so we use the control operator B defined in (3.9) to stabilize this semigroup, uniformly with respect to (small enough) ε.
The main contribution in this subsection is the following uniform (with respect to ε) null controllability result.
Remark 5.3. Using the standard duality between null controllability and final state observability (see, for instance, [32], Thm. 11.2.1), Proposition 5.2 is a direct consequence of the following result:

Proposition 5.4. With the notation of Proposition 5.2, for every T > 0 there exist ε_0 > 0 and κ > 0 such that for every ε ∈ (0, ε_0), where, for every ε > 0, (T^ε)* is the analytic semigroup on X generated by A*_ε.
The remaining part of this subsection is essentially devoted to the proof of Proposition 5.4.
Remark 5.5. Recalling that the adjoint A*_ε is defined in (4.2) and noticing that B* ∈ L(X; U) is given by we see that the observability estimate (5.9) can be rephrased in PDE terms as follows: for every (Φ_0, Θ_0) ∈ X, the solution of in Ω, where κ is the constant in (5.9).
To prove (5.12), we will use Carleman estimates in the spirit of [1]. We first introduce the classical weights appearing in Carleman inequalities.
Let O_0 ⊂⊂ O and let η ∈ C²(Ω) be such that The existence of such a function is due to Imanuvilov; see Lemma 2.68 of [11] for a proof. Then, we introduce Finally, for k ≥ 2 and λ > 1, we define Setting Q_T = (0, T) × Ω, we also introduce the notation We are now in a position to recall the standard Carleman estimate for the heat equation with homogeneous Dirichlet boundary conditions, see [13].
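For orientation, the weights alluded to above classically take the following Fursikov-Imanuvilov form; this is a hedged reconstruction (the definitions are elided in the present extraction and may differ in minor details), consistent with the parameters k ≥ 2 and λ > 1 introduced above:

```latex
\xi(t,x) = \frac{e^{\lambda \eta(x)}}{\bigl(t(T-t)\bigr)^{k}},
\qquad
\alpha(t,x) = \frac{e^{2\lambda \|\eta\|_{\infty}} - e^{\lambda \eta(x)}}{\bigl(t(T-t)\bigr)^{k}},
\qquad (t,x) \in Q_T .
```

Both weights blow up as t → 0⁺ and t → T⁻, which forces the weighted norms appearing in the Carleman estimates (such as I_0 below) to vanish at the endpoints of the time interval.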
Proposition 5.6. There exists λ_0 > 0 such that for all λ ≥ λ_0 there exist two constants c_0(λ) and s_0(λ) such that for all s ≥ s_0(λ) and for every Θ_0 ∈ L²(Ω), g ∈ L²(Q_T), the solution Θ of

One of the main ingredients of the proof of (5.12) is a uniform Carleman estimate for the penalized Stokes system in Ω, (5.16) with where the functions appearing for i = 1, 2 and j = 1, . . . , n are given. In the previous equations, the notation ∇ × f_1 is used for the curl of f_1. To state this Carleman estimate we introduce the integral quantities: and We have the following Carleman estimate, proved in Theorem 2.2 of [1].
Proof. In the following proof, the positive constants C can vary from line to line and possibly depend on λ and ‖(v_s, y_s)‖_{W^{1,∞}(Ω)^{d+1}}, but do not depend on ε.
First, by Proposition 5.7, we apply the Carleman estimate for the Stokes penalized system satisfied by Φ with f as in (5.17) and then we obtain Taking s sufficiently large, according to the definition of I 0 , we can absorb the second right hand side term in (5.20) to obtain We now use the Carleman estimate for the heat equation in Proposition 5.6 satisfied by Θ with g = v s · ∇Θ + Φ d and we obtain We can absorb the first right hand side term in (5.22) taking s sufficiently large to obtain We sum (5.21) and (5.23) to obtain and taking again s sufficiently large, we can absorb the first and second right hand side terms of the above inequality to deduce (5.19).
Remark 5.9. The above proof can be adapted, by slightly changing the weights, to obtain that (5.19) holds without the term in the right hand side containing Φ_d. Indeed, by remarking that Φ_d = −∂_t Θ − α∆Θ − v_s · ∇Θ, one can estimate the local term in Φ_d by a local term in Θ and global terms in Θ. Moreover, these global terms can be absorbed by the left hand side of (5.19); see, for instance, the proof of Proposition 2.8 in [7] for the use of such a technique. This actually leads to a null-controllability result as in Proposition 5.2 with a control of the form (7.1).
We are now in a position to prove the main result in this section.
Proof of Proposition 5.4. According to Remarks 5.3 and 5.5, it suffices to prove (5.12). The basic tool is the Carleman estimate (5.19) above. Let λ and s be sufficiently large, such that (5.19) holds. We first remark that there exist c_1, c_2 > 0 such that for some constant c_3 > 0. By using (4.9), i.e. the fact that the semigroups ((T^ε)*)_{ε>0} have a growth bound independent of ε > 0, it follows that there exists c_4 > 0 such that Gathering (5.26) and (5.27), we deduce the expected observability estimate (5.12).
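The duality between the observability estimate just proved and null controllability (Remark 5.3) can be made concrete in finite dimensions: from the controllability Gramian one builds an explicit control steering the state to zero. The data below are illustrative only:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.linalg import expm

# Finite-dimensional illustration of the observability/null-control duality.
# With the Gramian W(T) = int_0^T e^{tA} B B^T e^{tA^T} dt (invertible when
# (A, B) is controllable), the control
#     u(s) = -B^T e^{(T-s) A^T} W(T)^{-1} e^{T A} x0
# steers x' = A x + B u from x(0) = x0 to x(T) = 0.
A = np.array([[0.0, 1.0], [-1.0, 0.5]])
B = np.array([[0.0], [1.0]])
T, x0 = 1.0, np.array([1.0, 2.0])

# Gramian by trapezoidal quadrature on a fine grid.
ts = np.linspace(0.0, T, 2001)
vals = np.array([expm(t * A) @ B @ B.T @ expm(t * A.T) for t in ts])
W = np.sum(0.5 * (vals[:-1] + vals[1:]), axis=0) * (ts[1] - ts[0])

eta = np.linalg.solve(W, expm(T * A) @ x0)
def u(s):
    return -(B.T @ expm((T - s) * A.T) @ eta)

sol = solve_ivp(lambda s, x: A @ x + (B @ u(s)).ravel(),
                (0.0, T), x0, rtol=1e-9, atol=1e-9)
print(np.linalg.norm(sol.y[:, -1]))  # ~ 0: the state has been driven to rest
```

The inverse of the Gramian plays the role of the observability constant κ in (5.9): the smaller κ is, the cheaper the null control.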
Remark 5.10. To obtain the null-controllability result of Proposition 5.2 for the pair (A_0, B_0), one can follow exactly the same steps as above, using Lemma 3.1 of [13] for the Carleman estimates applied to the Stokes system.

Proof of the main results
The goal of this section is to prove Theorem 3.1 and Theorem 3.2. We first use Proposition 5.1 to prove (3.4) and (3.5) for Theorem 3.1.
Proofs of (3.4) and (3.5) for Theorem 3.1. We have from Theorem 2.5 and Proposition 5.1 that which implies (3.5) by using Proposition 4.7.
In order to complete the proofs of Theorem 3.1 and Theorem 3.2 we still need to check the convergence results stated in (3.6), (3.7) and (3.8). This will be done by first proving that similar convergence properties hold for the corresponding finite horizon LQR problems, in the spirit of Appendix in [5]. To this aim, we first recall some general results concerning this type of approximation.
Let A be the generator of a strongly continuous semigroup (T_t)_{t≥0} on a Hilbert space X, and let B ∈ L(U; X) be a bounded control operator, where the input space U is also a Hilbert space. Consider regulator problems on the finite intervals [s, t_f] with −∞ < s < t_f, with a symmetric bounded nonnegative weighting operator G for the final state x(t_f). The finite interval problems are given by In the above context it is known that a unique nonnegative self-adjoint Riccati operator function Π_s can be associated with (R, t_f); see, for instance, Theorem 3.2 of [15]. More precisely, Π_s satisfies the integral Riccati equation Let Π_s be the unique Riccati operator function associated with the problem (R, t_f = 0). If lim_{t→∞} F_t x_0 = 0 for all x_0 ∈ X, where F := (F_t)_{t≥0} is the closed loop semigroup, i.e. the one generated by A − BB*Π, then we have If moreover G ≥ Π (in the sense of symmetric operators) and there exist positive constants M and β such that By coming back to our situation, calling (R_ε, t_f = 0) the finite horizon regulator problem associated to (A_ε, B), and (R_0, t_f = 0) the finite horizon regulator problem associated to (A_0, B_0), we can show the following result, which is an adaptation of Theorem 5.1 of [15] to our context.
the uniform bound for the feedback semigroup stated in (3.5), we conclude that for s < 0 we have Π_ε ≤ Π_{s,ε}(s) ≤ Π_ε + M² e^{2ωs} M. (6.13) Let us fix (v_0, y_0) ∈ H \ {0} and N ∈ N*. From (6.13), we know that there exists C_0 > 0 such that Using the fact that (A_0, B_0) is stabilizable, we have that (6.3) holds for Π (here Π actually denotes Π_0, to avoid confusion with Π_s or Π_ζ). From Theorem 6.1 applied to the problem (R_0, 0) on H, we then know that there exists C_1 > 0 such that Therefore, from the triangle inequality, we deduce that for ζ < − max(C_0, C_1), We know from Theorem 6.2 that the second term in the right hand side of the above inequality goes to 0 as ε → 0, so there exists ε̄ > 0 such that for every ε ∈ (0, ε̄) we obtain which, recalling that N ∈ N* is arbitrary, concludes the proof of (3.6).

We next prove the convergence of the feedback operators, i.e. (3.7). To this end we first note that for every (v_0, y_0) ∈ H we have The quantity defined by the first two terms in the right hand side of (6.17) tends to 0 as ε → 0, uniformly on compact intervals in time. For the remaining terms in the right hand side we write (6.18). Consequently, from (6.17), (6.18) and the uniform bound (2.9), we obtain that, for T > 0, there exists a constant C > 0 such that for every ε > 0 and t ∈ [0, T], Hence, by Gronwall's inequality, we obtain that there exists a positive constant C > 0 such that for every ε > 0 and t ∈ [0, T], By using Proposition 4.2, the convergence of the Riccati operators, i.e. (3.6), and Proposition 4.5, we see that the three terms in the right hand side of the above formula converge to 0 as ε → 0. We then immediately deduce (3.7). Finally, we obtain the convergence of the optimal controls (3.8) by using the expression of the optimal control in terms of the closed-loop semigroup, i.e. (2.17), and the convergence of the closed-loop semigroups, i.e. (3.7).
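The finite-to-infinite horizon approximation used above (the finite horizon Riccati operator on [s, 0] approaches the algebraic one as s → −∞) can be reproduced in finite dimensions. The matrices and the cost ∫(‖x‖² + ‖u‖²)dt below are illustrative only:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.linalg import solve_continuous_are

# Sketch of the finite-to-infinite horizon limit: the Riccati matrix
# Pi_s(s) of the finite horizon problem on [s, 0] with terminal weight
# G = 0 converges, as s -> -oo, to the solution Pi of the algebraic
# Riccati equation.  We integrate the differential Riccati equation
# backwards in time:  -P' = A^T P + P A - P B B^T P + Q,  P(0) = 0.
A = np.array([[0.0, 1.0], [-2.0, 1.0]])   # unstable, controllable with B below
B = np.array([[0.0], [1.0]])
Q = np.eye(2)

Pi = solve_continuous_are(A, B, Q, np.eye(1))

def riccati_rhs(t, p):
    P = p.reshape(2, 2)
    dP = -(A.T @ P + P @ A - P @ B @ B.T @ P + Q)   # backward equation
    return dP.ravel()

def Pi_s(s):
    # integrate from t = 0 down to t = s with P(0) = G = 0
    sol = solve_ivp(riccati_rhs, (0.0, s), np.zeros(4), rtol=1e-9, atol=1e-9)
    return sol.y[:, -1].reshape(2, 2)

gaps = [np.linalg.norm(Pi_s(s) - Pi) for s in (-1.0, -3.0, -6.0)]
print(gaps)   # decreasing: Pi_s(s) -> Pi as s -> -oo
```

The exponential rate of this convergence is governed by the decay of the closed-loop semigroup, exactly as in the estimate (6.13) above.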

Concluding remarks
In this paper we prove that the penalization of the free divergence condition allows the approximation of the infinite horizon LQR problem for the linearized Boussinesq system by the corresponding problems for a sequence of parabolic systems, to which standard theory and available computational techniques fully apply. The main open question left by this work is the study of the action of the obtained controllers when inserted in the nonlinear Boussinesq system. As in the case of the Navier-Stokes equations (see, for instance, [24]), we expect local stabilization results, the difficulty being that we need estimates for the nonlinear problem which are independent of ε.
We end this paper by formulating some comments and some open questions related to our main results in Theorem 3.1 and Theorem 3.2.
-In Theorem 3.2, we can actually replace (3.9) by that is to say, a control operator that acts only on the first (d − 1) velocity components and on the temperature; see Remark 5.9 above.
-In the spirit of Theorem 4.8 of [4], assuming that (3.4) and (3.5) hold true, one may wonder whether we have the convergence in operator norm lim_{ε→0+} ‖Π_ε − Π_0‖_{L(H;X)} = 0.
-For numerical purposes, it would be relevant to obtain error estimates for the convergences (3.6), (3.7) and (3.8), in the spirit of [20].
-A natural generalization, concerning both Theorem 3.1 and Theorem 3.2, could consist in adding an observation operator in the quadratic functionals defined in (1.7) and (1.8). We think that, at least in the case of bounded observation operators, our approach can be adapted to this situation, in the spirit of what has been done in [4] for Galerkin type approximations.
-For proving Theorem 3.1 and Theorem 3.2, the approach we follow crucially uses the fact that B and B_0 are bounded control operators. It would be relevant to extend these results to unbounded control operators in order to treat boundary control operators, see [2] for the finite time horizon problem in the context of the Oseen system.
-Another important issue, in particular when one aims at getting closer to applications, is to show that if (3.3) or (3.9) holds then the semigroup generated by A − B_0 B*_0 Π_ε is uniformly exponentially stable for ε sufficiently small.
-In the spirit of [3], it would also be interesting to obtain local exponential stabilization with a finite number of controls, uniformly with respect to ε, for the (nonlinear) Boussinesq system (1.1), replacing div v = 0 by div v + εp = 0.