Numerical approximation of the averaged controllability for the wave equation with unknown velocity of propagation

We propose a numerical method to approximate the exact averaged boundary control of a family of wave equations depending on an unknown parameter σ. More precisely, we approximate the control, independent of σ, that drives an initial datum to a family of final states at time t = T whose average in σ is prescribed. The idea is to project the control problem onto the finite dimensional space generated by the first N eigenfunctions of the Laplace operator. The resulting discrete control problem has a solution whenever the continuous one does, and we give a convergence result of the discrete controls to the continuous one. The method is illustrated with several examples in 1-d and in 2-d on a square domain.


Introduction
Consider the wave equation with an unknown parameter σ and a control f acting on one part of the boundary, where Ω is an open bounded domain in R^N with smooth boundary Γ, Γ_0 is an open nonempty subset of Γ, χ_{Γ_0} is the characteristic function of the set Γ_0, t ∈ [0, T], T > 0, Q = Ω × (0, T), and a(σ) ∈ L^∞ is the square of the velocity of propagation, for which we assume 0 < a_m ≤ a(σ) ≤ a_M < ∞ for all σ ∈ Υ ⊂ R, for some constants a_m, a_M and an interval Υ of R. The function f = f(x, t) is a control, independent of the unknown parameter σ, which acts on Γ_0. For each value of the parameter σ ∈ Υ, problem (1) has a unique solution (u, u_t) ∈ C([0, T]; L^2(Ω) × H^{-1}(Ω)) for any f ∈ L^2((0, T) × Γ_0) and (u^0, u^1) ∈ L^2(Ω) × H^{-1}(Ω). Of course, these solutions depend on the parameter σ ∈ Υ, and we write u(x, t; σ) when we want to make this dependence explicit.
Moreover, there exists C = C(T) > 0, independent of σ ∈ Υ, such that the solution of (1) satisfies the inequality

‖(u, u_t)‖_{L^∞([0,T]; L^2(Ω) × H^{-1}(Ω))} ≤ C ( ‖(u^0, u^1)‖_{L^2(Ω) × H^{-1}(Ω)} + ‖f‖_{L^2((0,T) × Γ_0)} ). (2)

We are interested in the numerical approximation of the following controllability problem: given T > 0, initial data (u^0, u^1) ∈ L^2(Ω) × H^{-1}(Ω) and a final target (u^0_T, u^1_T) ∈ L^2(Ω) × H^{-1}(Ω), find f ∈ L^2(0, T; Γ_0) such that the solution of (1) satisfies

∫_Υ u(x, T; σ) dσ = u^0_T(x),   ∫_Υ u_t(x, T; σ) dσ = u^1_T(x). (3)

When such a control exists we say that the initial data (u^0, u^1) ∈ L^2(Ω) × H^{-1}(Ω) is controllable in average to the target (u^0_T, u^1_T). This notion of controllability in average (or averaged controllability) was first introduced by E. Zuazua in [17], where sufficient conditions were given for the controllability of finite dimensional systems. In that work the author also highlights the difficulty of extending classical controllability results for PDEs to obtain averaged controls, namely the fact that the average of the states is no longer a solution of the same PDE. There has been a considerable effort to overcome this difficulty and obtain results in this direction for different distributed systems (see for example [11], [15]). For the particular case of the wave equation, we refer to [7], where the authors consider two different wave equations with the same control, and to [9], where a more general family of wave equations is considered; more precisely, the authors deal with a parametric family of wave equations where the parameter is a measure perturbation of a Dirac mass.
To illustrate the mathematical difficulties behind this control in average for the wave equation, we mention that for the one-dimensional model with the simplest choice a(σ) = σ, σ ∈ [σ_0, σ_1], the existence of controls is still open. A partial result is given in [9] assuming, among other properties, that σ_0^{-1} σ_1 ∉ Q, which does not seem a natural hypothesis in this context.
In this paper we do not address this difficult problem. We rather focus on the numerical approximation of the averaged controls, assuming that they exist. However, this is also far from being a simple exercise, since the usual approach based on the discretization of the controlled wave equation via finite differences or finite elements does not inherit the properties of the continuous model. Even for a single wave equation (i.e. without a parameter dependence), the discrete controls obtained for the associated finite dimensional systems may become unbounded as the discretization parameter goes to zero. This was first observed in [5], where the authors considered a finite dimensional version of the Hilbert Uniqueness Method introduced in [8]. Since then, several cures have been proposed to recover convergent approximations of the controls, such as bi-grid algorithms, Tychonoff regularization, filtering, mixed finite elements, etc. (see for instance [4], [6], [13], [2], [3]). We refer to the review [16] for a detailed description of such problems and references.
Here we follow a different approach based on a spectral approximation of the control problem. As in previous works, the starting point is the variational characterization of the control used in the Hilbert Uniqueness Method, which we now project onto a finite dimensional space generated by the first eigenfunctions of the Laplace operator. It turns out that this projection can be interpreted as a Galerkin approximation of the continuous control problem; this is not the case for the finite difference or finite element approach. Therefore, existence and convergence of controls can be deduced from the usual theory. The finite dimensional control problem can be solved using the finite dimensional theory introduced in [17], which provides an explicit expression for the control in average.
The main advantage of this approach is that it provides a simple and efficient method to obtain numerical approximations of controls without any regularization technique. Moreover, it can be easily adapted to parametric systems such as (1). In fact, it can be used in any situation where the control can be characterized variationally (simultaneous control, parametric regional control [1], etc.). An important drawback is that it requires computing the eigenfunctions of the Laplace operator. This is simple for one-dimensional problems or for two-dimensional ones in special domains (rectangles or disks), but not for general domains or variable coefficient equations.
The numerical experiments suggest that the control in average exists for a large class of systems; in particular for the one-dimensional wave equation with a(σ) = σ, σ ∈ [σ_0, σ_1], without any restriction on the interval, as long as the time is sufficiently large. We also give numerical evidence that the averaged control converges to the control of the averaged parameter when the parameter belongs to an interval of length ε → 0.
The rest of the paper is organized as follows: in section 2 we adapt the usual variational characterization of controls for the wave equation to the control in average. In section 3 we introduce the observability inequality that allows us to prove the existence of controls, and a particular class of controls obtained by minimization of a suitable functional. In section 4 we introduce the numerical method. In section 5 we prove the convergence of the discrete controls to the continuous one. In section 6 we deduce a matrix formulation equivalent to the discrete system and discuss the controllability of the finite dimensional system. Finally, in section 7 we present some numerical examples in one dimension and in two dimensions on a square domain.

Variational characterization of the control in average
In this section we introduce a variational characterization of the controls in average that we use later to find their numerical approximation. In particular we prove that a class of controls in average can be obtained as minimizers of a quadratic functional defined on a Hilbert space. The results of this section are not new and can be found in [11] (appendix A.2). We include them here for completeness.
For technical reasons we restrict ourselves to controls which are zero near t = 0, T. This affects the quadratic functional that we define below. We follow the approach in [12] for the wave equation, which we adapt to the parametric system (1) for the control in average.
Multiplying the equation of u by φ and integrating by parts one obtains identity (5). By a density argument we deduce that an analogous formula holds for any (u^0, u^1) ∈ L^2(Ω) × H^{-1}(Ω), (φ^0, φ^1) ∈ H^1_0(Ω) × L^2(Ω) and f ∈ L^2(0, T; Γ_0); we only have to replace the integrals by duality products where appropriate. Now, if f is a control such that (3) holds, then the weak form of (6) is equivalent to (5). Reciprocally, if (5) holds, then the weak form of (6) yields an identity valid for all (φ^0, φ^1) ∈ H^1_0(Ω) × L^2(Ω), and this is equivalent to (3). One possibility to construct controls f that satisfy the variational condition (5) is as minimizers of a particular quadratic functional. We define the cost functional J as in (7), where (φ, φ_t) is the solution of (4) with the final data (φ^0, φ^1) ∈ H^1_0(Ω) × L^2(Ω). The function η(t) is a prescribed smooth function on [0, T] introduced to guarantee that the controls vanish in a neighborhood of t = 0, T. Thus, we consider δ > 0 arbitrarily small and take η as in (8). The function η depends on δ > 0, but we do not make this dependence explicit in the notation, to simplify.
Theorem 1 Suppose that (φ̂^0, φ̂^1) ∈ H^1_0(Ω) × L^2(Ω) is a minimizer of J and let φ̂ be the corresponding solution of (4). Then

f = η(t) ∫_Υ a(σ) ∂φ̂/∂ν |_{Γ_0} dσ

is a control for which (3) holds, i.e. a control such that the solution of (1) satisfies (3).

The observability inequality in average
Let us now give a general condition which ensures the existence of a minimizer for J and therefore a control in average for system (1).
In particular, when system (4) is observable in average in time T > 0 we can choose η(t) as in (8), with δ > 0 sufficiently small, in such a way that (11) holds. The parameter ε > 0 in the previous definition is necessary to deduce (11) from (10). If we take ε = 0 we obtain a more natural version of the observability inequality, but then it is not clear how to deduce (11). This is in contrast with the case of a single wave equation (i.e. without parameter dependence), where (10) and (11) are equivalent for sufficiently large time T.
We prove below that the observability in average that we have defined above is a sufficient condition for the controllability in average of system (1). But we first state a technical lemma that will be used later.
for all solutions φ of the adjoint system (4). Then inequality (12) holds with C = 1/a_m + 1.

Theorem 2 If the system (4) is observable in average, then the functional J defined by (7) has a unique minimizer.

Proof. It is easy to see that J is continuous and convex. The continuity of the second term in (7) is a consequence of Lemma 2, while for the first term one just observes that it is bounded in terms of C(σ, T), the constant in the classical direct (regularity) inequality for the constant coefficient wave equation (1). The dependence of this constant on σ is easily obtained from its proof (see [8]); in particular, C(σ, T) = C(1 + Tσ)/σ, which is integrable in Υ, and the continuity of the first term of J follows. The existence of a minimum is ensured if we prove that J is also coercive, i.e. that (13) holds.
The coercivity of the functional J follows immediately from (12) and (11); indeed, combining these inequalities, condition (13) follows at once. We now give two properties of the control obtained by minimization of the functional J.
Theorem 3 If g is any other control driving to zero the average of the state of system (1), then (14) holds.

Remark 2 This result establishes that the control obtained from the minimizer of J is the one of minimal weighted L^2 norm, associated with the measure (1/η(t)) dt.
Proof. Let (φ̂^0, φ̂^1) be the minimizer of the functional J. Consider now relation (5) for the control η(t) ∫_Υ a(σ) ∂φ̂/∂ν |_{Γ_0} dσ. By taking (φ̂^0, φ̂^1) as test function we obtain a first identity. On the other hand, relation (5) for the control g and test function (φ̂^0, φ̂^1) gives a second one. Combining the last two identities we obtain an inequality.

Dividing this inequality by
we easily obtain (14). The next result provides a bound on the control in terms of the initial and target data.
Theorem 4 Let f = η(t) ∫_Υ a(σ) ∂φ̂/∂ν |_{Γ_0} dσ be the averaged control, associated with the initial data (u^0, u^1) and target (u^0_T, u^1_T), obtained by minimization of the functional J. Then there exists a constant, independent of σ ∈ Υ, such that (15) holds.

Proof. As φ̂ is the solution of the adjoint system (4) associated with the minimizer (φ̂^0, φ̂^1) of J in H^1_0(Ω) × L^2(Ω), we have in particular the corresponding optimality identity. Then, taking into account Lemma 2 and dividing, we easily obtain (15).

Numerical approximation of the control problem
In this section we introduce a suitable discretization of the control problem (1) that we use later to find numerical approximations of the controls. Let {w_k}_{k∈N} be an orthonormal basis of L^2(Ω) constituted by eigenfunctions of the Laplace operator with Dirichlet boundary conditions, −∆_D, and let {λ_k^2}_{k∈N} be the corresponding eigenvalues. We assume that they are ordered increasingly, i.e. 0 < λ_1^2 < λ_2^2 ≤ λ_3^2 ≤ ... We define the scaled spaces H^α(Ω) in terms of this basis. Note that H^0 = L^2(Ω), H^1 = H^1_0(Ω) and H^{-1} = H^{-1}(Ω). We also introduce the finite dimensional space X_N generated by the first N eigenfunctions, i.e.
X_N = { Σ_{k=1}^N c_k w_k(x) : c_k ∈ R } ⊂ H^α, α = −1, 0, 1, and the projection operator P_N : L^2(Ω) → X_N. Clearly, this operator can be extended to the analogous projection in H^α for α = −1, 1, which we still denote by P_N. The idea is to project system (1) onto the finite dimensional space X_N, but first we transform this system to introduce a more suitable representation of the boundary control.
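As an illustration of the spectral projection P_N, the following Python sketch computes P_N u^0 for a hat-shaped datum on Ω = (0, 1), using the Dirichlet eigenfunctions w_k(x) = √2 sin(kπx). The grids, the sample datum and the quadrature choices are ours, for illustration only.

```python
import numpy as np

# Compatibility shim: np.trapz was renamed to np.trapezoid in NumPy 2.
try:
    trapz = np.trapezoid
except AttributeError:
    trapz = np.trapz

# Spectral projection P_N onto the first N Dirichlet eigenfunctions
# w_k(x) = sqrt(2) sin(k pi x) on Omega = (0, 1).
x = np.linspace(0.0, 1.0, 4001)
u0 = np.minimum(x, 1.0 - x)                 # hat-shaped sample datum

N = 20
W = np.sqrt(2.0) * np.sin(np.pi * np.outer(np.arange(1, N + 1), x))  # (N, nx)
c = trapz(W * u0, x, axis=1)                # coefficients c_k = (u0, w_k)_{L^2}
PNu0 = c @ W                                # P_N u0 evaluated on the grid

err = np.sqrt(trapz((u0 - PNu0) ** 2, x))   # L^2 truncation error
print(err)
```

For this datum the coefficients decay like k^{-2} (and vanish for even k), so already N = 20 gives an L^2 error below 10^{-2}.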
For each t ∈ [0, T] we introduce an elliptic problem with the control at the boundary. Then we have, at least formally, a representation of u in terms of its solution. Now we apply the projection operator to system (18). Taking into account that the Laplace operator commutes with P_N and leaves invariant the subspace X_N, we easily obtain a finite dimensional system in Ω, where (v^{0,N}, v^{1,N}) = (P_N ∆_D^{-1} u^0, P_N ∆_D^{-1} u^1) and D_N = P_N ∆ : X_N → X_N. Finally, applying the operator D_N, we obtain for the new variable u^N = D_N v^N the system (20) in Ω, where (u^{0,N}, u^{1,N}) = (P_N u^0, P_N u^1). We adopt (20)-(21) as the discrete approximation of the control system (1). More precisely, the discrete control problem reads: given T > 0, (u^{0,N}, u^{1,N}) ∈ X_N × X_N and a target (u^{0,N}_T, u^{1,N}_T) ∈ X_N × X_N, find a control f^N ∈ L^2(0, T; Γ_0) such that the solution of (20)-(21) with f = f^N satisfies the discrete analogue of the averaged condition (3). Note that the elliptic problem in (21) is not discretized in the above formulation. For one-dimensional problems this is not necessary, since h can be computed explicitly from the control f. However, in more general problems a discretization of this elliptic problem will also be required. We discuss this in the experiments below.

Convergence of discrete averaged controls
We first introduce a variational characterization of the discrete controls, following the same approach as in the continuous case. Let us consider the backwards wave equation (23) in Ω, with final data (ϕ^{0,N}, ϕ^{1,N}) ∈ X_N × X_N. Note that, due to our discretization scheme, the solution φ^N of system (23) coincides with the solution of the continuous system (4) with the final data (ϕ^{0,N}, ϕ^{1,N}) ∈ X_N × X_N.
We also introduce the scalar product (24) in X_N × X_N, which coincides with the duality product between L^2(Ω) × H^{-1}(Ω) and H^1_0(Ω) × L^2(Ω) for functions in X_N × X_N. The following result is the analogue of Lemma 1 for the discrete control problem.
Lemma 3 The discrete averaged control f^N satisfies the variational relation (25), where φ^N is the solution of the discrete backwards system (23).
Proof. Following the proof of Lemma 1, we multiply the equation of u^N in (20) by the solution φ^N of the adjoint problem and integrate by parts. We easily obtain an identity involving h^N, the solution of (21) with f = f^N the averaged control. Therefore, it is enough to prove a corresponding identity for the basis functions, where w_j(x), λ_j are the eigenfunctions and eigenvalues of the operator −∆.
Using Green's formula we easily obtain this identity, which concludes the proof.

Remark 3 As the solutions of systems (4) and (23) coincide, the variational characterization of the discrete controls in (25) is analogous to the one associated with the continuous control problem, but for functions in the subset X_N × X_N ⊂ H^1_0(Ω) × L^2(Ω). This means, in particular, that any control of the continuous system is also a control for the discrete one. Of course, the converse is not true. In the rest of the paper we construct a sequence of discrete controls that converges to the control that minimizes J, assuming that the continuous system is observable in average.
We define the quadratic cost functional J_N : X_N × X_N → R by (26), where φ^N is the solution of (23). Once again, the fact that the solutions of systems (4) and (23) coincide for data in X_N × X_N allows us to interpret J_N as the restriction of J to X_N × X_N. In particular, existence of minimizers is guaranteed as soon as the continuous system is controllable in average. The characterization of these minimizers as controls for the finite dimensional system is straightforward, following the proof of Theorem 1. In particular we have the following result:

Theorem 5 Let (u^{0,N}, u^{1,N}), (u^{0,N}_T, u^{1,N}_T) ∈ X_N × X_N be some discrete data and suppose that (φ̂^{0,N}, φ̂^{1,N}) ∈ X_N × X_N is a minimizer of J_N. If φ̂^N is the corresponding solution of (23) with final data (φ̂^{0,N}, φ̂^{1,N}), then

f^N = η(t) ∫_Υ a(σ) ∂φ̂^N/∂ν |_{Γ_0} dσ

is a control such that the solution of (21) satisfies (25).

We now state the convergence result for the discrete controls.
Theorem 6 Let T > 0 be large enough so that the continuous wave equation is observable in average. For given initial data and target (u^0, u^1), (u^0_T, u^1_T) ∈ L^2(Ω) × H^{-1}(Ω), let f be the averaged control of the continuous system provided by the minimizer of J, and let f^N(x, t) be the sequence of averaged controls obtained by minimizing the discrete functionals J_N for the initial data and targets (P_N u^0, P_N u^1), (P_N u^0_T, P_N u^1_T) ∈ X_N × X_N, respectively. Then f^N converges to f in L^2((0, T) × Γ_0).

Proof. We show that the numerical approximation can be stated as a Ritz-Galerkin approximation of the variational characterization of the control. The result then follows from the classical convergence theory for such approximations of variational problems (see for example [14]). We consider the Hilbert space V = H^1_0(Ω) × L^2(Ω) and a bilinear form A on V, defined in terms of the solutions φ, ψ of the adjoint system (4) with final data (φ^0, φ^1), (ψ^0, ψ^1), respectively. We also consider a linear form L on V. The minimizer (φ̂^0, φ̂^1) ∈ V of J solves the associated variational equation. Both the bilinear form A and the linear form L are continuous on V: for the continuity of A we refer to the proof of Theorem 2, while the continuity of L is a direct consequence of Lemma 2. The bilinear form is also coercive, as a consequence of the averaged observability. Now we consider the finite dimensional subspace V_N = X_N × X_N ⊂ V. We only have to prove that the minimizer (φ̂^{0,N}, φ̂^{1,N}) ∈ V_N of the discrete functional J_N solves the corresponding Galerkin problem. But this is straightforward from the variational characterization of the minimizers of J_N (Lemma 3), formula (24), and the fact that the solutions of the adjoint systems (23) and (4) coincide for final data in V_N. According to the Ritz-Galerkin convergence result we have the following estimate for the minimizers of J and J_N, for some constant C independent of N.
The classical approximation result for spectral projections gives the convergence of the right hand side as N → ∞, and therefore the convergence of the minimizers. Finally, the L^2 convergence of the controls is a direct consequence of the continuity of the bilinear form A.
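The estimate invoked in the proof above is the standard Céa inequality for a continuous, coercive bilinear form; in the notation of the proof (with M and α denoting the continuity and coercivity constants of A) it reads:

```latex
\left\| (\hat\varphi^0,\hat\varphi^1) - (\hat\varphi^{0,N},\hat\varphi^{1,N}) \right\|_V
\;\le\; \frac{M}{\alpha}\,
\inf_{(\psi^0,\psi^1)\in V_N}
\left\| (\hat\varphi^0,\hat\varphi^1) - (\psi^0,\psi^1) \right\|_V .
```

Taking (ψ^0, ψ^1) = (P_N φ̂^0, P_N φ̂^1) and using the approximation properties of spectral projections gives the convergence of the minimizers.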

Matrix formulation and finite dimensional control
In this section we consider some implementation issues related to the finite dimensional approximation of the averaged control given by (20)-(21). There are several ways to proceed. For instance, one can follow the original method in [5], where the authors compute the minimizer of the discrete functional J_N with a conjugate gradient algorithm; this method would have to be adapted to the present setting with the averaged control. A more direct approach is to write a matrix formulation of the discrete problem and compute the control using the explicit expression given by the finite dimensional control theory (see [17]). From a practical point of view this second approach requires more computing time, but it has a simpler implementation. In this paper we follow the second approach. We divide this section into two subsections where we consider the 1-d and 2-d wave equations separately.

The 1-d wave equation
We assume Ω = (0, 1) and that the control acts at the endpoint x = 1. The eigenfunctions of the Laplace operator in this domain are given by w_i(x) = √2 sin(iπx), i ∈ N, and the corresponding eigenvalues are λ_i^2 = i^2 π^2. Note that the w_i are normalized in the L^2(Ω) norm. In particular, in this case X_N is the subspace generated by the eigenfunctions w_i with i ≤ N, and the dimension of X_N is therefore N.
The function h^N(x), solution of the elliptic problem (21), can be computed explicitly. Let us denote by {u_i^N(t)}_{i=1}^N the components of u^N in the basis of eigenfunctions of the Laplace operator. Then system (20) is equivalent to a system of ordinary differential equations for these components, which can be written in matrix form as (28), with suitable matrices and initial data. Note that we have removed the dependence on N from the notation to simplify, but all the vectors and matrices in system (28) depend on N. The components of the target are also written in vector form. To apply the general theory of controllability for finite dimensional systems we write (28) as a first order system (32), with initial datum Z^0 and target Z_T. According to the control theory for finite dimensional systems (see [17]), a control that drives the initial datum Z^0 to the target Z_T is given by formula (33), where Q_T is the averaged controllability Gramian (see [17]). The fact that f^N(t) in (33) is a control for (32) can be easily checked: we only have to substitute it in the explicit solution of system (32). Moreover, this control is the one obtained from the minimizer of J_N in (26).
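The structure of the explicit averaged control of [17] can be checked on a toy problem. The following Python sketch applies the averaged Gramian formula to the scalar family x'(t) = −σx(t) + u(t), with σ uniform on [1, 2]; the scalar family, the grids and the names (M, Q, u) are our own illustrative choices, not the paper's wave system.

```python
import numpy as np

# Compatibility shim: np.trapz was renamed to np.trapezoid in NumPy 2.
try:
    trapz = np.trapezoid
except AttributeError:
    trapz = np.trapz

T = 2.0
s0, s1 = 1.0, 2.0                  # parameter interval Upsilon = [s0, s1]
x0, xT = 1.0, 0.5                  # initial datum and target for the average

sig = np.linspace(s0, s1, 201)     # quadrature nodes in sigma
t = np.linspace(0.0, T, 2001)      # quadrature nodes in time

def M(s):
    # averaged propagator M(s) = (1/|Upsilon|) int_Upsilon e^{-sigma (T-s)} dsigma
    return trapz(np.exp(-np.outer(sig, T - s)), sig, axis=0) / (s1 - s0)

Ms = M(t)
Q = trapz(Ms ** 2, t)              # averaged controllability Gramian (scalar here)
M0 = M(np.array([0.0]))[0]
u = Ms / Q * (xT - M0 * x0)        # explicit averaged control, as in [17]

# Check: the average over sigma of the final states x(T; sigma) hits the target.
finals = np.array([np.exp(-s * T) * x0 + trapz(np.exp(-s * (T - t)) * u, t)
                   for s in sig])
avg_final = trapz(finals, sig) / (s1 - s0)
print(avg_final)                   # ≈ xT = 0.5
```

Each individual final state x(T; σ) differs from the target; only their average is controlled, which is exactly the situation observed in the experiments of section 7.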

The 2-d wave equation
In this section we consider the numerical approximation of the averaged control for the 2-d wave equation in a square domain. We assume Ω = (0, 1) × (0, 1) and that the control acts on Γ_0 = {1} × (0, 1) ∪ (0, 1) × {1}. For this problem the existence of the averaged control of the wave equation is not known, for any time T > 0; our experiments provide numerical evidence that such controllability holds. The eigenfunctions of the Laplace operator in this domain are given by w_{i,j}(x_1, x_2) = 2 sin(iπx_1) sin(jπx_2), (i, j) ∈ N × N, and the corresponding eigenvalues are λ_{ij}^2 = (i^2 + j^2) π^2. Note that the w_{ij} are normalized in the L^2(Ω) norm and that we use a double index to represent the eigenpair (λ_{ij}, w_{ij}); of course, this does not affect the results of the previous sections. In particular, in this case X_N is the subspace generated by the eigenfunctions w_{ij} with i, j ≤ N, and the dimension of X_N is therefore N^2.
Any function u ∈ C([0, T]; L^2(Ω)) can be written as u = Σ_{i,j} u_{ij}(t) w_{ij}(x_1, x_2), where the u_{ij}(t) are the Fourier coefficients, and the projection operator P_N : L^2(Ω) → X_N is defined by truncating this expansion. A natural discretization of the control f^N(x_1, x_2, t) is obtained from the Fourier representation of f on Γ_0. With this choice, the function h^N(x_1, x_2, t), solution of the elliptic problem (21), can be computed explicitly. Following the 1-d case, we substitute in (20) and write the system for the Fourier coefficients; then we collect the components u_{ij} in a column vector. The rest is similar to the 1-d case.
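A quick numerical sanity check, written by us in Python, of the normalization of the 2-d eigenbasis: with the constant 2 (each squared sine integrates to 1/2 on (0, 1)) the functions w_{ij} are orthonormal in L^2 of the unit square.

```python
import numpy as np

# Compatibility shim: np.trapz was renamed to np.trapezoid in NumPy 2.
try:
    trapz = np.trapezoid
except AttributeError:
    trapz = np.trapz

# Eigenbasis of the Dirichlet Laplacian on the unit square:
# w_ij(x1, x2) = 2 sin(i pi x1) sin(j pi x2), eigenvalue (i^2 + j^2) pi^2.
x = np.linspace(0.0, 1.0, 401)
X1, X2 = np.meshgrid(x, x, indexing="ij")

def w(i, j):
    return 2.0 * np.sin(i * np.pi * X1) * np.sin(j * np.pi * X2)

def ip(f, g):
    # L^2((0,1)^2) inner product by iterated trapezoidal quadrature
    return trapz(trapz(f * g, x, axis=1), x)

norm_sq = ip(w(1, 2), w(1, 2))     # ≈ 1: the basis is normalized
cross = ip(w(1, 2), w(2, 1))       # ≈ 0: distinct modes are orthogonal
print(norm_sq, cross)
```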

Numerical experiments
In this section we compute the control in average in different situations, for the 1-d and 2-d wave equation in the square. In both cases we use formula (33) for the finite dimensional systems obtained by the projection method described above.

The 1-d wave equation
We consider three experiments in this section. In the first one we assume that the unknown parameter lies in an interval and we obtain the control in average, which we compare with the control for a single value of the parameter. In the second experiment we show the behavior of this control when the length of the interval tends to zero; in particular we observe that the control in average converges to the control of the averaged parameter. The third experiment considers a parameter in the union of two small intervals. We see that the control in average makes the solution u(x, T) approximate either of two different symmetric profiles.
In Figure 2 we show the control in average and the control of a single realization for comparison. We observe that both seem to have the same regularity as the initial position, i.e. continuous but not C^1. We have also plotted the solution at time t = T for a few realizations of the parameter σ, with and without the control in average, to illustrate the effect of the control. In this example the velocity u_t(x, t) becomes a discontinuous function of x for t > 0, even if we do not apply any control. In fact, we can see the effect of a trigonometric approximation of discontinuous functions in the lower left plot of Figure 2, which shows the uncontrolled solutions u_t(x, T; σ) for different values of σ.
In Table 1 we show the behavior of the norm of the control and of the average as the number of Fourier coefficients grows. This illustrates the convergence of the numerical algorithm.

Experiment 2.
Here we illustrate the convergence of the control in average to the control of the averaged parameter when the set of parameters is an interval of decreasing length. In particular, we consider a(σ) = σ, σ ∈ Υ = [1 − ε, 1 + ε] for different values of ε → 0, and we compare the control in average with the control for σ_0 = 1. The average in σ is computed with the trapezoidal rule and step dσ = ε · 10^{-2}. The rest of the data are as in Experiment 1. In Table 2 we illustrate the convergence of the control in average to the control of the averaged parameter.
Table 2: Experiment 2: convergence of the averaged control to the control for the averaged parameter σ_0 as the length of the interval ε → 0.

ε       ‖f_ave − f_{σ_0}‖_{L^2}
1/2     2.0006 × 10^{-1}
10^{-1} 2.1701 × 10^{-2}
10^{-2} 3.8824 × 10^{-4}
10^{-3} 3.3179 × 10^{-6}

Experiment 3. Now we illustrate the behavior of the control in average when the set of parameters is the union of two intervals. In particular, we consider a(σ) = σ, σ ∈ Υ = [1 − ε, 1 + ε] ∪ [3/2 − ε, 3/2 + ε] with ε = 5 × 10^{-3}. The average in σ is computed with the trapezoidal rule and step dσ = ε · 10^{-2}. The rest of the data are as in Experiment 1. In Figure 3 we show the control in average and the controls for the two parameters σ = 1, 3/2. We see that the averaged control exhibits some small oscillations which are not present in the other two controls. We also draw the controlled solutions at t = T for some values of the parameter. We observe how the solution is controlled to either of two symmetric final states.

The 2-d wave equation
In this section we consider a single experiment that illustrates that the method can easily be adapted to multidimensional situations, as long as we know the eigenfunctions of the Laplace operator in the domain. We assume a(σ) = σ, σ ∈ [3/2, 2], and the average is computed with the trapezoidal rule with step dσ = 10^{-2}. We consider the first 100 Fourier coefficients. The initial datum u^0 = max{x_1, (1 − x_1)} max{x_2, (1 − x_2)} is shown in Figure 4, and u^1 = 0. The target is the equilibrium u^0_T = u^1_T = 0. In Figure 4 we have drawn the control acting on the two sides of the square and two sections of several controlled solutions at time t = T, i.e. u(x_1, 1/2, T) and u(1/2, x_2, T). We observe that none of them is zero at t = T, but their average is.

Conclusions
We have introduced a new numerical algorithm to approximate the averaged control for the wave equation. The method consists of a spectral projection of the variational characterization of the HUM control. We prove the convergence of the method and show how it can be reduced to a classical finite dimensional control problem for which there is an explicit formula for the control. This provides an efficient method to compute averaged controls, which we illustrate with several numerical experiments. The method is quite general and can easily be adapted to similar situations where a variational characterization of controls exists.

Figure 4: Experiment 4. Initial position u^0; control in average at x = 1 and at y = 1; controlled solutions at the sections y = 0.5 and x = 0.5.