Uniqueness of Solution to Systems of Elliptic Operators and Application to Asymptotic Synchronization of Linear Dissipative Systems II: Case of Multiple Feedback Dampings

In this paper, the authors consider the asymptotic synchronization of a linear dissipative system with multiple feedback dampings. They first show that, under the observability of a scalar equation, Kalman's rank condition is sufficient for the uniqueness of solution to a complex system of elliptic equations with mixed observations. The authors then establish a general theory on the asymptotic stability and the asymptotic synchronization for the corresponding evolution system subjected to mixed dampings of various natures. Some classic models are presented to illustrate the field of applications of the abstract theory.


Introduction
Synchronization is a widespread natural phenomenon. It was first observed by Huygens [11] in 1665. Theoretical research on synchronization from the mathematical point of view dates back to Wiener in the 1950s ([43], Chapter 10). Early studies focused on systems described by ordinary differential equations. In 2012, Li and Rao started the research on the exact boundary synchronization for a coupled system of wave equations (see [18, 20-23, 26]); later, the approximate synchronization was carried out for a coupled system of wave equations with various boundary controls (see [19, 25, 27, 30]). Most of their results were recently collected in the monograph [28]. Consequently, this kind of study of synchronization has become a part of research in control theory. The optimal control for the exact synchronization of parabolic systems was recently investigated in [42]. We quote [1, 6] for the synchronization of distributed parameter systems on networks.
By duality, the approximate boundary controllability of a coupled system of wave equations can be transformed into the uniqueness of solution to the corresponding adjoint system. Since the operators g_s are positive semi-definite, we have
$$\langle g_s\varphi, \varphi\rangle_{V',V} = 0 \iff g_s\varphi = 0. \tag{1.4}$$
Denote the product spaces V = V^N and H = H^N (keeping, by abuse of notation, the same letters). For U = (u^{(1)}, ..., u^{(N)})^T, let the vector operators L and G_s be defined by
$$LU = (Lu^{(1)}, \cdots, Lu^{(N)})^T, \qquad G_sU = (g_su^{(1)}, \cdots, g_su^{(N)})^T. \tag{1.6}$$
It is easy to show that system (1.7):
$$U'' + LU + AU + \sum_{s=1}^{M} D_sG_sU' = 0 \tag{1.7}$$
generates a semi-group of contractions with compact resolvent in the space V × H. The case M = 1:
$$U'' + LU + AU + D_1G_1U' = 0 \tag{1.8}$$
was studied in [29], where we showed that Kalman rank condition
$$\mathrm{rank}(D_1, AD_1, \cdots, A^{N-1}D_1) = N \tag{1.9}$$
is necessary for the asymptotic stability of system (1.8). Moreover, under suitable conditions on the pair of operators (L, g_1), Kalman rank condition (1.9) is also sufficient for the asymptotic stability of system (1.8) (see [29, Theorem 3.4]). In [31], we carried out a complete study on the uniform synchronization of system (1.8). In particular, we justified the necessity of diverse compatibility conditions on the matrices A and D_1. Moreover, in [32] we considered a coupled system of wave equations in a rectangular domain, which does not satisfy the usual multiplier geometrical condition. The aim of the present work is to investigate the asymptotic stability of system (1.7) under the common action of the M feedback dampings D_1G_1U', ..., D_MG_MU'. In Proposition 2.2 below, we will show that Kalman rank condition
$$\mathrm{rank}(D, AD, \cdots, A^{N-1}D) = N \tag{1.10}$$
with the composite matrix by blocks
$$D = (D_1, D_2, \cdots, D_M) \tag{1.11}$$
is still necessary for the asymptotic stability of system (1.7). Moreover, under suitable conditions on the matrix A and on the pairs (L, g_s) for 1 ≤ s ≤ M, we will show in Theorem 3.2 that Kalman rank condition (1.10) is still sufficient for the asymptotic stability of system (1.7).
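In the finite-dimensional setting, condition (1.10) can be verified numerically. The following sketch (the function name and the sample matrices are our own illustrative assumptions, not taken from the paper) assembles the composite matrix (1.11) and computes the rank of the Kalman matrix:

```python
import numpy as np

def kalman_rank(A, Ds):
    """Rank of the Kalman matrix (D, AD, ..., A^{N-1}D) with D = (D_1, ..., D_M)."""
    N = A.shape[0]
    D = np.hstack(Ds)                 # composite N x (MN) matrix as in (1.11)
    blocks, P = [], D.copy()
    for _ in range(N):                # append A^k D for k = 0, ..., N-1
        blocks.append(P)
        P = A @ P
    return np.linalg.matrix_rank(np.hstack(blocks))

# Two dampings acting on complementary components: neither D_1 nor D_2 alone
# satisfies the rank condition (1.9), but the pair satisfies (1.10).
A  = np.diag([2.0, 3.0])
D1 = np.diag([1.0, 0.0])
D2 = np.diag([0.0, 1.0])
print(kalman_rank(A, [D1]), kalman_rank(A, [D1, D2]))  # 1 2
```

Here the single damping D_1 leaves the second component untouched by a diagonal A, while the two dampings together restore the full rank N = 2.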
The dampings involved in system (1.7) can be of different types, for example, boundary damping, locally distributed viscous damping, locally distributed Kelvin-Voigt damping or bending moment damping, etc. Therefore, the setting provides rich freedom for the choice of feedback controls in applications. This is the main advantage of the approach. The materials in the paper are organized as follows. In §2, we first formulate the problem in the framework of semi-groups. Then, by the classic frequency domain method, we reduce the asymptotic stability to the uniqueness of solution to an over-determined elliptic system. In §3, under the assumptions that A is close to a scalar matrix and that L can be uniformly observed by the operator g_s for 1 ≤ s ≤ M, we establish the corresponding uniqueness theorem. We study the corresponding asymptotic synchronization in §4. In order to illustrate the abstract results, we give some examples of applications in §5.

Setting of Problem
In this section, we will characterize the asymptotic stability of system (1.7) by the frequency domain method. We first make some necessary arrangements.
Since the matrices D_s with 1 ≤ s ≤ M are symmetric and positive semi-definite, by conditions (1.3)-(1.4) it is easy to check the dissipativity of system (1.7). Then, defining the linear operator $\mathcal{A}$ by
$$\mathcal{A}(\Phi, \Psi) = \Bigl(\Psi,\ -L\Phi - A\Phi - \sum_{s=1}^{M} D_sG_s\Psi\Bigr)$$
with its natural domain $D(\mathcal{A}) \subset V \times H$, we transform (1.7) into the abstract formulation
$$\frac{d}{dt}\,(U, U') = \mathcal{A}(U, U').$$
It was shown in [29, Proposition 3.1] that the operator $\mathcal{A}$ generates a semi-group of contractions with compact resolvent in the space V × H.
We recall the following generalized rank condition of Kalman type, which will play an important role in the study of uniqueness.

Proposition 2.1 The rank condition (1.10) fails if and only if there exist a unit vector E ∈ R^N and a real number a such that
$$A^TE = aE, \qquad D_s^TE = 0, \quad 1 \le s \le M. \tag{2.10}$$

Proposition 2.2 If system (1.7) is asymptotically stable, then we necessarily have Kalman rank condition (1.10) with D given by (1.11).
Proof If (1.10) fails, then by Proposition 2.1 there exist a unit vector E ∈ R^N and a real number a such that (2.10) holds. Noting that A and the D_s with 1 ≤ s ≤ M are symmetric, we get
$$E^TA = aE^T, \qquad E^TD_s = 0, \quad 1 \le s \le M.$$
Then, applying E^T to (1.7) and setting u = E^TU, we get the scalar equation
$$u'' + Lu + au = 0,$$
which is conservative, therefore unstable.
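The necessity argument can be checked in a finite-dimensional analogue (dropping the operator L; all concrete matrices below are illustrative assumptions): when the Kalman matrix is rank deficient, a unit vector E with AE = aE and D_sE = 0 survives, and u = E^TU then obeys the conservative equation u'' + au = 0.

```python
import numpy as np

A  = np.diag([1.0, 2.0])            # symmetric coupling matrix (assumed data)
Ds = [np.diag([1.0, 0.0])]          # single damping matrix: rank condition fails
D  = np.hstack(Ds)
K  = np.hstack([D, A @ D])          # Kalman matrix (D_1, A D_1) for N = 2
# E spans the left null space of K, i.e. E^T K = 0
_, s, Vt = np.linalg.svd(K.T)
E = Vt[-1]
a = E @ A @ E                        # Rayleigh quotient; here A E = a E exactly
assert np.allclose(A @ E, a * E)     # E is an eigenvector of A
assert all(np.allclose(Dm @ E, 0) for Dm in Ds)  # E is undamped
print(round(a, 6))                   # 2.0: the undamped mode's coefficient
```

With these data the undamped direction is E = (0, ±1)^T, so the mode u = E^TU oscillates without decay, and no asymptotic stability is possible.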
Theorem 2.1 System (1.7) is asymptotically stable if and only if, for any given β ∈ R, the over-determined system in the state variable Φ = (φ^{(1)}, ..., φ^{(N)})^T:
$$L\Phi + A\Phi = \beta^2\Phi \tag{2.12}$$
associated with the conditions
$$G_sD_s\Phi = 0, \quad 1 \le s \le M, \tag{2.13}$$
has only the trivial solution.
Proof Noting that $\mathcal{A}^{-1}$ is compact in the space V × H, by the classic theory of semi-groups (see [3, 4, 37]), the dissipative system (1.7) is asymptotically stable if and only if $\mathcal{A}$ has no pure imaginary eigenvalues. Indeed, assume that $\mathcal{A}$ has a pure imaginary eigenvalue, namely, there exist β ∈ R and a non-trivial (Φ, Ψ) ∈ V × H, such that
$$\mathcal{A}(\Phi, \Psi) = i\beta(\Phi, \Psi), \tag{2.14}$$
namely,
$$\Psi = i\beta\Phi, \qquad -L\Phi - A\Phi - \sum_{s=1}^{M} D_sG_s\Psi = i\beta\Psi.$$
Inserting the first equation into the second one, we get
$$L\Phi + A\Phi + i\beta\sum_{s=1}^{M} D_sG_s\Phi = \beta^2\Phi.$$
On the other hand, the dissipativity of $\mathcal{A}$ gives $\sum_{s=1}^{M}\langle G_s\Psi, D_s\Psi\rangle = 0$, which by (1.4) yields the conditions (2.13), so that the above identity reduces to (2.12).

Uniqueness Theorem Under Kalman Rank Condition
In this section, we will show both the necessity and the sufficiency of Kalman rank condition (1.10) for the uniqueness of solution to the over-determined system (2.12)-(2.13).
Proposition 3.1 Assume that the over-determined system (2.12)-(2.13) has only the trivial solution. Then the pair (A, D) necessarily satisfies Kalman rank condition (1.10) with D given by (1.11).
Proof This is a direct consequence of Proposition 2.2 and Theorem 2.1. However, we prefer to give a direct proof here.
Otherwise, let a and E be chosen as in (2.10). Let v ∈ V be a non-zero element and λ ∈ R_+ be large enough, such that Lv = λv and λ + a > 0. Defining
$$\Phi = vE, \qquad \beta = \sqrt{\lambda + a},$$
we have
$$L\Phi + A\Phi = (\lambda + a)vE = \beta^2\Phi.$$
So, Φ is a non-trivial solution to (2.12). Moreover, noting that G_s is of diagonal form and that D_sE = 0, we check easily that Φ satisfies the dissipation condition (2.13):
$$G_sD_s\Phi = G_s(D_sE)v = 0, \quad 1 \le s \le M.$$
Thus, we get a contradiction.

Now we make some preparations for the proof of sufficiency. Since Kalman rank condition (1.10) is stable under invertible linear transformations, without loss of generality the symmetric matrix A can be written as
$$A = \mathrm{diag}(\overbrace{\lambda_1, \cdots, \lambda_1}^{\sigma_1}, \cdots\cdots, \overbrace{\lambda_m, \cdots, \lambda_m}^{\sigma_m}),$$
where λ_k ≥ 0 are the eigenvalues of A with multiplicity σ_k (k = 1, ..., m).
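The explicit solution Φ = vE constructed above can be checked in a finite-dimensional analogue, with a symmetric positive definite matrix standing in for L (all concrete data below are assumptions for illustration); the Kronecker product plays the role of the tensorized state:

```python
import numpy as np

# With A E = a E, D_s E = 0 and L v = lam v, the vector Phi = kron(E, v)
# solves the analogue of (2.12) with beta^2 = lam + a, and satisfies the
# dissipation condition (2.13) since D_s Phi = kron(D_s E, v) = 0.
rng = np.random.default_rng(0)
n, N = 5, 2
R = rng.standard_normal((n, n))
L = R @ R.T + n * np.eye(n)             # a symmetric positive definite "L"
lam, vecs = np.linalg.eigh(L)
lam0, v = lam[-1], vecs[:, -1]          # an eigenpair of L

A  = np.diag([1.0, 2.0])                # A E = a E with E = (0, 1), a = 2
Ds = [np.diag([1.0, 0.0])]              # D_s E = 0
E, a = np.array([0.0, 1.0]), 2.0

Phi = np.kron(E, v)
lhs = (np.kron(np.eye(N), L) + np.kron(A, np.eye(n))) @ Phi
assert np.allclose(lhs, (lam0 + a) * Phi)                # solves (2.12)-analogue
assert np.allclose(np.kron(Ds[0], np.eye(n)) @ Phi, 0)   # dissipation condition
print("Phi is a non-trivial solution:", bool(np.linalg.norm(Phi) > 0))
```

This mirrors exactly the two computations in the proof: the eigenvalue identity (L + A)Φ = (λ + a)Φ and the vanishing of the damping terms.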
Accordingly, set μ_0 = 0 and
$$\mu_k = \sigma_1 + \cdots + \sigma_k, \quad k = 1, \cdots, m.$$
We write the transposed composite matrix column-wise as
$$D^T = (d_1, \cdots, d_N),$$
where the vector d_i ∈ R^{MN} is composed of the i-th column of each matrix D_s with 1 ≤ s ≤ M.

Proposition 3.2 For any given integer k with 1 ≤ k ≤ m, the vectors d_{μ_{k-1}+1}, ..., d_{μ_k} of the composite matrix D^T are linearly independent.

Definition 3.1 For any given s with 1 ≤ s ≤ M, the operator L is said to be g_s-observable if there exists a constant c_1 > 0, independent of β, such that the estimate
$$\|\varphi_s\|_H \le c_1\|f\|_H$$
holds for any given solution φ_s to the over-determined scalar problem
$$L\varphi_s = \beta^2\varphi_s + f, \qquad g_s\varphi_s = 0.$$
By the continuous embedding H ⊂ V', there exists a constant c_2 > 0, such that
$$\|\varphi\|_{V'} \le c_2\|\varphi\|_H, \quad \forall\,\varphi \in H.$$

Theorem 3.1 Assume that
(a) there exists a ∈ R such that the following ε-closing condition holds:
$$\|A - aI_N\| \le \varepsilon; \tag{3.12}$$
(b) the pair (A, D) satisfies Kalman rank condition (1.10);
(c) the operator L is g_s-observable for 1 ≤ s ≤ M.
Then, for ε > 0 small enough, the over-determined system (2.12)-(2.13) has only the trivial solution.
Proof Applying D_s to (2.12) and setting W = D_sΦ = (w_1, ..., w_N)^T, we get the system (3.16) satisfied by W. On the other hand, noting that G_s is diagonal, condition (2.13) leads to (3.17):
$$g_sw_j = 0, \quad j = 1, \cdots, N.$$
Then, taking the j-th component of (3.13) and (3.17), we get a scalar over-determined problem of the type considered in Definition 3.1, with the additional condition g_sw_j = 0. Then, noting (3.11), we have (3.21). If β² − a > 0, the observability of (L, g_s) implies again (3.21). On the other hand, noting that L is self-adjoint, it follows from (3.16) that (3.23) holds. Hence, noting the ε-closing condition (3.12) and (3.21), we get (3.24). It then follows from (3.24) that f_j = 0 and then w_j = 0 for j = 1, ..., N, provided that εc < 1. Thus we get
$$D^T\Phi = \sum_{i=1}^{N} d_i\,\varphi^{(i)} = 0,$$
where d_i is the i-th column vector of D^T, composed of the i-th column of each matrix D_s. Noting (3.7), we arrange this relation (3.26) by blocks into the expression (3.27). By Proposition 3.2, the column vectors d_{μ_{k-1}+1}, ..., d_{μ_k} of D^T are linearly independent, then we get φ^{(i)} = 0 for i = 1, ..., N, namely, Φ ≡ 0. The proof is thus complete.
Theorem 3.1 can be read as "under suitable conditions, the observability of the scalar equation implies the stability of the whole system". In this way, we provide a simple and efficient approach to a seemingly difficult problem, the asymptotic stability of a complex system.
As a direct consequence of Theorems 2.1 and 3.1, we have the following important result.
Theorem 3.2 Under the same assumptions as those in Theorem 3.1, system (1.7) is asymptotically stable.
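Theorem 3.2 can be illustrated numerically. In the sketch below, a discretized 1-D Dirichlet Laplacian stands in for L and two viscous dampings are supported on complementary subintervals, so that each damping alone fails the rank condition but the pair satisfies (1.10); all concrete choices (grid, matrices, damping regions) are our own assumptions. The first-order generator then has spectrum strictly inside the left half-plane:

```python
import numpy as np

n = 20                                        # interior grid points on (0, 1)
h = 1.0 / (n + 1)
x = np.arange(1, n + 1) * h
L1 = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2  # Dirichlet Laplacian
A  = np.diag([2.0, 3.0])                      # coupling matrix, N = 2
D  = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]
g  = [np.diag((x < 0.5).astype(float)),       # viscous damping g_1 on (0, 1/2)
      np.diag((x >= 0.5).astype(float))]      # viscous damping g_2 on (1/2, 1)

S    = np.kron(np.eye(2), L1) + np.kron(A, np.eye(n))   # L Phi + A Phi block
Damp = sum(np.kron(Dm, gm) for Dm, gm in zip(D, g))     # sum of D_s G_s blocks
Gen  = np.block([[np.zeros((2 * n, 2 * n)), np.eye(2 * n)],
                 [-S, -Damp]])                # z' = Gen z with z = (U, U')
print(np.linalg.eigvals(Gen).real.max() < 0)  # True: asymptotically stable
```

The discrete analogue of the over-determined system (2.12)-(2.13) has only the trivial solution here, since no eigenvector of the tridiagonal Laplacian can vanish on half of the grid.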

Asymptotic Synchronization by Groups
By Proposition 3.1, when the pair (A, D) does not satisfy Kalman rank condition (1.10), system (1.7) is not asymptotically stable. Instead of stability, we consider the asymptotic synchronization by groups.
Let p ≥ 1 be an integer such that
$$0 = n_0 < n_1 < n_2 < \cdots < n_p = N,$$
with n_r − n_{r−1} ≥ 2 for r = 1, ..., p. We re-arrange the components of the state variable U into p groups:
$$(u^{(1)}, \cdots, u^{(n_1)}),\ (u^{(n_1+1)}, \cdots, u^{(n_2)}),\ \cdots\cdots,\ (u^{(n_{p-1}+1)}, \cdots, u^{(n_p)}).$$
Let S_r be the full row-rank matrix of order (n_r − n_{r−1} − 1) × (n_r − n_{r−1}):
$$S_r = \begin{pmatrix} 1 & -1 & & \\ & \ddots & \ddots & \\ & & 1 & -1 \end{pmatrix},$$
and define the (N − p) × N matrix C_p of synchronization by p-groups as
$$C_p = \mathrm{diag}(S_1, \cdots, S_p).$$
Then Ker(C_p) = Span{e_1, ..., e_p}, where, for r = 1, ..., p, the k-th component of e_r is equal to 1 for n_{r−1} + 1 ≤ k ≤ n_r and to 0 otherwise.

Definition 4.1 System (1.7) is asymptotically synchronizable by p-groups if, for any given initial data (U_0, U_1) ∈ V × H, the corresponding solution U satisfies
$$u^{(k)} - u^{(l)} \to 0 \quad \text{as } t \to +\infty \tag{4.7}$$
for all n_{r−1} + 1 ≤ k, l ≤ n_r and 1 ≤ r ≤ p, or equivalently
$$C_pU \to 0 \quad \text{as } t \to +\infty. \tag{4.8}$$

Let us recall some known results. By [29, Theorem 4.8], A satisfies the condition of C_p-compatibility:
$$A\,\mathrm{Ker}(C_p) \subseteq \mathrm{Ker}(C_p), \tag{4.11}$$
and D satisfies the condition of strong C_p-compatibility:
$$\mathrm{Ker}(C_p) \subseteq \mathrm{Ker}(D_s), \quad 1 \le s \le M. \tag{4.12}$$
In this situation, by [29, Proposition 4.2], there exist a symmetric and positive semi-definite matrix Ā of order (N − p) and symmetric and positive semi-definite matrices D̄_s (1 ≤ s ≤ M) of order (N − p), such that
$$(C_pC_p^T)^{-\frac12}C_pA = \bar A\,(C_pC_p^T)^{-\frac12}C_p, \qquad (C_pC_p^T)^{-\frac12}C_pD_s = \bar D_s\,(C_pC_p^T)^{-\frac12}C_p. \tag{4.14}$$
Applying (C_pC_p^T)^{−1/2}C_p to system (1.7) and setting W = (C_pC_p^T)^{−1/2}C_pU, we get the following reduced system:
$$W'' + LW + \bar AW + \sum_{s=1}^{M} \bar D_sG_sW' = 0. \tag{4.15}$$
Obviously, the asymptotic synchronization by p-groups of system (1.7) is equivalent to the asymptotic stability of the reduced system (4.15).
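The matrix C_p can be assembled directly from the difference blocks S_r; a minimal sketch (the function name and the (1, −1) row pattern of S_r are assumptions in the spirit of the construction above):

```python
import numpy as np

def synchronization_matrix(group_sizes):
    """Block-diagonal C_p built from difference matrices S_r with rows (1, -1)."""
    p, N = len(group_sizes), sum(group_sizes)
    Cp = np.zeros((N - p, N))
    r = c = 0
    for m in group_sizes:                # m = n_r - n_{r-1} >= 2
        for i in range(m - 1):           # S_r has the (1, -1) bidiagonal pattern
            Cp[r + i, c + i], Cp[r + i, c + i + 1] = 1.0, -1.0
        r += m - 1
        c += m
    return Cp

Cp = synchronization_matrix([2, 3])      # N = 5, p = 2
# Ker(C_p) is spanned by the group indicator vectors e_1, e_2
e1 = np.array([1, 1, 0, 0, 0.0])
e2 = np.array([0, 0, 1, 1, 1.0])
assert np.allclose(Cp @ e1, 0) and np.allclose(Cp @ e2, 0)
print(Cp.shape, np.linalg.matrix_rank(Cp))   # (3, 5) 3
```

Since C_p has full row rank N − p and its kernel is exactly Span{e_1, ..., e_p}, the convergence C_pU → 0 expresses that U approaches the synchronization subspace.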
Since the reduced matrices Ā and D̄_s (1 ≤ s ≤ M) are still symmetric and positive semi-definite, the asymptotic stability of the reduced system (4.15) can be treated by Theorem 3.2. More precisely, let
$$\bar D = (\bar D_1, \bar D_2, \cdots, \bar D_M).$$
Since Ā is of order (N − p), by Proposition 2.2, the following rank condition on the reduced matrices Ā and D̄:
$$\mathrm{rank}(\bar D, \bar A\bar D, \cdots, \bar A^{N-p-1}\bar D) = N - p \tag{4.10}$$
is required for the asymptotic stability of the reduced system (4.15). We first present a basic relation between the rank of the original matrices and that of the reduced ones, namely the rank relation
$$\mathrm{rank}(\bar D, \bar A\bar D, \cdots, \bar A^{N-p-1}\bar D) = \mathrm{rank}(D, AD, \cdots, A^{N-1}D). \tag{4.18}$$

Proof First, it follows from (4.14) that, for every k ≥ 0,
$$\bar A^k\bar D_s = (C_pC_p^T)^{-\frac12}\,C_pA^kD_sC_p^T\,(C_pC_p^T)^{-\frac12},$$
so that the reduced Kalman matrix is obtained from C_p(D, AD, ..., A^{N−1}D) by multiplying on the left by the invertible matrix (C_pC_p^T)^{−1/2} and on the right by a block-diagonal matrix whose blocks are copies of C_p^T(C_pC_p^T)^{−1/2}. On the other hand, since A is symmetric, the condition of C_p-compatibility (4.11) gives A Im(C_p^T) ⊆ Im(C_p^T), while the condition of strong C_p-compatibility (4.12) implies that Im(D) ⊆ Im(C_p^T). Then, we successively get
$$\mathrm{Im}(A^kD) \subseteq \mathrm{Im}(C_p^T), \quad k \ge 0.$$
Since C_p is injective on Im(C_p^T), it follows that
$$\mathrm{rank}\,C_p(D, AD, \cdots, A^{N-1}D) = \mathrm{rank}(D, AD, \cdots, A^{N-1}D). \tag{4.36}$$
Next, by the condition of strong C_p-compatibility (4.12), we have Ker(C_p) ⊆ Ker(D_s) for 1 ≤ s ≤ M, so that D_sC_p^T and D_s have the same image, and the right multiplication by the blocks of C_p^T(C_pC_p^T)^{−1/2} does not change the rank either. Finally, combining these relations, we get the desired rank relation (4.18). The proof is achieved.
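The compatibility condition (4.11) is exactly what makes the reduction well defined. A numerical sanity check of the intertwining in (4.14) follows (the concrete C_p, the helper `inv_sqrt` and the sample matrix A are illustrative assumptions):

```python
import numpy as np

def inv_sqrt(M):
    """Inverse square root of a symmetric positive definite matrix."""
    w, Q = np.linalg.eigh(M)
    return Q @ np.diag(w ** -0.5) @ Q.T

Cp = np.array([[1.0, -1.0, 0.0,  0.0],   # p = 2 groups of size 2, N = 4
               [0.0,  0.0, 1.0, -1.0]])
rng = np.random.default_rng(1)
B = rng.standard_normal((2, 2))
B = 0.1 * (B + B.T)                      # small symmetric perturbation
A = 3.0 * np.eye(4) + Cp.T @ B @ Cp      # symmetric, A Ker(C_p) ⊆ Ker(C_p)

T = inv_sqrt(Cp @ Cp.T) @ Cp             # the reduction map (C_p C_p^T)^{-1/2} C_p
A_bar = T @ A @ T.T                      # reduced matrix in the sense of (4.14)
assert np.allclose(T @ A, A_bar @ T)     # intertwining: T A = A_bar T
assert np.allclose(A_bar, A_bar.T)       # A_bar remains symmetric
print(A_bar.shape)                       # (2, 2)
```

The intertwining relation T A = ĀT is what turns system (1.7) into the reduced system (4.15) after applying T; it holds precisely because A leaves Ker(C_p) invariant.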
As a direct application of Theorem 3.2, we have the following result.

Theorem 4.1 Let A satisfy the condition of C_p-compatibility (4.11) and let D satisfy the condition of strong C_p-compatibility (4.12). Assume that
(a) the ε-closing condition (3.12) holds;
(b) the pair (A, D) satisfies the rank condition (4.10);
(c) the operator L is g_s-observable for 1 ≤ s ≤ M.
Then, system (1.7) is asymptotically synchronizable by p-groups. Moreover, for any given initial data (U_0, U_1) ∈ V × H, there exist linearly independent functions u_1, ..., u_p such that
$$u^{(k)} - u_r \to 0 \quad \text{as } t \to +\infty, \quad n_{r-1} + 1 \le k \le n_r,\ 1 \le r \le p. \tag{4.37}$$
Proof First, by [28, Proposition 2.21], the spectrum of Ā is a part of that of A, so the ε-closing condition (3.12) still holds for the reduced matrix Ā. On the other hand, combining the rank condition (4.10) with the rank relation (4.18), we see that the reduced pair (Ā, D̄) satisfies the required Kalman rank condition. We can thus apply Theorem 3.2 to the reduced system (4.15) to obtain its asymptotic stability. Moreover, for any given initial data (U_0, U_1) ∈ V × H, let U be the corresponding solution to system (1.7), and set
$$u_r = \frac{(U, e_r)}{(e_r, e_r)}, \quad r = 1, \cdots, p.$$
Then, projecting U onto Ker(C_p) and onto Im(C_p^T), respectively, and noting that by (4.8) the projection onto Im(C_p^T) tends to zero as t → +∞, we see by (4.5) that this convergence exactly means (4.37). Now we describe more precisely the dynamics of the functions u_1, ..., u_p. Since A is symmetric, noting (4.11), there exist real numbers α_rl with α_rl = α_lr, such that
$$Ae_r = \sum_{l=1}^{p} \alpha_{rl}\,\frac{e_l}{(e_l, e_l)}, \quad r = 1, \cdots, p.$$
The proof is complete.

Remark 4.1
The convergence (4.7) is called the asymptotic synchronization by p-groups in the consensus sense, while the convergence (4.37) is in the pinning sense, and (u_1, ..., u_p)^T is called the asymptotically synchronizable state by p-groups. Theorem 4.1 indicates that the two notions are simply the same. Moreover, since the functions u_1, ..., u_p are linearly independent, there does not exist an extended matrix C_q with q < p such that C_qU → 0 as t → +∞. Therefore, unlike the case of the approximate boundary synchronization by p-groups (see Chapter 11 in [28]), there is no possibility to get any induced synchronization in the present situation.

Applications
In this section, we denote by Ω ⊂ R n a bounded domain with smooth boundary Γ and by ω ⊂ Ω a neighbourhood of the boundary Γ.
Let a and b be given smooth and positive functions in Ω. The coupling matrix A, as well as the damping matrices D_1, D_2, ..., appearing in the diverse models below, are assumed to be symmetric and positive semi-definite.

Wave equations with mixed dampings
Consider the following system (5.2) of wave equations with boundary viscous damping and locally distributed viscous and Kelvin-Voigt dampings (see [36]), where ∂_ν denotes the outward normal derivative on the boundary. Multiplying system (5.2) by a test function Φ ∈ (H^1(Ω))^N and integrating by parts, we get the variational formulation (5.4), where (·, ·) denotes the inner product in R^N or in M_N(R). Let L be defined accordingly, and let g_1, g_2 and g_3 be defined by the boundary damping and the two locally distributed dampings, respectively. Setting L, G_1, G_2 and G_3 as in (1.6), the variational equation (5.4) can be rewritten in the form (5.7). Obviously, the operators L, g_1, g_2 and g_3 satisfy conditions (1.1)-(1.4). Then, system (5.7) generates a semi-group of contractions with compact resolvent in the space (H^1(Ω) × L^2(Ω))^N.
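The scalar observability behind the uniform estimate (5.8) has a transparent discrete analogue: no eigenvector of the discretized Dirichlet Laplacian can vanish on a boundary layer ω, so the over-determined problem Lφ = β²φ with φ = 0 on ω admits only the trivial solution. A minimal sketch in one space dimension (the grid, the choice of ω and all names below are our own illustrative assumptions):

```python
import numpy as np

n = 50
h = 1.0 / (n + 1)
x = np.arange(1, n + 1) * h
L1 = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2  # Dirichlet Laplacian
omega = x < 0.1                          # a one-sided boundary layer as omega
w, V = np.linalg.eigh(L1)
masses = (V[omega, :] ** 2).sum(axis=0)  # mass of each eigenvector on omega
print(masses.min() > 0)                  # True: no eigenvector vanishes on omega
```

This is the discrete counterpart of unique continuation: a tridiagonal eigenvector vanishing at two consecutive nodes vanishes identically, so every eigenvector keeps a strictly positive mass on ω.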

Remark 5.1
In fact, the uniform estimate (5.8) is based on the uniform stability of equation (5.2) in the scalar case (i.e. for N = 1), which has been extensively studied by various approaches in the literature. We only quote [13, 16, 17] and the references therein for boundary feedback. The uniform decay was first established by multipliers in [10] when ω is a neighbourhood of the boundary. The explicit decay rate was given in [41] under a suitable geometric condition. Later, the result was generalized in [44] to the semi-linear case. When Ω is a compact Riemannian manifold without boundary and ω satisfies the geometric optics condition, the uniform stability was established by a micro-local approach in [39]. Moreover, the volume of the damping region ω can be taken sufficiently small, as in [5], etc.

Kirchhoff plate equations with locally distributed Kelvin-Voigt dampings
Consider the following system (5.20) of Kirchhoff plate equations with locally distributed viscous and Kelvin-Voigt dampings (see [8, 14, 15] for a more precise description). Multiplying system (5.20) by a test function Φ ∈ (H_0^2(Ω))^N and integrating by parts, we get the corresponding variational formulation, where (·, ·) denotes the inner product in R^N or in M_N(R).

Euler-Bernoulli beam equations with mixed dampings
In the two previous subsections, we have considered the case of mixed dampings for wave equations and that of locally distributed dampings for plate equations. However, when ω is not a neighbourhood of Γ, the situation becomes technically complicated. As a beginning, we will consider a system of beam equations; many further cases remain to be pursued. In particular, the discussion below can also be carried out in many other situations, such as the Timoshenko beam [2, 12], the Bresse beam [35], etc.
Let a, b be smooth and positive functions in (0, 1) satisfying the support condition (5.37). Consider the following system (5.38) of Euler-Bernoulli beam equations with locally distributed and boundary dampings. Multiplying system (5.38) by Φ ∈ V^N and integrating by parts, we get the variational formulation (5.40), where (·, ·) denotes the inner product in R^N. Let L be defined accordingly, and let g_1, g_2, g_3 and g_4 be defined by the corresponding damping terms. Setting L and the operators G_s as in (1.6), the variational equation (5.40) can be rewritten in an abstract form, and the operators L and g_1, g_2, g_3, g_4 satisfy conditions (1.1)-(1.4).

Proof By Theorem 4.1, it is sufficient to show that there exists a positive constant c, independent of β ∈ R and f ∈ L^2(0, 1), such that the uniform observability inequality (5.45) holds. Using the boundary conditions in (5.46), the boundary terms can be absorbed. Here and hereafter, c denotes a positive constant. It follows that the desired estimate holds, which is even a stronger version of (5.45).
Remark 5.2 Roughly speaking, we can stabilize the beam system (5.38) by using several dampings of different types; this is the great advantage of the method. Moreover, there are many variants: for example, the supports of the damping coefficients a and b in (5.37) can be different. In particular, the support of b can be a neighbourhood of the ends of the interval [0, 1]. We can also add the Kelvin-Voigt damping D_5(cU'_{xx})_{xx}.

Remark 5.3
For all the models considered here, the observability inequality is obtained by the multiplier method under the multiplier geometrical condition. It is stronger than the required observability inequality; for example, (5.18) is much stronger than (5.8), etc. We expect that this extra regularity could serve to establish a polynomial decay rate for smooth initial data:
$$\|C_p(U(t), U'(t))\|_{(H^1(\Omega)\times L^2(\Omega))^{N-p}} = O((1 + t)^{-\delta}), \tag{5.71}$$
where the constant δ > 0 is independent of the initial data. We refer to [9, 38] for recent progress on the polynomial stability of indirectly damped wave equations.