Incorporating knowledge on the measurement noise in electrical impedance tomography

The present article is concerned with the identification of an obstacle or void of different conductivity, included in a two-dimensional domain, from measurements of voltages and currents at the boundary. In general, the voltage distribution is prescribed and hence deterministic, whereas the current distribution is measured and contains measurement errors. We assume that some information on these measurement errors is given, which can be described by means of a random field. We exploit this extra knowledge by minimizing a linear combination of the expectation and the variance of the Kohn-Vogelius functional. It is shown how these ideas can be realized in numerical computations. Numerical results demonstrate the applicability and feasibility of our approach.
MARC DAMBRINE, HELMUT HARBRECHT, AND BENEDICTE PUIG

HH has been supported by the Swiss National Science Foundation through the project "H-matrix based first and second moment analysis".


Introduction
Electrical impedance tomography is used in medical imaging to reconstruct the electric conductivity of a part of the body from measurements of currents and voltages at the surface [26]. The same technique is also used in geophysical explorations. An important special case consists in reconstructing the shape of an unknown inclusion or void assuming (piecewise) constant conductivities. In this case, only one pair of current/voltage measurements is necessary, in principle.
The problem under consideration is a special case of the general conductivity reconstruction problem and is severely ill-posed. It has been intensively investigated as an inverse problem. We refer for example to [1,4,9,21] for numerical algorithms and to [5,17] for particular results concerning uniqueness. Moreover, we refer to [7,8] for methods using the full Dirichlet-to-Neumann map at the outer boundary.
In [29], the problem under consideration has been reformulated as a shape optimization problem for the Kohn-Vogelius functional (see [24]). Then, seeking the unknown inclusion is equivalent to seeking the minimizer of an energy functional. Much attention has been devoted to the analysis of this approach ([2,3,13]) and to its comparison with least-squares tracking-type functionals. The approach is also sufficiently versatile to be used in the context of fluid mechanics [6].
Our objective in this article is to take advantage of properties of the noise to construct a deterministic formulation which incorporates this knowledge. We assume that the measured flux is given as a random field that models the measurement errors. We then aim at minimizing a combination of the expectation and the variance of the Kohn-Vogelius functional. As we will show, both quantities can easily be computed via deterministic quantities. The associated shape gradients can likewise be deterministically computed.
The rest of this article is organized as follows. In Section 1, we present the physical model and reformulate the identification problem as a shape optimization problem. We introduce the random model and compute the expectation and the variance of the shape functional and their shape gradients. As we will see, both are given by deterministic expressions under some structure assumptions on the random fields under consideration. Then, Section 2 is concerned with the discretization of the shape optimization problem. We assume that the sought inclusion is a starshaped domain, which enables us to approximate it by a finite Fourier series. The state equations are reformulated as boundary integral equations which are discretized by means of a fast wavelet boundary element method of linear complexity. In Section 3, we present numerical results which validate the feasibility of the present approach. Finally, in Section 4, we state concluding remarks.

Physical model
Let T ⊂ R^d, d = 2, 3, be a simply connected domain with boundary Σ = ∂T and assume that an unknown simply connected inclusion S with regular boundary Γ = ∂S is located inside the domain T, satisfying dist(Σ, Γ) > 0, cf. Figure 1. In order to determine the inclusion S, we measure the current distribution g ∈ H^{-1/2}(Σ) at the boundary Σ for a given voltage distribution f ∈ H^{1/2}(Σ). Hence, we are seeking a domain D := T \ S and an associated harmonic function u, satisfying the system of equations

(1.1)    ∆u = 0 in D,    u = 0 on Γ,    u = f and ∂u/∂n = g on Σ.

This system is an overdetermined boundary value problem which admits a solution only for the true inclusion S. In accordance with e.g. [25], the inclusion's boundary Γ is uniquely determined from f ≠ 0 and g. Nonetheless, it is well known that the problem of finding the inclusion's boundary Γ is severely ill-posed. In particular, the measured data g contain noise.

Formulation as shape optimization problem
Following [29], we introduce the auxiliary harmonic functions v and w, satisfying

∆v = 0 in D,    v = 0 on Γ,    ∂v/∂n = g on Σ,
∆w = 0 in D,    w = 0 on Γ,    w = f on Σ,

and consider the following shape optimization problem

(1.2)    J(D) = ∫_D ‖∇(v − w)‖² dx → inf.

Herein, the infimum has to be taken over all domains which have a void with sufficiently regular boundary. We refer to [29] for the existence of optimal solutions with respect to this shape optimization problem. The shape gradient of (1.2) has also been computed in [29]. For sufficiently smooth variation fields V : R^d → R^d, it holds

(1.3)    δJ(D)[V] = ∫_Γ ⟨V, n⟩ { (∂v/∂n)² − (∂w/∂n)² } dσ,

see also [29]. Given an inclusion S such that the overdetermined boundary value problem (1.1) has a solution, the necessary first order optimality condition δJ(D)[V] = 0 is satisfied for all admissible variations V. Notice that the shape Hessian for (1.2) has been computed and analyzed in [13].

Random model
We shall now assume that we have some knowledge of the errors which are caused by the measurement of g. Then, we can model g as a random field. To that end, let (Ω, S, P) be a complete probability space and assume that g : Σ × Ω → R is a random field which belongs to the Bochner space L²_P(Ω, H^{-1/2}(Σ)). Let us recall the definition of Bochner spaces for the reader's convenience. Consider a real number p ≥ 1. Then, for a Banach space X, the Bochner space L^p_P(Ω, X) consists of all functions v : Ω → X whose norm

‖v‖_{L^p_P(Ω,X)} := ( ∫_Ω ‖v(ω)‖_X^p dP(ω) )^{1/p}

is finite. If p = 2 and X is a Hilbert space, then the Bochner space is isomorphic to the tensor product space L²_P(Ω) ⊗ X.
Since the data g ∈ L²_P(Ω, H^{-1/2}(Σ)) are random, the state v will also be a random field. It satisfies v ∈ L²_P(Ω, H¹(D)) by the linearity of the underlying partial differential equation. As a consequence, the shape functional J also becomes a random process.
Two strategies are a priori available to deal with such a random shape functional. The first one consists in minimizing each realization of the objective and then taking an average of the minimizers. This strategy has been presented in the context of Bernoulli's free boundary problem in [11]. Nonetheless, this approach is unrealistic here due to its high computational cost.
We therefore propose in this article to address the second approach. Namely, we minimize an averaged shape functional as considered in [10]. In particular, we will minimize a combination of the expectation and the variance of the shape functional (1.2). In other words, we seek the domain D with inclusion S in argmin F, where the objective is

(1.4)    F(D) = (1 − α) E[J(D, ω)] + α V[J(D, ω)]

or even

(1.5)    F(D) = (1 − α) E[J(D, ω)] + α √(V[J(D, ω)]),

where the random shape functional reads as

(1.6)    J(D, ω) = ∫_D ‖∇(v(ω) − w)‖² dx

and the states read as

(1.7)    ∆v(ω) = 0 in D,  v(ω) = 0 on Γ,  ∂v(ω)/∂n = g(ω) on Σ,
         ∆w = 0 in D,  w = 0 on Γ,  w = f on Σ.

Let us explain the meaning of the averaged objective defined in (1.4) and (1.5), respectively. When the weight α is equal to 0, we consider only the average value of the Kohn-Vogelius functional. Its minimization ensures that the functional is small on average, but does not prohibit a flat distribution and, hence, frequent realizations with high values of the objective. In order to obtain a shape around which the distribution of the objective is more narrow, we can penalize the variance of the Kohn-Vogelius functional by increasing the weight α. Notice that the standard deviation scales like the expectation, which makes (1.5) better suited than (1.4) to achieve this goal. The range of admissible α is [0, 1), since the expectation is neglected when α = 1. Then, only the reduction of the variance matters and not the average value at all, losing all interest in the shape identification.
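The trade-off steered by α can be illustrated with a small Monte Carlo sketch. All names and numbers below are hypothetical, and we assume the convex combination (1 − α)·mean + α·standard deviation discussed above for the objective (1.5):

```python
import numpy as np

def averaged_objective(samples, alpha):
    """Combine sample mean and sample standard deviation of a random
    objective as (1 - alpha) * E + alpha * std (illustrative form).
    alpha must lie in [0, 1): alpha = 1 would drop the mean entirely."""
    if not 0.0 <= alpha < 1.0:
        raise ValueError("alpha must lie in [0, 1)")
    samples = np.asarray(samples, dtype=float)
    return (1.0 - alpha) * samples.mean() + alpha * samples.std()

rng = np.random.default_rng(0)
# Two hypothetical shapes: shape A has a smaller mean but a larger spread,
# shape B has a slightly larger mean but a much narrower distribution.
j_a = rng.normal(loc=1.0, scale=0.8, size=100_000)
j_b = rng.normal(loc=1.1, scale=0.1, size=100_000)

for alpha in (0.0, 0.5):
    fa = averaged_objective(j_a, alpha)
    fb = averaged_objective(j_b, alpha)
    print(f"alpha={alpha}: F(A)={fa:.3f}, F(B)={fb:.3f}")
```

For α = 0 the objective prefers the shape with the smaller mean, while already α = 1/2 favors the shape with the narrower distribution, mirroring the discussion above.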
Concerning the existence of an optimal shape for the minimization of the objective defined in (1.4) or (1.5), the situation is the same as for the original problem of minimizing J. In the class of open subsets of T, we do not have an existence result. Existence holds once the class of admissible domains is restricted to some class of domains for which one obtains compactness and the continuity of the solution of the Dirichlet problem with respect to the shape (compare [22]): for example, the class of domains with a uniform exterior cone property.
For what follows, we shall make the assumption that the Neumann data g are given by the expansion

(1.8)    g(x, ω) = g_0(x) + Σ_{i=1}^{M} g_i(x) Y_i(ω),

where the Y_i(ω) are independent and identically distributed random variables, Y_i ∼ Y, being centered, E[Y] = 0, normalized, V[Y] = 1, and having finite fourth order moments. Thus, there especially hold the identities E[Y_i] = 0 and E[Y_i Y_j] = δ_{ij} for all i, j = 1, . . . , M. Note that we have in mind several independent measurements of the current distribution for the same voltage distribution, from which we can derive the sample mean and the sample covariance. Then, assuming that g(x, ω) is a Gaussian random field, the expansion (1.8) can be derived from the mean and the covariance by means of the Karhunen-Loève expansion, see [18,27] for example. Given the expansion (1.8), the linearity of the state equation yields the corresponding expansion

(1.9)    v(x, ω) = v_0(x) + Σ_{i=1}^{M} v_i(x) Y_i(ω),

where v_i solves the boundary value problem

(1.10)    ∆v_i = 0 in D,    v_i = 0 on Γ,    ∂v_i/∂n = g_i on Σ.

In particular, if g is a Gaussian random field, then also v is a Gaussian random field.
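In a discretized setting, the expansion (1.9) is a direct consequence of the linearity of the solution operator. A minimal sketch with a generic symmetric positive definite matrix standing in for the discretized state equation (all quantities here are hypothetical stand-ins, not the actual BEM discretization):

```python
import numpy as np

rng = np.random.default_rng(1)

# Discrete analogue of the state equation: a fixed SPD matrix A (standing in
# for the Laplace solve on D) and noisy data g = g0 + sum_i g_i * Y_i.
n, m = 8, 3
B = rng.standard_normal((n, n))
A = B @ B.T + n * np.eye(n)          # SPD "state operator"
g0 = rng.standard_normal(n)
gi = rng.standard_normal((m, n))     # rows play the role of the modes g_i

v0 = np.linalg.solve(A, g0)
vi = np.array([np.linalg.solve(A, g) for g in gi])

# For any realization Y, linearity gives v(Y) = v0 + sum_i Y_i * v_i.
Y = rng.standard_normal(m)
v_direct = np.linalg.solve(A, g0 + Y @ gi)
v_expanded = v0 + Y @ vi
assert np.allclose(v_direct, v_expanded)
```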

Expectation and variance of the random shape functional
We shall show that the expectation and the variance of the random shape functional (1.6) can be computed from deterministic quantities.

Theorem 1.1. The expectation of the random shape functional (1.6) is given by

(1.11)    E[J(D, ω)] = ∫_D ‖∇(v_0 − w)‖² dx + Σ_{i=1}^{M} ∫_Σ v_i g_i dσ.

Proof. Following the ideas from [10] and using Fubini's theorem for nonnegative functions, the expectation of the random shape functional can be rewritten as

E[J(D, ω)] = ∫_Ω ∫_D ‖∇(v(ω) − w)‖² dx dP(ω).

In view of the expansion (1.9), we thus conclude

E[J(D, ω)] = ∫_Ω ∫_D ‖∇(v_0 − w) + Σ_{i=1}^{M} Y_i(ω) ∇v_i‖² dx dP(ω).

Making now use of the fact that the Y_i ∼ Y are independent and identically distributed random variables and exploiting that Y is centered and normalized, we arrive at

E[J(D, ω)] = ∫_D ‖∇(v_0 − w)‖² dx + Σ_{i=1}^{M} ∫_D ‖∇v_i‖² dx.

Integration by parts and observing ∂v_i/∂n = g_i in accordance with (1.10) thus yields (1.11).
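The deterministic formula for the expectation can be checked numerically in a finite dimensional analogue, where vectors play the role of the gradients ∇(v_0 − w) and ∇v_i and the squared Euclidean norm plays the role of the Dirichlet energy. A sketch under these stand-in assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 6, 4
v0w = rng.standard_normal(n)        # stand-in for grad(v0 - w)
vi = rng.standard_normal((m, n))    # stand-ins for the modes grad(v_i)

# Deterministic expression: ||v0 - w||^2 + sum_i ||v_i||^2,
# the discrete analogue of the right-hand side of (1.11).
expectation_det = np.sum(v0w ** 2) + np.sum(vi ** 2)

# Monte Carlo estimate of E[J] with iid centered, normalized coefficients Y_i.
Y = rng.standard_normal((200_000, m))
J_samples = np.sum((v0w + Y @ vi) ** 2, axis=1)
expectation_mc = J_samples.mean()
print(expectation_det, expectation_mc)
```

The two numbers agree up to the Monte Carlo sampling error, confirming that the cross terms drop out for centered, normalized coefficients.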
In complete analogy, one can derive a deterministic expression (1.12) for the variance of the random shape functional.

Proof. Due to the knowledge of (1.11), the variance can be computed from the uncentered second moment of the shape functional in accordance with

V[J(D, ω)] = E[J(D, ω)²] − E[J(D, ω)]².

The starting point to derive the uncentered second moment is

E[J(D, ω)²] = ∫_Ω ( ∫_D ‖∇(v(ω) − w)‖² dx )² dP(ω).

By inserting again the expansion (1.9) of the random field v, a straightforward calculation yields a deterministic expression, which can further be simplified by using the independence of the random variables Y_i. Finally, integration by parts yields the desired expression (1.12).
We have a further simplification of (1.12) in the most important situation of Gaussian random fields.

Corollary 1.3. If g is a Gaussian random field, then the variance V[J(D, ω)] is given by the simplified expression (1.13).

Proof. In the case of a Gaussian random field, the random variables obey the normal law, i.e., Y ∼ N(0, 1). Using that it thus holds E[Y⁴] = 3 and E[Y³] = 0, we derive the assertion immediately from (1.12).
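For Gaussian coefficients, the variance can likewise be checked in the finite dimensional analogue. The closed form below is our own discrete computation (not formula (1.13) from the text): with J(Y) = ‖c + Σ_i Y_i u_i‖², B_i = 2⟨c, u_i⟩ and C_ij = ⟨u_i, u_j⟩, the moments E[Y³] = 0 and E[Y⁴] = 3 give Var[J] = Σ_i B_i² + 2 Σ_ij C_ij²:

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 5, 3
c = rng.standard_normal(n)          # stand-in for grad(v0 - w)
U = rng.standard_normal((m, n))     # stand-ins for the modes grad(v_i)

# Closed form of the variance of the quadratic form J(Y) = ||c + Y @ U||^2
# for iid standard Gaussian Y, using E[Y^3] = 0 and E[Y^4] = 3.
B = 2.0 * U @ c
C = U @ U.T
var_det = np.sum(B ** 2) + 2.0 * np.sum(C ** 2)

# Monte Carlo check.
Y = rng.standard_normal((500_000, m))
J_samples = np.sum((c + Y @ U) ** 2, axis=1)
var_mc = J_samples.var()
print(var_det, var_mc)
```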
This is the expression we will exploit in our numerical examples. In particular, for the sake of convenience, we will provide the shape gradient only for the specific formula (1.13) in the next section and not for the general case (1.12).

Shape gradient of the expectation and of the variance
We shall compute next the shape gradient of the expectation and of the variance. For a survey on the shape calculus, we refer the reader to [12,28,30] and the references therein.
Obviously, due to linearity, the shape gradient δE[J(D, ω)][V] of the shape functional E[J(D, ω)] into the direction of the perturbation field V is just given by E[δJ(D, ω)[V]]. Hence, it is computed according to

δE[J(D, ω)][V] = E[ ∫_Γ ⟨V, n⟩ { (∂v(ω)/∂n)² − (∂w/∂n)² } dσ ].

We insert the expansion (1.9) and exploit again that the random variables Y_i ∼ Y are independent, identically distributed, centered and normalized to arrive at the final expression:

δE[J(D, ω)][V] = ∫_Γ ⟨V, n⟩ { (∂v_0/∂n)² + Σ_{i=1}^{M} (∂v_i/∂n)² − (∂w/∂n)² } dσ.

Of course, we could alternatively have computed the shape derivative of the deterministic shape functional (1.11), yielding the same result. In the case of the variance of the random shape functional (1.6), the situation becomes somewhat more difficult. The shape gradient can be derived by directly using the derivative of the shape functional's second uncentered moment. However, since we are mainly interested in Gaussian random fields g(x, ω), i.e., Y_i ∼ N(0, 1) in (1.8) and (1.9), respectively, we will provide only the shape derivative of (1.13).
Proof. Since only the functions v_i depend on the domain D, the shape derivative of (1.13) is given by formula (1.14), where the local shape derivatives δv_i = δv_i[V] satisfy the boundary value problems

(1.15)    ∆δv_i = 0 in D,    δv_i = −⟨V, n⟩ ∂v_i/∂n on Γ,    ∂δv_i/∂n = 0 on Σ.

Using that ∂v_i/∂n = g_i, we obtain by integration by parts

∫_Σ δv_i g_j dσ = −∫_Γ δv_i (∂v_j/∂n) dσ    for all i, j = 0, 1, . . . , M.

This, in view of (1.15), means that

∫_Σ δv_i g_j dσ = ∫_Γ ⟨V, n⟩ (∂v_i/∂n)(∂v_j/∂n) dσ    for all i, j = 0, 1, . . . , M.

By inserting the latter identities into (1.14), we arrive at the desired assertion.
We mention that the shape derivative of the standard deviation is given by the chain rule in accordance with

δ√(V[J(D, ω)])[V] = δV[J(D, ω)][V] / ( 2 √(V[J(D, ω)]) ).

Numerical realization
Since the numerical realization is based on the adaptation of the classical Kohn-Vogelius approach for electrical impedance tomography, we briefly recall the ingredients and refer to [13,14] for more details.

Boundary integral equations
We will compute the unknown boundary data of the state functions v and w by boundary integral equations. We introduce the single layer operator and the double layer operator with respect to the boundaries Φ, Ψ ∈ {Γ, Σ} by

(V_{ΦΨ} ρ)(x) = −(1/2π) ∫_Ψ log‖x − y‖ ρ(y) dσ_y,    x ∈ Φ,
(K_{ΦΨ} ρ)(x) = (1/2π) ∫_Ψ ( ⟨x − y, n_y⟩ / ‖x − y‖² ) ρ(y) dσ_y,    x ∈ Φ.

For the sake of simplicity, we suppose that diam D < 1 to ensure that V_{ΦΦ} is invertible, cf. [23]. Then, the normal derivative of w is given by the Dirichlet-to-Neumann map

(2.1)    V_{ΣΣ} (∂w/∂n)|_Σ + V_{ΣΓ} (∂w/∂n)|_Γ = (1/2 + K_{ΣΣ}) f,
         V_{ΓΣ} (∂w/∂n)|_Σ + V_{ΓΓ} (∂w/∂n)|_Γ = K_{ΓΣ} f,

cf. (1.7). Likewise, the unknown boundary data of v_i are determined by

(2.2)    (1/2 + K_{ΣΣ}) v_i|_Σ − V_{ΣΓ} (∂v_i/∂n)|_Γ = V_{ΣΣ} g_i,
         K_{ΓΣ} v_i|_Σ − V_{ΓΓ} (∂v_i/∂n)|_Γ = V_{ΓΣ} g_i.
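The single layer representation can be probed on a toy geometry. The sketch below is our own check, not the paper's Galerkin/wavelet discretization: on a circle of radius R, the density ρ(t) = 2 cos t generates, through the single layer potential with kernel −log‖x − y‖/(2π), exactly the harmonic function u(x) = x_1 inside; the interior evaluation uses the trapezoidal rule, which is spectrally accurate for smooth periodic integrands:

```python
import numpy as np

R, n = 0.4, 256                      # circle radius (so diam D < 1) and quadrature points
t = 2.0 * np.pi * np.arange(n) / n
boundary = R * np.stack([np.cos(t), np.sin(t)], axis=1)

rho = 2.0 * np.cos(t)                # density whose single layer potential is u(x) = x_1

# u(x) = -1/(2 pi) * \int_Gamma log|x - y| rho(y) ds_y, with ds_y = R dt;
# the factor (2 pi / n) / (2 pi) collapses to 1 / n for the trapezoidal rule.
x = np.array([0.2, 0.0])
dist = np.linalg.norm(x - boundary, axis=1)
u = -np.sum(np.log(dist) * rho) * R / n
print(u)   # close to x_1 = 0.2
```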

Boundary element method
The shape functional and its gradient can be computed from the knowledge of the boundary data of the state equations (1.7) and (1.10). These data are given by the boundary integral equations (2.1) and (2.2). Hence, it is rather convenient to employ a boundary element method to compute the required boundary data of the state equations. We use a Galerkin discretization of the boundary integral operators, together with the associated mass matrices and the load vectors of the Dirichlet data f on Σ and the Neumann data g_i on Σ. Then, the resulting linear system of equations (2.3) gives us the Neumann data (∂w/∂n)|_Σ and (∂w/∂n)|_Γ of the function w, while the linear system (2.4) gives us the unknown boundary data of the functions v_i. We mention that the appearing system matrices have to be computed only once for each domain, while the system (2.4) has to be solved M + 1 times to get the v_i's from the g_i's. We will use the wavelet Galerkin scheme, which yields quasi-sparse system matrices and, hence, a linear overall complexity with respect to the number N_Γ + N_Σ of degrees of freedom. We refer to [20] for all the details on the wavelet based fast solution of boundary integral equations.
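The remark about reusing the system matrices can be mirrored in code: factor once per domain, then solve for all M + 1 right-hand sides. This is a generic sketch with a stand-in matrix, not the actual BEM matrices:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

rng = np.random.default_rng(4)
n, m = 50, 5                                         # unknowns and number of modes M
A = rng.standard_normal((n, n)) + 2 * n * np.eye(n)  # stand-in for the dense system matrix
G = rng.standard_normal((m + 1, n))                  # right-hand sides built from g_0, ..., g_M

lu, piv = lu_factor(A)                               # factor once per domain ...
V = np.array([lu_solve((lu, piv), g) for g in G])    # ... then M + 1 cheap triangular solves

assert all(np.allclose(A @ v, g) for v, g in zip(V, G))
```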

Approximation of the free boundary
For the numerical computations, we restrict ourselves to inclusions which are starshaped with respect to the origin 0. Then, the inclusion can be parametrized in accordance with

Γ = { r(φ) e_r(φ) : φ ∈ [0, 2π) },    e_r(φ) = (cos φ, sin φ),

i.e., we can identify the inclusion via a radial function which depends only on the polar angle. Hence, it is reasonable to make the ansatz

(2.5)    r_{N_r}(φ) = a_0 + Σ_{n=1}^{N_r} { a_n cos(nφ) + a_{−n} sin(nφ) }

for the sought inclusion. Since r_{N_r} admits 2N_r + 1 degrees of freedom a_{−N_r}, a_{1−N_r}, . . . , a_{N_r}, we arrive at a finite dimensional optimization problem in the open set

A_{N_r} := { (a_{−N_r}, a_{1−N_r}, . . . , a_{N_r}) ∈ R^{2N_r+1} : r_{N_r}(φ) > 0 for all φ ∈ [0, 2π] } ⊂ R^{2N_r+1}.
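The ansatz (2.5) is straightforward to evaluate. The sketch below stores the coefficients in a hypothetical order (a_{−N_r}, . . . , a_0, . . . , a_{N_r}) and checks the admissibility constraint r(φ) > 0:

```python
import numpy as np

def radial(phi, a):
    """Evaluate r(phi) = a_0 + sum_{n=1}^{Nr} a_n cos(n phi) + a_{-n} sin(n phi).
    Coefficients are stored as a = (a_{-Nr}, ..., a_{-1}, a_0, a_1, ..., a_{Nr})."""
    nr = (len(a) - 1) // 2
    r = np.full_like(phi, a[nr])                 # a_0 sits in the middle
    for k in range(1, nr + 1):
        r += a[nr + k] * np.cos(k * phi) + a[nr - k] * np.sin(k * phi)
    return r

nr = 16                                          # N_r = 16, i.e. 33 coefficients
a = np.zeros(2 * nr + 1)
a[nr] = 0.2                                      # circle of radius 0.2
a[nr + 2] += 0.05                                # small cos(2 phi) perturbation

phi = np.linspace(0.0, 2.0 * np.pi, 400, endpoint=False)
r = radial(phi, a)
assert np.all(r > 0)                             # coefficients lie in the admissible set
curve = np.stack([r * np.cos(phi), r * np.sin(phi)], axis=1)
```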
Hence, via the identification r_{N_r} ⇔ D_{N_r}, the finite dimensional approximation of the shape minimization problem (1.2) reads as: find

(2.6)    (a_{−N_r}, . . . , a_{N_r}) ∈ argmin { F(D_{N_r}) : (a_{−N_r}, . . . , a_{N_r}) ∈ A_{N_r} }.

The associated gradient has to be computed with respect to all directions under consideration, that is, with respect to the perturbation fields

V_0(φ) = e_r(φ),    V_n(φ) = cos(nφ) e_r(φ),    V_{−n}(φ) = sin(nφ) e_r(φ),    n = 1, . . . , N_r.

Herein, e_r(φ) = (cos φ, sin φ) is the radial direction. We will apply the quasi-Newton method, updated by the inverse BFGS-rule without damping, in combination with a quadratic line-search, in order to solve the minimization problem (2.6). For all the details and a survey on available optimization algorithms, we refer to [15,16,19] and the references therein.

Numerical results
In our numerical example, we consider the domain T to be the ellipse with semi-axes 0.45 and 0.3, having a starshaped inclusion in its interior. This inclusion is to be determined from measurements of the Neumann data for the single voltage distribution f(x) = x_1² − x_2² at the outer boundary Σ. We consider the situation that the noisy measurement g(x, ω) is a Gaussian random field. Then, the Neumann data g(x, ω) are given in accordance with (1.8), having normalized Gaussian random variables Y_i ∼ N(0, 1), and being fully described by the mean g_0(x) = E[g(x, ω)] and the covariance kernel

Cor(x, y) = ∫_Ω ( g(x, ω) − g_0(x) ) ( g(y, ω) − g_0(y) ) dP(ω).
We assume for our test that the covariance kernel is a Gaussian kernel with correlation length ℓ > 0. Hence, by means of the Karhunen-Loève expansion and an appropriate random number generator, we are able to simulate the Gaussian random field numerically, see [18,27] for example. The discretization of the shape optimization problem is as follows. The sought inclusion is approximated by a Fourier expansion with 33 Fourier coefficients, i.e., it holds N_r = 16 in (2.5). Notice that the sought inclusion cannot be exactly represented by this Fourier expansion. Moreover, the solutions of the boundary integral equations (2.1) and (2.2) are approximately computed by using 512 piecewise linear wavelets per boundary, i.e., it holds N_Σ = N_Γ = 512 in Section 2.2. We always use the circle of radius 0.2 as initial guess and stop the quasi-Newton method after 25 steps, since the underlying shape identification problem is severely ill-posed.

Classical approach
The classical approach would be to sample the process g and to minimize the Kohn-Vogelius functional for each realization. In Figure 2, we have plotted ten reconstructions which were each derived from a single measurement, where the correlation length is set to ℓ = 0.1 and the noise level β is chosen such that the perturbation ‖g(ω) − g_0‖_{L²(Σ)} is about 5 percent of ‖g_0‖_{L²(Σ)}. We observe a large variance of the reconstructions in Figure 2. In particular, the reconstructions mostly differ considerably from the exact inclusion, which is indicated in dark gray. Since we are using a parametrization (2.5) based on Fourier coefficients, we can compute the mean of the Fourier coefficients of the reconstructions. Doing so for the ten reconstructions found in Figure 2, we obtain the inclusion seen in Figure 3. One clearly observes an improvement of the reconstruction. Nonetheless, this parametrization based notion of the mean shape is not generally possible. In particular, it is computationally extremely expensive.

Expected Kohn-Vogelius functional
To realize the new approach proposed in this article, we repeat the measurement M times, yielding samples g^{(1)}, g^{(2)}, . . . , g^{(M)} of the unknown random field g(ω). From these measurements, we compute the sample mean

ḡ(x) = (1/M) Σ_{k=1}^{M} g^{(k)}(x)

to approximate the mean g_0 in (1.8). The random variation is approximated by means of the Karhunen-Loève expansion with respect to the sample covariance

Cor_M(x, y) = (1/(M − 1)) Σ_{k=1}^{M} ( g^{(k)}(x) − ḡ(x) ) ( g^{(k)}(y) − ḡ(y) ).

If we run the optimization for the expected Kohn-Vogelius functional, i.e., for the objective (1.5) with α = 0, then we obtain the reconstructions found in Figure 4. Here, we have repeated the reconstruction algorithm four times, based on M = 10 samples each. One observes much better reconstructions than those which are obtained from a single measurement. Nonetheless, there is still a slight deviation between the reconstructions. This stems from the fact that ten samples are not sufficient to reliably estimate the expectation and the covariance. The situation changes if we exploit 100 measurements. In this case, we obtain a slightly improved reconstruction, see Figure 5. Especially, there is no difference anymore when repeating the experiment.
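Estimating the mean and the covariance from M repeated measurements can be sketched as follows (the measurement vectors are synthetic, hypothetical data):

```python
import numpy as np

rng = np.random.default_rng(6)
n, M = 100, 200                                   # boundary points, repeated measurements
g_true = np.sin(np.linspace(0.0, 2.0 * np.pi, n, endpoint=False))
samples = g_true + 0.05 * rng.standard_normal((M, n))   # g^(1), ..., g^(M)

g_bar = samples.mean(axis=0)                      # sample mean, estimates g_0
dev = samples - g_bar
cov = dev.T @ dev / (M - 1)                       # sample covariance kernel

# Karhunen-Loeve modes of the estimated covariance (largest eigenvalues first);
# negative round-off eigenvalues are clipped to zero before the square root.
lam, U = np.linalg.eigh(cov)
modes = U[:, ::-1] * np.sqrt(np.maximum(lam[::-1], 0.0))
```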

Influence of the coupling parameter α
So far, we have only considered the minimization of the expected Kohn-Vogelius functional, i.e., the particular choice α = 0 in the objectives (1.4) and (1.5), respectively. We shall now study the dependence of the reconstruction algorithm on the coupling parameter α. To that end, we choose M = 100 samples in order to ensure that the reconstruction does not depend on the particular samples.
We consider the objective (1.5), since the standard deviation exhibits the same scaling as the expectation. For our test example, both quantities have a similar size for the initial shape. For an increasing coupling parameter α, the standard deviation of the Kohn-Vogelius functional becomes more and more important compared with its expectation. Nonetheless, the reconstruction is basically the same, as seen in Figure 6. Here, one finds the reconstructions for α = 1/2, α = 3/4, α = 7/8, and α = 15/16. What differs for different values of the coupling parameter α is the convergence behaviour of the reconstruction algorithm. Increasing α enforces faster convergence in the beginning of the minimization algorithm, see also Figure 7, where the convergence histories of the objective (1.5) are shown for different values of the coupling parameter α. Nonetheless, one also observes that the objective becomes flatter as α increases. Notice that the reconstruction algorithm does not converge anymore for values of α higher than 15/16.

Conclusion
In the present article, we have proposed a method which enables the reconstruction of inclusions or voids in electrical impedance tomography also in the case of very noisy data. Namely, we modeled the measurement data as a random field which can approximately be determined from repeated measurements. An objective which combines the expectation and the variance of the Kohn-Vogelius functional is then minimized to reconstruct the sought inclusion. In particular, it is shown that the objective as well as its shape gradient are deterministic quantities. Numerical results are presented which show the capability and feasibility of the proposed approach.