Convergence for Eigenvalue Problems with Mesh Adaptivity


A posteriori analysis:

This section contains our a posteriori error estimator and the definition of the mesh adaptivity algorithm for which convergence will be proved in the following sections.

Recalling the mesh sequence $ \mathcal{T}_n$ defined above, we let $ \mathcal{S}_n$ denote the set of all the edges (or the set of faces in 3D) of the elements of the mesh $ \mathcal{T}_n$ . For each $ S\in \mathcal{S}_n$ , we assume that we have already chosen a preorientated unit normal vector $ \vec{n}_S$ and we denote by $ \tau_1(S)$ and $ \tau_2(S)$ the elements sharing $ S$ (i.e. $ \tau_1(S)\cap \tau_2(S)=S$ ). In addition we write $ \Omega(S) = \tau_1(S) \cup \tau_2(S)$ . Elements, faces and edges are to be considered closed. Furthermore we denote the diameter of $ S$ by $ H_S$ .
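The edge-to-element correspondence $ S \mapsto \{\tau_1(S),\tau_2(S)\}$ described above can be sketched as follows; this is a minimal Python illustration (the function name and array layout are our own, not part of the paper), in which each edge is keyed by its sorted vertex pair and boundary edges have a single adjacent element.

```python
import numpy as np

def edge_to_elements(triangles):
    """Map each edge S to the indices of the elements tau_1(S), tau_2(S)
    sharing it.  `triangles` is an (n_elem, 3) array of vertex indices;
    an edge is stored as a sorted vertex pair.  Interior edges collect
    two elements, boundary edges only one."""
    edges = {}
    for t, (a, b, c) in enumerate(triangles):
        for edge in ((a, b), (b, c), (a, c)):
            edges.setdefault(tuple(sorted(edge)), []).append(t)
    return edges

# Two triangles sharing the diagonal edge (0, 2) of a unit square
tris = np.array([[0, 1, 2], [0, 2, 3]])
adj = edge_to_elements(tris)
# adj[(0, 2)] lists tau_1(S) and tau_2(S) of the shared edge
```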

Notation 2.1 We write $ A \lesssim B$ when A/B is bounded by a constant which may depend on the functions $ \mathcal{A}$ and $ \mathcal{B}$ in (1.2), on $ \underline{a}$ , $ \overline{a}$ , $ \underline{b}$ and $ \overline{b}$ , on $ C_\mathrm{ell}$ in Assumption 1.1 and on $ C_\mathrm{reg}$ in (1.7). The notation $ A \cong B $ means $ A \lesssim B$ and $ A \gtrsim B$ .

All the constants depending on the spectrum are handled explicitly. Similarly all mesh size dependencies are explicit. Note that all eigenvalues of (1.6) satisfy $ \lambda_n\gtrsim 1$ , since $ \lambda_n\ge\lambda_1=a(u_1,u_1) \gtrsim\vert u_1\vert^2_{1,\Omega} \gtrsim \Vert u_1\Vert^2_{0,\Omega} \gtrsim\Vert u_1\Vert^2_{0,\mathcal{B},\Omega}=1$.

The error estimator which we shall use is obtained by adapting the standard estimates for source problems to the eigenvalue problem.

For a function $ g$ , which is piecewise continuous on the mesh $ \mathcal{T}_n$ , we introduce its jump across an edge (face) $ S\in \mathcal{S}_n$ by:

$\displaystyle [g]_S(x):=\lim_{\substack{\tilde{x}\in \tau_1(S)\\ \tilde{x}\rightarrow x}}g(\tilde{x})\ -\ \lim_{\substack{\tilde{x}\in \tau_2(S)\\ \tilde{x}\rightarrow x}}g(\tilde{x}),\quad \mbox{for }x\in \mathrm{int}(S). $
Then for any function $ v$ with piecewise continuous gradient on $ \mathcal{T}_n$ we define, for $ S\in \mathcal{S}_n$
$\displaystyle J_S(v)(x):=[\vec{n}_S\cdot \mathcal{A}\nabla v]_S(x),$ for $\displaystyle \quad x\in \mathrm{int}(S). $
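For piecewise linear finite elements, $ \mathcal{A}\nabla v$ is constant on each element, so the jump $ J_S(v)$ reduces to a difference of two co-normal fluxes. A minimal Python sketch (function name and sample data are illustrative assumptions, not from the paper):

```python
import numpy as np

def flux_jump(n_S, A, grad_v1, grad_v2):
    """Jump J_S(v) = [n_S . (A grad v)]_S across an edge S: the trace
    from tau_1(S) minus the trace from tau_2(S).  For P1 elements the
    gradients grad_v1, grad_v2 are constant vectors per element."""
    return n_S @ (A @ grad_v1) - n_S @ (A @ grad_v2)

n_S = np.array([1.0, 0.0])   # preorientated unit normal of S
A = np.eye(2)                # A = I for illustration
J = flux_jump(n_S, A, np.array([2.0, 1.0]), np.array([0.5, 1.0]))
# tangential components cancel; only the normal flux difference remains
```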

The error estimator $ \eta_n$ on the mesh $ \mathcal{T}_n$ is defined as

$\displaystyle \eta^2_n:=\sum_{S\in \mathcal{S}_n} \eta^2_{S,n} \ ,$ (2.1)

where each term $ \eta_{S,n}$ , which is the local contribution to the residual, is defined by
$\displaystyle \eta^2_{S,n}:=\Vert H_n\lambda_n u_n\Vert^2_{0,\mathcal{B},\Omega(S)}+\Vert H_S^{1/2}J_S(u_n)\Vert^2_{0,S} \ .$ (2.2)
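Assembling (2.1)-(2.2) from per-edge data is a straightforward sum; the following Python sketch assumes the volume and jump contributions have already been computed (array names and the example values are hypothetical):

```python
import numpy as np

def estimator(vol_term, H_S, jump_sq):
    """Assemble eta_n and the local terms eta_{S,n}^2 of (2.1)-(2.2).

    vol_term[S] ~ ||H_n lambda_n u_n||^2_{0,B,Omega(S)}   (volume part)
    jump_sq[S]  ~ ||J_S(u_n)||^2_{0,S}, scaled by H_S below.
    One entry per edge S in S_n."""
    eta_S_sq = vol_term + H_S * jump_sq   # eta_{S,n}^2 per edge (2.2)
    return np.sqrt(eta_S_sq.sum()), eta_S_sq   # eta_n of (2.1)

eta_n, eta_S_sq = estimator(np.array([1.0, 2.0]),
                            np.array([0.5, 0.25]),
                            np.array([4.0, 8.0]))
```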

The following lemma is proved, in a standard way, by adapting the usual arguments for source problems.

Lemma 2.1 (Reliability)
$\displaystyle \vert\vert\vert u-u_n\vert\vert\vert _\Omega\ \lesssim\ \eta_n+G_n,$ (2.3)

where

$\displaystyle G_n:= \frac{1}{2}(\lambda+\lambda_n)\frac{\Vert u-u_n\Vert^2_{0,\mathcal{B},\Omega}}{\vert\vert\vert u-u_n\vert\vert\vert _\Omega}.$ (2.4)

Remark 2.2 We shall see below that $ G_n$ defined above constitutes a ``higher order term''.

The idea is to refine a subset of the elements of $ \mathcal{T}_n$ whose side residuals sum up to a fixed proportion of the total residual $ \eta_n$ .

Definition 2.4 (Marking Strategy 1) Given a parameter $ 0< \theta<1$ , the procedure is: mark the sides in a minimal subset $ \hat{\mathcal{S}}_n$ of $ \mathcal{S}_n$ such that
$\displaystyle \Bigg(\sum_{S\in\hat{\mathcal{S}}_n}\eta_{S,n}^2\Bigg)^{1/2} \ge \theta \eta_n \ .$ (2.5)

To satisfy the condition (2.5), we first compute all the ``local residuals'' $ \eta_{S,n}$ and sort them according to their values. Then the edges (faces) $ S$ are inserted into $ \hat{\mathcal{S}}_n$ in decreasing order of $ \eta_{S,n}$ , starting from the edge (face) with the largest local residual, until the condition (2.5) is satisfied. Note that a minimal subset $ \hat{\mathcal{S}}_n$ may not be unique. Then we construct another set $ \hat{\mathcal{T}}_n$ , containing all the elements of $ \mathcal{T}_n$ which share at least one edge (face) $ S\in\hat{\mathcal{S}}_n$ .
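The greedy procedure just described can be sketched in a few lines of Python; the function name is ours, and the input is the array of squared local residuals $ \eta_{S,n}^2$ from (2.2). Note that squaring both sides of (2.5) lets us accumulate squared terms directly.

```python
import numpy as np

def mark_edges(eta_S_sq, theta):
    """Marking Strategy 1: greedily select a minimal edge set whose
    squared local residuals sum to at least theta^2 * eta_n^2, which
    is the squared form of condition (2.5)."""
    order = np.argsort(eta_S_sq)[::-1]      # decreasing eta_{S,n}
    target = theta**2 * eta_S_sq.sum()
    marked, acc = [], 0.0
    for s in order:
        marked.append(s)
        acc += eta_S_sq[s]
        if acc >= target:                   # condition (2.5) squared
            break
    return marked

# eta_{S,n}^2 per edge; theta = 0.5, so we need acc >= 0.25 * 10 = 2.5
marked = mark_edges(np.array([4.0, 3.0, 2.0, 1.0]), 0.5)
```

Because ties can occur among the sorted residuals, different sort orders may return different (equally minimal) sets, matching the non-uniqueness noted above.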

In order to prove the convergence of the adaptive method, we require an additional marking strategy, which will be defined in Definition 2.6 below. The latter marking strategy is driven by oscillations. The same argument has already been used in many papers on convergence for source problems but, to our knowledge, it has not yet been used to analyse convergent algorithms for eigenvalue problems.

The concept of ``oscillation'' is just a measure of how well a function may be approximated by piecewise constants on a particular mesh. For any function $ v \in L_2(\Omega)$ , and any mesh $ \mathcal{T}_n$ , we introduce its orthogonal projection $ P_n v$ onto piecewise constants defined by:

$\displaystyle (P_nv) \vert_\tau =\frac{1}{\vert\tau\vert}\int_\tau v,$ for all $\displaystyle \quad \tau \in \mathcal{T}_n .$ (2.6)

Then we make the definition:
Definition 2.5 (Oscillations) On a mesh $ \mathcal{T}_n$ , we define
$\displaystyle \mathrm{osc}(v, \mathcal{T}_n):= \Vert H_n (v - P_n v)\Vert_{0,\mathcal{B},\Omega}.$ (2.7)

Note that
$\displaystyle \mathrm{osc}(v,\mathcal{T}_n) = \bigg( \sum_{\tau\in\mathcal{T}_n}H_\tau^2 \Vert v- P_n v \Vert^2_{0,\mathcal{B},\tau}\bigg)^{1/2}. $
and that (by standard approximation theory and the ellipticity of $ a(\cdot,\cdot)$ ),
$\displaystyle \mathrm{osc}(v,\mathcal{T}_{n})\ \lesssim\ (H_{n}^\mathrm{max})^2 \vert\vert\vert v\vert\vert\vert _\Omega \ ,$ for all $\displaystyle \quad v \in H^1_0(\Omega) \ .$ (2.8)
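The projection (2.6) and the element-wise decomposition of $ \mathrm{osc}(v,\mathcal{T}_n)$ above can be computed directly; a minimal Python sketch, assuming $ \mathcal{B}=I$ and hypothetical per-element quadrature values and weights:

```python
import numpy as np

def oscillation(v, weights, H_tau):
    """osc(v, T_n) of (2.7) with B = I.  Per element tau: v[t] are
    quadrature values of v on tau, weights[t] the matching quadrature
    weights, H_tau[t] the element diameter.  P_n v is the weighted
    mean, i.e. the orthogonal projection (2.6)."""
    per_elem, osc_sq = [], 0.0
    for vals, w, H in zip(v, weights, H_tau):
        vals, w = np.asarray(vals), np.asarray(w)
        Pv = (w * vals).sum() / w.sum()            # projection (2.6)
        local = H**2 * (w * (vals - Pv)**2).sum()  # H_tau^2 ||v - P_n v||^2
        per_elem.append(local)
        osc_sq += local
    return np.sqrt(osc_sq), per_elem

# one element, diameter 1, values 0 and 2 with equal weight: P_n v = 1
osc, per_elem = oscillation([[0.0, 2.0]], [[0.5, 0.5]], [1.0])
```

The returned list of local terms is exactly what Marking Strategy 2 below sorts over.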

The second marking strategy (introduced below) aims to reduce the oscillations corresponding to a particular approximate eigenfunction $ u_n$ .

Definition 2.6 (Marking Strategy 2) Given a parameter $ 0< \tilde{\theta}<1$ : mark the sides in a minimal subset $ \tilde{\mathcal{T}}_n$ of $ \mathcal{T}_n$ such that
$\displaystyle \mathrm{osc}(u_n, \tilde{\mathcal{T}}_n)\ \ge \tilde{\theta}\ \mathrm{osc}(u_n, \mathcal{T}_n) \ .$ (2.9)

Note that a minimal subset $ \tilde{\mathcal{T}}_n$ may not be unique. To satisfy the condition (2.9), we first compute all the local terms $ H_\tau^2\Vert (u_n- P_nu_n)\Vert^2_{0,\mathcal{B},\tau}$ forming $ \mathrm{osc}(u_n, \mathcal{T}_n)$ and sort them according to their values. Then the elements $ \tau$ are inserted into $ \tilde{\mathcal{T}}_n$ in decreasing order of the size of those local terms, until the condition (2.9) is satisfied.
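Strategy 2 follows the same greedy principle as Strategy 1, applied to the element-wise oscillation terms rather than the edge residuals; a minimal Python sketch (function name ours):

```python
import numpy as np

def mark_elements(local_osc_sq, theta_tilde):
    """Marking Strategy 2: minimal element set T~_n satisfying the
    squared form of (2.9), osc^2(u_n, T~_n) >= theta~^2 osc^2(u_n, T_n).
    local_osc_sq[t] = H_tau^2 ||u_n - P_n u_n||^2_{0,B,tau}."""
    order = np.argsort(local_osc_sq)[::-1]   # decreasing local terms
    target = theta_tilde**2 * local_osc_sq.sum()
    marked, acc = [], 0.0
    for t in order:
        marked.append(t)
        acc += local_osc_sq[t]
        if acc >= target:                    # condition (2.9) squared
            break
    return marked

# theta~ = 0.7, total = 4.0, so we need acc >= 0.49 * 4 = 1.96
marked = mark_elements(np.array([1.0, 1.0, 2.0]), 0.7)
```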

Our adaptive algorithm can then be stated:
Algorithm 1 (Converging algorithm). Loop over $ n=0,1,2,\dots$ :
1. Compute the approximate eigenpair $ (\lambda_n, u_n)$ on $ \mathcal{T}_n$ .
2. Compute the error estimator $ \eta_n$ .
3. Mark the set $ \hat{\mathcal{S}}_n$ using Marking Strategy 1 and the set $ \tilde{\mathcal{T}}_n$ using Marking Strategy 2.
4. Refine the elements in $ \hat{\mathcal{T}}_n\cup \tilde{\mathcal{T}}_n$ to construct $ \mathcal{T}_{n+1}$ .
In 2D, at the $ n$ -th iteration of Algorithm 1, each element in the set $ \hat{\mathcal{T}}_n\cup \tilde{\mathcal{T}}_n$ is refined using the ``bisection5'' algorithm, as illustrated in Figure 1c. An advantage of this technique is the creation of a new node in the middle of each marked side in $ \hat{\mathcal{S}}_n$ and also a new node in the interior of each marked element.

Figure 1: The refinement procedure applied to an element of the mesh. In (a) the element before the refinement, in (b) after the three sides have been refined, and in (c) after the bisection of one of the three new segments.
