A posteriori analysis: this section contains our a posteriori error estimator and the definition of the mesh adaptivity algorithm for which convergence will be proved in the following sections.
Recalling the mesh $\mathcal{T}_n$ defined above, we let $\mathcal{E}_n$ denote the set of all the edges (or the set of faces in 3D) of the elements of the mesh $\mathcal{T}_n$. For each $E \in \mathcal{E}_n$, we assume that we have chosen a preorientated unit normal vector $\nu_E$, and we denote by $\tau$ and $\tau'$ the elements sharing $E$ (i.e.\ $E = \tau \cap \tau'$). In addition we adopt the convention that elements and edges are to be considered closed. Furthermore we denote the diameter of $\tau$ by $h_\tau$.
The error estimator which we shall use is obtained by adapting the standard estimates for source problems to the eigenvalue problem. For a function $f$ which is piecewise continuous on the mesh $\mathcal{T}_n$, we introduce its jump across an edge $E \in \mathcal{E}_n$:
\[
[f]_E := f|_{\tau} - f|_{\tau'} .
\]
Then for any function $v$ with piecewise continuous gradient on $\mathcal{T}_n$ we define, for $E \in \mathcal{E}_n$, the normal flux jump $\bigl[\partial v / \partial \nu_E\bigr] := [\nabla v \cdot \nu_E]_E$. The error estimator $\eta_n$ is defined as
\[
\eta_n := \Bigl( \sum_{E \in \mathcal{E}_n} \eta_E^2 \Bigr)^{1/2} ,
\]
where each term $\eta_E$, which is the local contribution to the residual, is defined by
\[
\eta_E := h_E^{1/2} \, \bigl\| \bigl[ \partial u_n / \partial \nu_E \bigr] \bigr\|_{L^2(E)} ,
\]
with $(\lambda_n, u_n)$ denoting the current approximate eigenpair and $h_E$ the diameter of $E$.
The following lemma is proved, in a standard way, by adapting the usual arguments for source problems. We shall see below that the remaining term constitutes a ``higher order term''.
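For illustration, the side residual just introduced can be computed by hand on a toy configuration. The following sketch (our own construction and notation, not code from this paper) evaluates $\eta_E = h_E^{1/2} \| [\partial u_n / \partial \nu_E] \|_{L^2(E)}$ for a piecewise-linear function on two triangles sharing one edge:

```python
import numpy as np

# Toy illustration: the side residual for a piecewise-linear function on two
# triangles sharing the edge E from (0,0) to (0,1). All names are our own.

def grad_linear(vertices, values):
    """Constant gradient of the linear interpolant on one triangle."""
    (x0, y0), (x1, y1), (x2, y2) = vertices
    v0, v1, v2 = values
    A = np.array([[x1 - x0, y1 - y0],
                  [x2 - x0, y2 - y0]])
    return np.linalg.solve(A, np.array([v1 - v0, v2 - v0]))

# tau and tau' share the edge E; nu_E = (1, 0) points from tau into tau'.
tau      = [(0.0, 0.0), (0.0, 1.0), (-1.0, 0.0)]
tau_prim = [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0)]
u_tau, u_prim = [0.0, 0.0, 1.0], [0.0, 0.0, 2.0]

nu_E = np.array([1.0, 0.0])
jump = grad_linear(tau, u_tau) @ nu_E - grad_linear(tau_prim, u_prim) @ nu_E
h_E = 1.0                                   # length (diameter) of E
l2_on_E = abs(jump) * np.sqrt(h_E)          # the jump is constant along E
eta_E = np.sqrt(h_E) * l2_on_E
```

Since the two gradients are constant, the normal-flux jump is constant along $E$ and the $L^2(E)$ norm reduces to $|{\rm jump}| \cdot h_E^{1/2}$.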
The idea is to refine a subset of the elements of $\mathcal{T}_n$ whose side residuals sum up to a fixed proportion of the total residual $\eta_n$. To satisfy this criterion we need first of all to compute all the ``local residuals'' $\eta_E$ and sort them according to their values. Then the edges (faces) are inserted into the marked set in decreasing order, starting from the edge (face) with the biggest local residual, until the condition (2.5) is satisfied. Note that a minimal subset satisfying (2.5) may not be unique.
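The greedy construction just described can be sketched in a few lines of code (the names, the fixed proportion $\theta$, and the sample data are ours; this is an illustration of the marking idea, not the paper's implementation):

```python
# Sort the local residuals and greedily take the largest ones until the
# marked edges carry a fixed proportion theta of the total squared residual.

def mark_edges(local_residuals, theta):
    """local_residuals: dict edge -> eta_E; returns a marked set of edges."""
    total_sq = sum(eta ** 2 for eta in local_residuals.values())
    marked, acc = [], 0.0
    for edge, eta in sorted(local_residuals.items(),
                            key=lambda item: item[1], reverse=True):
        if acc >= theta * total_sq:
            break
        marked.append(edge)
        acc += eta ** 2
    return marked

residuals = {"E1": 3.0, "E2": 1.0, "E3": 2.0, "E4": 0.5}
marked = mark_edges(residuals, theta=0.5)
```

With these sample values the total squared residual is $14.25$, and the single edge `E1` already contributes $9 \geq 0.5 \cdot 14.25$, so the marked set is `["E1"]`; a larger $\theta$ would pull in `E3` as well.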
We then construct another set, containing all the elements of $\mathcal{T}_n$ which share at least one edge with the marked set.
In order to prove the convergence of the adaptive method, we require an additional marking strategy, which will be defined in Definition 2.6 below. The latter marking strategy is driven by oscillations. The same argument has already been used in many papers about convergence for source problems, but to our knowledge it has not yet been used for analysing convergent algorithms for eigenvalue problems.
The concept of ``oscillation'' is just a measure of how well a function may be approximated by piecewise constants on a particular mesh. For any function $f \in L^2(\Omega)$, and any mesh $\mathcal{T}_n$, we introduce its projection $P_n f$ onto piecewise constants, defined by:
\[
(P_n f)|_{\tau} := \frac{1}{|\tau|} \int_{\tau} f , \qquad \tau \in \mathcal{T}_n .
\]
Then we make the definition:
\[
\mathrm{osc}(f, \mathcal{T}_n) := \Bigl( \sum_{\tau \in \mathcal{T}_n} h_\tau^2 \, \| f - P_n f \|_{L^2(\tau)}^2 \Bigr)^{1/2} ,
\]
and note that (by standard approximation theory and the ellipticity of the underlying bilinear form) the oscillation tends to zero as the mesh is refined.
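As a concrete (one-dimensional, and entirely our own) discretisation of this quantity: project $f$ onto element means and accumulate the weighted errors. Refining the mesh should shrink the oscillation, and a constant function should have zero oscillation:

```python
import numpy as np

# 1D sketch of oscillation: project f onto element means P_n f, then
# accumulate osc^2 = sum_tau h_tau^2 * || f - P_n f ||^2_{L2(tau)}.
# Integrals are approximated by uniform sampling on each element.

def oscillation(f, nodes, samples=400):
    osc_sq = 0.0
    for a, b in zip(nodes[:-1], nodes[1:]):
        h = b - a
        x = np.linspace(a, b, samples)
        fx = f(x)
        mean = fx.mean()                        # approximates (P_n f)|_tau
        err_sq = ((fx - mean) ** 2).mean() * h  # approx. || f - P_n f ||^2
        osc_sq += h ** 2 * err_sq
    return np.sqrt(osc_sq)

coarse = oscillation(np.sin, np.linspace(0.0, 1.0, 5))
fine = oscillation(np.sin, np.linspace(0.0, 1.0, 9))
```

Halving the mesh width roughly quarters the oscillation here, reflecting the $h_\tau^2$ weighting combined with the first-order approximation error of piecewise constants.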
The second marking strategy (introduced below) aims to reduce the oscillations corresponding to a particular approximate eigenfunction. Note that a minimal subset satisfying (2.9) may not be unique. To find one we need first of all to compute all the local terms and sort them according to their values. Then the elements are marked in decreasing order of the size of those local terms, until the condition (2.9) is satisfied.
The adaptive algorithm can then be stated:
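In outline, the algorithm is a solve--estimate--mark--refine loop. The sketch below (with toy stand-ins of our own for each component, not the authors' precise algorithm) shows the control flow; here ``mesh'' is just an element count and the estimator decays like $1/\mathrm{mesh}$:

```python
# High-level sketch of the solve -> estimate -> mark -> refine loop.

def adaptive_loop(mesh, solve, estimate, mark, refine, tol, max_iter=50):
    history = []
    for _ in range(max_iter):
        lam, u = solve(mesh)                     # approximate eigenpair
        residuals, total = estimate(mesh, lam, u)
        history.append(total)
        if total <= tol:                         # stop once the estimator
            break                                # is below tolerance
        mesh = refine(mesh, mark(residuals))     # e.g. bisection5 in 2D
    return mesh, lam, history

# Toy stand-ins just to exercise the loop (not a real eigensolver):
solve = lambda m: (1.0 + 1.0 / m, None)
estimate = lambda m, lam, u: (list(range(m)), 1.0 / m)
mark = lambda residuals: residuals[: len(residuals) // 2]
refine = lambda m, marked: 2 * m

mesh, lam, history = adaptive_loop(4, solve, estimate, mark, refine, tol=0.01)
```

Each pass halves the estimator in this toy model, so the loop terminates once $1/\mathrm{mesh} \leq \mathrm{tol}$; the real algorithm replaces each stand-in by the finite element solve, the estimator $\eta_n$, the two marking strategies, and bisection refinement.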
In 2D, at each iteration, every element in the marked set is refined using the ``bisection5'' algorithm, as illustrated in Figure 1c. The advantage of this technique is the creation of a new node in the middle of each marked side and also a new node in the interior of each marked element.
Figure 1: The refinement procedure applied to an element of the mesh. In (a) the element before the refinement, in (b) after the three sides have been refined, and in (c) after the bisection of one of the three new triangles.
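Geometrically, the ``bisection5'' idea can be mimicked as follows (a toy construction of our own, not the mesh code used here): split a triangle by its three edge midpoints into four children, then bisect one of the new interior edges, giving five triangles, a new node on each side, and one new node inside the element.

```python
# Toy sketch of bisection5-style refinement of a single triangle.

def midpoint(p, q):
    return ((p[0] + q[0]) / 2.0, (p[1] + q[1]) / 2.0)

def area(t):
    (x0, y0), (x1, y1), (x2, y2) = t
    return abs((x1 - x0) * (y2 - y0) - (x2 - x0) * (y1 - y0)) / 2.0

def bisection5(tri):
    a, b, c = tri
    mab, mbc, mca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
    corner = [(a, mab, mca), (b, mbc, mab), (c, mca, mbc)]
    # Bisect the interior child through the midpoint of one of its sides,
    # which creates the new node inside the parent element.
    node = midpoint(mab, mbc)
    inner = [(mab, node, mca), (node, mbc, mca)]
    return corner + inner

children = bisection5([(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)])
```

The five children tile the parent exactly, so their areas sum to the parent's area; which interior edge is bisected is a choice of the refinement rule, fixed here arbitrarily for illustration.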