A posteriori analysis:
This section contains our a posteriori error estimator and the definition of the mesh adaptivity algorithm whose convergence will be proved in the following sections.
Recalling the mesh sequence $\{\mathcal{T}_n\}$ defined above, we let $\mathcal{E}_n$ denote the set of all the edges (or the set of faces in 3D) of the elements of the mesh $\mathcal{T}_n$. For each $E \in \mathcal{E}_n$, we assume that we have already chosen a preorientated unit normal vector $\nu_E$, and we denote by $T_1(E)$ and $T_2(E)$ the elements sharing $E$ (i.e. $E = T_1(E) \cap T_2(E)$). In addition we write $\omega_E = T_1(E) \cup T_2(E)$. Elements, faces and edges are to be considered closed. Furthermore, we denote the diameter of an element $T$ by $h_T$.
The error estimator which we shall use is obtained by adapting the standard estimates for source problems to the eigenvalue problem. For a function $v$ which is piecewise continuous on the mesh $\mathcal{T}_n$, we introduce its jump across an edge (face) $E \in \mathcal{E}_n$ by:
\[
[v]_E(x) \;=\; \lim_{t \to 0^+} v(x + t\,\nu_E) \;-\; \lim_{t \to 0^+} v(x - t\,\nu_E), \qquad x \in E .
\]
Then, for any function $v$ with piecewise continuous gradient on $\mathcal{T}_n$, we define, for $E \in \mathcal{E}_n$, the jump of the normal derivative $[\nabla v]_E \cdot \nu_E$. The error estimator $\eta_n$ on the mesh $\mathcal{T}_n$ is defined as
\[
\eta_n \;=\; \Big( \sum_{E \in \mathcal{E}_n} \eta_{n,E}^2 \Big)^{1/2} ,
\tag{2.1}
\]
where each term $\eta_{n,E}$, which is the local contribution to the residual, is defined by
\[
\eta_{n,E}^2 \;=\; h_E \, \big\| [\mathcal{A}\nabla u_n]_E \cdot \nu_E \big\|_{L^2(E)}^2 \;+\; \sum_{T \subset \omega_E} h_T^2 \, \big\| \lambda_n u_n + \nabla\cdot(\mathcal{A}\nabla u_n) \big\|_{L^2(T)}^2 ,
\tag{2.2}
\]
with $(\lambda_n, u_n)$ the computed approximate eigenpair.
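Assuming (as is standard for residual estimators of this type) that the local contributions combine in a root-sum-of-squares fashion, assembling the global estimator from the local terms is a one-liner; `local_residuals`, an array of the $\eta_{n,E}$ values, is a hypothetical layout:

```python
import math

def global_estimator(local_residuals):
    """Root-sum-of-squares of the local residual contributions,
    the standard way an edge-based estimator is assembled."""
    return math.sqrt(sum(r * r for r in local_residuals))
```

For example, local contributions `[3.0, 4.0]` give a global estimator of `5.0`.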
The following lemma
is proved, in a standard way, by
adapting the usual arguments for source problems.
Remark 2.2
We shall see below that the term defined above constitutes a ``higher order term''.
The idea is to refine a subset of the elements of $\mathcal{T}_n$ whose side residuals sum up to a fixed proportion of the total residual $\eta_n$.
To satisfy the condition (2.5), we first need to compute all the ``local residuals'' $\eta_{n,E}$ and sort them according to their values. Then the edges (faces) are inserted into the marked set $\mathcal{M}_n$ in decreasing order of $\eta_{n,E}$, starting from the edge (face) with the biggest local residual, until the condition (2.5) is satisfied. Note that a minimal subset $\mathcal{M}_n$ may not be unique.
Then we construct another set $\widehat{\mathcal{M}}_n$, containing all the elements of $\mathcal{T}_n$ which contain at least one edge (face) of $\mathcal{M}_n$.
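The procedure just described is the familiar bulk (``Dörfler'') marking strategy. A minimal sketch, assuming the local residuals $\eta_{n,E}$ are supplied as a dict keyed by edge identifier, and using a hypothetical bulk parameter `theta` for the fixed proportion in condition (2.5); summing the *squared* residuals is our assumption, consistent with a root-sum-of-squares estimator:

```python
def dorfler_mark(local_residuals, theta):
    """Return a minimal set of edges whose squared local residuals
    sum to at least theta times the total squared residual.

    local_residuals: dict mapping edge id -> eta_{n,E}  (hypothetical layout)
    theta: bulk parameter in (0, 1]
    """
    total = sum(r * r for r in local_residuals.values())
    # Sort edges by local residual, biggest first.
    order = sorted(local_residuals, key=local_residuals.get, reverse=True)
    marked, acc = [], 0.0
    for edge in order:
        marked.append(edge)
        acc += local_residuals[edge] ** 2
        if acc >= theta * total:  # condition (2.5) satisfied
            break
    return marked
```

For instance, with residuals `{'e1': 0.5, 'e2': 0.1, 'e3': 0.4}` and `theta = 0.5`, only `'e1'` is marked; ties in the sort are one source of the non-uniqueness noted above.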
In order to prove
the convergence of the adaptive
method, we require an additional marking strategy, which will be
defined in Definition 2.6
below. This latter marking strategy is driven by oscillations. The same argument has already been used in many papers on convergence for source problems but, to our knowledge, it has not yet been used for analysing convergent algorithms for eigenvalue problems.
The concept of
``oscillation'' is just a measure of how
well a function may be approximated by piecewise constants on a
particular mesh. For any function $f \in L^2(\Omega)$, and any mesh $\mathcal{T}_n$, we introduce its $L^2$-orthogonal projection $P_n f$ onto piecewise constants, defined by:
\[
(P_n f)\big|_T \;=\; \frac{1}{|T|} \int_T f \qquad \text{for all } T \in \mathcal{T}_n .
\tag{2.6}
\]
Then we make the definition:
Note that
and that (by standard approximation theory and the ellipticity of
),
for all

(2.8) 
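Since the projection in (2.6) is just the elementwise mean, both it and the resulting oscillation are cheap to compute. A minimal 1D sketch; measuring the oscillation as the mesh-weighted $L^2$ norm of $f - P_n f$ is one common choice and an assumption on our part, not necessarily the exact definition used here:

```python
import math

def piecewise_constant_projection(f, nodes, quad_pts=200):
    """Elementwise mean value of f on each cell of a 1D mesh:
    the L2-orthogonal projection onto piecewise constants, as in (2.6)."""
    means = []
    for a, b in zip(nodes[:-1], nodes[1:]):
        # Midpoint-rule average over the cell [a, b].
        h = (b - a) / quad_pts
        avg = sum(f(a + (i + 0.5) * h) for i in range(quad_pts)) / quad_pts
        means.append(avg)
    return means

def oscillation(f, nodes, quad_pts=200):
    """Mesh-weighted L2 norm of f - P_n f:
    osc^2 = sum over cells of h_T^2 * ||f - P_n f||^2_{L2(T)} (assumed form)."""
    means = piecewise_constant_projection(f, nodes, quad_pts)
    total = 0.0
    for (a, b), m in zip(zip(nodes[:-1], nodes[1:]), means):
        h = (b - a) / quad_pts
        err2 = sum((f(a + (i + 0.5) * h) - m) ** 2 * h for i in range(quad_pts))
        total += (b - a) ** 2 * err2  # weight h_T^2 per element
    return math.sqrt(total)
```

A function that is already piecewise constant on the mesh has zero oscillation, which is why refining where the local oscillation terms are large reduces this quantity.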
The second marking strategy (introduced below) aims to reduce the oscillations corresponding to a particular approximate eigenfunction $u_n$.
Note that a minimal subset may not be unique. To satisfy the condition (2.9), we first need to compute all the local terms forming the oscillation and sort them according to their values. Then the elements are inserted into the marked set in decreasing order of the size of those local terms, until the condition (2.9) is satisfied.
Our adaptive algorithm can then be stated:
In 2D, at iteration $n$ of Algorithm 1, each element in the set $\widehat{\mathcal{M}}_n$ is refined using the ``bisection5'' algorithm, as illustrated in Figure 1c.
An advantage of this technique is the creation of a new node in the middle of each marked side in $\mathcal{M}_n$, and also a new node in the interior of each marked element.
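Under our reading of Figure 1, the nodes created for one marked triangle can be sketched as follows; this computes coordinates only, omits the connectivity bookkeeping of the actual ``bisection5'' algorithm, and fixes arbitrarily which of the three new segments is bisected:

```python
def bisection5_nodes(v0, v1, v2):
    """New nodes created when refining one triangle: the three edge
    midpoints, plus the midpoint of one of the three new (medial)
    segments, which lies in the interior of the element.
    Vertices are (x, y) pairs."""
    mid = lambda p, q: ((p[0] + q[0]) / 2.0, (p[1] + q[1]) / 2.0)
    m01, m12, m20 = mid(v0, v1), mid(v1, v2), mid(v2, v0)
    interior = mid(m01, m12)  # bisect one new segment (arbitrary choice)
    return [m01, m12, m20, interior]
```

For the triangle with vertices $(0,0)$, $(2,0)$, $(0,2)$ this yields the side midpoints $(1,0)$, $(1,1)$, $(0,1)$ and the interior node $(1,\tfrac12)$.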
Figure 1:
The refinement procedure applied to an element of the mesh. In (a) the element before the refinement, in (b) after the three sides have been refined, and in (c) after the bisection of one of the three new segments.
