Numerical Analysis
Reduced-basis approximation of the viscous Burgers equation: rigorous a posteriori error bounds
Karen Veroy, Christophe Prud’homme, Anthony T. Patera
Department of Mechanical Engineering, M.I.T., Room 3-264, Cambridge, MA 02139-4307, USA
C. R. Acad. Sci. Paris, Ser. I 337 (2003) 619–624
Received 6 August 2003; accepted 1 September 2003
Presented by Philippe G. Ciarlet
Abstract
We present rigorous, sharp, and inexpensive a posteriori error bounds for reduced-basis approximations of the viscosity-parametrized Burgers equation. There are two critical ingredients: the Brezzi, Rappaz and Raviart (Numer. Math. 36 (1980) 1–25) framework for analysis of approximations of nonlinear elliptic partial differential equations; and offline/online computational procedures for efficient calculation of the necessary continuity and stability constants, and of the dual norm of the residual. Numerical results confirm the performance of the error bounds. To cite this article: K. Veroy et al., C. R. Acad. Sci. Paris, Ser. I 337 (2003).
© 2003 Académie des sciences. Published by Éditions scientifiques et médicales Elsevier SAS. All rights reserved.
Résumé
Reduced-basis approximation of the viscous Burgers equation: rigorous a posteriori error bounds. We present rigorous, sharp, and inexpensive a posteriori error bounds for the reduced-basis approximation of the Burgers equation with viscosity as parameter. There are two essential ingredients: the Brezzi, Rappaz and Raviart (Numer. Math. 36 (1980) 1–25) framework for the analysis of approximations of nonlinear elliptic partial differential equations; and an offline/online procedure for the efficient computation of the necessary continuity and stability constants, and of the dual norm of the residual. Numerical results confirm the performance of these error bounds. To cite this article: K. Veroy et al., C. R. Acad. Sci. Paris, Ser. I 337 (2003).
© 2003 Académie des sciences. Published by Éditions scientifiques et médicales Elsevier SAS. All rights reserved.
Version française abrégée
Consider a space Y ≡ H¹₀(Ω) equipped with an inner product (·,·)_Y, a norm ‖·‖_Y, and a duality pairing ⟨·,·⟩. We define the mapping G : Y → Y′ given by (2), whose zero u(ν) in Y we seek for given ν ∈ D ≡ [ν_min, ν_max] ⊂ R¹₊, (1); we also define its Fréchet derivative DG(·;·) at a point (z, ν) ∈ Y × D, (3). Finally, we define the constant κ, (4), which is essential to bounding the continuity constant of DG.
Concerning the reduced-basis (RB) approximation, we introduce a nested sample of parameters S₁ = {ν̄¹ ∈ D} ⊂ ⋯ ⊂ S_{N^max} ≡ {ν̄¹ ∈ D, …, ν̄^{N^max} ∈ D} and construct the RB approximation space W_N ≡ span{ζ_n ≡ u(ν̄ⁿ), 1 ≤ n ≤ N}. Our RB approximation then takes the form (5), where we assume that DG(u_{N^max}(ν); ν) is an isomorphism and write β_N(ν) ≡ ‖DG(u_N(ν); ν)⁻¹‖⁻¹_{Y′,Y}. Table 1 presents the normalized error of the RB approximation as a function of N; note the rapid convergence of this approximation.
As for the a posteriori error estimator, it requires an upper (respectively, lower) bound for κ (respectively, β_N(ν)). An upper bound for κ, κ̃, can be computed a priori, and we choose κ̃ = 1/4. To construct the lower bound for β_N(ν), β̃(ν), we introduce a parameter set U_J ≡ {ν̃¹ ∈ D, …, ν̃^J ∈ D}, a distance d(ν, ν̃; N) ≡ 2β_{N^max}(ν̃)⁻¹[|ν − ν̃| + ½‖u_N(ν) − u_{N^max}(ν̃)‖_{L²(Ω)}], and a mapping I_N ν ≡ arg min_{ν̃∈U_J} d(ν, ν̃; N); finally we define β̃(ν) ≡ ½β_{N^max}(I_N ν). Lemma 3.1 then shows that β̃(ν) is indeed a lower bound for β_N(ν) for all ν in D_N(U_J) ≡ {ν ∈ D | d(ν, I_N ν; N) ≤ 1}. Next, we define Δ_N(ν) ≡ (3/2)ε_N(ν)/β̃(ν) and τ(ν) ≡ (3/2)κ̃/β̃(ν), where ε_N(ν) ≡ ‖G(u_N(ν); ν)‖_{Y′} is the dual norm of the residual. Theorem 3.2 shows that, under certain hypotheses (Δ_N(ν)τ(ν) ≤ 1, ν ∈ D_N(U_J)) and using the theory developed by Brezzi, Rappaz and Raviart [3,4], Δ_N(ν) yields rigorous and sharp error bounds for our RB approximation.
In practice, we seek to evaluate s(ν) = ⟨L, u(ν)⟩, where L is a given element of Y′, rapidly and reliably: rapidly by approximating s(ν) by s_N(ν) ≡ ⟨L, u_N(ν)⟩, and reliably by providing the a posteriori bound Δ^s_N(ν) ≡ ‖L‖_{Y′} Δ_N(ν); under the hypotheses of Theorem 3.2, |s(ν) − s_N(ν)| ≤ Δ^s_N(ν). To this end – the rapid and reliable evaluation of s(ν) – the algorithmic stratagem performs an offline/online decomposition [2,6,11]: in the online stage it eliminates the dependence on 𝒩, the dimension of the underlying "truth" finite element approximation of Y, for all components of the RB approximation as well as for the error estimator.
For two parameter values (ν = 0.305 and ν = 0.013) and as a function of N, Table 2 confirms the results of our methodology. The choice of the two cases clearly illustrates two scenarios for the behavior of the error estimator. In the first case, we observe not only the rapid convergence of Δ_N but also that the conditions Δ_N(ν)τ(ν) ≤ 1 and ν ∈ D_N(U_J) of Theorem 3.2 are satisfied for all N ≥ 1; in the second case, they are satisfied only for N ≥ 10. In both cases, we observe that the bounds are very sharp. The rapid convergence of the RB approximation is truly essential not only to achieve the desired accuracy efficiently, but also to satisfy, again efficiently, the condition Δ_N(ν)τ(ν) ≤ 1: accuracy is required both for prediction and for certification.
E-mail address: [email protected] (A.T. Patera).
1631-073X/$ – see front matter © 2003 Académie des sciences. Published by Éditions scientifiques et médicales Elsevier SAS. All rights reserved.
doi:10.1016/j.crma.2003.09.023
1. Problem statement
The reliability of very low-dimensional reduced-basis (RB) approximations [1,2,5–12] of parametrized partial differential equations can only be assured by rigorous a posteriori error estimation procedures. In our earlier work we developed rigorous a posteriori error bounds for coercive linear [11], noncoercive linear [12], and special "coercive" nonlinear [12] elliptic partial differential equations. In this Note we demonstrate that rigorous and sharp a posteriori error bounds can also be obtained for RB approximations of certain noncoercive nonlinear elliptic partial differential equations. We consider here a very simple model problem, the viscous Burgers equation in one dimension;¹ however, the method directly extends to the multidimensional incompressible Navier–Stokes equations.
We define Y ≡ H¹₀(Ω ≡ ]0, 1[ ⊂ R¹), where H¹₀(Ω) = {v ∈ H¹(Ω) | v(0) = v(1) = 0} and H¹(Ω) = {v | v, v_x ∈ L²(Ω)}; here L^p(Ω), 1 ≤ p ≤ ∞, is the space of measurable functions for which ‖v‖_{L^p(Ω)} ≡ (∫₀¹ |v|^p)^{1/p} is finite. We associate to Y the inner product and norm (w, v)_Y = ∫₀¹ w_x v_x + wv and ‖w‖_Y = (w, w)_Y^{1/2}, respectively. We denote the dual space of Y as Y′, with duality pairing ⟨·,·⟩.
Given a ν ∈ D ≡ [ν_min, ν_max] ⊂ R¹₊, we look for u(ν) ∈ Y such that
G(u(ν); ν) = 0,   (1)
where G : Y → Y′ is the ν-parametrized C¹ mapping given by
1 The method we describe here is similar in form, but greatly improved in interpretation, to the approach presented in [12].
⟨G(w; ν), v⟩ ≡ ν ∫₀¹ w_x v_x − ½ ∫₀¹ w² v_x − ⟨F, v⟩,   ∀w, v ∈ Y,   (2)
and F ∈ Y′ is a given linear functional.
The Fréchet derivative of G at (z; ν) is given by
⟨DG(z; ν)w, v⟩ ≡ ν ∫₀¹ w_x v_x − ∫₀¹ z w v_x,   ∀w, v ∈ Y,   (3)
for any z ∈ Y. We further define
κ ≡ sup_{z∈Y} sup_{w∈Y} sup_{v∈Y} (∫₀¹ z w v_x)/(‖z‖_Y ‖w‖_Y ‖v‖_Y),   (4)
in terms of which we can bound the continuity constant γ(z; ν) ≡ ‖DG(z; ν)‖_{Y,Y′} as γ(z; ν) ≤ ν + κ‖z‖_Y. Note that κ, and hence γ(z; ν), is finite thanks to the continuous embedding of H¹(Ω) in L⁴(Ω).
2. Reduced-basis approximation
We first introduce a nested set of parameter samples S₁ ≡ {ν̄¹ ∈ D} ⊂ ⋯ ⊂ S_{N^max} ≡ {ν̄¹ ∈ D, …, ν̄^{N^max} ∈ D}.² Then, for any N ≤ N^max, we define the RB approximation space as W_N ≡ span{ζ_n ≡ u(ν̄ⁿ), 1 ≤ n ≤ N}. In practice, u(ν̄·) is replaced by a "truth" Galerkin approximation u^𝒩(ν̄·) ∈ Y^𝒩 ⊂ Y. We assume that the dimension of Y^𝒩, 𝒩, is sufficiently large that u^𝒩(ν) may be effectively equated with u(ν); as we shall see, the online complexity is independent of 𝒩.
Our RB approximation is then: Given a ν ∈ D, find u_N(ν) ∈ W_N such that
⟨G(u_N(ν); ν), v⟩ = 0,   ∀v ∈ W_N.   (5)
We shall assume that DG(u_{N^max}(ν); ν) is an isomorphism; more quantitatively, we suppose that β_{N^max}(ν) ≥ β₀ > 0, ∀ν ∈ D, where β_N(ν) ≡ ‖DG(u_N(ν); ν)⁻¹‖⁻¹_{Y′,Y}.
Standard proofs demonstrate that both (1) and (5) admit unique solutions for ν sufficiently large; observations suggest that, in fact, both (1) and (5) admit unique solutions for all positive ν. We present in Table 1 max_{ν∈D} (‖e(ν)‖_Y/‖u(ν)‖_Y) as a function of N for D ≡ [0.01, 10.0]. We observe that the error e(ν) ≡ u(ν) − u_N(ν) tends to zero (uniformly in ν) quite rapidly.
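The truth solves and snapshot generation above can be sketched numerically. This is only a minimal illustration, not the authors' 𝒩 = 1,000 Galerkin discretization: we use a second-order finite-difference scheme for −ν u_xx + u u_x = f with an assumed forcing f ≡ 1, Newton iteration, and continuation in ν over an illustrative subrange of D; the sample values and grid size are likewise assumptions.

```python
import numpy as np

def solve_burgers(nu, f=1.0, n=200, u0=None, tol=1e-10, max_it=50):
    """Truth solve of -nu*u_xx + u*u_x = f on (0,1), u(0)=u(1)=0, by Newton."""
    h = 1.0 / (n + 1)
    u = np.zeros(n) if u0 is None else u0.copy()
    for _ in range(max_it):
        up = np.concatenate(([0.0], u, [0.0]))              # pad with boundary values
        uxx = (up[2:] - 2.0 * up[1:-1] + up[:-2]) / h**2
        ux = (up[2:] - up[:-2]) / (2.0 * h)
        res = -nu * uxx + u * ux - f                        # nonlinear residual
        if np.linalg.norm(res) < tol:
            break
        jac = np.diag(2.0 * nu / h**2 + ux)                 # tridiagonal Newton Jacobian
        off = np.arange(n - 1)
        jac[off + 1, off] = -nu / h**2 - u[1:] / (2.0 * h)  # d(res_i)/d(u_{i-1})
        jac[off, off + 1] = -nu / h**2 + u[:-1] / (2.0 * h) # d(res_i)/d(u_{i+1})
        u = u - np.linalg.solve(jac, res)
    return u

# nested snapshot sample; continuation reuses the previous solution as initial guess
sample = [10.0, 3.0, 1.0, 0.3, 0.1]
snapshots, guess = [], None
for nu_bar in sample:
    guess = solve_burgers(nu_bar, u0=guess)
    snapshots.append(guess)
W = np.linalg.qr(np.array(snapshots).T)[0]  # orthonormalized basis for span{u(nu_bar)}
```

Orthonormalizing the snapshots (here by QR) does not change span W_N but keeps the reduced system well conditioned.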
3. A posteriori error bounds
Our a posteriori error estimators require rigorous upper (respectively, lower) bounds for κ (respectively, β_N(ν)). The upper bound on κ, κ̃, is readily derived a priori: from the continuous embedding H¹(Ω) ⊂ L⁴(Ω) we derive ∫₀¹ z w v_x ≤ ‖z‖_{L⁴(Ω)} ‖w‖_{L⁴(Ω)} ‖v‖_Y; it further follows from the embedding H¹(Ω ≡ ]0, 1[) ⊂ L^∞(Ω ≡ ]0, 1[) that ‖w‖_{L⁴(Ω)} ≤ ‖w‖_{L^∞(Ω)} ≤ ½‖w‖_Y. We thus choose κ̃ = ¼.
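The bound κ̃ = ¼ can be spot-checked numerically: for functions in H¹₀(0, 1) (here random truncated sine series on a fine grid, an illustrative choice), the ratio in (4) should never exceed ¼.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 2001)
dx = x[1] - x[0]

def integrate(f):
    """Composite trapezoid rule on the uniform grid."""
    return float(np.sum(f[1:] + f[:-1]) * 0.5 * dx)

def random_h10():
    """Random truncated sine series: vanishes at x = 0 and x = 1."""
    coeffs = rng.standard_normal(10)
    return sum(c * np.sin((k + 1) * np.pi * x) for k, c in enumerate(coeffs))

def y_norm(w):
    wx = np.gradient(w, x)
    return np.sqrt(integrate(wx**2 + w**2))

worst = 0.0
for _ in range(200):
    z, w, v = random_h10(), random_h10(), random_h10()
    vx = np.gradient(v, x)
    ratio = abs(integrate(z * w * vx)) / (y_norm(z) * y_norm(w) * y_norm(v))
    worst = max(worst, ratio)

print(worst)  # observed supremum over the random sample; should stay below 0.25
```

This is of course only a sanity check over a finite sample, not a proof; the rigorous statement is the embedding argument above.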
Table 1
Normalized error in the reduced-basis approximation as a function of N

N     max_{ν∈D} ‖e(ν)‖_Y/‖u(ν)‖_Y
1     3.04 × 10⁰
5     1.20 × 10⁻¹
9     7.30 × 10⁻³
13    3.34 × 10⁻⁵
17    3.80 × 10⁻⁶
2 The samples are constructed adaptively based on the inexpensive a posteriori error estimators of Section 3 [12].
To construct a lower bound for β_N(ν), β̃(ν), we introduce a parameter set U_J ≡ {ν̃¹ ∈ D, …, ν̃^J ∈ D}, a distance d(ν, ν̃; N) ≡ 2β_{N^max}(ν̃)⁻¹[|ν − ν̃| + ½‖u_N(ν) − u_{N^max}(ν̃)‖_{L²(Ω)}], and a mapping I_N ν ≡ arg min_{ν̃∈U_J} d(ν, ν̃; N). We then define β̃(ν) ≡ ½β_{N^max}(I_N ν), and prove

Lemma 3.1. The inf-sup approximation β̃(ν) satisfies 0 < β₀/2 ≤ β̃(ν) ≤ β_N(ν), ∀ν ∈ D_N(U_J), where D_N(U_J) ≡ {ν ∈ D | d(ν, I_N ν; N) ≤ 1}.
Proof. We can derive, from (3) and the embedding H¹(Ω ≡ ]0, 1[) ⊂ L^∞(Ω ≡ ]0, 1[), that
⟨(DG(u_N(ν); ν) − DG(u_{N^max}(ν̃); ν̃))w, v⟩ = (ν − ν̃) ∫₀¹ w_x v_x − ∫₀¹ (u_N(ν) − u_{N^max}(ν̃)) w v_x
  ≤ |ν − ν̃| ‖w‖_Y ‖v‖_Y + ½‖u_N(ν) − u_{N^max}(ν̃)‖_{L²(Ω)} ‖w‖_Y ‖v‖_Y,   ∀ν, ν̃ ∈ D;
hence ‖DG(u_{N^max}(I_N ν); I_N ν)⁻¹(DG(u_N(ν); ν) − DG(u_{N^max}(I_N ν); I_N ν))‖_{Y,Y} ≤ ½, ∀ν ∈ D_N(U_J). Thus, DG(u_N(ν); ν)⁻¹ = [DG(u_{N^max}(I_N ν); I_N ν) + (DG(u_N(ν); ν) − DG(u_{N^max}(I_N ν); I_N ν))]⁻¹ exists, and β_N(ν)⁻¹ ≤ 2β_{N^max}(I_N ν)⁻¹ [4]. The lower bound on β̃(ν) follows from β_{N^max}(ν) ≥ β₀, ∀ν ∈ D. ✷
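Operationally, the lower-bound construction of Lemma 3.1 is a cheap lookup. A sketch follows; the precomputed data (the table of β_{N^max}(ν̃^j) values and the truth snapshots u_{N^max}(ν̃^j), which in the paper come from the offline stage) are replaced here by hypothetical made-up values purely for illustration.

```python
import numpy as np

def beta_lb(nu, u_N, nu_tilde, beta_table, u_table, dx):
    """Lemma 3.1 lower bound: 0.5*beta_Nmax(I_N nu) if d(nu, I_N nu; N) <= 1, else None."""
    # d(nu, nu_t; N) = 2/beta(nu_t) * ( |nu - nu_t| + 0.5*||u_N(nu) - u_Nmax(nu_t)||_L2 )
    l2 = lambda w: np.sqrt(np.sum(w**2) * dx)  # assumes a uniform grid
    d = np.array([2.0 / beta_table[j]
                  * (abs(nu - nu_tilde[j]) + 0.5 * l2(u_N - u_table[j]))
                  for j in range(len(nu_tilde))])
    i = int(np.argmin(d))                      # I_N nu
    if d[i] > 1.0:
        return None                            # nu outside D_N(U_J): no certificate
    return 0.5 * beta_table[i]

# illustrative data: a coarse parameter set U_J with hypothetical betas and snapshots
x = np.linspace(0.0, 1.0, 101)
nu_tilde = np.array([0.1, 1.0, 10.0])
beta_table = 0.5 / nu_tilde + nu_tilde         # made-up surrogate values, not the paper's
u_table = np.array([np.sin(np.pi * x) / nt for nt in nu_tilde])

u_N = np.sin(np.pi * x)                        # pretend RB solution at nu = 1.0
print(beta_lb(1.0, u_N, nu_tilde, beta_table, u_table, x[1] - x[0]))  # prints 0.75
```

Returning None when d(ν, I_N ν; N) > 1 mirrors the situation in Table 2 where 0.013 ∉ D_N(U_J) for small N: the bound simply is not certified there.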
We now define Δ_N(ν) ≡ (3/2)ε_N(ν)/β̃(ν), τ(ν) ≡ (3/2)κ̃/β̃(ν), and ρ_N(ν) ≡ 3γ(u_N(ν); ν)/β̃(ν), where ε_N(ν) ≡ ‖G(u_N(ν); ν)‖_{Y′} is the dual norm of the residual. We can then prove

Theorem 3.2. For ν in D_N(U_J) and Δ_N(ν) ≤ τ(ν)⁻¹, (i) there exists a unique solution of (1), u(ν), in B(u_N(ν), (3/2)τ(ν)⁻¹), for which (ii) ‖u(ν) − u_N(ν)‖_Y ≤ Δ_N(ν), and (iii) Δ_N(ν) ≤ ρ_N(ν)‖u(ν) − u_N(ν)‖_Y. Here B(z, r) ≡ {w ∈ Y | ‖w − z‖_Y < r}.
Proof. The proofs of (i) and (ii) follow directly from Lemma 3.1 and the Brezzi, Rappaz and Raviart framework, in particular Theorem 3.1 of [4] (slightly specialized to the quadratic nonlinearity of interest here).
To derive the effectivity result, (iii), we note that e(ν) ≡ u(ν) − u_N(ν) satisfies
⟨DG(u_N(ν); ν)e(ν), v⟩ = −⟨G(u_N(ν); ν), v⟩ + ½ ∫₀¹ e(ν)² v_x.   (6)
We now note from standard duality arguments that ε_N(ν) = ‖ê(ν)‖_Y, where ê(ν) ∈ Y satisfies (ê(ν), v)_Y = −⟨G(u_N(ν); ν), v⟩, ∀v ∈ Y. We next choose v = ê(ν) in (6), apply continuity, and invoke ‖e(ν)‖_Y ≤ Δ_N(ν) ≡ (3/2)ε_N(ν)/β̃(ν) to obtain Δ_N(ν) ≤ (ρ_N(ν)/2)‖e(ν)‖_Y + (Δ_N(ν)τ(ν))(Δ_N(ν)/2); the result then follows from Δ_N(ν) ≤ τ(ν)⁻¹. ✷
4. Computational complexity: offline/online decomposition
In actual practice, our interest is not in u(ν) per se, but rather in a (say) linear-functional output, s(ν) ≡ ⟨L, u(ν)⟩, where L is a prescribed member of Y′. We wish to reliably evaluate s(ν) rapidly in the limit of many queries – as demanded in the (adaptive) design optimization and (real-time) control contexts. For "rapidly", we approximate s(ν) by s_N(ν) ≡ ⟨L, u_N(ν)⟩. For "reliably", we provide the a posteriori bound Δ^s_N(ν) ≡ ‖L‖_{Y′} Δ_N(ν); under the hypotheses of Theorem 3.2, |s(ν) − s_N(ν)| ≤ Δ^s_N(ν).
We now discuss the computational stratagem by which we efficiently evaluate s_N(ν) and Δ_N(ν) (and hence Δ^s_N(ν)). The fundamental ingredient is an offline/online computational decomposition [2,6,11] that breaks the requisite calculations into two parts: an expensive offline stage performed once, and an inexpensive online stage performed many times – for each new evaluation ν → s_N(ν), Δ_N(ν). The complexity of the online stage will depend on N, which is typically small (see Table 1), but not on 𝒩, which is typically large; we will thus realize marginal real-time – and, thanks to Δ^s_N(ν), reliable – response.
We begin with the calculation of u_N(ν) and s_N(ν) = ⟨L, u_N(ν)⟩. To obtain u_N(ν), we apply a Newton iterative scheme: given u^k_N(ν) ∈ W_N, find δ^k_N(ν) ≡ u^{k+1}_N(ν) − u^k_N(ν) ∈ W_N such that ⟨DG(u^k_N(ν); ν)δ^k_N(ν), v⟩ = −⟨G(u^k_N(ν); ν), v⟩, ∀v ∈ W_N. This can be expressed in terms of our RB expansions u^k_N(ν) = Σ_{n=1}^{N} u^k_{N n}(ν)ζ_n and δ^k_N(ν) = Σ_{n=1}^{N} δ^k_{N n}(ν)ζ_n as
Σ_{n=1}^{N} [ν A_{N m,n} + 2 Σ_{n′=1}^{N} u^k_{N n′}(ν) B_{N m,n′,n}] δ^k_{N n}(ν)
  = F_{N m} − Σ_{n=1}^{N} [ν A_{N m,n} u^k_{N n}(ν) + Σ_{n′=1}^{N} u^k_{N n′}(ν) B_{N m,n′,n} u^k_{N n}(ν)],   1 ≤ m ≤ N,   (7)
where A_{N m,n} ≡ ∫₀¹ (dζ_n/dx)(dζ_m/dx), 1 ≤ n, m ≤ N, B_{N m,n′,n} ≡ −½ ∫₀¹ ζ_{n′} ζ_n (dζ_m/dx), 1 ≤ n, n′, m ≤ N, and F_{N m} = ⟨F, ζ_m⟩, 1 ≤ m ≤ N. Once the iteration has converged, we evaluate our output as s_N(ν) = Σ_{n=1}^{N} u_{N n}(ν) L_{N n}, where L_{N n} = ⟨L, ζ_n⟩, 1 ≤ n ≤ N. The offline/online decomposition is clear.
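The structure of the online Newton system (7) can be illustrated with a small self-contained sketch. For brevity we project onto sine modes rather than snapshot functions (an illustrative substitution: the paper's ζ_n are truth solutions), and assume f(x) ≡ 1 for F and ℓ(x) ≡ 1 for L; the offline arrays A, B, F, L are assembled once by quadrature, after which each online Newton step costs O(N³), independent of the quadrature grid.

```python
import numpy as np

# ---- offline: assemble parameter-independent A, B, F, L over a basis ----
N = 6
x = np.linspace(0.0, 1.0, 2001)
dx = x[1] - x[0]
Z = np.array([np.sin((n + 1) * np.pi * x) for n in range(N)])             # zeta_n
Zx = np.array([(n + 1) * np.pi * np.cos((n + 1) * np.pi * x) for n in range(N)])

def integrate(f):
    return float(np.sum(f[1:] + f[:-1]) * 0.5 * dx)                       # trapezoid

A = np.array([[integrate(Zx[n] * Zx[m]) for n in range(N)] for m in range(N)])
B = np.array([[[-0.5 * integrate(Z[p] * Z[n] * Zx[m]) for n in range(N)]
               for p in range(N)] for m in range(N)])                     # B[m, n', n]
F = np.array([integrate(1.0 * Z[m]) for m in range(N)])                   # f(x) = 1
L = F.copy()                                                              # ell(x) = 1

# ---- online: Newton iteration (7), cost independent of the truth dimension ----
def rb_solve(nu, tol=1e-12, max_it=30):
    u = np.zeros(N)
    for _ in range(max_it):
        res = nu * (A @ u) + np.einsum('mpn,p,n->m', B, u, u) - F         # G in RB space
        if np.linalg.norm(res) < tol:
            break
        jac = nu * A + 2.0 * np.einsum('mpn,p->mn', B, u)                 # DG in RB space
        u = u - np.linalg.solve(jac, res)
    return u

u_rb = rb_solve(1.0)
s_N = float(L @ u_rb)  # reduced-basis output prediction
```

The Jacobian line uses the symmetry B[m, n′, n] = B[m, n, n′], which is exactly why the factor 2 appears on the left-hand side of (7).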
In the offline stage, we form the parameter-independent quantities A_{N^max} ∈ R^{(N^max)²}, B_{N^max} ∈ R^{(N^max)³}, F_{N^max} ∈ R^{N^max}, and L_{N^max} ∈ R^{N^max} – at cost O((N^max)³𝒩). In the online stage, for any given ν, we first construct and solve (7) – at cost (per Newton iteration) O(N³); and then, upon convergence, we evaluate s_N(ν) – at cost O(N). In the online stage, the complexity is independent of 𝒩.³
We now turn to the a posteriori error bound. The two critical computational tasks are the construction of β̃(ν), and the evaluation of ε_N(ν). The latter is developed in detail in [12] – the resulting offline and online complexities are O((N^max)⁴𝒩) and N⁴/4, respectively. We thus only discuss the former: we must first find I_N ν and verify ν ∈ D_N(U_J); we can then evaluate ½β_{N^max}(I_N ν). To determine I_N ν we need only compare d(ν, ν̃; N) at a few points ν̃ ∈ U_J in the vicinity of ν. Once ν̃^i = I_N ν is obtained, we simply evaluate ½β_{N^max}(ν̃^i). The offline/online decomposition is clear.
In the offline stage we compute the parameter-independent quantities at cost O(J𝒩³) + O(J𝒩) + O((N^max)²𝒩). In the online stage, for any given ν and u_N(ν), we evaluate d(ν, ν̃^j; N) for ν̃^j ≈ ν to obtain ν̃^i = I_N ν – at cost O((N^max)²); then, assuming d(ν, ν̃^i; N) ≤ 1,⁴ we "look up" β_{N^max}(ν̃^i). In summary, we may compute not only s_N(ν), but also Δ_N(ν), at online cost independent of 𝒩.⁵
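The remaining ingredient, the dual norm ε_N(ν) = ‖G(u_N(ν); ν)‖_{Y′}, reduces to a Riesz problem on the truth space: solve (ê, v)_Y = −⟨G(u_N(ν); ν), v⟩ and take ‖ê‖_Y. The following sketch assembles the Y-inner-product matrix (stiffness plus mass) for piecewise-linear elements on a uniform mesh; the residual vector r is assumed given from a truth assembly, and the β̃ value used for Δ_N is a hypothetical placeholder.

```python
import numpy as np

def y_gram(n_dof, h):
    """Gram matrix of (w, v)_Y = int w_x v_x + w v for P1 elements, uniform mesh."""
    K = (np.diag(2.0 * np.ones(n_dof)) - np.diag(np.ones(n_dof - 1), 1)
         - np.diag(np.ones(n_dof - 1), -1)) / h                     # stiffness
    M = (np.diag(4.0 * np.ones(n_dof)) + np.diag(np.ones(n_dof - 1), 1)
         + np.diag(np.ones(n_dof - 1), -1)) * h / 6.0               # mass
    return K + M

def dual_norm(r, Y):
    """||r||_{Y'} via the Riesz representer: solve Y e_hat = r, return sqrt(r.e_hat)."""
    e_hat = np.linalg.solve(Y, r)
    return float(np.sqrt(r @ e_hat))

n_dof = 199
h = 1.0 / (n_dof + 1)
Y = y_gram(n_dof, h)

# consistency check: the functional v -> (w, v)_Y has dual norm exactly ||w||_Y
w = np.sin(np.pi * np.linspace(h, 1.0 - h, n_dof))
r = Y @ w
eps = dual_norm(r, Y)               # equals ||w||_Y by construction
delta_N = 1.5 * eps / 0.75          # Delta_N = (3/2) eps / beta_tilde (beta assumed)
```

In the paper this dense 𝒩-dependent solve is of course replaced by the offline/online treatment of [12]; the sketch only shows what quantity is being computed.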
5. Numerical results
We present here results for the case D = [ν_min = 0.01, ν_max = 10.0] and N^max = 18; the resolution of our (uniform) truth approximation is 𝒩 = 1,000. We find empirically that β_{N^max}(ν_min) ∼ ½(ν_min)⁻¹ and γ(u_{N^max}; ν) ∼ Const as ν_min → 0; it follows that ρ_N(ν_min) ∼ Const(ν_min)⁻¹. (Note if our primary interest is the ν-dependence of L² output functionals, we may gainfully reduce the relative weight of the H¹-seminorm contribution to ‖·‖_Y.) We also observe that, to achieve D_N(U_J) = D, J increases quite slowly with decreasing ν_min; for the particular U_{J=28} results presented here, D_N(U_J) = D for all N ≥ 9.
Table 2
Numerical results for ν = 0.305 and ν = 0.013; for N = 1 and N = 5, 0.013 ∉ D_N(U_J)

            ν = 0.305                                             ν = 0.013
N     Δ_N(ν)τ(ν)    Δ_N(ν)/‖u(ν)‖_Y   Δ_N(ν)/‖e(ν)‖_Y    Δ_N(ν)τ(ν)    Δ_N(ν)/‖u(ν)‖_Y   Δ_N(ν)/‖e(ν)‖_Y
1     7.95 × 10⁻¹   3.07 × 10⁻¹       3.19               –             –                 –
5     4.57 × 10⁻¹   1.76 × 10⁻¹       3.21               –             –                 –
9     5.65 × 10⁻²   2.18 × 10⁻²       3.22               1.13 × 10⁰    2.43 × 10⁻³       7.10
13    1.59 × 10⁻⁵   6.16 × 10⁻⁶       3.21               6.69 × 10⁻³   1.43 × 10⁻⁵       4.96
17    2.76 × 10⁻⁶   1.06 × 10⁻⁶       3.21               1.92 × 10⁻⁴   4.12 × 10⁻⁷       4.63

We present in Table 2 Δ_N(ν)τ(ν), Δ_N(ν)/‖u(ν)‖_Y (the relative error bound), and Δ_N(ν)/‖e(ν)‖_Y (the effectivity) as a function of N for ν = 0.305 and ν = 0.013. For ν = 0.305 we observe that, even for small N, Δ_N(ν)τ(ν) ≤ 1 – and hence we can always provide a definitive error bound; that the relative error bound Δ_N(ν)/‖u(ν)‖_Y tends to zero rapidly; and that the effectivity is very good. Note also that the "uniqueness radius", (3/2)τ(0.305)⁻¹ = 0.571, is comfortably large relative to ‖u(0.305)‖_Y = 0.986.
For ν = 0.013 we observe that, for N < 9, 0.013 ∉ D_N(U_J), and for N < 10, Δ_N(ν)τ(ν) > 1 – and hence we can obtain rigorous error bounds only for very accurate RB approximations; that the relative error bound Δ_N(ν)/‖u(ν)‖_Y still tends to zero rapidly with N – the sample S_N is constructed to provide uniform convergence; and that the effectivity is much better than the theoretical upper bound. For this value of ν the "uniqueness radius", (3/2)τ(0.013)⁻¹ = 0.026, is small relative to ‖u(0.013)‖_Y = 8.113.

³ For the Burgers equation in R¹, the cost savings provided by the RB method – O(N³) vs. O(𝒩) – are not interesting; however, for the incompressible Navier–Stokes equations in R^{d=2,3}, the cost savings will be very significant.
⁴ To minimize the risk that d(ν, ν̃^i; N) > 1, we would like to choose U_J such that D_N(U_J) = D for some suitably large N (< N^max). This can be very efficiently effected.
⁵ The offline/online decomposition applies generally to any affine parameter (data, property, or geometry) dependence [11].
The rapid convergence of the RB method is important not only in efficiently achieving the desired accuracy, but also in efficiently satisfying Δ_N(ν)τ(ν) ≤ 1; accuracy is required both to predict and to certify.
Acknowledgements
We thank Professor Yvon Maday of University Paris VI for many helpful comments. This work was supported
by DARPA and AFOSR through Grant F49620-01-1-0458, and by the Singapore–MIT Alliance.
References
[1] B.O. Almroth, P. Stern, F.A. Brogan, Automatic choice of global shape functions in structural analysis, AIAA J. 16 (1978) 525–528.
[2] E. Balmes, Parametric families of reduced finite element models: Theory and applications, Mech. Systems and Signal Processing 10 (4)
(1996) 381–394.
[3] F. Brezzi, J. Rappaz, P.A. Raviart, Finite dimensional approximation of nonlinear problems. Part I: Branches of nonsingular solutions,
Numer. Math. 36 (1980) 1–25.
[4] G. Caloz, J. Rappaz, Numerical analysis for nonlinear and bifurcation problems, in: P.G. Ciarlet, J.-L. Lions (Eds.), Handbook of Numerical Analysis, Vol. V, Techniques of Scientific Computing (Part 2), Elsevier, 1997, pp. 487–637.
[5] J.P. Fink, W.C. Rheinboldt, On the error behavior of the reduced basis technique for nonlinear finite element approximations, Z. Angew.
Math. Mech. 63 (1983) 21–28.
[6] L. Machiels, Y. Maday, I.B. Oliveira, A.T. Patera, D.V. Rovas, Output bounds for reduced-basis approximations of symmetric positive
definite eigenvalue problems, C. R. Acad. Sci. Paris, Ser. I 331 (2) (2000) 153–158.
[7] Y. Maday, A.T. Patera, G. Turinici, Global a priori convergence theory for reduced-basis approximation of single-parameter symmetric
coercive elliptic partial differential equations, C. R. Acad. Sci. Paris, Ser. I 335 (2002) 1–6.
[8] A.K. Noor, J.M. Peters, Reduced basis technique for nonlinear analysis of structures, AIAA J. 18 (4) (1980) 455–462.
[9] J.S. Peterson, The reduced basis method for incompressible viscous flow calculations, SIAM J. Sci. Statist. Comput. 10 (4) (1989) 777–786.
[10] T.A. Porsching, Estimation of the error in the reduced basis method solution of nonlinear equations, Math. Comput. 45 (172) (1985)
487–496.
[11] C. Prud’homme, D. Rovas, K. Veroy, Y. Maday, A.T. Patera, G. Turinici, Reliable real-time solution of parametrized partial differential
equations: Reduced-basis output bound methods, J. Fluids Engrg. 124 (1) (2002) 70–80.
[12] K. Veroy, C. Prud’homme, D.V. Rovas, A.T. Patera, A posteriori error bounds for reduced-basis approximation of parametrized
noncoercive and nonlinear elliptic partial differential equations, in: Proceedings of the 16th AIAA Computational Fluid Dynamics
Conference, 2003.