Tridiagonal matrices
Gérard MEURANT
October, 2008
1. Similarity
2. Cholesky factorizations
3. Eigenvalues
4. Inverse
Similarity
Let
$$
T_k = \begin{pmatrix}
\alpha_1 & \omega_1 & & & \\
\beta_1 & \alpha_2 & \omega_2 & & \\
 & \ddots & \ddots & \ddots & \\
 & & \beta_{k-2} & \alpha_{k-1} & \omega_{k-1} \\
 & & & \beta_{k-1} & \alpha_k
\end{pmatrix}
$$
and βi ≠ ωi, i = 1, . . . , k − 1
Proposition
Assume that the coefficients ωj , j = 1, . . . , k − 1 are different from
zero and the products βj ωj are positive. Then, the matrix Tk is
similar to a symmetric tridiagonal matrix. Therefore, its
eigenvalues are real.
Proof.
Consider $D_k^{-1} T_k D_k$, which is similar to $T_k$, where $D_k$ is a diagonal matrix with diagonal elements δj. Take
$$
\delta_1 = 1, \qquad \delta_j^2 = \frac{\beta_{j-1}\cdots\beta_1}{\omega_{j-1}\cdots\omega_1}, \quad j = 2, \dots, k
$$
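A minimal numerical sketch of this diagonal scaling (NumPy; the array names alpha, beta, omega are illustrative, not from the slides):

```python
import numpy as np

def symmetrize(alpha, beta, omega):
    """Diagonal similarity D_k^{-1} T_k D_k of a nonsymmetric tridiagonal
    T_k (diagonal alpha, subdiagonal beta, superdiagonal omega) into the
    similar symmetric J_k, assuming beta_j * omega_j > 0."""
    k = len(alpha)
    delta = np.ones(k)
    for j in range(1, k):
        # delta_j = delta_{j-1} * sqrt(beta_{j-1} / omega_{j-1})
        delta[j] = delta[j - 1] * np.sqrt(beta[j - 1] / omega[j - 1])
    off = np.sqrt(beta * omega)  # common off-diagonal of D^{-1} T D
    return alpha, off, delta

alpha = np.array([4.0, 5.0, 6.0])
beta = np.array([1.0, 2.0])    # subdiagonal of T_k
omega = np.array([2.0, 0.5])   # superdiagonal of T_k
diag, off, _ = symmetrize(alpha, beta, omega)
# J_k (diagonal `diag`, off-diagonals `off`) has the same eigenvalues as T_k
```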
Let
$$
J_k = \begin{pmatrix}
\alpha_1 & \beta_1 & & & \\
\beta_1 & \alpha_2 & \beta_2 & & \\
 & \ddots & \ddots & \ddots & \\
 & & \beta_{k-2} & \alpha_{k-1} & \beta_{k-1} \\
 & & & \beta_{k-1} & \alpha_k
\end{pmatrix}
$$
where the values βj, j = 1, . . . , k − 1 are assumed to be nonzero.
Proposition
$$
\det(J_{k+1}) = \alpha_{k+1}\,\det(J_k) - \beta_k^2\,\det(J_{k-1})
$$
with initial conditions
$$
\det(J_1) = \alpha_1, \qquad \det(J_2) = \alpha_1\alpha_2 - \beta_1^2.
$$
The eigenvalues of Jk are the zeros of det(Jk − λI). The zeros do not depend on the signs of the coefficients βj, j = 1, . . . , k − 1, so we may suppose βj > 0; Jk is then a Jacobi matrix.
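This recurrence evaluates det(Jk − λI) in O(k) operations; a short sketch (the function name is mine):

```python
def char_poly(alpha, beta, lam):
    """Evaluate det(J_k - lam*I) by the three-term recurrence
    p_{j+1} = (alpha_{j+1} - lam) p_j - beta_j^2 p_{j-1}."""
    p_prev, p = 1.0, alpha[0] - lam      # det(J_0) = 1, det(J_1 - lam*I)
    for j in range(1, len(alpha)):
        p_prev, p = p, (alpha[j] - lam) * p - beta[j - 1] ** 2 * p_prev
    return p
```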
Cholesky factorizations
Let ∆k be a diagonal matrix with diagonal elements δj, j = 1, . . . , k and
$$
L_k = \begin{pmatrix}
1 & & & & \\
l_1 & 1 & & & \\
 & \ddots & \ddots & & \\
 & & l_{k-2} & 1 & \\
 & & & l_{k-1} & 1
\end{pmatrix}
$$
Then
$$
J_k = L_k \Delta_k L_k^T
$$
with
$$
\delta_1 = \alpha_1, \qquad \delta_j = \alpha_j - \frac{\beta_{j-1}^2}{\delta_{j-1}}, \quad j = 2, \dots, k
$$
$$
l_j = \beta_j / \delta_j, \quad j = 1, \dots, k-1
$$
The factorization can be completed if no δj is zero for j = 1, . . . , k − 1. This does not happen if the matrix Jk is positive definite: then all the elements δj are positive, and the genuine Cholesky factorization can be obtained from ∆k,
$$
J_k = L_k^C (L_k^C)^T \qquad \text{with} \qquad L_k^C = L_k \Delta_k^{1/2}
$$
which is
$$
L_k^C = \begin{pmatrix}
\sqrt{\delta_1} & & & & \\
\beta_1/\sqrt{\delta_1} & \sqrt{\delta_2} & & & \\
 & \ddots & \ddots & & \\
 & & \beta_{k-2}/\sqrt{\delta_{k-2}} & \sqrt{\delta_{k-1}} & \\
 & & & \beta_{k-1}/\sqrt{\delta_{k-1}} & \sqrt{\delta_k}
\end{pmatrix}
$$
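A sketch of these recurrences (illustrative names; the slides give only the formulas):

```python
def ldlt(alpha, beta):
    """J_k = L_k Delta_k L_k^T for a symmetric tridiagonal J_k.
    alpha: diagonal (length k), beta: off-diagonal (length k-1)."""
    k = len(alpha)
    delta = [alpha[0]]                   # delta_1 = alpha_1
    l = []
    for j in range(1, k):
        l.append(beta[j - 1] / delta[j - 1])   # l_j = beta_j / delta_j
        delta.append(alpha[j] - beta[j - 1] ** 2 / delta[j - 1])
    return delta, l  # the Cholesky factor L^C has sqrt(delta_j) on its diagonal
```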
The factorization can also be written as
$$
J_k = L_k^D \Delta_k^{-1} (L_k^D)^T
$$
with
$$
L_k^D = \begin{pmatrix}
\delta_1 & & & & \\
\beta_1 & \delta_2 & & & \\
 & \ddots & \ddots & & \\
 & & \beta_{k-2} & \delta_{k-1} & \\
 & & & \beta_{k-1} & \delta_k
\end{pmatrix}
$$
Clearly, the only elements we have to compute and store are the
diagonal elements δj , j = 1, . . . , k
To solve a linear system Jk x = c, we successively solve
$$
L_k^D y = c, \qquad (L_k^D)^T x = \Delta_k y
$$
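A sketch of the two substitution sweeps, recomputing the δj on the fly (illustrative code, not from the slides):

```python
def solve(alpha, beta, c):
    """Solve J_k x = c via L_k^D y = c, then (L_k^D)^T x = Delta_k y."""
    k = len(alpha)
    delta = [alpha[0]]
    for j in range(1, k):
        delta.append(alpha[j] - beta[j - 1] ** 2 / delta[j - 1])
    y = [c[0] / delta[0]]
    for j in range(1, k):                     # forward sweep: L^D y = c
        y.append((c[j] - beta[j - 1] * y[j - 1]) / delta[j])
    x = y[:]                                  # backward sweep: (L^D)^T x = Delta y
    for j in range(k - 2, -1, -1):
        x[j] = y[j] - beta[j] * x[j + 1] / delta[j]
    return x

# e.g. solve([2.0, 2.0], [1.0], [1.0, 1.0]) returns [1/3, 1/3]
```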
The previous factorizations proceed from top to bottom (LU)
We can also proceed from bottom to top (UL)
$$
J_k = \bar{L}_k^T D_k^{-1} \bar{L}_k
$$
with
$$
\bar{L}_k = \begin{pmatrix}
d_1^{(k)} & & & & \\
\beta_1 & d_2^{(k)} & & & \\
 & \ddots & \ddots & & \\
 & & \beta_{k-2} & d_{k-1}^{(k)} & \\
 & & & \beta_{k-1} & d_k^{(k)}
\end{pmatrix}
$$
and $D_k$ a diagonal matrix with elements $d_j^{(k)}$,
$$
d_k^{(k)} = \alpha_k, \qquad d_j^{(k)} = \alpha_j - \frac{\beta_j^2}{d_{j+1}^{(k)}}, \quad j = k-1, \dots, 1
$$
From the LU and UL factorizations we can obtain all the so-called "twisted" factorizations of Jk,
$$
J_k = M_k \Omega_k M_k^T
$$
where Mk is lower bidiagonal at the top for rows with index smaller than l and upper bidiagonal at the bottom for rows with index larger than l:
$$
\omega_1 = \alpha_1, \qquad \omega_j = \alpha_j - \frac{\beta_{j-1}^2}{\omega_{j-1}}, \quad j = 2, \dots, l-1
$$
$$
\omega_k = \alpha_k, \qquad \omega_j = \alpha_j - \frac{\beta_j^2}{\omega_{j+1}}, \quad j = k-1, \dots, l+1
$$
$$
\omega_l = \alpha_l - \frac{\beta_{l-1}^2}{\omega_{l-1}} - \frac{\beta_l^2}{\omega_{l+1}}
$$
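A sketch computing the twisted pivots ωj for a given meeting index l (1-based, as on the slides; the function name is mine):

```python
def twisted_pivots(alpha, beta, l):
    """Pivots omega_j of the twisted factorization of J_k meeting at row l."""
    k = len(alpha)
    omega = [0.0] * k
    omega[0] = alpha[0]
    for j in range(1, l - 1):                 # top-down (LU) part
        omega[j] = alpha[j] - beta[j - 1] ** 2 / omega[j - 1]
    omega[k - 1] = alpha[k - 1]
    for j in range(k - 2, l - 1, -1):         # bottom-up (UL) part
        omega[j] = alpha[j] - beta[j] ** 2 / omega[j + 1]
    omega[l - 1] = alpha[l - 1]               # the two sweeps meet at row l
    if l > 1:
        omega[l - 1] -= beta[l - 2] ** 2 / omega[l - 2]
    if l < k:
        omega[l - 1] -= beta[l - 1] ** 2 / omega[l]
    return omega
```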
Eigenvalues
The eigenvalues of Jk are the zeros of det(Jk − λI), and
$$
\det(J_k - \lambda I) = \delta_1(\lambda)\cdots\delta_k(\lambda) = d_1^{(k)}(\lambda)\cdots d_k^{(k)}(\lambda)
$$
This shows that
$$
\delta_k(\lambda) = \frac{\det(J_k - \lambda I)}{\det(J_{k-1} - \lambda I)}, \qquad
d_1^{(k)}(\lambda) = \frac{\det(J_k - \lambda I)}{\det(J_{2,k} - \lambda I)}
$$
Theorem
The eigenvalues $\theta_i^{(k+1)}$ of $J_{k+1}$ strictly interlace the eigenvalues of $J_k$:
$$
\theta_1^{(k+1)} < \theta_1^{(k)} < \theta_2^{(k+1)} < \theta_2^{(k)} < \cdots < \theta_k^{(k)} < \theta_{k+1}^{(k+1)}
$$
(Cauchy interlacing theorem)
Proof.
Let $x = (y \ \ \zeta)^T$ be an eigenvector of $J_{k+1}$ corresponding to θ. Then
$$
J_k y + \beta_k \zeta e^k = \theta y, \qquad \beta_k y_k + \alpha_{k+1}\zeta = \theta\zeta
$$
Eliminating y from these relations, we obtain
$$
\left(\alpha_{k+1} - \beta_k^2\,(e^k)^T (J_k - \theta I)^{-1} e^k\right)\zeta = \theta\zeta
$$
that is,
$$
\alpha_{k+1} - \beta_k^2 \sum_{j=1}^{k} \frac{\xi_j^2}{\theta_j^{(k)} - \theta} - \theta = 0
$$
where ξj is the last component of the jth eigenvector of Jk.
The zeros of this function interlace the poles $\theta_j^{(k)}$.
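The pivots δj(λ) also locate eigenvalues: by Sylvester's law of inertia, the number of negative pivots of Jk − λI equals the number of eigenvalues below λ, which is the basis of bisection methods. A sketch (not from the slides):

```python
def count_eigs_below(alpha, beta, lam):
    """Number of eigenvalues of J_k smaller than lam, counting the
    negative pivots delta_j(lam) of J_k - lam*I (Sylvester's law of inertia)."""
    count, d = 0, 1.0
    for j in range(len(alpha)):
        b2 = beta[j - 1] ** 2 if j > 0 else 0.0
        d = alpha[j] - lam - b2 / d
        if d == 0.0:
            d = -1e-300       # perturb an exact-zero pivot (standard safeguard)
        if d < 0:
            count += 1
    return count
```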
Inverse
Theorem
There exist two sequences of numbers {ui}, {vi}, i = 1, . . . , k such that
$$
J_k^{-1} = \begin{pmatrix}
u_1 v_1 & u_1 v_2 & u_1 v_3 & \dots & u_1 v_k \\
u_1 v_2 & u_2 v_2 & u_2 v_3 & \dots & u_2 v_k \\
u_1 v_3 & u_2 v_3 & u_3 v_3 & \dots & u_3 v_k \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
u_1 v_k & u_2 v_k & u_3 v_k & \dots & u_k v_k
\end{pmatrix}
$$
Moreover, u1 can be chosen arbitrarily, for instance u1 = 1.
See Baranger and Duc-Jacquet; Meurant.
Hence, we just have to compute $J_k^{-1} e^1$ and $J_k^{-1} e^k$. For
$$
J_k v = e^1
$$
use the UL factorization of Jk:
$$
v_1 = \frac{1}{d_1^{(k)}}, \qquad v_j = (-1)^{j-1}\,\frac{\beta_1\cdots\beta_{j-1}}{d_1^{(k)}\cdots d_j^{(k)}}, \quad j = 2, \dots, k
$$
For
$$
v_k J_k u = e^k
$$
use the LU factorization:
$$
u_k = \frac{1}{\delta_k v_k}, \qquad u_{k-j} = (-1)^j\,\frac{\beta_{k-j}\cdots\beta_{k-1}}{\delta_{k-j}\cdots\delta_k\, v_k}, \quad j = 1, \dots, k-1
$$
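A sketch assembling Jk⁻¹ from the two sequences (illustrative code; it recomputes the LU pivots δj and UL pivots dj(k) as above):

```python
import numpy as np

def inverse_from_uv(alpha, beta):
    """Build J_k^{-1} with (J_k^{-1})_{i,j} = u_i v_j for i <= j."""
    k = len(alpha)
    d = [0.0] * k                       # UL pivots d_j^{(k)}
    d[k - 1] = alpha[k - 1]
    for j in range(k - 2, -1, -1):
        d[j] = alpha[j] - beta[j] ** 2 / d[j + 1]
    delta = [alpha[0]]                  # LU pivots delta_j
    for j in range(1, k):
        delta.append(alpha[j] - beta[j - 1] ** 2 / delta[j - 1])
    v = [1.0 / d[0]]                    # v = J_k^{-1} e^1 (UL factorization)
    for j in range(1, k):
        v.append(-v[j - 1] * beta[j - 1] / d[j])
    u = [0.0] * k                       # u from v_k J_k u = e^k (LU)
    u[k - 1] = 1.0 / (delta[k - 1] * v[k - 1])
    for j in range(k - 2, -1, -1):
        u[j] = -u[j + 1] * beta[j] / delta[j]
    return np.array([[u[min(i, j)] * v[max(i, j)] for j in range(k)]
                     for i in range(k)])

# e.g. inverse_from_uv([2.0, 2.0], [1.0]) gives [[2/3, -1/3], [-1/3, 2/3]]
```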
Theorem
The inverse of the symmetric tridiagonal matrix Jk is characterized as
$$
(J_k^{-1})_{i,j} = (-1)^{j-i}\,\beta_i\cdots\beta_{j-1}\,\frac{d_{j+1}^{(k)}\cdots d_k^{(k)}}{\delta_i\cdots\delta_k}, \quad \forall i,\ \forall j > i
$$
$$
(J_k^{-1})_{i,i} = \frac{d_{i+1}^{(k)}\cdots d_k^{(k)}}{\delta_i\cdots\delta_k}, \quad \forall i
$$
Proof.
$$
u_i = (-1)^{-(i+1)}\, d_1^{(k)}\cdots d_k^{(k)}\,\frac{1}{\beta_1\cdots\beta_{i-1}\,\delta_i\cdots\delta_k}
$$
The diagonal elements of $J_k^{-1}$ can also be obtained using twisted factorizations.
Theorem
Let l be a fixed index and ωj the diagonal elements of the corresponding twisted factorization. Then
$$
(J_k^{-1})_{l,l} = \frac{1}{\omega_l}
$$
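Assuming the twisted_pivots helper from the earlier sketch is in scope, each diagonal entry of the inverse then costs O(k):

```python
def inverse_diagonal(alpha, beta):
    """(J_k^{-1})_{l,l} = 1 / omega_l from the twisted factorization at l
    (uses twisted_pivots defined above)."""
    k = len(alpha)
    return [1.0 / twisted_pivots(alpha, beta, l)[l - 1]
            for l in range(1, k + 1)]
```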
In the sequel we will be interested in $(J_k^{-1})_{1,1}$:
$$
(J_k^{-1})_{1,1} = \frac{1}{d_1^{(k)}}
$$
Can we compute $(J_k^{-1})_{1,1}$ incrementally?
Theorem
$$
(J_{k+1}^{-1})_{1,1} = (J_k^{-1})_{1,1} + \frac{(\beta_1\cdots\beta_k)^2}{(\delta_1\cdots\delta_k)^2\,\delta_{k+1}}
$$
Proof.
$$
J_{k+1} = \begin{pmatrix} J_k & \beta_k e^k \\ \beta_k (e^k)^T & \alpha_{k+1} \end{pmatrix}
$$
The upper left block of $J_{k+1}^{-1}$ is the inverse of the Schur complement
$$
J_k - \frac{\beta_k^2}{\alpha_{k+1}}\, e^k (e^k)^T
$$
This is the inverse of a rank-1 modification of Jk, so we use the Sherman–Morrison formula
$$
(A + \alpha x y^T)^{-1} = A^{-1} - \alpha\,\frac{A^{-1} x y^T A^{-1}}{1 + \alpha\, y^T A^{-1} x}
$$
This gives
$$
\left(J_k - \frac{\beta_k^2}{\alpha_{k+1}}\, e^k (e^k)^T\right)^{-1}
= J_k^{-1} + \frac{(J_k^{-1} e^k)\,((e^k)^T J_k^{-1})}{\dfrac{\alpha_{k+1}}{\beta_k^2} - (e^k)^T J_k^{-1} e^k}
$$
Let $l^k = J_k^{-1} e^k$. Then
$$
(J_{k+1}^{-1})_{1,1} = (J_k^{-1})_{1,1} + \frac{\beta_k^2\,(l_1^k)^2}{\alpha_{k+1} - \beta_k^2\, l_k^k}
$$
with
$$
l_1^k = (-1)^{k-1}\,\frac{\beta_1\cdots\beta_{k-1}}{\delta_1\cdots\delta_k}, \qquad l_k^k = \frac{1}{\delta_k}
$$
To simplify the formulas, we note that
$$
\alpha_{k+1} - \beta_k^2\, l_k^k = \alpha_{k+1} - \frac{\beta_k^2}{\delta_k} = \delta_{k+1}
$$
We start with $(J_1^{-1})_{1,1} = \pi_1 = 1/\alpha_1$ and $c_1 = 1$, then for k = 1, 2, . . .
$$
t = \beta_k^2\,\pi_k, \qquad \delta_{k+1} = \alpha_{k+1} - t, \qquad \pi_{k+1} = \frac{1}{\delta_{k+1}}, \qquad c_{k+1} = t\, c_k\, \pi_k
$$
This gives
$$
(J_{k+1}^{-1})_{1,1} = (J_k^{-1})_{1,1} + c_{k+1}\,\pi_{k+1}
$$
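The whole update is a few flops per step; a sketch of the incremental recursion (names mine; returns (Jk⁻¹)1,1 for every leading dimension):

```python
def first_inverse_entries(alpha, beta):
    """Return (J_k^{-1})_{1,1} for k = 1, ..., n incrementally."""
    entries = [1.0 / alpha[0]]           # (J_1^{-1})_{1,1} = pi_1
    pi, c = 1.0 / alpha[0], 1.0
    for k in range(1, len(alpha)):
        t = beta[k - 1] ** 2 * pi
        delta_next = alpha[k] - t        # delta_{k+1} = alpha_{k+1} - t
        c = t * c * pi                   # c_{k+1} = t c_k pi_k
        pi = 1.0 / delta_next            # pi_{k+1} = 1 / delta_{k+1}
        entries.append(entries[-1] + c * pi)
    return entries

# e.g. first_inverse_entries([2.0, 2.0, 2.0], [1.0, 1.0]) -> [1/2, 2/3, 3/4]
```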
References

J. Baranger and M. Duc-Jacquet, Matrices tridiagonales symétriques et matrices factorisables, RIRO, v. 3 (1971), pp. 61–66.

G.H. Golub and C. Van Loan, Matrix Computations, Third Edition, Johns Hopkins University Press, (1996).

G. Meurant, A review of the inverse of tridiagonal and block tridiagonal matrices, SIAM J. Matrix Anal. Appl., v. 13, n. 3 (1992), pp. 707–728.