ABSTRACT
We study the spectral properties of the Schrödinger operator with random potential on a strip. The models considered are of two types:
• The independent case: the potentials at each site are assumed to form a family of independent, identically distributed random variables.
• The Markovian case: the potentials at each site are governed by a Markov process or a Markov chain, according to whether the model is continuous or discrete.
The aim is to extend to these models the results previously proved in the one-dimensional case, namely the exponential localization property and the regularity results for the density of states. The first ingredient used is the simplicity of the Lyapunov spectrum for a product of symplectic matrices. The second tool is provided by the spectral theory of the operators associated with the Laplace transform on certain boundaries of the symplectic group. It follows that the spectrum is pure point when the distribution of the potentials is absolutely continuous. The proof of localization in the general case follows from the decay properties of the Green's function established in the multidimensional case. Finally, the preceding results are used to prove certain regularity properties of the density of states, namely Hölder continuity and differentiability.
SOURCES
This thesis is essentially a compilation of articles published by the author. Some of the proofs have been updated and completed in order to take into account a number of recent developments.
1. Problèmes probabilistes liés à l'étude des opérateurs aux différences aléatoires.
Published in 1982 in Ann. Inst. Elie Cartan 7, Marches aléatoires et processus stochastiques sur les groupes de Lie, 80-95.
2. Singularité du spectre de l'opérateur de Schrödinger aléatoire dans un ruban ou un demi-ruban.
Published in 1983 in Ann. Inst. H. Poincaré A 38, 385-399.
3. Localisation pour l'opérateur de Schrödinger aléatoire dans un ruban.
Published in 1984 in Ann. Inst. H. Poincaré A 40, 97-116.
4. Computations of the sum of positive Lyapunov exponents for the Lloyd model in a strip.
Published in 1986 in Lecture Notes in Math. 1186, 258-264.
5. Localization for the Anderson model on a strip with singular potentials.
Written in collaboration with A. Klein & A. Speis. To appear in J. Functional Anal. 27
6. Regularity of the density of states in the Anderson model on a strip for potentials with singular continuous distributions.
Written in collaboration with A. Klein & A. Speis. Published in 1990 in J. Statist. Phys. 57, 65-88.
The results obtained in these works have been partly taken up and developed in two books devoted to the study of the spectral properties of the Schrödinger operator with random potential:
• Products of random matrices with applications to Schrödinger operators.
Written in collaboration with P. Bougerol, published in 1985 in Progress in Probability and Statistics (Birkhäuser).
• Spectral properties of Random Schrödinger Operators.
Written in collaboration with R. Carmona, published in 1990 in Probability and Its Applications (Birkhäuser).
This thesis relies essentially on extracts from the latter book that were written by the author, updated to take into account certain results obtained after its publication. This explains why the greater part of this work is written in English (at any rate the author begs the reader's indulgence, given his rudimentary knowledge of that language...).
INTRODUCTION
The purpose of this thesis is to illustrate the importance and the necessity of the theory of products of random matrices for the spectral study of Schrödinger operators with random potentials. The exposition only concerns dimension one and the strip, but it is quite likely that in higher dimension one will again have to appeal to these results in order to show that the currently available theory of multidimensional localization is well founded. Indeed, its set of hypotheses requires control of the Green's function on a sufficiently large initial box, and apart from extreme cases (very large energy or very large disorder) this can only come from the study of the Lyapunov exponents on a strip of large width.
We have chosen to present the discrete and continuous cases simultaneously (in fact, on the strip the continuous case must be understood as the case of several coupled continuous Schrödinger-type equations, the case of the two-dimensional Laplacian remaining, for the moment, a completely open problem). The use of a common formalism significantly clarifies the original proofs for the continuous model. Indeed, the ergodic properties, and more generally the asymptotic properties, of continuous systems are easily deduced from the discrete case (at least as long as one is interested in almost sure or generic properties, for it is easy to find deterministic examples where these two types of operators are as dissimilar as possible...).
From the mathematical point of view the problem of localization in disordered systems is the following: the self-adjoint operators used as Hamiltonians in these systems tend to have pure point spectrum, in contrast with the case of periodic media where the spectrum is absolutely continuous. The first rigorous proof of this phenomenon was given by Goldsheid, Molcanov & Pastur in 1977 [38]. In that paper the authors study the one-dimensional continuous case, the potential being generated by a Brownian motion. Following this work, H. Kunz and B. Souillard established an identical result in the one-dimensional discrete case, the potentials at each site forming a family of independent, identically distributed random variables. These proofs were then simplified and generalized by R. Carmona [14][15], G. Royer [93], J. Brossard [11] for the continuous case, by the author in the discrete case [71], and then again by F. Delyon, H. Kunz & B. Souillard [23] in the case of random potentials "perturbed" by a deterministic noise. On the other hand, a great many works have been devoted to the density of states ("integrated density of states") in dimension one, and more particularly to the problem of its regularity as a measure. In particular one must mention the works of Le Page [80],[81], Craig & Simon [22], and Campanino & Klein [13], which we shall generalize to the case of the strip.
Of course everyone agrees that the extension of the preceding questions to a strip should not reveal any qualitatively new phenomena. Unfortunately a good part of the techniques used in dimension one only carry over at the price of a substantial effort. Naively, one can imagine that it is certainly more delicate to manipulate matrices than real numbers. From a mathematical point of view the new difficulty is the following:
• In dimension one, the group that appears naturally is the unimodular group SL(2, R), whose unique boundary is the projective line. The latter can more or less be identified with the real line, and a good part of the proof can therefore be carried out without explicit reference to the theory of boundaries of semisimple groups.
• Things are quite different on the strip, where the group one has to work with is now the symplectic group SP(ℓ, R), whose various boundaries are identified with Lagrangian manifolds. Moreover, one must now control the positivity of a whole family of Lyapunov exponents and no longer of a single one. In this case a more intrinsic approach to the problem becomes necessary.
Let us finally mention the existence of other approaches to the localization problem in the case of absolutely continuous potentials, in particular:
• The theory of rank one perturbations of self-adjoint operators used by Simon and Wolff [96].
• The method of Kotani [68], which can be viewed as an extension of the Fubini theorem to the case of transition probabilities.
In these works one no longer uses the "thermodynamic limit" type of approximation along boxes increasing to the whole space. The constructive aspect of the proof presented here thus disappears; in particular one loses all control of the rate of convergence of the approximations of the spectral measure and, as a consequence, of the possible regularity of the density of states.
We now introduce some notation that will be useful for the description of the chapters that follow.
Let ℓ be a positive integer, d = 2ℓ, and T = R or Z. The strip of height ℓ is defined as the set {(i, t); i = 1 . . . ℓ, t ∈ T}, that is, ℓ copies of T placed side by side. The Schrödinger operator on the strip can then be viewed as the juxtaposition of ℓ coupled one-dimensional operators. Let {V(i, t); i = 1 . . . ℓ, t ∈ T} be a family of real numbers. One defines the ℓ × ℓ matrix V(t) by:

Vij(t) = V(i, t) if i = j,   Vij(t) = −1 if |i − j| = 1,   Vij(t) = 0 if |i − j| > 1

The Schrödinger operator H on the strip of height ℓ acts on sequences {ψ(t); t ∈ T} of vectors with ℓ complex components (ψ1(t), . . . , ψℓ(t)) by the formula:

Hψ(t) = −ψ(t + 1) − ψ(t − 1) + V(t)ψ(t)   (discrete case)
Hψ(t) = −ψ̈(t) + V(t)ψ(t)   (continuous case)

In fact, in the discrete case, the whole theory applies without major changes to more general operators of the form:

[Hψ](n) = B_n^{−1}(−ψ(n + 1) − ψ(n − 1) + V(n)ψ(n))

where {B_n; n ∈ Z} is a sequence of symmetric positive definite matrices whose spectrum stays in a fixed interval [a, b] with a > 0.
In the random case one assumes that the {V_ω(i, t); i = 1 . . . ℓ, t ∈ T} form a family of real random variables with the same distribution, defined on a probability space (Ω, P). The two most studied models correspond to independent random variables or to Markovian dependence. In the random case we denote by H_ω the operator corresponding to the trajectory ω.
I
PRODUCTS OF RANDOM MATRICES
This chapter contains the bulk of the tools and general results used in the sequel. We have chosen to bring together the approach via the Oseledec theorem, borrowed from Ledrappier [77], which deals with general two-sided ergodic systems, and that of Guivarc'h & Raugi [44] who, continuing the work of Furstenberg & Kesten [31], [30], are more specifically interested in the case of independent systems. The first presentation applies immediately to the Markovian multiplicative systems studied by Bougerol in [7], [8], while the second yields more precise results concerning the simplicity of the Lyapunov spectrum. We have modified a certain number of statements of [19] in order to take into account the recent result of Goldsheid and Margulis on the Zariski closure of the Schrödinger matrices, which was not yet published when that book was written. The spectral study of the Laplace transform on spaces of Hölder continuous functions is borrowed from Le Page [79].
The main contributions of the author to this chapter are the following:
• The study of the intertwining relations of the Fourier-Laplace operators on SP(ℓ, R) associated with multiplication by a Radon-Nikodym cocycle, and their consequences for the spectral properties of those operators built from Poisson kernels (Sections 3, 5.3).
• The verification of the conditions ensuring the validity of the general theorems in the case of Schrödinger matrices on a strip, for instance the existence of a density for the transfer matrices on the group SP(ℓ, R) (section 4.c). This proof has been taken up by Glaffig [34] and extended to the case where only the potentials of the upper line of the strip admit a density, the others being constant.
• The study of the Lagrangian manifolds, the computation of their Poisson kernels and the derivation of formulas giving the smallest positive Lyapunov exponent (section 6).
II
THE DETERMINISTIC SCHRÖDINGER OPERATOR
This chapter contains a good part of what is usually called the "folklore" of the subject... Its originality lies in the computation of barycenters of spectral measures in boxes with respect to a Cauchy distribution on the "boundary conditions", in the continuous as well as in the discrete case. This technique seems to appear for the first time in Atkinson [3] for the one-dimensional lattice. This device, used by the author first in dimension one [71] and then on a strip [73], yields very simple localization criteria. It has since been taken up many times, since it makes it possible to turn a family of discrete measures into an absolutely continuous measure by a barycenter operation. All this may appear surprising inasmuch as the final goal of the manoeuvre is, in many cases, to prove that the limit of these spectral measures is pure point. The choice of the Cauchy distribution for the boundary conditions remains arbitrary, the only reason for it being that it yields the simplest computations and final expressions. Indeed, the systematic use of the notion of Poisson kernel for a Riemannian symmetric space makes it possible to simplify considerably some earlier tedious computations carried out by the author [73] and R. Carmona [15]. These approximations are then very useful to give localization criteria directly related to the asymptotic properties of the products of transfer matrices (or of the evolution operator in the continuous case). The criterion for the one-dimensional Markovian case is borrowed from Carmona [14] and that for the strip already appeared in the author's work [73].
III
THE RANDOM SCHRÖDINGER OPERATOR
Since most of the preparatory work has already been done in the two preceding chapters, it only remains to put these various tools to work. One first establishes the singularity of the spectrum under rather general hypotheses. The technique of proof is very largely inspired by the proofs given by Ishii [52] and above all Pastur [90]. In fact the singularity of the spectrum is an immediate consequence of the hyperbolicity of the dynamical system for every value of the energy and for almost every ω. This property follows directly from the simplicity of the Lyapunov spectrum for a random product of matrices of SP(ℓ, R), or for the multiplicative system generated by the propagator in the continuous case. The positivity of the smallest non-negative Lyapunov exponent is guaranteed by the result of Goldsheid & Margulis (one can also use the earlier results of Guivarc'h & Raugi when the distribution of the potentials admits a density, or when its support contains an open set, since it is proved in Chapter I that the same then holds for a convolution power of the distribution of the transfer matrices on SP(ℓ, R)).
The proof of localization for absolutely continuous potentials rests essentially on the systematic use of the spectral properties of the Fourier-Laplace operators on the symplectic group. In the one-dimensional case this approach had already been used, more or less explicitly, by Goldsheid, Molcanov & Pastur. The outline of the proof presented here is also very close to the work of Kunz and Souillard [70], but it uses the operators associated with Poisson kernels rather than those used by Kunz and Souillard. The latter correspond to singular cocycles and have no analogue on the strip.
More precisely, if B denotes the Lagrangian manifold L_{ℓ−1,ℓ}, the approximations of the spectral measure computed in Chapter I bring in the operators T_{t,λ} acting on the set of continuous functions on B by:

T_{t,λ} f(b) = ∫ f(gb) (χ(g, b))^{t/2} dµ_λ(g)

where µ_λ is the distribution of the transfer matrices and the cocycle χ is specified below.
A sufficient condition for localization is then that for each bounded interval I of R there exist constants C1, C2 and ρ < 1 such that for n ∈ N and λ ∈ I one has:

‖T_{1,λ}^n‖ ≤ C1 ρ^n ,   ‖T_{2,λ}^n‖ ≤ C2

The cocycle χ(g, (ȳ, x̄)) on L_{ℓ−1,ℓ} is defined by:

χ(g, (ȳ, x̄)) = r_{ℓ−1,ℓ}(g, (ȳ, x̄)) r_{ℓ−1}^{−1}(g, ȳ) = (‖y‖/‖gy‖)^{−2} (‖x‖/‖gx‖)^{2}
In this formula r_{ℓ−1} is the Poisson kernel of L_{ℓ−1}, namely:

r_{ℓ−1}(g, ȳ) = (d g^{−1}m_{ℓ−1} / d m_{ℓ−1})(ȳ) = (‖y‖/‖gy‖)^{ℓ+2}

and r_{ℓ−1,ℓ} is the Poisson kernel of L_{ℓ−1,ℓ}, namely:

r_{ℓ−1,ℓ}(g, (ȳ, x̄)) = (‖y‖/‖gy‖)^{ℓ} (‖x‖/‖gx‖)^{2}
Analogous operators appear in the Markovian case. When the distribution of the potentials has a density one can prove that these operators are compact on C(B), and their spectral properties follow from Chapter I.
One could deduce from the preceding estimates the exponential decay of the eigenfunctions, as in Carmona [14] or Royer [93], but Kotani's method gives this result much more simply and we shall therefore not touch on this subject here. Let us finally point out that, contrary to the one-dimensional case, the fact that the eigenvalues are non-degenerate is not automatic. See on this point the proof of Delyon, Levy, Souillard using "Kotani's trick" [24].
In the case of arbitrary potentials, one has to adapt the localization proof given for the one-dimensional model by Carmona, Klein and Martinelli [18]. In fact this proof consists in checking that the hypotheses of a theorem implying localization in any dimension are satisfied in the case of the strip for every disorder and every energy. This theorem was established by Dreifus and Klein [28] after an original work of Martinelli and Scoppola [83]. It may seem odd to resort to a sophisticated analysis of the multidimensional model in order to conclude in this a priori simpler setting. This would tend to show that the subject is far from closed...
Let Λ be the box {(k, i); i = 1 . . . ℓ, k ∈ [−n, +n]}; the conditions to check are the following:
1. For every energy λ0 there exist constants χ > 0, ζ > 0 such that

P{λ0 ∉ Σ(H^Λ) and ‖G^Λ_{λ0}(0, ±n)‖ ≤ e^{−nχ}} ≥ 1 − e^{−ζ√n}

for n large enough.
2. For every bounded interval I there exists τ > 0 such that:

P{d(λ, Σ(H^Λ)) ≤ e^{−√n}} ≤ e^{−τ√n}

for λ ∈ I and n large enough.
The first condition will follow from the general theory of products of matrices once one has checked that the standard conditions apply, namely strong irreducibility and contractivity of the distribution of the transfer matrices on all the Lagrangian manifolds. This point again follows, for an arbitrary distribution of the potentials, from the theorem of Goldsheid and Margulis.
The second condition is harder to establish and requires a proof of the Hölder continuity of the density of states on the strip. This result is due to Le Page in dimension one [80] and is extended to the strip in appendix (A). One can also find an extension of the same type in a recent paper by the same author [81].
IV
THE DENSITY OF STATES
We restrict ourselves here to the case of a discrete strip. The existence of the density of states on a lattice does not raise technical problems as delicate as in the continuous case. This probability measure, denoted N, is defined as the almost sure vague limit of the empirical distributions N^Λ(ω) of the eigenvalues of H^Λ(ω) as the box Λ tends to the whole strip (in fact, by stationarity it suffices that one of the sides of the box tends to infinity). In the case of bounded potentials this convergence raises no problem and follows from the Birkhoff theorem (see for example [9]), since all the probability measures N^Λ(ω) are supported by a fixed compact set. In the general ergodic case, where one only assumes the integrability of log(1 + |V|), one has to use truncation methods (see Fukushima [29]). Moreover one obtains that N is equal to the expectation of the spectral measure. The fact that the eigenspaces of H are of dimension ≤ ℓ implies that the measure N is continuous (this property remains true in the multidimensional case, see Delyon & Souillard [25]). The distribution function of N being always continuous, one may ask about higher regularity:
• Hölder continuity
• Existence of a density, and in that case the order of differentiability of the latter.
These problems have already been studied in dimension one by many authors and we propose here to extend them to the strip.
One of the simplest extensions concerns the proof of the Thouless formula for the strip. This step is crucial because this formula will make it possible to prove the regularity of N using the properties of the Hilbert transform. The proof given here differs from the original proof of Craig & Simon only by the systematic use of the Lagrangian formalism, which removes the algebraic part of their work.
The case of absolutely continuous potentials
One of the oldest results establishing the existence of a density of states is that of Wegner [104]. We reproduce here the original proof, except that the trick consisting in integrating the Green's function over a box is replaced by a more general lemma on the integration of spectral measures of a symmetric matrix. This allows the extension of the proof to more general Hamiltonians no longer involving nearest neighbours only. This result assumes the existence of a bounded density for the potentials.
One can use the Fourier-Laplace operators and their properties proved in the preceding chapter to obtain a representation formula for the density of states in terms of eigenfunctions of these operators, which yields the continuity of the density of states. Such a formula had already been obtained in dimension one by Kunz & Souillard [70]; the originality here lies in the appearance of certain "twists" of the invariant measure on the maximal boundary. The proof is in appendix (C) and gives the opportunity for a more detailed study of the Fourier-Laplace operators.
Still assuming the potentials absolutely continuous, Le Page proved in [81] that the largest Lyapunov exponent is C∞. This immediately implies the same regularity for the density of states in dimension one via the Thouless formula. Unfortunately the extension of this result to the strip is not immediate, since it would require the existence of a density on the exterior power Λ^ℓ(R^d), which is not the case. We therefore take up the arguments of [81], adapting them to our situation, in order to obtain the same result on the strip.
The case of singular potentials
In this case (for example for Bernoulli distributions) one only has local Hölder continuity of the distribution function of N. This was proved in dimension one by Le Page [80]. To extend it to the strip one has to prove this type of continuity for the sum of the positive Lyapunov exponents, which is done in appendix (A) with a view to localization for singular potentials. In fact, in a later publication, Le Page [81] obtains the same result by a slightly different method, which consists in considering only the largest exponent and then passing to exterior powers (there is not, here, the same objection as for differentiability).
If now, while still including the case of certain singular potentials, one imposes some restrictions on them, for example by requiring that the Fourier transform of their distribution be C0^p for some p, then one can obtain an order of differentiability [(p + 1)/2] for the distribution function of the density of states. This was proved in dimension one by Campanino & Klein [13] and the extension to the strip is in appendix (B). In both cases the techniques of "supersymmetry" are used, and this calls for a few words of explanation. This theory (see [?], [?], [?]), widely used in the physics of disordered media, may leave some mathematicians perplexed when the Grassmannians are infinite dimensional, some results having hardly any theoretical justification. On the other hand, in finite dimension this formalism is perfectly rigorous, and we shall use this approach only in that case.
First of all, the link between the density of states and the Green's function is obvious, since the spectral theorem implies that for Im(z) ≠ 0 one has:

E{trace(G_z(0, 0))} = ∫ dN(t)/(t − z)

where G_z is the ℓ × ℓ Green matrix defined in Chapter II. Recalling that G_z is the limit of the G^Λ_z as the box Λ tends to the whole strip, one is led to look for a convenient formula for the inverse of the matrix H^Λ − zI.
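As a purely illustrative numerical sketch (not from the thesis), one can build H^Λ for a finite box of the strip with i.i.d. potentials and check the finite-volume version of this identity: the Stieltjes transform of the empirical eigenvalue distribution equals the normalized trace of the resolvent. The box size and the uniform distribution of the potentials are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)
ell, n = 3, 40                                  # strip height, box length (illustrative)
size = ell * n

# H^Λ: nearest-neighbour hopping on the box {1..ell} x {1..n} plus a random diagonal potential
def idx(i, t):
    return t * ell + i                          # site (i, t) -> matrix index

H = np.zeros((size, size))
for t in range(n):
    for i in range(ell):
        H[idx(i, t), idx(i, t)] = rng.uniform(-1, 1)
        if i + 1 < ell:
            H[idx(i, t), idx(i + 1, t)] = H[idx(i + 1, t), idx(i, t)] = -1.0
        if t + 1 < n:
            H[idx(i, t), idx(i, t + 1)] = H[idx(i, t + 1), idx(i, t)] = -1.0

z = 0.3 + 0.5j
eigs = np.linalg.eigvalsh(H)
stieltjes = np.mean(1.0 / (eigs - z))           # ∫ dN^Λ(t)/(t - z) for the empirical measure N^Λ
resolvent = np.trace(np.linalg.inv(H - z * np.eye(size))) / size
print(np.allclose(stieltjes, resolvent))        # True: the two expressions coincide
```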
Let A = U + iV, where U and V are symmetric matrices of order r with U positive definite. A simple Gaussian integral computation gives:

1/det A = ∫ exp(−Σ_{i,j=1}^{r} A(i, j) ϕ_i·ϕ_j) ∏_{j=1}^{r} d²ϕ_j/π

A^{−1}(a, b)/det A = ∫ ϕ_a·ϕ_b exp(−Σ_{i,j=1}^{r} A(i, j) ϕ_i·ϕ_j) ∏_{j=1}^{r} d²ϕ_j/π

where the ϕ_i, i = 1 . . . r, are r vectors of R², the notation ϕ_i·ϕ_j denotes the usual scalar product and d²ϕ means integration with respect to the two coordinates of ϕ.
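For a real symmetric positive definite A the second identity is just the usual Gaussian moment formula, and it can be checked numerically. The sketch below (an illustration only, with an arbitrarily chosen A) samples the 2r real coordinates from the Gaussian density proportional to exp(−Σ A(i,j) ϕ_i·ϕ_j), whose covariance is A^{−1}/2, and compares E[ϕ_a·ϕ_b] with A^{−1}(a, b).

```python
import numpy as np

rng = np.random.default_rng(2)
r = 4
M = rng.normal(size=(r, r))
A = M @ M.T + r * np.eye(r)                 # symmetric positive definite

# density ∝ exp(-x'Ax - y'Ay): x and y are independent N(0, A^{-1}/2)
cov = np.linalg.inv(A) / 2.0
x = rng.multivariate_normal(np.zeros(r), cov, size=200000)
y = rng.multivariate_normal(np.zeros(r), cov, size=200000)

a, b = 0, 2
moment = np.mean(x[:, a] * x[:, b] + y[:, a] * y[:, b])   # E[ϕ_a·ϕ_b]
print(moment, np.linalg.inv(A)[a, b])                     # close, up to Monte Carlo error
```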
In this formula the presence of the determinant of A is a nuisance. A trick frequently used in physics under the name "replica trick" consists in integrating over n copies ϕ_i^{(k)}, k = 1 . . . n, i = 1 . . . r, and one obtains:

A^{−1}(a, b) = (det A)^n ∫ ϕ_a^{(1)}·ϕ_b^{(1)} exp(−Σ_{i,j=1}^{r} Σ_{k=1}^{n} A(i, j) ϕ_i^{(k)}·ϕ_j^{(k)}) ∏_{j=1}^{r} ∏_{k=1}^{n} d²ϕ_j^{(k)}/π

and then letting n tend to 0!!... This procedure is audacious to say the least, and to avoid it one introduces the supersymmetry formalism in order to handle the determinant.
For i = 1 . . . r let ψ_i, ψ̄_i be vectors of R^s with s large enough for these 2r elements to be linearly independent. We denote by G the exterior algebra generated by the ψ_i, ψ̄_i, and to lighten the notation the exterior product is written as an ordinary product. For X ∈ G and for a given i one can write in a unique way X = Y + Z ψ_i ψ̄_i in such a way that the elements Y and Z of G no longer contain the exterior product ψ_i ψ̄_i. One then defines a linear map from G into itself, written in integral form:

∫ X d(ψ_i ψ̄_i) = −Z

More generally, if F is a sufficiently differentiable function from R^d to R, we denote by F(X_1, . . . , X_d) the expression obtained by replacing, in the Taylor expansion of F, the monomials in X_1, . . . , X_d by the corresponding exterior products. Such an expansion therefore always vanishes beyond a certain order, and one can thus write pseudo-integrals of the kind

∫ F(X_1, . . . , X_d) ∏_{i=1}^{r} d(ψ_i ψ̄_i)

which is in fact nothing but a linear form. With these notations one checks immediately that:

∫ exp(−Σ_{i,j=1}^{r} A(i, j) ψ_i ψ̄_j) ∏_{k=1}^{r} d(ψ_k ψ̄_k) = det A

∫ ψ_a ψ̄_b exp(−Σ_{i,j=1}^{r} A(i, j) ψ_i ψ̄_j) ∏_{k=1}^{r} d(ψ_k ψ̄_k) = (det A) A^{−1}(a, b)
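The first of these Berezin-type identities can be verified mechanically for small r. The sketch below (not part of the thesis, written only as an illustration) implements the finite-dimensional exterior algebra G with generators ψ_i, ψ̄_i, the pseudo-integral ∫ X d(ψ_i ψ̄_i) = −Z exactly as defined above, and compares the result with det A for a randomly chosen matrix.

```python
import math
import numpy as np

# A Grassmann element is a dict {sorted tuple of generator labels: coefficient};
# generator labels: psi_i -> 2*i, psibar_i -> 2*i + 1.
def gmul(x, y):
    out = {}
    for mx, cx in x.items():
        for my, cy in y.items():
            if set(mx) & set(my):
                continue                       # repeated generator: the product vanishes
            sign, s = 1, list(mx) + list(my)
            for i in range(len(s)):            # bubble sort; each swap of adjacent generators flips the sign
                for j in range(len(s) - 1):
                    if s[j] > s[j + 1]:
                        s[j], s[j + 1] = s[j + 1], s[j]
                        sign = -sign
            key = tuple(s)
            out[key] = out.get(key, 0.0) + sign * cx * cy
    return out

def gexp(x, nmax):
    out, term = {(): 1.0}, {(): 1.0}
    for n in range(1, nmax + 1):               # the series truncates: x is nilpotent
        term = gmul(term, x)
        for k, c in term.items():
            out[k] = out.get(k, 0.0) + c / math.factorial(n)
    return out

def gint(x, i):
    """Integral against d(psi_i psibar_i): write X = Y + Z psi_i psibar_i and return -Z.
    The labels 2i, 2i+1 are adjacent and the degree-2 pair commutes, so no extra sign appears."""
    p, q = 2 * i, 2 * i + 1
    out = {}
    for m, c in x.items():
        if p in m and q in m:
            rest = tuple(k for k in m if k not in (p, q))
            out[rest] = out.get(rest, 0.0) - c
    return out

r = 3
rng = np.random.default_rng(3)
A = rng.normal(size=(r, r))

# the element -Σ A(i,j) psi_i psibar_j in canonical (sorted) form
S = {}
for i in range(r):
    for j in range(r):
        key = tuple(sorted((2 * i, 2 * j + 1)))
        sign = 1 if 2 * i < 2 * j + 1 else -1  # reorder psi_i psibar_j if necessary
        S[key] = S.get(key, 0.0) - sign * A[i, j]

X = gexp(S, r)                                 # exp(-Σ A(i,j) psi_i psibar_j)
for k in range(r):
    X = gint(X, k)                             # integrate against every d(psi_k psibar_k)
print(X.get((), 0.0), np.linalg.det(A))        # the two numbers coincide
```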
Using the condensed notations

Φ_i = (ϕ_i, ψ_i, ψ̄_i),   dΦ_i = (d²ϕ_i/π) d(ψ_i ψ̄_i),   Φ_i·Φ_j = ϕ_i·ϕ_j + (1/2)(ψ_i ψ̄_j + ψ_j ψ̄_i)

and putting the two kinds of integrals together, one obtains:

∫ ψ_a ψ̄_b exp(−Σ_{i,j=1}^{r} A(i, j) Φ_i·Φ_j) ∏_{k=1}^{r} dΦ_k = A^{−1}(a, b)

∫ ψ_a^{(1)} ψ̄_b^{(1)} exp(−Σ_{i,j=1}^{r} Σ_{k=1}^{n} A(i, j) Φ_i^{(k)}·Φ_j^{(k)}) ∏_{k=1}^{r} ∏_{j=1}^{n} dΦ_k^{(j)} = A^{−1}(a, b)

The last formula is obtained by considering n copies of the field of the Φ_i. One can therefore apply these formulas to the computation of the Green kernel in a box, with r = |Λ|. Let z = λ + iν with ν > 0. Taking A = ν + i(H^Λ − λ) one obtains, for two sites a and b of the box Λ:

G^Λ_z(a, b) = i ∫ ψ_a ψ̄_b exp(−i Σ_{x,y∈Λ} (H^Λ − z)(x, y) Φ_x·Φ_y) dΦ

The expectation of the Green's function therefore involves the potentials only through the Fourier transform of their distribution, and this is what makes this formula interesting. Moreover, this expression can be rewritten as resulting from the iteration of a certain operator T_z acting on spaces of "supersymmetric" functions which we now describe.
• The "super-space" L_r is defined as the set of variables Φ = (Φ_1, . . . , Φ_r).
• A "super-function" is a map from L_r to G of the form:

F(Φ) = Σ_α F_α(ϕ_1, . . . , ϕ_r) X_α

where the F_α are complex functions defined on R^{2r} and the X_α are elements of G. Differentiability of such a function refers to that of its components F_α.
• A map from L_r to itself is called a "super-symmetry" if it preserves the pseudo scalar product

Φ·Φ′ = Σ_{k=1}^{r} Φ_k·Φ′_k

A super-function is said to be supersymmetric if it is invariant under all the super-symmetries of L_r.
This formalism is presented in section 3 of appendix (B), with an additional complication due to the fact that several copies of the super-space are considered simultaneously. One then shows in section 4, using the results obtained for products of random matrices on the strip, that the operators T_z, acting on certain Hilbert spaces of differentiable supersymmetric functions, have interesting spectral properties. In particular they are compact operators and their spectrum consists of the simple eigenvalue 1 together with a part contained in a disc of radius strictly smaller than 1. This makes it possible to pass to the limit as the size of the boxes tends to infinity in the computation of the expectation of the Green's function, and then to let z tend to the real axis, recovering the desired differentiability properties.
Chapter 1
Products of Random Matrices
Contents :
1. General Ergodic Theorems
2. Matrix Valued Systems
3. Group Action on Compact Spaces
(a) Definitions and Notations
(b) Laplace Transform on the Space of Continuous functions
(c) Laplace Transform on the Space of Hölder Continuous functions
4. Products of Independent Random Matrices
(a) The Upper Lyapunov Exponent
(b) The Lyapunov Spectrum
(c) Schrödinger Matrices
5. Markovian Multiplicative Systems
(a) The Upper Lyapunov Exponent
(b) The Lyapunov Spectrum
(c) Laplace Transform
6. Boundaries of the Symplectic Group
7. Notes and Complements
Part of the one-dimensional or quasi-one-dimensional theory of localization can be reduced to the study of products of random matrices. One of the most important results in this direction is the extension to matrix valued random variables of the strong law of large numbers. Unfortunately the identification of the limit (called the Lyapunov exponent) in this theorem is more complicated than in the classical case of real valued random variables. In particular this limit can no longer be written as a single expectation. Moreover its determination involves the computation of some invariant measure on the projective space.
1.1 General Ergodic Theorems
Let (Ω, F, P) be a probability space and let T be the time parameter set, which we choose to be the set of non-negative integers (discrete case) or non-negative real numbers (continuous case). Endowed with a semi-group {θt ; t ∈ T} of measure preserving transformations of Ω, the system (Ω, F, θt, P) will henceforth be called a dynamical system. (In the continuous case a dynamical system is often called a "dynamical flow".)
An invariant random variable is a measurable function Z such that Z ◦ θt = Z for all t ∈ T. A dynamical system (Ω, F, θt, P) is said to be ergodic if every invariant random variable is constant P-a.e.
Definition 1.1.1 A subadditive process is a measurable real valued process {X(t) ; t ∈ T} such that:

X(0) = 0 ,   X(t + s) ≤ X(t) + X(s) ◦ θt   for s, t ∈ T

If the inequality is replaced by an equality in the above definition then X(t) is called an additive process.
The Kingman subadditive ergodic theorem stated below is one of the basic tools in this chapter.

Theorem 1.1.2 Let {X(n) ; n ≥ 0} be a subadditive process with discrete time such that:
(i) X(n) is integrable for all n,
(ii) The sequence {(1/n) E{X(n)} ; n ≥ 1} is bounded below.
Then there exists an integrable invariant random variable Z such that the sequence (1/n) X(n) converges P-a.e. and in expectation to Z. Moreover E{Z} = inf_{n≥1} (1/n) E{X(n)}.

Proof:
There are a lot of proofs available for the subadditive ergodic theorem. The original one can be found in Kingman [58], and certainly the simplest one has been given by J.M. Steele [98].
In the additive case we remark that X(n) = Σ_{k=0}^{n−1} X(1) ◦ θk; thus if X(1) is integrable then conditions (i) and (ii) are satisfied and the Kingman theorem reduces to the Birkhoff theorem. It is well known that Theorem 1.1.2 is false for continuous time (even in the additive case), but it becomes true under some extra regularity assumptions on the process {X(t) ; t > 0}.
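As a quick numerical illustration (not from the book), the subadditive process X(n) = log ‖g(n−1) · · · g(0)‖ built from i.i.d. random matrices exhibits the convergence of (1/n) X(n) promised by Theorem 1.1.2; the 2 × 2 matrix law below is an arbitrary choice, and the product is renormalized at each step to avoid overflow while keeping track of log ‖U(n)‖ exactly.

```python
import numpy as np

rng = np.random.default_rng(4)

def sample_g():
    # an arbitrary i.i.d. law on GL(2, R), chosen only for illustration
    return np.array([[1.0, rng.normal()], [rng.normal(), 1.0 + rng.normal()]])

U = np.eye(2)
lognorm = 0.0
for n in range(1, 20001):
    U = sample_g() @ U
    s = np.linalg.norm(U, 2)
    lognorm += np.log(s)          # accumulate log of the norm, then renormalize
    U /= s
    if n % 5000 == 0:
        print(n, lognorm / n)     # (1/n) X(n), settling down to a constant
```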
Corollary 1.1.3 Let {X(t) ; t ≥ 0} be a subadditive process with continuous time such that
(i) X(t) is integrable for all t ∈ T,
(ii) The set {(1/t) E{X(t)} ; t > 0} is bounded below,
(iii) The process X(t) is separable and M = sup_{0≤s<t≤1} |X(t − s) ◦ θs| is integrable.
Then (1/t) X(t) converges P-a.e. and in expectation to an integrable invariant random variable Z and E{Z} = inf_{t>0} (1/t) E{X(t)}.
Proof:
For n = 0, 1, 2, . . . we can apply Theorem 1.1.2 to the process X(n). If n is the integral part of t then we can write:

X(n + 1) − X(n + 1 − t) ◦ θt ≤ X(t) ≤ X(n) + X(t − n) ◦ θn

We remark that |X(n + 1 − t) ◦ θt| ≤ M ◦ θn and |X(t − n) ◦ θn| ≤ M ◦ θn. The Borel-Cantelli lemma implies that (1/n) M ◦ θn converges P-a.e. to zero and this yields the result.
The Birkhoff Theorem admits an important improvement which essentially says that
when the limit is not obviously zero then it has to be positive and this property will
be very useful in order to prove that the limit is positive in the ergodic case.
Theorem 1.1.4 Let {X(n) ; n ≥ 1} be an integrable additive process with discrete time
such that X(n) converges P-a.e. to +∞. Then the limit random variable given by the
Birkhoff ergodic theorem has a positive expectation.
Proof:
Suppose that the conclusion is false and let us denote by Z the limit. We know that Z is a nonnegative random variable, thus if we suppose that it has a null expectation then Z is zero P-a.e. For a positive real number ε we define the sequence of random variables

A_n = m( ∪_{k=1}^{n} [X(k) − ε , X(k) + ε] )

where m(I) is the Lebesgue measure of the set I. Using the convergence of (1/n) X(n) to zero one can conclude that (1/n) E{A_n} also converges to zero. We now write:

A_n − A_{n−1} ◦ θ ≥ 2ε 1{∩_{k=2}^{n} {|X(k) − X(1)| > 2ε}}

E{A_n} − E{A_{n−1}} ≥ 2ε P{∩_{k=2}^{n} {|X(k) − X(1)| > 2ε}} ≥ 2ε P{∩_{k=2}^{∞} {|X(k) − X(1)| > 2ε}}

From lim (1/n) E{A_n} = 0 one obtains that P{∩_{k=2}^{∞} {|X(k) − X(1)| > 2ε}} = 0 and by stationarity that

P{∪_{n=1}^{∞} ∩_{k=1}^{∞} {|X(k + n) − X(n)| > 2ε}} = 0

This last relation contradicts the convergence of X(n) to +∞.
1.2 Matrix Valued Systems
Let GL(d, R) be the group of linear automorphisms of R^d. For a matrix g in GL(d, R), g′ denotes the transpose, g^{−1} the inverse, tr(g) the trace and det(g) the determinant. Every matrix g ∈ GL(d, R) has a polar decomposition g = k(g)a(g)u(g) where u(g) and k(g) are orthogonal matrices and a(g) is a diagonal matrix with diagonal entries a1(g) ≥ a2(g) ≥ . . . ≥ ad(g) > 0. The matrix a(g) is well defined since the scalars ai(g) are the square roots of the eigenvalues of g′g (but in general the matrices k(g) and u(g) are not uniquely defined). We denote by Λ^p(R^d) the vector space of alternating p-linear forms on the dual of R^d. For u1, u2, . . . , up in R^d and f1, f2, . . . , fp in the dual of R^d let m be the matrix defined by m_{i,j} = f_i(u_j). The formula u1 ∧ u2 ∧ . . . ∧ up (f1, f2, . . . , fp) = det(m) defines an element u = u1 ∧ u2 ∧ . . . ∧ up of Λ^p(R^d) called a decomposable p-vector. For a p-vector u we also denote by u the d × p matrix of the coordinates of u1, u2, . . . , up in the usual basis of R^d.
Lemma 1.2.1 It is well known that:
(i) The decomposable p-vectors constructed with the vectors of a basis of R^d span the linear space Λ^p(R^d),
(ii) The p-vector u1 ∧ u2 ∧ . . . ∧ up is nonzero iff the system (u1, u2, . . . , up) is linearly independent,
(iii) The linear extension to the whole space Λ^p(R^d) of the formula ⟨u1 ∧ u2 ∧ . . . ∧ up , v1 ∧ v2 ∧ . . . ∧ vp⟩ = det(u′v) defines an inner product on Λ^p(R^d).
In the sequel we will always use the norm on Λ^p(R^d) associated to the inner product defined in Lemma 1.2.1(iii), that is ‖u‖² = det(u′u).
For g ∈ GL(d, R) we let Λ^p g be the linear automorphism of Λ^p(R^d) defined by Λ^p g(u1 ∧ u2 ∧ . . . ∧ up) = gu1 ∧ gu2 ∧ . . . ∧ gup. We note that Λ^p(mn) = Λ^p m Λ^p n, thus if k is an orthogonal matrix so is Λ^p k. We can deduce from these properties that ‖Λ^p g‖ = a1(g)a2(g) . . . ap(g), and this implies ‖Λ^p g‖ ≤ ‖g‖^p.
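A small numerical sketch (purely illustrative, not from the book) can confirm the identity ‖Λ^p g‖ = a1(g) · · · ap(g): build the matrix of Λ^p g in the orthonormal basis of decomposable p-vectors, i.e. the p-th compound matrix of p × p minors, and compare its operator norm with the product of the p largest singular values.

```python
import itertools
import numpy as np

def compound(g, p):
    """p-th compound matrix of g: entries are the p x p minors, i.e. the matrix of Λ^p g
    in the basis of decomposable p-vectors built from the usual basis of R^d."""
    d = g.shape[0]
    subsets = list(itertools.combinations(range(d), p))
    C = np.empty((len(subsets), len(subsets)))
    for a, rows in enumerate(subsets):
        for b, cols in enumerate(subsets):
            C[a, b] = np.linalg.det(g[np.ix_(rows, cols)])
    return C

rng = np.random.default_rng(5)
d, p = 5, 2
g = rng.normal(size=(d, d))
a = np.linalg.svd(g, compute_uv=False)                    # a1(g) >= ... >= ad(g)
print(np.linalg.norm(compound(g, p), 2), a[:p].prod())    # the two values coincide
```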
When d = 2ℓ there is a subgroup of GL(d, R) of particular interest called the symplectic group of order ℓ and denoted by SP(ℓ, R).

Definition 1.2.2 The symplectic group SP(ℓ, R) is the set of matrices g in GL(d, R) satisfying:

g′Jg = J   with   J = [[0, Iℓ], [−Iℓ, 0]]

where Iℓ is the identity matrix of order ℓ.
We remark that for ℓ = 1 the symplectic group is the unimodular group SL(2, R), but that SP(ℓ, R) is a proper subgroup of SL(2ℓ, R) for greater values of ℓ. One can find more details about the symplectic group in section 6 of this chapter. The proof of the following lemma is an easy exercise:
Lemma 1.2.3 Let g be a matrix in SP(ℓ, R). Then one has:
(i) det(g) = 1 and g′ ∈ SP(ℓ, R),
(ii) If λ is an eigenvalue of g so is 1/λ,
(iii) In the polar decomposition of g we have a_i(g) = a_{d−i+1}(g)^{−1} for any i = 1 . . . d.
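The strip transfer matrices that motivate this chapter provide natural examples of symplectic matrices. The sketch below is only an illustration, under the assumption that the transfer matrix has the standard block form built from V(t) − λI; it checks g′Jg = J and the pairing of the singular values stated in Lemma 1.2.3(iii).

```python
import numpy as np

rng = np.random.default_rng(6)
ell = 3
lam = 0.7                                   # an arbitrary energy
V = np.diag(rng.uniform(-1, 1, ell)) - np.eye(ell, k=1) - np.eye(ell, k=-1)

# assumed block form of a strip transfer matrix: [[V - lam*I, -I], [I, 0]]
g = np.block([[V - lam * np.eye(ell), -np.eye(ell)],
              [np.eye(ell),            np.zeros((ell, ell))]])
J = np.block([[np.zeros((ell, ell)), np.eye(ell)],
              [-np.eye(ell),         np.zeros((ell, ell))]])

print(np.allclose(g.T @ J @ g, J))          # True: g is symplectic
a = np.linalg.svd(g, compute_uv=False)
print(np.allclose(a, 1.0 / a[::-1]))        # True: a_i(g) = a_{d-i+1}(g)^{-1}
```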
For a sequence γ1 ≥ γ2 ≥ . . . ≥ γd of real numbers we let λ1 > λ2 > . . . > λr be its distinct values. We call multiplicity of λi the number of occurrences of λi in the sequence (γ1, γ2, . . . , γd). The next theorem is called the "deterministic" Oseledec theorem.
Theorem 1.2.4 Let {g(n) ; n ≥ 0} be a sequence in GL(d, R) and assume that there exist numbers γ1 ≥ γ2 ≥ . . . ≥ γd such that the product U(n) = g(n − 1) . . . g(0) satisfies for p = 1, 2, . . . , d:
(i) lim_{n→∞} (1/n) log ‖Λ^p g(n)‖ = 0,
(ii) lim_{n→∞} (1/n) log ‖Λ^p U(n)‖ = γ1 + . . . + γp.
Let us denote by r the number of distinct Lyapunov exponents; then there exists a strictly decreasing family of subspaces of R^d such that
(i) R^d = V^1 ⊃ V^2 ⊃ . . . ⊃ V^r ⊃ V^{r+1} = {0},
(ii) v ∈ V^i \ V^{i+1} ⟺ lim_{n→∞} (1/n) log ‖U(n)v‖ = λi,  i = 1, 2, . . . , r,
(iii) dim V^i − dim V^{i+1} = multiplicity of λi,  i = 1, 2, . . . , r.
Proof:
The original proof is in Oseledec [86] and a simpler one can be found in Ledrappier
[77].
In the particular case of products of symplectic matrices, Lemma 1.2.3(iii) implies that γi = −γ_{d−i+1} for i = 1, 2, . . . , d, and since a symplectic matrix has determinant equal to one we obtain γ1 ≥ γ2 ≥ . . . ≥ γℓ ≥ 0.
We now introduce the important concept of multiplicative process. Such processes are defined and studied in full generality, for example in Bougerol [8],[7], but we will use a simpler definition adapted to our situation.
Definition 1.2.5 Let {U(t) ; t ∈ T} be a process taking its values in the group GL(d, R) and defined on the dynamical system (Ω, F, θt, P). We say that it is a multiplicative process if:
• U(0) = I,   U(t + s) = (U(s) ◦ θt) U(t)   for s, t ∈ T.
Such a process is said to be regular if:
• In the discrete case (log⁺ ‖U(1)‖ + log⁺ ‖U^{−1}(1)‖) is integrable.
• In the continuous case it is a separable stochastic process and the random variable M = (sup_{0≤t≤1} log⁺ ‖U(t)‖ + sup_{0≤t≤1} log⁺ ‖U^{−1}(t)‖) is integrable.
We note that |log ‖g‖| ≤ log⁺ ‖g‖ + log⁺ ‖g^{−1}‖ whenever g ∈ GL(d, R), and in the particular case of matrices in SP(ℓ, R) we have ‖g‖ = ‖g^{−1}‖ ≥ 1. Moreover, the above integrability conditions are also satisfied by the exterior powers of the matrices U(t) since ‖Λ^p g‖ ≤ ‖g‖^p. In the discrete case a multiplicative process can be written as a product of matrices of GL(d, R): if we denote by g(n) the matrix U(1) ◦ θn then U(n) = g(n − 1) . . . g(0).
The application of the deterministic Oseledec theorem to multiplicative processes is often referred to as the "random" Oseledec theorem:
Theorem 1.2.6 Let {U(t) ; t ∈ T} be a regular multiplicative process defined on an ergodic dynamical system. Then there exists an invariant set Ω0 of full P measure such that for ω ∈ Ω0 one has:
(1) lim_{t→∞} (1/t) log ‖Λ^p U(t)‖ = lim_{t→∞} (1/t) E{log ‖Λ^p U(t)‖} = γ1 + . . . + γp for p = 1, 2, . . . , d. (The numbers γ1 ≥ γ2 ≥ . . . ≥ γd are called the Lyapunov exponents of the process U(t).)
(2) Let r be the number of distinct Lyapunov exponents; then there exists a strictly decreasing sequence of measurable subspaces of R^d denoted by V^i_ω such that
(i) R^d = V^1_ω ⊃ V^2_ω ⊃ . . . ⊃ V^r_ω ⊃ V^{r+1}_ω = {0},
(ii) v ∈ V^i_ω \ V^{i+1}_ω ⟺ lim_{t→∞} (1/t) log ‖U(t)v‖ = λi, i = 1, 2, . . . , r,
(iii) dim V^i_ω − dim V^{i+1}_ω = multiplicity of λi, i = 1, 2, . . . , r,
(iv) V^i_{θt ω} = U(t) V^i_ω.
Proof:
For p = 1, . . . , d the process X(t) defined by log ‖Λ^p U(t)‖ is subadditive, and the integrability conditions in Definition 1.2.5 make it possible to apply the Kingman subadditive ergodic theorem to the process X(t). This yields conclusion (1), and if we define g(n) = U(1) ◦ θn then U(n) = g(n − 1) . . . g(0) and we obtain conclusions 2(i), (ii) and (iii) from Theorem 1.2.4, at least for discrete time. The same result follows for continuous time if we remark that for n ≤ t ≤ n + 1 one has:

| log ‖U(t)v‖ − log ‖U(n)v‖ | = | log (‖U(t)v‖ / ‖U(n)v‖) | = | log (‖(U(t − n) ◦ θn) U(n)v‖ / ‖U(n)v‖) |
≤ max(| log ‖U(t − n) ◦ θn‖ | , | log ‖U^{−1}(t − n) ◦ θn‖ |) ≤ M ◦ θn

and the Borel-Cantelli lemma yields the result. It remains only to check the measurability assumptions on the spaces V^i_ω and conclusion 2(iv); these are consequences of the relation V^i_ω = {v ∈ R^d ; lim_{n→∞} (1/n) log ‖U(n)v‖ ≤ λi}.
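The Lyapunov exponents of Theorem 1.2.6 can be estimated numerically in the i.i.d. case by the classical QR (Gram-Schmidt) reorthogonalization scheme. The sketch below is only an illustration, using symplectic strip transfer matrices of the block form assumed earlier; the estimated spectrum makes the pairing γ_i = −γ_{d−i+1} visible.

```python
import numpy as np

rng = np.random.default_rng(7)
ell, d, lam, N = 2, 4, 0.3, 20000

def transfer():
    # assumed strip transfer matrix [[V - lam*I, -I], [I, 0]] with a random diagonal potential
    V = np.diag(rng.uniform(-1.5, 1.5, ell)) - np.eye(ell, k=1) - np.eye(ell, k=-1)
    return np.block([[V - lam * np.eye(ell), -np.eye(ell)],
                     [np.eye(ell),            np.zeros((ell, ell))]])

Q = np.eye(d)
sums = np.zeros(d)
for _ in range(N):
    Q, R = np.linalg.qr(transfer() @ Q)
    sums += np.log(np.abs(np.diag(R)))      # log of the growth rates along the orthogonal frame
print(sums / N)                             # approx (γ1, ..., γd), with γ_i ≈ -γ_{d-i+1}
```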
Dynamical systems for which {θt ; t ∈ T} forms a group of invertible mappings of Ω will be called invertible dynamical systems. For such systems one can apply the preceding theorems in the "two directions" of T, and it turns out that the limits at +∞ and −∞ are closely related. We will add the superscript ± to the Lyapunov exponents and subspaces defined in Theorem 1.2.6. We still consider limits for t → ±∞ but the normalization factor is now 1/|t| rather than 1/t.
Proposition 1.2.7 Let us denote by (Ω, F, θt, P) an invertible ergodic dynamical system and by U(t) a regular multiplicative process. Then:
(i) ⁻γp = −⁺γ_{d−p+1},  p = 1 . . . d,
(ii) ⁻λi = −⁺λ_{r−i+1},  i = 1 . . . r,
(iii) R^d = ⁺V^i_ω ⊕ ⁻V^{r−i}_ω,  i = 1 . . . r − 1.
The spaces ±V^i_ω are defined for ω in an invariant set Ω0 of full P-measure.
Proof:
Because U(−t) = U^{−1}(t) ◦ θ_{−t}, the process U(−t) is multiplicative. This implies a_p(U(−t)) = a_{d−p+1}^{−1}(U(t) ◦ θ_{−t}). The invariance of the probability measure P yields E{log a_p(U(−t))} = −E{log a_p(U(t))}, proving (i) and (ii). To prove (iii) we remark that dim ±V^i_ω = (# of exponents ≤ ±λi), and (ii) implies that the sum of the dimensions of the subspaces appearing in (iii) is equal to d. To prove that the intersection of these subspaces is the zero vector, we pick a unit vector v in ⁺V^i_ω ∩ ⁻V^{r−i}_ω and we consider the random variables ±Z_{i,n}(v) = (1/n) log(‖U(n)v‖/‖v‖) − ±λi. We can write v = (U(−n) ◦ θn) U(n)v and obtain:

0 ≤ n(⁻λ_{r−i} + ⁺λi + ⁻Z_{r−i,−n} ◦ θn (U(n)v) + ⁺Z_{i,n}(v)).

We know from (ii) that ⁻λ_{r−i} = −⁺λ_{i+1} and from Theorem 1.2.6 that U(n)v ∈ ⁻V^{r−i}_{θn ω}. Since ±Z_{i,n}(v) converges in probability to zero for v ∈ ±V^i_ω we get a contradiction.
The decomposition (iii) is often called a "splitting of R^d".
This proposition has an important consequence for multiplicative processes with values in SP(ℓ, R). In this case there is no need to put a superscript ± on the Lyapunov exponents, since ⁻γp = −⁺γ_{d−p+1} = ⁺γp, and the "splitting of R^d" has a simpler interpretation.

Corollary 1.2.8 Let (Ω, F, θt, P) be an invertible ergodic dynamical system and U(t) be a regular multiplicative process with values in SP(ℓ, R). Then with probability one, for any non-zero vector v ∈ R^d we have:

max( lim_{t→−∞} (1/|t|) log ‖U(t)v‖ , lim_{t→+∞} (1/t) log ‖U(t)v‖ ) ≥ γℓ

Proof:
If s is the number of non-negative distinct Lyapunov exponents then r = 2s, and we apply (ii) and (iii) of Proposition 1.2.7 with i = s. We then remark that if v ∉ ±V^s_ω then the above limits are greater than γℓ.
When the lowest non-negative exponent γℓ is positive, this corollary says that for any nonzero vector v the process U(t)v has a "hyperbolic behavior", i.e. it is exponentially growing in at least one direction of T. One of the essential problems in the sequel will be to prove that γℓ is positive under suitable hypotheses. We will see in the next sections that this can be done for independent and Markovian models. Obviously the problem is much simpler in the case d = 2, where γℓ is also the upper exponent!
We end this section with the classical construction of a product dynamical system associated with (Ω, F, θt, P) and a regular multiplicative process U(t). This will turn out to be very useful when we try to obtain an explicit formula for the upper Lyapunov exponent γ1. Let P(R^d) be the projective space of R^d, that is the set of directions in R^d, and let B be its Borel sigma-algebra. For a non-zero vector v in R^d we denote by v̄ the image of v in P(R^d). GL(d, R) acts on P(R^d) by letting gv̄ be the direction of gv, and we set:

Ω̃ = Ω × P(R^d),   F̃ = F × B,   θ̃t(ω, v̄) = (θt ω, U(t, ω)v̄)

It is easy to check that we have the semigroup property θ̃_{t+s} = θ̃_t ◦ θ̃_s and that the function v ↦ log(‖U(t)v‖/‖v‖) is actually a function of v̄.
Proposition 1.2.9 Let us assume that (Ω, F, θt, P) is an ergodic dynamical system and that P̃ is a θ̃t-invariant probability measure with projection P on Ω. Then:
(i) The function t → ∫ log(‖U(t, ω)v‖ ‖v‖^{−1}) dP̃(ω, v̄) is additive.
(ii) If P̃{(ω, v̄) ; ω ∈ Ω0, v ∈ V^2_ω} = 0 (notations of Theorem 1.2.6) then:

γ1 = ∫ log (‖U(1, ω)v‖ / ‖v‖) dP̃(ω, v̄)

(iii) If (Ω̃, F̃, θ̃t, P̃) is ergodic and P̃{(ω, v̄) ; ω ∈ Ω0, v ∉ V^2_ω} > 0 then the formula in (ii) holds.
Proof:
Let X(t, ω, v̄) = log(‖U(t, ω)v‖ ‖v‖^{−1}) be defined on the dynamical system (Ω̃, F̃, θ̃t, P̃). It is readily seen that X(t) is an additive process, and the invariance of P̃ gives conclusion (i). To prove (ii) we remark that X(1) is integrable, thus the Birkhoff ergodic theorem implies that the sequence (1/n) X(n) converges P̃-a.e. and in expectation to an invariant random variable Z. The Oseledec Theorem 1.2.6 implies that for P-almost all ω and for all v in R^d − V^2_ω this sequence converges to γ1, thus this sequence converges to the same limit P̃-a.e. It follows that Z = γ1 = ∫ Z(ω, v̄) dP̃(ω, v̄) and we obtain (ii). The same proof works for (iii) since we already know that Z is constant, thus it is enough to have the above convergence on a set of positive P̃-measure.
We will see in the next sections how to construct measures P̃ satisfying (ii) or (iii) in the independent and Markovian cases.
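In the independent case the natural choice turns out to be P̃ = P ⊗ ν with ν a µ-invariant measure on P(R^d), in which case formula (ii) reads γ1 = ∫∫ log(‖gv‖/‖v‖) dµ(g) dν(v̄). The sketch below is only an illustration of this for an arbitrary 2 × 2 matrix law: it approximates ν by the empirical distribution of directions along one trajectory and compares the resulting integral with the direct estimate of γ1.

```python
import numpy as np

rng = np.random.default_rng(8)

def sample_g():
    return np.array([[1.2, rng.normal()], [rng.normal(), 0.8]])   # arbitrary i.i.d. law

N, burn = 50000, 1000
v = np.array([1.0, 0.0])
directions, lognorm = [], 0.0
for n in range(N):
    w = sample_g() @ v
    lognorm += np.log(np.linalg.norm(w))       # v is kept of norm one
    v = w / np.linalg.norm(w)
    if n >= burn:
        directions.append(v.copy())            # empirical approximation of ν on P(R^2)

direct = lognorm / N                           # (1/N) log ||U(N)v0||
integral = np.mean([np.log(np.linalg.norm(sample_g() @ u)) for u in directions])
print(direct, integral)                        # the two estimates are close
```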
1.3 Group Action on Compact Spaces

1.3.1 Definitions and Notations
Let G be a locally compact σ-compact metric group with unit e and let B be a metric topological space.

Definition 1.3.1 We say that G acts on B if one can associate continuously to each (g, b) in G × B an element gb of B such that:
(i) g1(g2 b) = (g1 g2)b for all g1, g2 ∈ G, b ∈ B,
(ii) eb = b for all b ∈ B.
We define the "pseudo-convolution" of a probability measure µ on G and a measure ν on B as the unique measure µ ∗ ν on B determined by:

µ ∗ ν(f) = ∫ f(gb) dµ(g) dν(b)

where f is any bounded measurable function on B. We use the same symbol as for the ordinary convolution on G since when B = G the two definitions coincide. We also remark that (µ1 ∗ µ2) ∗ ν = µ1 ∗ (µ2 ∗ ν).
From now on we fix a probability measure µ on G. The nth convolution power of µ on G is denoted by µ^n. We also denote by T_µ the smallest closed sub-semi-group of G containing the support of µ and by G_µ the smallest closed subgroup of G containing the support of µ. A measure ν on B is said to be µ-invariant if µ ∗ ν = ν. If B is compact then it is readily seen that there always exist invariant probability measures (take any weak limit point of the sequence (1/n) Σ_{j=1}^{n} µ^j ∗ m where m is any probability measure on B). Uniqueness of the invariant measure will be one of the problems discussed in the next section.
On abelian groups the harmonic analysis of µ is carried out using the Fourier-Laplace transform associated with the set of exponentials on G. Unfortunately general groups (simple Lie groups for instance) have very few exponentials. This is why we need to introduce cocycles, generalizing the notion of exponential.

Definition 1.3.2 A continuous map σ from G × B into (0, ∞) is called a cocycle if

σ(g1 g2, b) = σ(g1, g2 b) σ(g2, b)   for g1, g2 ∈ G, b ∈ B
We remark that if B is reduced to a single point then a cocycle is nothing but an exponential. Products and real powers of cocycles are again cocycles, and σ(e, b) = 1 for any b ∈ B. A cocycle σ is said to be µ-integrable if the function σ̄(g) = sup_{b∈B} σ(g, b) is µ-integrable. One remarks that if we set σ̃(g) = sup_{b∈B} |log σ(g, b)| then exp(σ̃(g)) is µ-integrable iff σ and σ^{−1} are µ-integrable. The following lemma is an easy consequence of the definition of a cocycle.
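A concrete example is the norm cocycle on the projective space, σ(g, v̄) = ‖gv‖/‖v‖ for g ∈ GL(d, R). The sketch below (an illustration, not from the book) checks the cocycle identity σ(g1 g2, v̄) = σ(g1, g2 v̄) σ(g2, v̄) numerically for randomly chosen matrices and direction.

```python
import numpy as np

rng = np.random.default_rng(9)
d = 4

def sigma(g, v):
    """Norm cocycle on P(R^d): depends on v only through its direction."""
    return np.linalg.norm(g @ v) / np.linalg.norm(v)

g1, g2 = rng.normal(size=(d, d)), rng.normal(size=(d, d))
v = rng.normal(size=d)
lhs = sigma(g1 @ g2, v)
rhs = sigma(g1, g2 @ v) * sigma(g2, v)       # note: g2·v̄ is the direction of g2 v
print(np.isclose(lhs, rhs))                  # True
```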
Lemma 1.3.3 Let σ be a µ-integrable cocycle; then the sequences a_n and b_n defined by:

a_n = ∫ σ̄(g) dµ^n(g)   and   b_n = sup_{b∈B} ∫ σ(g, b) dµ^n(g)

satisfy a_{n+m} ≤ a_n a_m and b_{n+m} ≤ b_n b_m.

We will often use the following proposition:

Proposition 1.3.4 Let ν be an invariant probability measure and let σ be a cocycle such that σ̃(g) is µ-integrable. Then the sequence (1/n) log σ(U(n), b) converges P ⊗ ν-almost surely to some random variable Z. If we assume furthermore that the sequence σ(U(n), b) converges to +∞ for P ⊗ ν-almost all (ω, b), then the expectation ∫ Z(ω, b) dP(ω) dν(b) is strictly positive.
Proof:
Let (Ω̃, F̃, θ̃t, P̃) be the product dynamical system defined by:

Ω̃ = Ω × B,   P̃ = P ⊗ ν,   θ̃(ω, b) = (θ(ω), ω0 b).

The process X(n) = log σ(U(n), b) is additive on (Ω̃, F̃, θ̃t, P̃) and the Birkhoff ergodic theorem implies that n^{−1} X(n) converges P̃-almost surely. The last assertion is an easy consequence of Theorem 1.1.4.
Proposition 1.3.5 Let σ be a cocycle such that
(i) There exists a positive number τ such that σ^t is integrable for |t| ≤ τ,
(ii) There exists an integer N such that sup_{b∈B} ∫ log σ(g, b) dµ^N(g) < 0.
Then there exists a positive real number α such that for 0 < t ≤ α there exist positive constants C_t < +∞ and ρ_t < 1 such that:

sup_{b∈B} ∫ σ^t(g, b) dµ^n(g) ≤ C_t ρ_t^n   for n = 1, 2, . . .
Proof:
The inequality exp(t |log σ|) ≤ σ^t + σ^{−t} implies that for any integer p the quantities sup_{b∈B} ∫ exp(t |log σ(g, b)|) dµ^p(g) and sup_{b∈B} ∫ |log σ(g, b)|^p dµ^p(g) are finite. From the inequality:

σ^t ≤ 1 + t log σ + (t²/2) |log σ| exp(t |log σ|),   0 ≤ t ≤ τ

one can deduce that if we set b_n(t) = sup_{b∈B} ∫ σ(g, b)^t dµ^n(g) then for 0 ≤ t ≤ τ, b_N(t) is less than

1 + t sup_{b∈B} ∫ log σ(g, b) dµ^N(g) + (t²/2) sup_{b∈B} ∫ |log σ(g, b)| e^{t |log σ(g,b)|} dµ^N(g)

The hypothesis (ii) implies that for t sufficiently small one has b_N(t) < 1, and the conclusion follows from the subadditivity of the sequence log b_n(t), which implies that this sequence converges to its infimum.
We now define the operators extending the notion of Fourier-Laplace transform of a measure on a commutative group. For a cocycle σ and a complex number z we define formally the operator T_{σ,z} on the set B(B) of bounded measurable functions on B by the formula:

T_{σ,z} f(b) = ∫ σ^z(g, b) f(gb) dµ(g)

T_{σ,z} is a bounded operator on B(B) when |σ^z| is µ-integrable. It is readily seen that T^n_{σ,z} is given by the same formula with µ replaced by µ^n. When z = 1 we write T_σ instead of T_{σ,z}, and we write simply T when the cocycle σ is equal to the constant one. The operator T_σ is non-negative, thus ‖T_σ‖ = sup_{b∈B} ∫ σ(g, b) dµ(g), and T f(b) = ∫ f(gb) dµ(g) defines a Markov kernel on B.

Proposition 1.3.6 Let τ be a positive real number such that the cocycle σ^t is µ-integrable for |t| ≤ τ, and let us denote by ρ(t) the logarithm of the spectral radius of T_{σ,t} acting on B(B). Then for |t| ≤ τ we have:
(i) The function t ↦ ρ(t) is convex and ρ(0) = 0,
(ii) ρ(t) ≥ t ∫ log σ(g, b) dµ(g) dν(b) for any invariant probability measure ν on B,
(iii) If the function ρ admits a derivative at the origin then its value is given by:

dρ(t)/dt |_{t=0} = ∫ log σ(g, b) dµ(g) dν(b)

(hence this integral is independent of the choice of ν).
Proof:
The spectral radius can be computed from the formula:

ρ(t) = lim_{n} (1/n) log sup_{b∈B} ∫ σ^t(g, b) dµ^n(g)

Hence (i) is a consequence of the Hölder inequality. We now remark that for an invariant measure ν the quantity ∫ log σ^t(g, b) dµ^n(g) dν(b) is additive in n, and thus (ii) is a simple consequence of Jensen's inequality for the logarithm function. The function ρ(t) − t ∫ log σ(g, b) dµ(g) dν(b) is non-negative, so that its derivative at the origin, if any, has to be zero, proving conclusion (iii).
The next proposition extends a well known property of the Fourier-Laplace transform.

Proposition 1.3.7 If there exists τ > 0 such that the cocycle σ^t is µ-integrable for |t| ≤ τ, then z ↦ T_{σ,z} is an analytic function from a neighbourhood of the origin to the Banach space of bounded operators on B(B).

Proof:
Note that if we set D_n f(b) = ∫ (log σ(g, b))^n f(gb) dµ(g) then we have the upper bound ‖D_n‖ ≤ sup_{b∈B} ∫ |log σ(g, b)|^n dµ(g) ≤ ∫ σ̃^n(g) dµ(g), and thus the series Σ_n (1/n!) |z|^n ‖D_n‖ is convergent for |z| ≤ τ.
When µ is a “smooth” measure on G we will see in subsection 2 that these operators
have nice properties on the space C(B) of continuous functions but this is no longer the
case for general µ and this is why we will introduce the spaces Lα of Hölder continuous
functions in subsection 3.
1.3.2 Laplace Operators on the Space of Continuous functions
Throughout this subsection we assume that B is a compact metric space.
If σ is a µ-integrable cocycle then T_σ is a bounded operator on C(B). We denote by T_σ^∗ its transpose. It acts on the measures on B, and T̆_σ denotes the operator associated with the measure µ̆, the image of µ by g ↦ g^{−1}.
If m is a measure on B we denote by
• gm the image of m by the map b ↦ gb,
• f m the measure of density f with respect to m, for f ∈ C(B),
• ⟨f, h⟩_m the continuous bilinear form ∫ f h dm defined on the Banach space C(B).
Let us assume that the support of m is equal to B and that for any g ∈ G the measures gm and m have the same null sets. Then if the density r(g, b) = (d g^{−1}m / dm)(b) is a continuous function of the couple (g, b), it is a cocycle called an m-Radon-Nikodym cocycle.

Proposition 1.3.8 Let σ be a µ-integrable cocycle and let r be an m-Radon-Nikodym cocycle. Let us assume that rσ^{−1} is µ-integrable. Then, for any positive integer k and any complex number λ we have:
(i) ⟨T̆_σ f , h⟩_m = ⟨T_{rσ^{−1}} h , f⟩_m for f, h ∈ C(B),
(ii) [ker(T̆_σ − λI)^k] m ⊂ ker(T^∗_{rσ^{−1}} − λI)^k.
Proof:
If f, h ∈ C(B), then:

[T^∗_{rσ^{−1}}(f m)](h) = ∫ f(b) r(g, b) σ^{−1}(g, b) h(gb) dµ(g) dm(b)
= ∫ f(b) h(g^{−1}b) σ^{−1}(g^{−1}, b) dµ̆(g) d(gm)(b)
= ∫ f(gb) σ(g, b) h(b) dµ̆(g) dm(b)
= [(T̆_σ f) m](h)

The assertion (ii) is a simple consequence of (i). Notice that (i) implies T^∗_r m = m.
We now assume that the action of G on B admits a continuous lifting, i.e. that there exists a continuous map s : B → G and a fixed element b0 ∈ B such that s(b)b0 = b for any b ∈ B. This implies that a sequence bn in B converges to b iff there exists a sequence gn in G converging to e such that bn = gn b. In other words, the uniform structure on B induced by the action of G is identical to its initial uniform structure of compact space.
Proposition 1.3.9 Suppose that µ has a density with respect to a Haar measure on G
and that σ is an integrable cocycle. Then Tσ is a compact operator on C(B).
Proof:
Let dg be a right Haar measure on G and dµ = φ dg. One can write for u ∈ G

T_σ f(ub) = ∫ f(gb) σ(g, b) σ^{−1}(u, b) φ(gu^{−1}) dg

and:

|T_σ f(ub) − T_σ f(b)| ≤ ‖f‖ ∫ σ(g, b) |φ(g) − σ^{−1}(u, b) φ(gu^{−1})| dg

If φ is a continuous function with compact support then we can conclude that T_σ maps a bounded set of C(B) into an equicontinuous set, and the compactness follows from the Arzela-Ascoli theorem. In the general case there exists a sequence φ_n(g) of continuous functions with compact support converging to φ(g)σ̄(g) in L¹(dg). Taking ψ_n(g) = σ̄^{−1}(g)φ_n(g) and denoting by ⁿT_σ the operator associated with the measure µ_n = ψ_n dg we obtain:

‖ⁿT_σ − T_σ‖ ≤ ∫ |φ(g)σ̄(g) − φ_n(g)| dg

We can conclude that T_σ is compact as a limit in norm of compact operators.
One remarks that the above proof is somewhat simpler in the following case. Assume
that B is a homogeneous space under the action of a subgroup K ⊂ G and that σ is
a K invariant cocycle (that is σ(k, b) = 1 for k ∈ K and b ∈ B). Then in the above
proof one can chooses u ∈ K and the term σ(u, b) disappears. Actually this will be the
general situation for G = SP (ℓ, R), K the orthogonal subgroup of G and B a boundary
of the symplectic group.
For a compact operator T on a Banach space it is known that T and T ∗ have the same
spectrum. More precisely, the dimensions of the kernels of (T − λI)k and (T ∗ − λI)k
are the same for any positive integer k and non-zero complex number λ. This yields :
Proposition 1.3.10 Let σ be a cocycle and let r be an m-Radon-Nikodym cocycle such that σ and $r\sigma^{-1}$ are µ and µ̆ integrable. If µ has a density with respect to a Haar measure on G, then for any positive integer k and non-zero complex number λ we have:
(i) $(\ker(\breve T_\sigma - \lambda I)^k)\, m = \ker(T^*_{r\sigma^{-1}} - \lambda I)^k$
(ii) $\dim(\ker(\breve T_\sigma - \lambda I)^k) = \dim \ker(T_{r\sigma^{-1}} - \lambda I)^k$
Proof:
It is enough to apply Proposition 1.3.8 to the cocycles σ and $r\sigma^{-1}$, replacing µ by µ̆ for the latter, and to take into account the above remark.
An important consequence of proposition 1.3.10 is that when µ has a density on G then
for a µ and µ̆ integrable m-Radon-Nikodym cocycle r the operators T and $T_r$ have the same spectral properties. Thus we can “shift” the spectral properties of T, which are well known for a Markov operator, to the operator $T_r$, which is much more difficult to study.
Definition 1.3.11 A Markov operator T on the Banach space C(B) is said to be aperiodic if 1 is the only eigenvalue of modulus one of T.
The following result can be understood as a generalization of the Perron-Frobenius theorem for positive operators:
Proposition 1.3.12 Let r be a µ and µ̆ integrable m-Radon-Nikodym cocycle. Let us assume that µ has a density, that µ and µ̆ admit unique invariant measures ν and ν̆, and that T and $\breve T$ are aperiodic. Then the invariant measures ν and ν̆ admit continuous densities ψ and ψ̆ with respect to m and we have the decompositions:
(i) $T^n f = \nu(f) + Q^n f$
(ii) $T_r^n f = m(f)\,\psi + Q_r^n f$
the operators Q and $Q_r$ having spectral radius strictly less than one. It follows that the spectral radius of $T_r$ is equal to one and that the sequence $\|T_r^n\|$ is bounded.
Proof:
The uniqueness of the invariant measure, together with the compactness of the operator T given by Proposition 1.3.9, implies that the eigenvalue 1 is of geometric multiplicity one. The algebraic multiplicity is also one since the equation $T^*\tau = \tau + \nu$ obviously has no solution τ in the space of measures on B (otherwise the total mass of ν would be zero!). This implies immediately the decomposition of T.
From Proposition 1.3.10 we deduce that T and Tr have the same spectral properties.
The relation ν̆ = ψm is then a consequence of the equality ker(T̆ ∗ −I) = (ker(Tr −I))m.
Proposition 1.3.13 Under the hypothesis of Proposition 1.3.12, if we denote by ρ(t) the logarithm of the spectral radius of $T_{r,t}$ then we have:
(i) ρ(t) is a convex function on $[-1, 1]$ with ρ(0) = ρ(1) = 0.
(ii) If there exists an integer N with $\sup_{b\in B} \int \log r(g,b)\, d\mu^N(g) < 0$, then ρ(t) < 0 for $t \in (0, 1)$.
(iii) $\frac{d\rho(t)}{dt}\Big|_{t=0} = \int \log r(g,b)\, d\mu(g)\, d\nu(b)$. Thus if this integral is negative then ρ(t) < 0 for $t \in (0, 1)$.
Proof:
As already noticed in the proof of Proposition 1.3.6, the convexity of ρ(t) is a consequence of Hölder's inequality, and (i) follows from Proposition 1.3.8. Conclusion (ii) is a consequence of Proposition 1.3.5, which implies that for a small positive t we have ρ(t) < 0. We know that the spectrum of T splits into the simple eigenvalue 1 and a part contained in a disk of radius strictly less than one. Furthermore Proposition 1.3.9 (stated on the space C(B) rather than on B(B)) implies that $z \mapsto T_{r,z}$ is analytic, and the analytic perturbation theory (see Kato [54]) yields that ρ(t) is smooth at the origin; (iii) then follows from Proposition 1.3.6(iii).
One can apply the preceding results in the particular case of the action of GL(d, R) on the projective space $B = P(\mathbb{R}^d)$. In this situation there exists a probability measure m invariant under all the rotations of SO(d, R). It is called the Cauchy measure and it satisfies:
$$r(g,\bar v) = \frac{dg^{-1}m}{dm}(\bar v) = \left(\frac{\|v\|}{\|gv\|}\right)^d$$
This cocycle is µ and µ̆ integrable whenever the random variables $\|g\|^d$ and $\|g^{-1}\|^d$ are µ-integrable.
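For readers who want to experiment, here is a small numerical sanity check (not part of the original argument, and using an arbitrary illustrative matrix law): it evaluates the Radon-Nikodym cocycle $r(g,\bar v) = (\|v\|/\|gv\|)^d$ of the Cauchy measure and verifies the cocycle identity $r(g_1g_2,\bar v) = r(g_1, g_2\bar v)\, r(g_2,\bar v)$.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4

def r(g, v):
    """Radon-Nikodym cocycle of the Cauchy measure: (||v|| / ||g v||)^d."""
    return (np.linalg.norm(v) / np.linalg.norm(g @ v)) ** d

g1 = rng.normal(size=(d, d))   # generic matrices, almost surely invertible
g2 = rng.normal(size=(d, d))
v = rng.normal(size=d)

lhs = r(g1 @ g2, v)
rhs = r(g1, g2 @ v) * r(g2, v)   # cocycle identity sigma(gh, b) = sigma(g, hb) sigma(h, b)
print(abs(lhs - rhs) < 1e-10 * lhs)   # True up to rounding
```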
Using the notations and definitions that will be introduced in section 6 we now state a
proposition which will allow us to check the hypotheses of Proposition 1.3.12.
Proposition 1.3.14 Assume that the probability measure µ has a density on the symplectic group SP (ℓ, R), then:
• There exists a unique µ-invariant probability measure for the operator T acting
on each boundary LI .
• The operator T acting on C(LI ) is aperiodic.
Proof:
The uniqueness of the invariant measure will follow from Propositions 1.4.25 and 1.6.2; moreover Proposition 1.3.9 yields the compactness of T. To obtain the aperiodicity it is enough to prove this property for the “maximal boundary” associated to I = {1 . . . ℓ}, which we will call B. Let $f \in C(B)$ with $Tf = \lambda f$, $\|f\| = 1$, $|\lambda| = 1$, and let us pick $b \in B$ with $|f(b)| = 1$. The equation $T^n f = \lambda^n f$ first implies that $|f(gb)| = |f(b)|$ for $g \in T_\mu$. Secondly the same eigenvalue equation yields that $f(gb) = \lambda^n f(b)$ when g is in the support of $\mu^n$, say $S_n$. The absolute continuity of µ implies that it is impossible that all the sets $S_n$ be disjoint, since their Cauchy measure is bounded below (see Tutubalin [100] for a computation of the Cauchy measure on B using the Haar measure of G), and it follows that λ is a root of unity. The uniqueness of the invariant measure then implies that λ = 1.
The decomposition in Proposition 1.3.12 part (i) does not hold in general if we do not
assume the existence of a density for µ. If we only assume that µ is “spread out”, i.e. that there exists an integer n such that $\mu^n$ is not orthogonal to the Haar measure of G,
then the operator T is quasi-compact (See Brunel and Revuz [12]). Such an operator
has a finite number of eigenvalues of modulus one and their multiplicities are finite.
Moreover the rest of the spectrum is contained in a disk of radius strictly less than one.
Unfortunately this property is not true for the operators Tσ .
1.3.3 The Laplace Operators on the Space of Hölder Continuous Functions
We still assume that B is a compact metric space.
We denote by δ the distance on B. For a positive real number α we define the space of Hölder continuous functions of order α, say $L_\alpha$, as the set of continuous functions f on B such that:
$$m_\alpha(f) = \sup_{(a,b)\in B\times B,\ a\ne b} \frac{|f(a)-f(b)|}{\delta^\alpha(a,b)} < +\infty$$
Endowed with the norm $\|f\|_\alpha = \|f\|_\infty + m_\alpha(f)$ the linear space $L_\alpha$ is a Banach space. Moreover the map $\alpha \mapsto L_\alpha$ is nonincreasing and the identity map from $L_\alpha$ to $L_\beta$ is continuous for $\beta \le \alpha$.
Let σ be the cocycle defined on the set $\{(a,b)\in B\times B :\ a\ne b\}$ by:
$$\sigma(g,(a,b)) = \frac{\delta(ga, gb)}{\delta(a, b)}$$
The following proposition is essentially due to Le Page [79] :
Proposition 1.3.15 Assume that the cocycle σ defined above satisfies:
(i) There exists a positive number τ such that $\sigma^t$ is integrable for $|t|\le\tau$,
(ii) $\sup_{\{(a,b)\in B\times B,\ a\ne b\}} \int \log\sigma(g,(a,b))\, d\mu^N(g) < 0$ for some integer N.
Then there exists a positive real number $\alpha_0$ such that for $0 < \alpha \le \alpha_0$ we have:
(i) T is a bounded operator on $L_\alpha$,
(ii) There exist constants $C_\alpha < \infty$ and $\rho_\alpha < 1$ such that:
$$\|T^n f - \nu(f)\|_\alpha \le \|f\|_\alpha\, C_\alpha\, \rho_\alpha^n \qquad \text{for } n = 1, 2, \ldots \text{ and } f \in L_\alpha$$
for any invariant probability measure ν (in particular this implies the uniqueness of the invariant probability measure).
Proof:
We can write
$$\frac{|Tf(a) - Tf(b)|}{\delta^\alpha(a,b)} \le m_\alpha(f) \int \bar\sigma^\alpha(g)\, d\mu(g)$$
The obvious bound $\|Tf\|_\infty \le \|f\|_\infty$ and hypothesis (i) yield conclusion (i). We know from Proposition 1.3.5 that for α sufficiently small $\int \sigma^\alpha(g,(a,b))\, d\mu^n(g) \le C_\alpha \rho_\alpha^n$, hence we obtain for such an α:
$$\frac{|T^n f(a) - T^n f(b)|}{\delta^\alpha(a,b)} \le m_\alpha(f) \int \sigma^\alpha(g,(a,b))\, d\mu^n(g) \le m_\alpha(f)\, C_\alpha\, \rho_\alpha^n$$
Let now ν be an invariant probability measure; then:
$$\begin{aligned}
|T^n f(a) - \nu(f)| &= \Big|\int f(ga)\, d\mu^n(g) - \int f(gb)\, d\mu^n(g)\, d\nu(b)\Big|\\
&\le \int |f(ga) - f(gb)|\, d\mu^n(g)\, d\nu(b)\\
&\le m_\alpha(f) \int \sigma^\alpha(g,(a,b))\, d\mu^n(g)\, d\nu(b)\\
&\le m_\alpha(f)\, C_\alpha\, \rho_\alpha^n \sup_{(a,b)\in B\times B} \delta^\alpha(a,b)
\end{aligned}$$
This proves conclusion (ii).
This result gives the exponential convergence of T n to the rank one operator N (f ) =
ν(f ). In some sense it is an operator form of the classical Dœblin Theorem for Markov
chains.
Corollary 1.3.16 Under the hypothesis of Proposition 1.3.15, for α sufficiently small the operator T on $L_\alpha$ has eigenvalue 1 and the rest of the spectrum is contained in a disk of radius strictly less than one. Moreover T admits the decomposition:
$$T^n f = \nu(f) + Q^n f \qquad f \in L_\alpha$$
where ν is the invariant measure and Q an operator of spectral radius strictly less than 1 on $L_\alpha$.
We will see in the next section that one can check the hypotheses of Proposition 1.3.15
without any smoothness assumption on µ. Using the theory of analytic perturbations
it is possible to extend this property to the operators Tσ,z for small values of z, in order
to obtain central limit and large deviations theorems (See Bougerol-Lacroix [9]). We
will not dwell on this kind of result here.
1.4 Products of Independent Random Matrices
This section contains the essential results of the theory of products of independent random matrices. Such products yield one of the most interesting examples of multiplicative processes. Following Ledrappier [77] we obtain in the first subsection some immediate
consequences of Proposition 1.2.9. We also give the classical result of Furstenberg [30]
establishing the positivity of γ1 . Under stronger hypotheses we obtain in the second
subsection the simplicity of the Lyapunov spectrum. This last part is essentially borrowed from Guivarc’h-Raugi [44].
If µ is a probability measure on the linear group GL(d, R) we define the following
ergodic dynamical system :
• Ω is the set of sequences ω = (ω0 , ω1 , . . .) with ωi ∈ G
• P is the infinite product measure P = µ ⊗ µ . . .
• θ is the shift operator θ(ω) = (ω1 , ω2 , . . .)
The coordinate functions $g(n,\omega) = \omega_n$ form an independent sequence of identically distributed matrix-valued random variables and the ergodicity of the dynamical system follows from the classical zero-one law. One sets U(0) = I and $U(n) = g(n-1)\cdots g(0)$ for $n \ge 1$. We will always assume the integrability condition:
$$\int \{\log^+\|g\| + \log^+\|g^{-1}\|\}\, d\mu(g) < \infty$$
Under these assumptions {U (n) ; n ≥ 0} is a regular multiplicative process.
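As a concrete (purely illustrative) companion to this setup, the sketch below simulates the left products U(n) = g(n−1)···g(0) for an arbitrary choice of µ, here 2×2 Schrödinger matrices H(x) with x uniform on [0, 2], and tracks (1/n) log ||U(n)v||, whose limit is the upper Lyapunov exponent γ1 studied below.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_g():
    # illustrative choice of mu: 2x2 Schrodinger matrices H(x), x uniform on [0, 2]
    x = rng.uniform(0.0, 2.0)
    return np.array([[x, -1.0], [1.0, 0.0]])

n = 200_000
v = np.array([1.0, 0.0])
log_norm = 0.0                      # accumulates log ||U(n) v|| without overflow
for _ in range(n):
    v = sample_g() @ v              # left multiplication by a fresh i.i.d. factor
    s = np.linalg.norm(v)
    log_norm += np.log(s)
    v /= s                          # keep only the direction U(n) v-bar
print("estimate of gamma_1 ~", log_norm / n)
```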
At times we will also consider the invertible dynamical system constructed in the same
way but with the set of all integers as time set. In this case the product U (n) is
defined for n ≤ −1 by U (n) = g−1 (n) . . . g−1 (−1). It is easy to check that U (n + m) =
$(U(m)\circ\theta^n)\,U(n)$ for any integers m, n. We also remark that for a fixed non-zero vector $v\in\mathbb{R}^d$ the sequence $U(n)\bar v$ is a Markov chain on B starting at $\bar v$. Its transition kernel T is given by the formula $Tf(b) = \int f(gb)\, d\mu(g)$.
1.4.1 The Upper Lyapunov exponent
Definition 1.4.1 A subset S of GL(d, R) is said to be not strongly irreducible if there exists a finite union $W = \bigcup_{i=1}^r V_i$ of proper subspaces $V_i$ of $\mathbb{R}^d$ such that gW = W for all $g \in S$.
Definition 1.4.2 We say that the probability measure µ is strongly irreducible if the
support of µ is strongly irreducible.
One remarks that µ is strongly irreducible iff Gµ is strongly irreducible.
Definition 1.4.3 A probability measure ν on P (Rd ) is said to be a proper measure if
for any proper subspace V of Rd we have ν(V ) = 0.
Lemma 1.4.4 If µ is strongly irreducible then any invariant probability measure ν is
proper.
Proof:
Let us assume that the conclusion of the Lemma is false and let d0 be the smallest
dimension of the subspaces V for which ν(V ) 6= 0 and α be the supremum of the ν
measure of the subspaces of dimension d0 . The set of subspaces V of dimension d0
for which ν(V ) = α is finite. Let W be the union of these subspaces. The invariance
equation implies that for µ-almost all g we have gW = W . This proves the Lemma.
Proposition 1.4.5 If µ is strongly irreducible then for any invariant probability measure ν we have the formula:
$$\gamma_1 = \int \log\frac{\|gv\|}{\|v\|}\, d\mu(g)\, d\nu(\bar v)$$
Proof:
Let ν be an invariant measure. Then the probability measure P̃ = P ⊗ ν is invariant
for the shift operator θ̃ introduced in Proposition 1.2.9. By the previous lemma ν is
proper and the result now follows from Proposition 1.2.9(ii).
Lemma 1.4.6 Let Φ be the function defined by $\Phi(\bar v) = \int \log\frac{\|gv\|}{\|v\|}\, d\mu(g)$. Then Φ is a continuous function on $P(\mathbb{R}^d)$ and we have the relation:
$$\int \Phi(g\bar v)\, d\mu^k(g) = E\Big\{\log\frac{\|U(k+1)v\|}{\|U(k)v\|}\Big\}$$
Proof:
The first conclusion is a consequence of the Lebesgue theorem and the second is the
result of an easy computation.
Proposition 1.4.7 If µ is strongly irreducible then for any non-zero vector $v \in \mathbb{R}^d$ we have:
$$\lim_{n\to\infty} \frac{1}{n}\log\frac{\|U(n)v\|}{\|v\|} = \gamma_1 \qquad \text{P-a.e.}$$
Proof:
For a nonzero $v\in\mathbb{R}^d$ any limit point ν of the sequence of probability measures $\nu_{n,v} = n^{-1}\sum_{k=0}^{n-1}\mu^k * \delta_{\bar v}$ is invariant and we can write, with the notations of the previous Lemma:
$$\begin{aligned}
\gamma_1 &= \int \Phi(\bar u)\, d\nu(\bar u)\\
&= \lim \int \Phi(\bar u)\, d\nu_{n,v}(\bar u)\\
&= \lim_{n\to\infty} \frac{1}{n}\, E\Big\{\log\frac{\|U(n)v\|}{\|v\|}\Big\}\\
&\le E\Big\{\limsup_{n\to\infty} \frac{1}{n}\log\frac{\|U(n)v\|}{\|v\|}\Big\}
\end{aligned}$$
The last inequality follows from Fatou's lemma and the bound
$$\Big|\log\frac{\|U(n)v\|}{\|v\|}\Big| \le \log^+\|U(n)\| + \log^+\|U^{-1}(n)\|$$
Moreover, we know from the Oseledec theorem that there exists an invariant set Ω0
of full P-measure for which the above lim sup is actually a limit and is equal to one
of the Lyapunov exponents. This yields the result. Note nevertheless that the set
of convergence in this proposition depends upon the choice of v and that there is no
contradiction with the Oseledec Theorem !
Actually the last two propositions can be proved under the weaker hypothesis of mere
irreducibility defined below.
Definition 1.4.8 A subset S of GL(d, R) is said to be not irreducible if there exists a
proper subspace V of Rd such that gV = V for all g ∈ S.
Definition 1.4.9 We say that the probability µ is irreducible if its support is irreducible.
One sees easily that this is the same as saying that Gµ is irreducible, and that if Gµ is a connected subgroup of GL(d, R) then irreducibility implies strong irreducibility.
Lemma 1.4.10 If µ is irreducible then for any invariant probability measure ν and
any proper subspace V of Rd we have ν(V ) < 1.
Proof:
Let us assume that the conclusion is false and let $d_0$ be the smallest dimension of the subspaces V for which ν(V) = 1. For a subspace V of dimension $d_0$ and such that ν(V) = 1, the invariance equation:
$$\nu(V) = \int \nu(gV)\, d\mu(g)$$
implies that for µ-almost all g we have gV = V and this proves the Lemma.
As a consequence if we only assume that µ is irreducible one can prove Proposition 1.4.5
and hence Proposition 1.4.7 using a theorem of Breiman [10] which asserts that when ν
is an extremal invariant probability measure then the dynamical system (Ω̃, F̃ , θ̃t , P̃) is
ergodic. Then one can apply Proposition 1.2.9(iii) to an extremal invariant measure to
obtain the formula for γ1 and then integrate on the set of invariant extremal measures.
Theorem 1.4.11 Assume that the probability measure µ is strongly irreducible and
that Gµ is a non compact subgroup of SL(d, R), then γ1 > 0.
Proof:
We only give a sketch of the original proof of Furstenberg [30] since we will prove again
this result in the next subsection. First we remark that the strong irreducibility of µ
implies that there is no probability measure π on P (Rd ) such that gπ = π for µ-almost
all g. Actually we can replace the hypothesis of the theorem by this property and
the irreducibility of µ. If we assume that the invariant measure ν appearing in the
formula of Proposition 1.4.5 has a continuous strictly positive density with respect to
the Cauchy measure m on $P(\mathbb{R}^d)$ then we have the cocycle relation:
$$\left(\frac{\|v\|}{\|gv\|}\right)^d = \frac{dg^{-1}m}{dm} = \frac{dg^{-1}m}{dg^{-1}\nu}\,\frac{dg^{-1}\nu}{d\nu}\,\frac{d\nu}{dm}$$
This yields
$$\gamma_1 = -\frac{1}{d}\int \log\frac{dg^{-1}\nu}{d\nu}(b)\, d\nu(b)\, d\mu(g)$$
If the conclusion of the Theorem were false, Jensen's inequality would imply that ν is left invariant by µ-almost all g, and this gives a contradiction. It still remains to extend this proof to a general ν and this is the hard part of the work! (See also Ledrappier [77] for a complete proof.)
When µ is supported by the symplectic group one of the crucial questions is the positivity of the exponent γℓ . Unfortunately when d > 2 Theorem 1.4.11 says nothing
about this problem thus we need the stronger results proved in the next sub-section.
We also remark that we cannot replace the strong irreducibility by the irreducibility in
this Theorem.
1.4.2 The Lyapunov Spectrum
Definition 1.4.12 A subset T of GL(d, R) is said to be contractive if there exists a
sequence hn in T such that hn khn k−1 converges to a rank one matrix.
Definition 1.4.13 The measure µ is said to be contractive if Tµ is contractive.
An equivalent definition is $\inf_{g\in T_\mu} \big(a_2(g)\,a_1^{-1}(g)\big) = 0$ (recall that a(g) is the diagonal part in the polar decomposition of g). One remarks that contractivity cannot be related directly to the group Gµ. This explains why it will be difficult to check this property (since semi-groups are in general less tractable than groups). Nevertheless in the particular case of SL(2, R) one has a very simple criterion:
Proposition 1.4.14 If the probability measure µ is supported by SL(2, R) then µ is
contractive if and only if Gµ is not compact.
Proof:
It is known that a compact semi-group is actually a group (See [48]). Hence one has
only to check that the sequence hn khn k−1 has a rank one limit point iff the sequence
khn k is unbounded.
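The equivalent characterisation of contractivity through the ratio a2(g)/a1(g) of singular values can be observed numerically; the following sketch (an illustration under an arbitrary choice of matrix law, not a proof) follows this ratio along a growing product.

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_g():
    # same illustrative Schrodinger-matrix law as before
    x = rng.uniform(0.0, 2.0)
    return np.array([[x, -1.0], [1.0, 0.0]])

U = np.eye(2)
for n in range(1, 2001):
    U = sample_g() @ U
    U /= np.linalg.norm(U)          # rescaling does not change the ratio a2/a1
    if n % 500 == 0:
        a = np.linalg.svd(U, compute_uv=False)   # singular values a1 >= a2
        print(n, a[1] / a[0])       # the ratio tends to 0: U(n)/||U(n)|| approaches rank one
```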
We also remark that if µ is strongly irreducible and contractive then the image measure
µ′ of µ under the map g ֒→ g′ has the same properties. We will denote by Rn the
right product Rn = g(1)g(2) . . . g(n). We are now in position to state and prove the
fundamental result of Guivarc’h-Raugi :
Theorem 1.4.15 Let us assume that µ is strongly irreducible and contractive. Then
there exists a random variable Z with values in the projective space P (Rd ) such that for
P-almost all ω every limit point of the sequence Rn (ω)kRn (ω)k−1 is a rank one matrix
and the direction of its range is precisely given by Z(ω).
Proof:
Let ν be an invariant probability measure on $B = P(\mathbb{R}^d)$ and let f be a continuous function on B. The random variables $Z_n = \int f(R_n b)\, d\nu(b)$ form a bounded martingale with respect to the family $\mathcal{F}_n$ of σ-algebras generated by the random variables $g(1), \ldots, g(n)$. Taking a countable dense subset of the set of continuous functions on B we can conclude that the sequence $R_n\nu$ is P-almost surely converging to some probability measure $\nu_\omega$. Let λ be the probability measure $\sum_{n=0}^{\infty} 2^{-n-1}\mu^n$. An easy computation shows that if $Z_n^g = \int f(R_n g b)\, d\nu(b)$ then
$$\sum_{n=1}^{\infty} E\{(Z_{n+r} - Z_n)^2\} \le 2r\|f\|^2 \qquad\text{and}\qquad \sum_{n=0}^{\infty}\int (Z_n - Z_n^g)^2\, d\lambda(g) < \infty$$
This implies that the sequence Rn gν is P ⊗ λ-almost surely converging to νω . Let now
A(ω) be a limit point of the sequence Rn (ω)kRn (ω)k−1 . Since ν is proper hν is defined
for any non zero matrix h and one can conclude that A(ω)gν = νω P ⊗ λ-almost surely.
The set of g for which this equality holds being closed we obtain the same relation for
P-almost all ω and any g in Tµ. Taking again into account the fact that ν is proper and
the contractivity property of µ we obtain that there exists a rank one matrix h such
that A(ω)h is a non zero matrix and A(ω)hν = νω . Hence νω is a point measure P-a.e.
and we can conclude from A(ω)ν = νω that A(ω) is a rank one matrix whose range
has the direction of the support of νω . Note that this construction implies that the
direction of the support of νω is independent from the choice of the invariant measure
ν.
This theorem is false in general for the left product U(n). Note nevertheless that the sequence $U(n)\|U(n)\|^{-1}$ always has rank one limit points (applying the above theorem to the right product of transposes), but the direction of the range depends on the choice of the limit point!
Lemma 1.4.16 Let us assume that µ is strongly irreducible and contractive. Then for any non-zero vector $v\in\mathbb{R}^d$ we have P-a.e.:
(i) $\|U(n)v\| \ge C_v \|U(n)\|$ where $C_v(\omega)$ is a positive random variable,
(ii) $\lim_{n\to\infty} \dfrac{\|\Lambda^2(U(n))\|}{\|U(n)v\|^2} = 0$.
Proof:
Let $S_n$ be the right product $S_n = g'(1)g'(2)\cdots g'(n)$ and $Z'$ the corresponding random variable from Theorem 1.4.15. We remark that $S_n' = U(n)$ and $\|U(n)\| = \|S_n\| = a_1(U(n))$, and writing a polar decomposition $U(n) = k(U(n))\,a(U(n))\,u(U(n))$, Theorem 1.4.15 implies immediately that P-a.e. we have the following properties:
(*) The bounded sequence $\dfrac{a_2(U(n))}{a_1(U(n))}$ converges to 0,
(**) The sequence $u'(U(n))\bar e_1$ converges to $Z'$.
We can write:
$$\|U(n)v\|^2 = \sum_{k=1}^{d} a_k^2(U(n))\,\langle v, u'(U(n))e_k\rangle^2$$
$$\frac{\|U(n)v\|^2}{\|U(n)\|^2} = \langle v, u'(U(n))e_1\rangle^2 + \sum_{k=2}^{d} \frac{a_k^2(U(n))}{a_1^2(U(n))}\,\langle v, u'(U(n))e_k\rangle^2$$
It follows from (*) and (**) that
$$\lim_{n\to\infty} \frac{\|U(n)v\|}{\|U(n)\|} = |\langle v, z'\rangle|$$
where $z'$ is a unit vector of direction $Z'$. The law of $Z'$, which is the invariant measure for $\mu'$, is proper, hence for a fixed non-zero v the set $\{\omega :\ v \text{ is orthogonal to the direction of } Z'\}$ is of zero P-measure and this yields conclusion (i). Conclusion (ii) is a consequence of (i) and (*) once we remark that $\|\Lambda^2(U(n))\| = a_1(U(n))\,a_2(U(n))$.
Theorem 1.4.17 Let us assume that µ is strongly irreducible and contractive. Then one has:
(i) There exists a unique invariant probability measure ν and ν is proper.
(ii) $\gamma_1 = \int \log\dfrac{\|gv\|}{\|v\|}\, d\mu(g)\, d\nu(\bar v)$
(iii) $\dfrac{1}{n}\, E\Big\{\log\dfrac{\|U(n)v\|}{\|v\|}\Big\}$ converges uniformly to the exponent $\gamma_1$ with respect to $\bar v \in P(\mathbb{R}^d)$.
Proof:
Recall that a probability measure m on P (Rd ) is said to be proper if m(V ) = 0 for
any proper subspace V of Rd . Theorem 1.4.15 implies that for any proper probability
measure m on P (Rd ) the sequence Rn (ω)m converges weakly to the Dirac measure
at the point Z(ω). Let ν be an invariant probability measure and f be a continuous function on $P(\mathbb{R}^d)$. The sequence $\int f(R_n(\omega)\bar v)\, d\nu(\bar v)$ is a bounded martingale with expectation ν(f). We know from Lemma 1.4.4 that ν is proper, thus ν has to be equal to the law of the random variable Z, proving (i). The inequalities $C_v\|U(n)\| \le \|U(n)v\| \le \|U(n)\|$ yield immediately (ii). To prove (iii) it is enough to show that for any sequence $\bar v_n$ converging to a point $\bar v_0$ the sequence $n^{-1} E\{\log \|U(n)v_n\|\,\|v_n\|^{-1}\}$ converges to $\gamma_1$. Any weak limit point of the sequence $\nu_n = n^{-1}\sum_{k=0}^{n-1} \mu^k * \delta_{\bar v_n}$ is µ-invariant. Hence this sequence converges weakly to the unique invariant measure ν. If we apply this sequence of probability measures to the continuous function $\bar v \mapsto \int \log\frac{\|gv\|}{\|v\|}\, d\mu(g)$ we obtain the result.
Theorem 1.4.18 Let us assume that µ is strongly irreducible and contractive. Then
we have the strict inequality γ1 > γ2 .
Proof:
Let $B_1$ be the set of norm one matrices of $\mathbb{R}^d$ and $B_2$ the set of norm one matrices of $\Lambda^2(\mathbb{R}^d)$. Following the lines of the paper of Furstenberg and Kesten [31], Theorem I.4.1, one can prove that there exist probability measures $\nu_1$ and $\nu_2$ on $B_1$ and $B_2$ such that:
$$\lim_{n\to\infty} \frac{1}{n}\log\|U(n)M\| = \gamma_1 \qquad P\otimes\nu_1\text{-a.e.} \tag{1}$$
$$\lim_{n\to\infty} \frac{1}{n}\log\|\Lambda^2 U(n)N\| = \gamma_1 + \gamma_2 \qquad P\otimes\nu_2\text{-a.e.} \tag{2}$$
Notice that the group G acts on $B = B_1\times B_2$ by $g.(M,N) = (g.M, g.N)$, and so if we define the cocycle σ by $\sigma(g,(M,N)) = \dfrac{\|gM\|^2}{\|\Lambda^2(g)N\|}$, it is easy to check that any limit point ν of the sequence $n^{-1}\sum_{k=1}^{n}\mu^k * (\nu_1\otimes\nu_2)$ is invariant and admits $\nu_1$ and $\nu_2$ as projections on $B_1$ and $B_2$. Lemma 1.4.16(ii) then implies that
$$\lim \sigma(U(n),(M,N)) \ge \lim \frac{\|U(n)M\|^2}{\|\Lambda^2(U(n))\|} = +\infty \qquad P\otimes\nu\text{-a.e.}$$
From Proposition 1.3.4, the sequence $n^{-1}\log\sigma(U(n),(M,N))$ converges $P\otimes\nu$-a.e. to a random variable having a positive expectation. We now remark that (1) and (2) imply that this limit is $P\otimes\nu$-almost surely equal to $2\gamma_1 - (\gamma_1+\gamma_2) = \gamma_1 - \gamma_2$ and this completes the proof.
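Since ||Λ²U(n)|| = a1(U(n))a2(U(n)), the gap γ1 − γ2 can also be observed numerically with the classical QR-based estimator of the Lyapunov spectrum. The sketch below is only an illustration, with an arbitrary Gaussian matrix law (which has a density, so simplicity of the spectrum is expected).

```python
import numpy as np

rng = np.random.default_rng(3)
d, n = 3, 100_000

def sample_g():
    # illustrative law on GL(3,R); any strongly irreducible, contractive choice would do
    return rng.normal(size=(d, d))

Q = np.eye(d)
acc = np.zeros(d)
for _ in range(n):
    Q, R = np.linalg.qr(sample_g() @ Q)
    acc += np.log(np.abs(np.diag(R)))   # log of the successive dilation factors
gammas = acc / n                        # approximate Lyapunov spectrum (decreasing order, generically)
print(gammas, "gap gamma1 - gamma2 ~", gammas[0] - gammas[1])
```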
It is worth mentioning that Theorem 1.4.18 implies that γ1 is strictly positive when µ is carried by SL(d, R), since in this case the sum of the Lyapunov exponents is zero. In the particular case of SL(2, R), Proposition 1.4.14 shows that this result is actually equivalent to the Furstenberg Theorem 1.4.11. But the most important consequence of Theorem 1.4.18 is the contractive action of the sequence U(n) on the projective space $P(\mathbb{R}^d)$. One defines the projective distance δ on $P(\mathbb{R}^d)$ by the formula
$$\delta(\bar x, \bar y) = \frac{\|x\wedge y\|}{\|x\|\,\|y\|} \qquad x, y \in \mathbb{R}^d$$
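Numerically, δ can be computed from the Gram identity ||x ∧ y||² = ||x||²||y||² − ⟨x, y⟩²; the sketch below (illustrative choices only) evaluates δ along a random product and exhibits the exponential contraction described by the next proposition.

```python
import numpy as np

rng = np.random.default_rng(4)

def proj_dist(x, y):
    """delta(x-bar, y-bar) = ||x ^ y|| / (||x|| ||y||), via the Gram determinant."""
    nx, ny = np.linalg.norm(x), np.linalg.norm(y)
    gram = (nx * ny) ** 2 - np.dot(x, y) ** 2
    return np.sqrt(max(gram, 0.0)) / (nx * ny)

def sample_g():
    x = rng.uniform(0.0, 2.0)                      # illustrative SL(2,R) matrices
    return np.array([[x, -1.0], [1.0, 0.0]])

x, y = np.array([1.0, 0.0]), np.array([0.0, 1.0])
for n in range(1, 61):
    g = sample_g()
    x, y = g @ x, g @ y
    x, y = x / np.linalg.norm(x), y / np.linalg.norm(y)   # harmless renormalisation
    if n % 20 == 0:
        print(n, proj_dist(x, y))      # decays roughly like exp(-(gamma1 - gamma2) n)
```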
The following proposition shows that the distance between two columns of the matrix
U (n) goes to zero exponentially fast.
Proposition 1.4.19 If µ is strongly irreducible and contractive then for non-zero vectors $x, y\in\mathbb{R}^d$, $x\ne y$, one has:
(i) $\lim_{n\to\infty} \dfrac{1}{n}\log\dfrac{\delta(U(n)\bar x, U(n)\bar y)}{\delta(\bar x,\bar y)} < 0$ P-a.e.
(ii) there exists an integer N such that $\sup_{x, y\in\mathbb{R}^d} \int \log\dfrac{\delta(g\bar x, g\bar y)}{\delta(\bar x,\bar y)}\, d\mu^N(g) < 0$
Proof:
From the definition of δ we obtain
$$\frac{\delta(U(n)\bar x, U(n)\bar y)}{\delta(\bar x,\bar y)} = \frac{\|U(n)x\wedge U(n)y\|}{\|x\wedge y\|}\,\frac{\|x\|}{\|U(n)x\|}\,\frac{\|y\|}{\|U(n)y\|} \le \|\Lambda^2(U(n))\|\,\frac{\|x\|}{\|U(n)x\|}\,\frac{\|y\|}{\|U(n)y\|}$$
The convergence of the sequence $\frac{1}{n}\log\frac{\|U(n)x\wedge U(n)y\|}{\|x\wedge y\|}$ can be deduced from the Oseledec Theorem applied on $\Lambda^2(\mathbb{R}^d)$, and (i) follows from Theorem 1.4.18. To prove (ii) we write:
$$\frac{1}{n}\int \log\frac{\delta(g\bar x, g\bar y)}{\delta(\bar x,\bar y)}\, d\mu^n(g) \le \frac{1}{n}E\{\log\|\Lambda^2(U(n))\|\} - \frac{1}{n}E\Big\{\log\frac{\|U(n)x\|}{\|x\|}\Big\} - \frac{1}{n}E\Big\{\log\frac{\|U(n)y\|}{\|y\|}\Big\}$$
It only remains to apply again Theorem 1.4.18 together with the uniform convergence proved in Proposition 1.4.17(iii).
The uniform contraction in mean of the projective space by the sequence U(n) expressed by this proposition can be viewed as a substitute for the Dœblin condition for
Markov chains (which is in general not satisfied under the hypothesis of this section).
A consequence of this property is the exponential speed of convergence of the iterates
T n of the kernel of the Markov chain U (n)v̄ to the invariant measure ν.
Proposition 1.4.20 Let us assume that µ is strongly irreducible and contractive and that there exists a positive τ such that $\|g\|^t$ is µ-integrable for $-\tau \le t \le \tau$. Then there exists a positive real number $\alpha_0$ such that for $0 < \alpha \le \alpha_0$ we have:
(i) T is a bounded operator on $L_\alpha$ and
(ii) there exist $C_\alpha < \infty$ and $\rho_\alpha < 1$ such that for $n = 1, 2, \ldots$ and $f \in L_\alpha$ we have:
$$\|T^n f - \nu(f)\|_\alpha \le \|f\|_\alpha\, C_\alpha\, \rho_\alpha^n \qquad\text{and}\qquad E\Big\{\Big(\frac{\|v\|}{\|U(n)v\|}\Big)^\alpha\Big\} \le C_\alpha\, \rho_\alpha^n$$
Proof:
This result is an immediate consequence of Proposition 1.3.15 and Proposition 1.4.19.
Up to now we have seen that the behavior of the norm of the columns of the matrix
U (n) is the same as the norm of U (n) and that it is given by the upper Lyapunov
exponent. Actually under the hypothesis of Proposition 1.4.20 one can show that all
the coefficients of the matrix U (n) also share this property. This has been proved by
Guivarc’h and Raugi in [44] where they first obtain some regularity property of the
invariant measure ν. Since the proof is somewhat involved we only state their result.
The interested reader will find a proof in Bougerol-Lacroix [9].
Proposition 1.4.21 Under the hypothesis of Proposition 1.4.20 we have:
(i) $\sup_{y\in\mathbb{R}^d} \int \Big(\dfrac{\|x\|\,\|y\|}{|\langle x,y\rangle|}\Big)^\alpha\, d\nu(\bar x) < +\infty$ for some positive α
(ii) $\lim_{n\to\infty} \dfrac{1}{n}\log|\langle U(n)x, y\rangle| = \gamma_1$ P-a.e. for any non-zero vectors $x, y \in \mathbb{R}^d$
There is an easy way to extend the foregoing results to the Lyapunov exponents of higher
order and thus to obtain that all the Lyapunov exponents are distinct, in which case we
say that the Lyapunov spectrum is simple. For p = 1 . . . d − 1, let Λp µ be the image of
µ under the mapping g ֒→ Λp g. Using a polar decomposition it is readily seen that the
two upper Lyapunov exponents associated with the action of Λp GL(d, R) on Λp (Rd ) are
equal to γ1 + . . . + γp and γ1 + . . . + γp−1 + γp+1 . It follows from Theorem 1.4.18 that if
the action of Λp µ on Λp (Rd ) is strongly irreducible and contractive then γp > γp+1 . We
also remark that the unique invariant probability measure νp on P (Λp (Rd )) is actually
supported by the set of directions of p-vectors since this set is invariant. Tutubalin
was the first one to show that when µ has a density on SL(d, R) then all the Lyapunov
exponents are distinct. See Tutubalin [101] and also Virtser [103]. In this situation
Tµ is an open set of SL(d, R) and the following proposition generalizes the result of
Tutubalin.
Proposition 1.4.22 If Tµ contains an open set of SL(d, R) then for p = 1 . . . d − 1
the probability measure $\Lambda^p\mu$ is strongly irreducible and contractive, from which it follows
that the Lyapunov spectrum is simple.
Proof:
The group SL(d, R) is connected and hence one can conclude that Gµ is equal to
SL(d, R) and the strong irreducibility is obvious.
A complex eigenvalue λ of a matrix g is called simple if the subspace ker(g − λI) is
one-dimensional and equal to ker(g − λI)². This eigenvalue is called dominating if
|λ| is strictly greater than the modulus of any other eigenvalue of g. If Tµ contains a
matrix g with a simple dominating eigenvalue, then using a Jordan decomposition of
g one sees that µ is contractive. The hypothesis implies that Tµ contains a matrix g
with d eigenvalues of distinct moduli, thus $\Lambda^p(T_\mu)$ contains a matrix with a simple dominating eigenvalue
for p = 1 . . . d − 1.
If we keep in mind that we want to handle the case of the symplectic group, it is worth mentioning that Proposition 1.4.22 is not sufficient! Fortunately there is a way to extend
the result to this situation.
Definition 1.4.23 Let p be an integer in $\{1, \ldots, \ell\}$.
• A p-vector $u = u_1\wedge u_2\wedge\ldots\wedge u_p$ is said to be Lagrangian if for any couple (i, j) of integers in $\{1, \ldots, p\}$ one has $u_i' J u_j = 0$, where $J = \begin{pmatrix} 0 & I \\ -I & 0\end{pmatrix}$.
• Let Lag(p) be the subspace of $\Lambda^p(\mathbb{R}^d)$ spanned by the Lagrangian p-vectors. A subset S of $\Lambda^p SP(\ell,\mathbb{R})$ is said to be Lag(p)-strongly irreducible if its action on the vector space Lag(p) is strongly irreducible.
• The measure $\Lambda^p\mu$ is said to be Lag(p)-strongly irreducible if its support is Lag(p)-strongly irreducible.
It is readily seen that Lag(p) is a stable subspace of Λp (Rd ) under the action of SP (ℓ, R),
thus one cannot expect an irreducible action of Λp µ on Λp (Rd ) for p > 1. Nevertheless
the following proposition allows us to overcome this lack of irreducibility.
Proposition 1.4.24 Let p be an integer in {1 . . . ℓ}, g be a symplectic matrix, and r
be the dimension of the linear space Lag(p). Let us denote by ḡ the element of GL(r, R)
associated to the restriction of Λp g to the invariant subspace Lag(p) of Λp (Rd ) and by
µ̄r the image of Λp µ by the mapping Λp g ֒→ ḡ. Then one has:
1. The two first Lyapunov exponents of Λp µ and µ̄r are equal.
2. If Λp µ is contractive and Lag(p) strongly irreducible then the probability measure
µ̄r is contractive and strongly irreducible when we consider the action of GL(r, R)
on $\mathbb{R}^r$.
Proof:
A symplectic matrix g has a polar decomposition g = kau in $SP(\ell,\mathbb{R})$ and one has $\Lambda^p g = \Lambda^p k\,\Lambda^p a\,\Lambda^p u$. The matrices $\Lambda^p k$ and $\Lambda^p u$ are orthogonal and leave the subspace Lag(p) invariant. It follows that the norms computed in $\Lambda^p(\mathbb{R}^d)$ and Lag(p) are the same and one has:
$$\|\bar g\| = \|\Lambda^p g\| \qquad\text{and}\qquad \|\Lambda^2(\bar g)\| = \|\Lambda^2(\Lambda^p g)\|$$
This yields the first conclusion.
Secondly we remark that the claim about strong irreducibility is obvious. Since $\Lambda^p\mu$ is contractive there exists a sequence $h_n$ in the closed semigroup generated by its support such that the sequence $\frac{h_n}{\|h_n\|}$ converges to a rank one matrix h. Obviously Lag(p) is left invariant by h, and the above remark about the norms implies that the sequence $\frac{\bar h_n}{\|\bar h_n\|}$ converges to $\bar h$. This last matrix is of rank at most 1 and is certainly not zero since its norm is 1, hence it is a rank one matrix.
Hence if Λp µ is contractive and Lag(p)-strongly irreducible then Λp µ viewed as a probability measure on the linear group of Lag(p) is strongly irreducible and contractive. The
hypothesis implies that Tµ contains a matrix g with d eigenvalues of distinct moduli,
hence one can use the proof of Proposition 1.4.22 to obtain:
Proposition 1.4.25 If Tµ contains an open set of SP (ℓ, R) then for p = 1 . . . ℓ the
probability measure Λp µ is contractive and Lag(p)-strongly irreducible, from which it
follows that the Lyapunov spectrum is simple.
This last proposition does not answer our problem when µ is too singular. Thus we state
without proof the strongest result in this direction. It is originally due to Goldsheid and
Margulis and extended by Guivarc’h and Raugi; see [39] and [45]. The Zariski closure of a subset A of an algebraic manifold is defined as the set of zeroes of the polynomials vanishing on A. One can check that the Zariski closure of a sub semi-group of GL(d, R)
is a group.
Proposition 1.4.26 If the Zariski closure of Gµ is equal to SL(d, R), then for p =
1 . . . d − 1 the probability measure Λp µ is contractive and strongly irreducible. If the
Zariski closure of Gµ is equal to SP (ℓ, R) then for p = 1 . . . ℓ the probability measure
Λp µ is contractive and Lag(p)-strongly irreducible. The Lyapunov spectrum is simple
in both cases.
The negation of the strong irreducibility of Λp µ can be expressed by a finite number
of polynomial equations satisfied on Gµ , thus the strong irreducibility is actually a
property of the Zariski closure of Gµ . It is much more difficult to prove that the
contractivity can also be checked on the Zariski closure of Tµ . We remark that Gµ is
already an algebraic group under the hypothesis of Proposition 1.4.22 or 1.4.25.
1.4.3 Schrödinger Matrices
There is a particular type of symplectic matrix which will be used in the following
chapters and which we call “Schrödinger matrix”.
Definition 1.4.27 Let x be a point in $\mathbb{R}^\ell$; one defines the Schrödinger matrix H(x) by the formula:
$$H(x) = \begin{pmatrix} Q(x) & -I_\ell \\ I_\ell & 0 \end{pmatrix} \qquad\text{with}\qquad [Q(x)]_{i,j} = \begin{cases} x_i & \text{if } i = j\\ -1 & \text{if } |i-j| = 1\\ 0 & \text{if } |i-j| > 1 \end{cases}$$
where $I_\ell$ is the identity matrix of order ℓ.
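A direct construction of H(x) is straightforward; the sketch below builds Q(x) and H(x) for an arbitrary point x and checks the symplectic relation H(x)′JH(x) = J with the matrix J of Definition 1.4.23 (the check is not stated in the text but follows from the block structure).

```python
import numpy as np

def schrodinger_matrix(x):
    """H(x) = [[Q(x), -I], [I, 0]] with Q(x) tridiagonal: x_i on the diagonal, -1 off it."""
    x = np.asarray(x, dtype=float)
    l = x.size
    Q = np.diag(x) - np.diag(np.ones(l - 1), 1) - np.diag(np.ones(l - 1), -1)
    return np.block([[Q, -np.eye(l)], [np.eye(l), np.zeros((l, l))]])

l = 3
J = np.block([[np.zeros((l, l)), np.eye(l)], [-np.eye(l), np.zeros((l, l))]])
H = schrodinger_matrix([0.3, -1.2, 0.7])            # arbitrary sample point x in R^3
print(np.allclose(H.T @ J @ H, J))                  # True: H(x) is symplectic
```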
In the case ℓ = 1 the strong irreducibility and contractivity of a measure supported by
Schrödinger matrices is easily checked :
Proposition 1.4.28 Let µ be a probability measure on SL(2, R) supported by the set of
Schrödinger matrices. If µ is not concentrated on one point then µ is strongly irreducible
and contractive.
Proof:
The support of µ contains at least two matrices H(x) and H(y) with x ≠ y. Looking at
the sequences un and v n defined from u = H −1 (x)H(y) and v = H(x)H −1 (y) it is readily seen that Gµ is not compact and the contractivity follows from Proposition 1.4.14.
If there exists a finite subset Σ of the projective line which is stable under the action
of Gµ then applying the sequences un and v n to the points in Σ we see that Σ has to
be equal at the same time to ē1 and ē2 which is absurd.
Let $v = \binom{a}{b}$ be a vector in $\mathbb{R}^2$. We identify the projective line $P(\mathbb{R}^2)$ with the set $\mathbb{R}\cup\{\infty\}$ by the mapping Ψ:
$$\Psi(\bar v) = \begin{cases} \dfrac{a}{b} & \text{if } b \ne 0\\ \infty & \text{if } b = 0\end{cases}$$
The image of a measure τ on $P(\mathbb{R}^2)$ by the mapping Ψ will be denoted by τ̃.
Corollary 1.4.29 Let µ be a probability measure on SL(2, R) supported by the set of Schrödinger matrices. Let us assume that the probability measure µ is not concentrated on one point and that $\int \log\|g\|\, d\mu(g) < \infty$. Then one has:
1. There exists a unique invariant measure ν on $P(\mathbb{R}^2)$ and its image by the mapping Ψ is carried by $\mathbb{R}$.
2. The upper Lyapunov exponent γ is strictly positive and is given by the formula:
$$\gamma = \int \log\frac{\|gv\|}{\|v\|}\, d\nu(\bar v)\, d\mu(g)$$
3. If there exists a positive real number η with $E\{\|g\|^\eta\} < \infty$ then one has:
$$\gamma = \int_{\mathbb{R}} \log|t|\, d\tilde\nu(t)$$
Proof:
The first conclusion is a direct consequence of Proposition 1.4.28 and Theorem 1.4.17 since, on the projective line, a measure is proper iff it is continuous, hence ν̃{∞} = 0. The positivity of the upper Lyapunov exponent γ is also a direct consequence of Proposition 1.4.28 and Theorem 1.4.18. In order to prove formula 3 we remark that Proposition 1.4.21(i) implies that for some positive α one has $\int (1+t^2)^\alpha\, d\tilde\nu(t) < \infty$, hence $\int \log(1+t^2)\, d\tilde\nu(t) < \infty$. Let us denote by m the Cauchy measure on $P(\mathbb{R}^2)$ and let r be the measure on the projective space such that $\tilde r$ is the Lebesgue measure on $\mathbb{R}$. Denoting by f the density of m with respect to the measure r one has:
$$\frac{dg^{-1}m}{dm}(\bar v) = \frac{\|v\|^2}{\|gv\|^2} = \frac{dg^{-1}m}{dg^{-1}r}(\bar v)\,\frac{dg^{-1}r}{dr}(\bar v)\,\frac{dr}{dm}(\bar v) = \frac{f(g\bar v)}{f(\bar v)}\,\frac{dg^{-1}r}{dr}(\bar v)$$
The invariance of ν and the integrability of the function log(f) stated above imply immediately that:
$$\int \log\frac{f(g\bar v)}{f(\bar v)}\, d\nu(\bar v)\, d\mu(g) = 0$$
Then, taking into account formula (ii) in Theorem 1.4.17, one obtains:
$$\gamma = \int \log\frac{\|gv\|}{\|v\|}\, d\mu(g)\, d\nu(\bar v) = -\frac{1}{2}\int \log\frac{dg^{-1}r}{dr}(\bar v)\, d\mu(g)\, d\nu(\bar v) = \int \log|t|\, d\tilde\nu(t)$$
One obtains the last relation by computing the Jacobian of the mapping $t \mapsto x - 1/t$, which is equal to $t^{-2}$. This term is independent of the value of the real number x appearing in the Schrödinger matrix, hence the double integral reduces to a single integral.
This computation will be extended to the strip in Proposition 1.6.6, and a very useful formula due to Pastur relating the Lyapunov exponent and the Fourier transform of the distribution of µ can be found in [90].
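As an illustration of formula 3 (with a hypothetical choice of potential distribution, not tied to any particular model of the text), one can compare the direct estimate of γ through the norm growth with the time average of log|tₙ| along the Riccati recursion tₙ₊₁ = xₙ − 1/tₙ induced by Ψ and the matrices H(xₙ).

```python
import numpy as np

rng = np.random.default_rng(5)
n = 500_000
xs = rng.uniform(0.0, 2.0, size=n)     # illustrative i.i.d. potential values

# direct estimate: gamma = lim (1/n) log ||U(n) v||
v, log_norm = np.array([1.0, 0.0]), 0.0
for x in xs:
    v = np.array([x * v[0] - v[1], v[0]])   # apply H(x) to v
    s = np.linalg.norm(v)
    log_norm += np.log(s)
    v /= s

# Riccati estimate: gamma = int log|t| d(nu-tilde)(t), with t_{n+1} = x_n - 1/t_n
t, log_sum = 1.0, 0.0
for x in xs:
    t = x - 1.0 / t
    log_sum += np.log(abs(t))

print("norm estimate    ~", log_norm / n)   # the two values should agree to a few decimals
print("Riccati estimate ~", log_sum / n)
```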
For ℓ > 1 the situation is much more complicated. If one wants to use the criterion
given in Proposition 1.4.25 one sees immediately that the problem is not trivial since the matrices H(x) are highly degenerate. Working in the group SL(d, R) gives rise to
tremendous computations of Jacobians (even in the case ℓ = 2). This is why we will
use some properties of Lie Algebras.
Definition 1.4.30 The Lie Algebra $\mathcal{G}$ of the group $SP(\ell,\mathbb{R})$ is the set of d×d matrices g of the form $g = \begin{pmatrix} M_1 & M_2 \\ M_3 & -M_1' \end{pmatrix}$ where $M_2$ and $M_3$ are symmetric matrices of order ℓ.
We will use the following notations:
• $E_{ij}$ is the ℓ×ℓ matrix which has a 1 in the ith row and jth column and 0 elsewhere.
• $X_{ij} = \dfrac{1}{2}\begin{pmatrix} 0 & E_{ij}+E_{ij}' \\ 0 & 0 \end{pmatrix}$, $\quad Y_{ij} = X_{ij}'$, $\quad Z_{ij} = \begin{pmatrix} E_{ij} & 0 \\ 0 & -E_{ij}' \end{pmatrix}$
The next identities are easily obtained.
Lemma 1.4.31 One has the relations:
$$[Z_{ij}, X_{kr}] = \delta_{jk} X_{ir} + \delta_{jr} X_{ik}$$
$$[Y_{kr}, Z_{ij}] = \delta_{ik} Y_{rj} + \delta_{ir} Y_{kj}$$
$$[X_{ij}, Y_{kr}] = \frac{1}{4}\,(\delta_{jk} Z_{ir} + \delta_{jr} Z_{ik} + \delta_{ki} Z_{jr} + \delta_{ri} Z_{jk})$$
It follows that $\mathcal{G}$ is generated by the matrices $X_{ij}$ and $Y_{ij}$ for $|i-j| \le 1$.
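These identities are easy to confirm by direct computation; the following sketch builds X_ij, Y_ij, Z_ij for a small ℓ and tests the first relation over all index combinations (the other two can be checked in the same way).

```python
import numpy as np
from itertools import product

l = 3

def E(i, j):
    m = np.zeros((l, l))
    m[i, j] = 1.0
    return m

def X(i, j):
    return 0.5 * np.block([[np.zeros((l, l)), E(i, j) + E(i, j).T],
                           [np.zeros((l, l)), np.zeros((l, l))]])
def Y(i, j):
    return X(i, j).T
def Z(i, j):
    return np.block([[E(i, j), np.zeros((l, l))],
                     [np.zeros((l, l)), -E(i, j).T]])

def bracket(a, b):
    return a @ b - b @ a

delta = lambda a, b: 1.0 if a == b else 0.0
ok = all(np.allclose(bracket(Z(i, j), X(k, r)),
                     delta(j, k) * X(i, r) + delta(j, r) * X(i, k))
         for i, j, k, r in product(range(l), repeat=4))
print(ok)   # True
```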
Lemma 1.4.32 Let O be an open subset of Rℓ . Then the subgroup of SL(d, R) generated by the set {H(x) ; x ∈ O} is equal to SP (ℓ, R).
Proof:
Looking at products of the form $H(x)H^{-1}(y)$ and $H^{-1}(x)H(y)$ it is readily seen that the Lie algebra of the group contains all the matrices $X_{ii}$ and $Y_{ii}$, and this is enough in the case ℓ = 2. In the case ℓ > 2 we write:
$$H(x)(\exp X_{ii})H^{-1}(x) = \exp\begin{pmatrix} -Q(x)E_{ii} & Q(x)E_{ii}Q(x) \\ -E_{ii} & E_{ii}Q(x)\end{pmatrix} = \exp A_i(x)$$
Applying this formula to two points x and y we obtain:
$$A_i(x) - A_i(y) = (y_i - x_i)Z_{ii} + X_{i,i+1} + X_{i,i-1} + (x_i^2 - y_i^2)X_{ii}$$
where $X_{ij}$ is null if the index j is not in $\{1,\ldots,\ell\}$. A direct application of Lemma 1.4.31 yields that the Lie Algebra of the generated group is equal to $\mathcal{G}$.
C. Glaffig [34] has proved, using the same technique, that this result holds if we merely assume that one coordinate section of O contains an open set in R.
Lemma 1.4.33 Let O be an open subset in $\mathbb{R}^\ell$ and let $X_1, \ldots, X_r$ be non-zero vectors of $\mathcal{G}$. Then there exists a finite subset $\{x_1, x_2, \ldots, x_p\}$ of O such that for $g_k = H(x_k)$ the rank of the system:
$$Ad(g_1^{-1})X_k,\ Ad(g_1^{-1}g_2^{-1})X_k,\ \ldots,\ Ad(g_1^{-1}g_2^{-1}\cdots g_p^{-1})X_k \qquad k = 1,\ldots,r$$
is equal to the dimension of $\mathcal{G}$.
Proof:
For a finite word $\omega = \{g_1, \ldots, g_n\}$ we denote by $V_\omega$ the linear span of the above system of vectors in $\mathcal{G}$. We have $V_{\omega\omega'} = V_\omega + Ad(g_1^{-1}g_2^{-1}\cdots g_n^{-1})V_{\omega'}$ where $\omega\omega'$ is the concatenation of the two words ω and ω′. Let $V_{\omega_0}$ be such a subspace of maximal dimension, associated to a word $h_1 \ldots h_p$. Then $V_{\omega_0\omega} = V_{\omega_0}$ and $Ad(h_1^{-1}\cdots h_p^{-1})V_\omega \subset V_{\omega_0}$, thus $V_\omega \subset V_{\omega_0}$ for any ω. This yields that $Ad(g^{-1})V_{\omega_0} \subset V_{\omega_0}$ for $g \in H(O)$, and hence for any $g\in SP(\ell,\mathbb{R})$ from Lemma 1.4.32. The group $SP(\ell,\mathbb{R})$ is a simple Lie group, hence one can conclude that $V_{\omega_0} = \mathcal{G}$.
Proposition 1.4.34 Let O be an open subset of Rℓ . Then there exist p = (2ℓ + 1)
points a1 , a2 . . . ap in O such that the mapping Φ defined by :
Φ(x1 , x2 , . . . , xp ) = H(xp )H(xp−1 ) . . . H(x1 )
is a submersion from Rpℓ to SP (ℓ, R) near the point (a1 , a2 . . . ap ).
Proof:
In Lemma 1.4.33 we choose the elements $X_i \in \mathcal{G}$ for $i = 1,\ldots,\ell$ to be the matrices $X_{ii}$ defined above. Then we obtain the existence of p points $a_i$ with the rank property stated in this lemma. If for $t \in \mathbb{R}^\ell$ we define the matrix $T(t) = \sum_{i=1}^{\ell} t_i X_{ii}$ then we have the relation $H(x+t) = (\exp T(t))H(x)$ and hence:
$$\Phi(a_1+t_1, a_2+t_2, \ldots, a_p+t_p) = (\exp T(t_p))H(a_p)(\exp T(t_{p-1}))H(a_{p-1})\cdots(\exp T(t_1))H(a_1)$$
The result now follows from the computation of the differential of the function Φ at the point $(a_1, a_2, \ldots, a_p)$. The dimension of $\mathcal{G}$ is equal to $\ell\times(2\ell+1)$, thus p is certainly no less than $2\ell+1$. Using the notations of Lemma 1.4.33 and the linear independence of the vectors $X_{ii}$ one sees that the maximal number of generators of $V_{\omega_0}$ is $\ell\times(2\ell+1)$ and this completes the proof.
If τ is a probability measure on Rℓ let us denote by µ the image of τ by the mapping
H.
Proposition 1.4.35 If the support of τ contains an open set of Rℓ then Tµ contains
an open set of SP (ℓ, R).
Proof:
Direct application of Proposition 1.4.34.
Proposition 1.4.36 If the law τ is absolutely continuous with respect to the Lebesgue
measure of Rℓ then µ2ℓ+1 is absolutely continuous with respect to the Haar measure of
SP (ℓ, R).
Proof:
We apply Proposition 1.4.34 choosing O = Rℓ . The function Φ is an analytical mapping
from $(\mathbb{R}^\ell)^{2\ell+1}$ to SP(ℓ, R). Its Jacobian is also an analytic function which is not identically zero. It follows that the vanishing set of this Jacobian function is a countable
union of submanifolds of lower dimension in (Rℓ )2ℓ+1 . This completes the proof.
Notice that under the hypothesis of Proposition 1.4.36, Tµ contains an open set of
SP (ℓ, R). We end this section by the very important result obtained by Goldsheid and
Margulis in [40].
Theorem 1.4.37 Let Σ be a product set in Rℓ of the form A1 × . . . × Aℓ such that each
Ai contains at least two points. Then the Zariski closure of the group generated by the
set {H(x) ; x ∈ Σ} is equal to the whole of SP (ℓ, R).
1.5 Markovian Multiplicative Systems
Sequences of independent random matrices have no counterpart in the continuous case
and moreover in various models products of random matrices are actually governed by
a Markov chain or a Markov process. Hence we are led to the study of Markovian
multiplicative processes both for continuous and discrete time. We will not study these
models in full generality but we will try to focus on some of the typical examples arising
in the theory of random linear difference or differential operators.
Let M be a compact metric space and {X(t) ; t ∈ T} be an M-valued Markov process.
As usual we suppose that X(t) is the tth coordinate on the product space Ω = M T and
we denote by {θt ; t ∈ T } the semi-group of shift operators such that X(t+h) = X(t)◦θh
and by $\mathcal{F}_t$ the sigma-algebra generated by the random variables {X(s) ; s ≤ t}.
We assume some strong regularity properties for the process X(t) namely
• The transition semigroup $Q_t$ is a Feller semi-group, i.e. $Q_t$ maps continuous functions into continuous functions. This implies that in the continuous time case
the process X(t) has a version with right-continuous sample paths and left-hand
limits.
• The kernel $Q_1$ has a unique invariant probability measure π whose support is
equal to the whole space M . Furthermore Q1 (x, dy) = q(x, y)dπ(y) where q is a
π ⊗ π positive function.
In the continuous time case a typical example of such a process X(t) is the following.
Let M be a C ∞ compact Riemannian manifold and A1 (x) . . . Am (x) be C ∞ vector fields
on M , such that at each point x ∈ M the Lie algebra generated by these vector fields is
equal to the tangent space. If B 1 (t) . . . B m (t) are real independent Brownian motions
then the Stratonovitch stochastic differential equation:
$$dX(t) = \sum_{k=1}^{m} A_k(X(t)) \circ dB^k(t)$$
has a unique solution which satisfies the above requirements. Moreover, the invariant
measure π has a density with respect to the Riemannian volume element dx and this
density and the kernel qt (x, y) are smooth positive functions. (See [50] [66] for example).
For any x ∈ M there is a unique probability measure Px on Ω such that the process
X(t) is Markovian and Px {X(0) = x} = 1. In the sequel the expectation with respect
to $P_x$ will be denoted by $E_x$. The system $(\Omega, \mathcal{F}, \theta_t, P)$ is an ergodic dynamical system if we define the probability measure P by the formula $P = \int P_x\, d\pi(x)$. We then define
the process U (t) in the following way :
• In the discrete case we let F be a continuous function from M to GL(d, R) and
we set g(n) = F (X(n)). The sequence U (n) is then defined for n ≥ 1 as the left
product U (n) = g(n − 1) . . . g(0) and U (0) = I.
• In the continuous case we let F be a continuous function from M to the set of
d × d real matrices. The process U (t) is then defined as the (unique) solution of
the matrix valued differential equation dU (t) = F (X(t))U (t)dt with the initial
condition U (0) = I.
In the continuous case the matrix U (t) is actually in GL(d, R) since the process V (t)
solution of the equation dV (t) = −V (t)F (X(t))dt with initial condition V (0) = I
verifies d(U (t)V (t)) = 0 hence V (t) = U −1 (t). Moreover it is readily seen that U (t)
is a multiplicative process defined on the ergodic dynamical system (Ω, F, θt , P). In
the continuous case the integrability properties follow from Gronwall's lemma and
the separability follows from the Feller property of X(t). In general U (t) is not a
Markov process but we will see below that for each fixed b ∈ B = P (Rd ) the process
Z(t) = (X(t), U (t)b) is Markovian.
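A minimal discrete-time illustration of this construction, with an arbitrary two-state driving chain and an arbitrary choice of F (both hypothetical, not taken from the text): U(n) = F(X(n−1))···F(X(0)) and a crude estimate of the corresponding upper Lyapunov exponent.

```python
import numpy as np

rng = np.random.default_rng(6)

# illustrative two-state Markov chain on M = {0, 1}
P_transition = np.array([[0.7, 0.3],
                         [0.4, 0.6]])
F = {0: np.array([[0.5, -1.0], [1.0, 0.0]]),    # e.g. F(x) = H(h(x)) for two potential values
     1: np.array([[1.5, -1.0], [1.0, 0.0]])}

n = 200_000
state = 0
v, log_norm = np.array([1.0, 0.0]), 0.0
for _ in range(n):
    v = F[state] @ v                 # g(k) = F(X(k)) acts on the left
    s = np.linalg.norm(v)
    log_norm += np.log(s)
    v /= s
    state = rng.choice(2, p=P_transition[state])   # advance the driving chain X
print("gamma_1 estimate ~", log_norm / n)
```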
Lemma 1.5.1 Let f be a measurable bounded function on Ω × B and $b \in B$. For $t\in T$ define $f_t(\omega) = f(\theta_t(\omega), U(t,\omega)b)$; then
$$E_x\{f_t\,|\,\mathcal{F}_t\} = \int f(\omega', U(t)b)\, dP_{X(t)}(\omega')$$
Proof:
Let us first assume that f (ω, b) = φ(ω)ψ(b) then
Ex {ft |Ft } = Ex {(φ ◦ θt ) ψ(U (t)b)|Ft } = EX(t) {φ} ψ(U (t)b)
hence the formula is true for such a function f and by a monotone class argument for
any bounded measurable function f .
Proposition 1.5.2 The process Z(t) is Markovian with respect to the increasing sequence of sigma-algebras $\mathcal{F}_t$ and any probability measure $P_x$. Moreover the semi-group $\{R_t;\ t\in T\}$ defined by
$$R_t f(x,b) = E_x\{f(X(t), U(t)b)\}$$
is a Feller semi-group on M × B.
Proof:
Let f be a bounded measurable function on M × B; then one can write:
$$\begin{aligned}
E_x\{f(Z(t+h))\,|\,\mathcal{F}_t\} &= E_x\{f(X(h)\circ\theta_t,\ (U(h)\circ\theta_t)\,U(t)b)\,|\,\mathcal{F}_t\}\\
&= \int f\big(X(h,\omega'),\ U(h,\omega')\,U(t)b\big)\, dP_{X(t)}(\omega')\\
&= R_h f(X(t), U(t)b)
\end{aligned}$$
The semi-group property of Rt is an easy consequence of the above formula and the
Feller property follows from the joint continuity of U (t) with respect to initial conditions
X(0) and U (0).
We remark that $\theta_t$ is no longer the classical shift operator of the probabilists for the Markov process Z(t), and that $R_1$ always admits invariant probability measures since one can pick any limit point of the sequence of probability measures $\nu_n$ defined by $\nu_n(f) = \frac{1}{n}\sum_{k=0}^{n-1} R_k f(x,b)$.
Using the Markov process Z(t) one can recover most of the results obtained in the case
of independent products. Unfortunately the proofs are more involved. In particular
checking measurability properties related to the strong irreducibility defined below is
a tedious job and we will refer to the work of Bougerol [7] [8] for the details. One can
also remark that even in the continuous case all the limit properties can be stated only
using the Markov chain obtained by restricting the time parameter to integer values.
1.5.1 The Upper Lyapunov Exponent
Definition 1.5.3 The process U is said to be not strongly irreducible if there exist an integer r and measurable functions $V_1(x), \ldots, V_r(x)$ with values in the set of proper subspaces of $\mathbb{R}^d$ such that, if we set $W(x) = \bigcup_{i=1}^{r} V_i(x)$, then we have
$$U(1)W(X(0)) = W(X(1)) \qquad \text{P-a.e.}$$
The strong recurrence hypothesis which we made on the process X allows us to give a very simple criterion of strong irreducibility for U, involving only the properties of the function F. We say that a subset S of the set of d × d matrices is irreducible if there does not exist a proper subspace V of $\mathbb{R}^d$ such that $sV \subset V$ for all $s \in S$. One checks immediately that S is irreducible iff the Lie Algebra generated by S is irreducible.
Proposition 1.5.4 Let S = {F (x) ; x ∈ M } and assume that :
• S is strongly irreducible as a subset of GL(d, R) in the discrete time case.
• S is irreducible as a subset of the set of d × d matrices in the continuous time
case.
Then the process U is strongly irreducible.
Proof:
Let us first consider the discrete time case and let us assume that the process U is not
strongly irreducible. Then there exists a measurable family W (x) of finite unions of
proper subspaces of $\mathbb{R}^d$ such that U(1)W(X(0)) = W(X(1)) P-almost surely. It follows that
$$\pi\otimes\pi\{(x,y)\ |\ W(x) = F(y)^{-1}W(y)\} = 1.$$
This implies that the family W (x) is π almost surely constant and equal to some W .
The same relation now yields F (y)W = W for π almost all y and this contradicts the
strong irreducibility of S.
In the continuous case let H be the connected Lie sub-group of GL(d, R) whose Lie
algebra is generated by S. H is irreducible and being connected it is also strongly
irreducible and the conclusion follows from the above discussion.
Proposition 1.5.5 Let us denote by ν an invariant probability measure for the Markov chain Z(n) and by $\nu = \int \delta_x\otimes\nu_x\, d\pi(x)$ a disintegration of ν. If U is strongly irreducible then the probability measure $\nu_x$ is proper for π-almost all x.
Proof:
The proof of this result is very technical and we only give the essential steps (see
Bougerol [8] for details). One first proves that there exists a continuous (and hence
unique) version of the mapping x ֒→ νx and for x ∈ M one defines :
• d(x) = the minimum dimension of a subspace V with νx (V ) > 0
• p(x) = the maximum value of νx (V ) for a d(x)-dimensional subspace V
• W (x) = the finite union of the r(x) subspaces V of dimension d(x) such that
νx (V ) = p(x)
The first step is to establish the measurability of the functions d(x), r(x), p(x) and to
prove that r(x) is a constant function equal to some integer r. This allows us to consider
r measurable functions Vxi and the invariance equation implies that for π almost all x
one has νx (Vxi ) = Ex {U −1 (1)Vxi }. It follows that U (1)W (X(0)) = W (X(1)) P-a.e. and
this contradicts the strong irreducibility of the process U .
Let Ω̃ be the product space Ω̃ = Ω × B and let θ̃ be the shift operator θ̃t (ω, b) =
(θt (ω), U (t, ω)b) then one has :
Lemma 1.5.6 Let ν be an invariant probability measure for the Markov chain Z(n). Then the probability measure P̃ defined on Ω̃ by the formula
$$\tilde P = \int (P_x\otimes\delta_b)\, d\nu(x,b) = \int (P_x\otimes\nu_x)\, d\pi(x)$$
is θ̃-invariant and its projection on Ω is equal to P.
Proof:
Let φ be a bounded measurable function on Ω̃ and let us set $\Phi(x,b) = \int \phi(\omega,b)\, dP_x(\omega)$. With the notations of Lemma 1.5.1 one has:
$$\begin{aligned}
\int \phi\circ\tilde\theta\, d\tilde P &= \int \phi(\theta(\omega), U(1,\omega)b)\, dP_x(\omega)\, d\nu(x,b)\\
&= \int E_x\{E_x\{\phi(\theta(\omega), U(1,\omega)b)\,|\,\mathcal{F}_1\}\}\, d\nu(x,b)\\
&= \int \Phi(X(1), U(1)b)\, dP_x\, d\nu(x,b)\\
&= \int \phi\, d\tilde P.
\end{aligned}$$
Proposition 1.5.7 Suppose that the process U is strongly irreducible and that ν is an invariant probability measure for the Markov chain Z(n). Then we have the formula:
$$\gamma_1 = \int E_x\Big\{\log\frac{\|U(1)v\|}{\|v\|}\Big\}\, d\nu(x,\bar v)$$
Proof:
The result is a direct consequence of Lemma 1.5.6 , Proposition 1.5.5 and part (ii) of
Proposition 1.2.9.
Proposition 1.5.8 Let the process U be strongly irreducible. Then for any non-zero vector $v\in\mathbb{R}^d$ we have:
$$\lim_{n\to\infty} \frac{1}{n}\log\frac{\|U(n)v\|}{\|v\|} = \gamma_1 \qquad \text{P-a.e.}$$
Proof:
The argument is the same as in Proposition 1.4.7 if we use the probability measures $\nu_n$ defined on M × B by $\nu_n(f) = \frac{1}{n}\sum_{k=0}^{n-1} E\{f(X(k), U(k)\bar v)\}$. The Markov property of the process Z implies that
$$E_x\Big\{\log\frac{\|U(k+1)v\|}{\|U(k)v\|}\ \Big|\ \mathcal{F}_k\Big\} = \int \log\frac{\|U(1,\omega')U(k)v\|}{\|U(k)v\|}\, dP_{X(k)}(\omega')$$
and this yields the desired result.
Using the Markov property stated above (for continuous time) one sees that
$$\int E_x\Big\{\log\frac{\|U(t)v\|}{\|v\|}\Big\}\, d\nu(x,\bar v) = t\,\gamma_1$$
Actually this property is fairly general (see Proposition 1.2.9).
1.5.2 The Lyapunov spectrum
In order to prove that the Lyapunov exponents γ1 and γ2 are different we need to
introduce the notion of contractivity for a Markovian multiplicative process.
Definition 1.5.9 The process U is said to be contractive if :
• The subset {F (x) ; x ∈ M } generates a contractive semi-group in GL(d, R) in
the discrete time case.
• The set {exp(tF (x)) ; x ∈ M , t > 0} generates a contractive semi-group in
GL(d, R) in the continuous time case.
One can remark that strong irreducibility and contractivity are easier to handle in the
continuous time case since it is possible in this situation to work in the Lie algebra of
SP (ℓ, R).
Assuming strong irreducibility and contractivity it is possible to extend the results
we discussed for independent products. The proof follows the lines of the paper of
Guivarc’h (See [42]). A complete proof can be found in Bougerol [8]. The following
Theorem summarizes the essential results :
Theorem 1.5.10 Suppose that the process U is strongly irreducible and contractive; then one has:
(i) The Markov chain Z(n) admits a unique invariant probability measure ν on $M\times P(\mathbb{R}^d)$.
(ii) The sequence $\dfrac{1}{n}\, E_x\Big\{\log\dfrac{\|U(n)v\|}{\|v\|}\Big\}$ converges to $\gamma_1$ uniformly with respect to the variables $(x,\bar v) \in M\times P(\mathbb{R}^d)$.
(iii) The Lyapunov exponents $\gamma_1$ and $\gamma_2$ are distinct.
Proof:
We only give a sketch of the proof, which follows closely the argument used in the theory of independent products (but with more tears....). One first proves that all the limit points of the sequence $U(n)\|U(n)\|^{-1}$ are rank one matrices. Then, using the fact that the sequence $U'(n)\nu_{X(n)}$ is a martingale (where $\nu_x$ is given by the disintegration in Proposition 1.5.5), one obtains the uniqueness of the invariant measure, and (ii) is a direct consequence of this. To obtain $\gamma_1 > \gamma_2$ one uses again Proposition 1.3.4, but with a more complicated dynamical system than in the independent case.
Definition 1.5.11 Schrödinger matrices.
• In the discrete case we say that F(x) is a Schrödinger matrix if $F = H\circ h$, where H is the function defined in 1.4.27 and h is a continuous function from M to $\mathbb{R}^\ell$.
• In the continuous case we say that F(x) is a Schrödinger matrix if $F = \tilde H\circ h$, where $\tilde H$ is the mapping from $\mathbb{R}^\ell$ to the Lie algebra of $SP(\ell,\mathbb{R})$ defined by
$$\tilde H(x) = \begin{pmatrix} 0 & I_\ell \\ Q(x) & 0 \end{pmatrix}$$
and Q(x) is the matrix defined in 1.4.27.
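In the continuous case one can check directly that H̃(x) belongs to the symplectic Lie algebra, i.e. that H̃(x)′J + JH̃(x) = 0; a short sketch (illustrative only):

```python
import numpy as np

def schrodinger_field(x):
    """H-tilde(x) = [[0, I], [Q(x), 0]] with the tridiagonal Q(x) of Definition 1.4.27."""
    x = np.asarray(x, dtype=float)
    l = x.size
    Q = np.diag(x) - np.diag(np.ones(l - 1), 1) - np.diag(np.ones(l - 1), -1)
    return np.block([[np.zeros((l, l)), np.eye(l)], [Q, np.zeros((l, l))]])

l = 3
J = np.block([[np.zeros((l, l)), np.eye(l)], [-np.eye(l), np.zeros((l, l))]])
A = schrodinger_field([0.2, -0.8, 1.1])           # arbitrary sample point
print(np.allclose(A.T @ J + J @ A, 0.0))          # True: A lies in the Lie algebra of SP(l,R)
```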
Proposition 1.5.12 Let A be a subset of Rℓ of the form A1 × A2 × . . . × Aℓ such that
each Ai contains more than one point. Then the Lie algebra generated by H̃(A) is the
whole Lie algebra of SP (ℓ, R).
Proof:
We remark that the Lie bracket of two matrices H̃(x) and H̃(y) is a diagonal matrix.
Hence, using the notations introduced in the preceding section, one obtains all the matrices
Zii and by difference all the matrices Yii . We now remark that :
[H̃(x) , Zii ] = 2(Xii + xi Yii − Yi,i−1 − Yi,i+1 )
where Yi,j is zero if the index j is not in (1 . . . ℓ). A direct application of Lemma 1.4.31
yields the result.
Proposition 1.5.13 In both the continuous and discrete case let us assume that the
range of h contains a subset A of Rℓ of the form A1 × A2 × . . . × Aℓ such that each Ai
contains more than one point. Then the process Λp U is contractive and Lag(p)-strongly
irreducible for 0 ≤ p ≤ ℓ, from which it follows that all the Lyapunov exponents are
distinct.
Proof:
The Zariski closure of the sub-group generated by the set {exp(tF (x)) ; x ∈ M , t > 0}
is a group. In the continuous case Proposition 1.5.12 and in the discrete case the Theorem of Goldsheid and Margulis 1.4.37 imply that this group is equal to SP(ℓ, R)
and Proposition 1.4.26 yields the result.
This proposition contains an older result obtained by Guivarc’h:
Proposition 1.5.14 In the discrete case if the range of h has a non-empty interior in
Rℓ then all the Lyapunov exponents are distinct.
Proof:
Proposition 1.4.34 implies that the semi-group generated by the range of F contains
an open set in SL(d, R), thus the result follows from Proposition 1.4.22 and Proposition 1.5.4 (i).
The following proposition summarizes some results needed in the sequel:
Proposition 1.5.15 In the one dimensional case, let us assume that the function h
takes at least two distinct values. Then:
1. The Markov chain Z(n) admits a unique invariant probability measure ν on M ×
P (R2 ).
2. The sequence $\frac{1}{n} E_x\{\log \frac{\|U(n)v\|}{\|v\|}\}$ converges to $\gamma_1$ uniformly with respect to the variables $(x, \bar v) \in M \times P(R^2)$.
3. The Lyapunov exponent γ1 is strictly positive.
Proof:
In the discrete time case we can apply Proposition 1.4.28, and in both the continuous and
discrete time cases the result follows from Proposition 1.5.13. It is also possible to write
down a direct proof since Proposition 1.5.12 implies that the Lie algebra generated by the
matrices H(x) is the whole Lie algebra of SL(2, R). Hence the strong irreducibility is
obvious and contractivity follows from Proposition 1.4.14.
1.5.3 Laplace Transform
One can extend to the Markovian case most of the results proved for the Fourier-Laplace operators associated to a product of independent matrices. Careful readers
have noticed that in this latter case one has often used the existence of a dual operator
associated to the measure µ̆. Time reversal is not guaranteed for Markov processes, hence
we need to assume some extra properties for the process X. We will not dwell on
the probabilistic construction of a dual process and we will immediately assume that
X(t) is already defined for any real or integer value of the parameter t. We have to
emphasize that the theory of stochastic processes, and especially duality theory for Markov
processes, invites us to interpret the parameter t as a time parameter. This is rather
unfortunate for the applications to one dimensional Schrödinger operators since in this
case the parameter of the stochastic process is actually the space variable. We will
come back to this slight conflict between two well established traditions several times
in the sequel. In any case, in the situation we have in mind, the process is naturally
defined on the whole line. In this subsection T denotes the set of real numbers or the
set of integers and T + is the subset of non-negative elements of T . We will assume that
the random variable X(t) is the t-th coordinate in the product space Ω = M T and that
the probability measure P on Ω is invariant and ergodic with respect to the action of
the group of shift operators {θt ; t ∈ T }. For s, t ∈ T with s ≤ t we denote by Fs,t
the sigma algebra generated by the random variables {X(h) ; s ≤ h ≤ t}.
We also assume that there exist two Feller semi-groups {Qt ; t ∈ T + } and {Q̆t ; t ∈ T + }
on M such that for each s′ ≤ s ≤ t ≤ t′ in T and each bounded measurable function f
on M we have :
\[ E\{ f(X(t')) \mid \mathcal F_{s,t} \} = Q_{t'-t} f(X(t)), \qquad E\{ f(X(s')) \mid \mathcal F_{s,t} \} = \breve Q_{s-s'} f(X(s)). \]
For any x ∈ M the probability P conditioned by the event {X(0) = x} is assumed to
have a regular version denoted by Px . It is easily seen that the common law π of the
random variables X(t) is invariant for the semi-groups Qt and Q̆t . We also assume that
the transition probabilities of these semi-groups have strictly positive densities with
respect to π.
These assumptions are not all independent but, as we already pointed out, we are not
aiming at the greatest generality.
In the continuous case the process U is defined for all t ∈ T and in the discrete time
case one defines U (n) for negative values of n by U (n) = F (X(n))−1 . . . F (X(−1))−1 .
Recall that Z(t) = {(X(t), U (t)b) ; t ∈ T + } and Z̆(t) = {(X(−t), U (−t)b) ; t ∈ T + } are
then Markov processes.
The stationarity of the process {X(t) ; t ∈ T } yields immediately :
Proposition 1.5.16 The law of the process {(X(t), U (t)) ; t ∈ T } is invariant by the
shift on the space Ω.
Note that one should not confuse this property with the classical notion of stationarity
since U(t) ◦ θ_s ≠ U(t + s).
Let us denote by σ a cocycle on the projective space B and by z a complex number. For
t ∈ T + one defines the Fourier-Laplace operators on the space C(M × B) of continuous
functions on M × B by :
\[ T^t_{\sigma,z} f(x,b) = E_x\{ f(X(t), U(t)b)\,\sigma^z(U(t), b) \} \]
\[ \breve T^t_{\sigma,z} f(x,b) = E_x\{ f(X(-t), U(-t)b)\,\sigma^z(U(-t), b) \} \]
Since M is compact, no integrability property of the cocycle σ is needed to obtain
bounded operators on C(M ×B). Moreover one can prove exactly as in Proposition 1.3.7
that these operators are holomorphic functions of z. The Markov property added to
the cocycle relation yields immediately the semi-group property with respect to the
parameter t.
Actually, in the continuous time case, we only need to deal with integer values of t,
thus we will use the notations $T_\sigma = T^1_{\sigma,1}$, $\breve T_\sigma = \breve T^1_{\sigma,1}$, and $T$, $\breve T$ for the Markov operators
associated to the trivial cocycle σ = 1. The superscript * will denote dual operators
on the Banach space of measures on M × B.
The next propositions are only stated for the operator Tσ but they obviously have a
counterpart for T̆σ .
Proposition 1.5.17 Let σ be a cocycle such that there exists an integer p with
\[ \sup_{(x,b)\in M\times B} E_x\{ \log\sigma(U(p), b) \} < 0 \]
Then there exists a positive real number α such that for 0 < s ≤ α there exist positive
constants Cs < +∞ and ρs < 1 such that :
\[ \sup_{(x,b)\in M\times B} E_x\{ \sigma^s(U(n), b) \} \le C_s\,\rho_s^n \qquad n = 1, 2, \ldots \]
Proof:
See Proposition 1.3.5.
A probability measure ν on M × B is said to be T-invariant whenever T*ν = ν.
Proposition 1.5.18 Let σ be a cocycle and for each real number s let us denote by
ρ(s) the logarithm of the spectral radius of Tσs acting on C(M × B). Then we have :
(i) The function s ↦ ρ(s) is convex and ρ(0) = 0,
(ii) $\rho(s) \ge s \int E_x\{\log\sigma(U(1), b)\}\, d\nu(x, b)$ for any T-invariant probability measure ν,
(iii) If the function ρ admits a derivative at the origin then its value is given by:
\[ \frac{d\rho(s)}{ds}\Big|_{s=0} = \int E_x\{\log\sigma(U(1), b)\}\, d\nu(x, b) \]
(hence this integral is independent of the choice of ν).
Proof:
See Proposition 1.3.6
If τ is a measure on M × B we denote by $<f, h>_\tau$ the continuous bilinear form
on C(M × B) defined by $\int f h\, d\tau$. We also denote by r a Radon-Nikodym cocycle
associated to a probability measure m on B.
Proposition 1.5.19 Let r be a m-Radon-Nikodym cocycle. Then, for any positive
integer k and any complex number λ we have :
(i) $< \breve T_\sigma f , h >_{\pi\otimes m} \;=\; < T_{r\sigma^{-1}} h , f >_{\pi\otimes m}$ for f, h ∈ C(M × B)
(ii) $[\ker(\breve T_\sigma - \lambda I)^k](\pi\otimes m) \subset \ker(T^*_{r\sigma^{-1}} - \lambda I)^k$
Proof:
Let f and h be two continuous functions on M × B. Then we have:
\[
\begin{aligned}
< T_{r\sigma^{-1}} h , f >_{\pi\otimes m}
&= \int T_{r\sigma^{-1}} h(x,b)\, f(x,b)\, d\pi(x)\, dm(b) \\
&= \int E_x\{ h(X(1), U(1)b)\, r(U(1), b)\, \sigma^{-1}(U(1), b) \}\, f(x,b)\, d\pi(x)\, dm(b) \\
&= \int E\{ h(X(1), b)\, \sigma(U^{-1}(1), b)\, f(X(0), U^{-1}(1)b) \}\, dm(b) \\
&= \int E\{ h(X(0), b)\, \sigma(U(-1), b)\, f(X(-1), b) \}\, dm(b) \\
&= \int \breve T_\sigma f(x,b)\, h(x,b)\, d\pi(x)\, dm(b)
\end{aligned}
\]
This proves conclusion (i) and (ii) is an immediate consequence.
Proposition 1.5.20 Let σ be a cocycle and r be a m-Radon-Nikodym cocycle. Assume
that Trσ−1 , T̆rσ−1 , Tσ and T̆σ are compact operators on C(M ×B). Then for any positive
integer k and any nonzero complex number λ we have :
(i) $[\ker(\breve T_\sigma - \lambda I)^k](\pi\otimes m) = \ker(T^*_{r\sigma^{-1}} - \lambda I)^k$
(ii) $\dim(\ker(\breve T_\sigma - \lambda I)^k) = \dim \ker(T_{r\sigma^{-1}} - \lambda I)^k$
Proof:
These formulas are immediate consequences of compactness and Proposition 1.5.19.
Proposition 1.5.21 Let r be a m-Radon-Nikodym cocycle. Assume that T , T̆ , Tr , T̆r
are compact operators on C(M × B), that T and T̆ have unique invariant probability
measures ν and ν̆ and are aperiodic. Then we have the decompositions :
(i) $T^n f = \nu(f) + Q^n f$
(ii) $T_r^n f = \pi\otimes m(f)\,\psi + Q_r^n f$ with ν̆ = ψ(π ⊗ m),
the operators Q and Q_r having spectral radius strictly less than one. It follows that the
spectral radius of T_r is equal to one and that the sequence $\|T_r^n\|$ is bounded.
Proof:
See Proposition 1.3.12.
In the independent case one proved the compactness of the Laplace operators as a
consequence of the absolute continuity of µ with respect to a Haar measure on GL(d, R).
In the Markovian case we will need some smoothness and non-singularity assumptions
on the function F . It seems difficult to state a compactness criterion in the general
Markovian case but in the particular example of Schrödinger matrices associated to
a diffusion on a Riemannian manifold, which we discussed earlier in this section, the
situation is somewhat simpler. We will see in the sequel that if the function
h from M to Rℓ is a Morse function, then Tσ and T̆σ are aperiodic compact operators
on C(M × B) admitting unique invariant measures.
1.6 Boundaries of the Symplectic Group
In this section G will denote the symplectic group SP (ℓ, R) which acts on Rd for
d = 2ℓ. The compact subgroup of orthogonal matrices in G is denoted by K. For a
decomposable p-vector u = u1 ∧ u2 ∧ . . . ∧ up we also denote by u the d × p matrix with
column vectors u1 , u2 , . . . , up and we will use the notations and definitions of section
2.
We first recall without proof some useful results from linear algebra.
Lemma 1.6.1 One has the following properties :
(i) Let u = {u1 , u2 , . . . , up } and v = {v1 , v2 , . . . , vp } be two linearly independent systems. They generate the same p-dimensional subspace of R^d iff u1 ∧ u2 ∧ . . . ∧ up
and v1 ∧ v2 ∧ . . . ∧ vp are collinear in Λ^p(R^d). It is also equivalent to the existence
of a p × p invertible matrix w such that u = vw.
(ii) Let u be a decomposable p-vector with matrix $u = \binom{A}{B}$ where A and B are ℓ × p
matrices. Then u is Lagrangian iff the matrix A′B is symmetric.
(iii) Let u = u1 ∧ u2 ∧ . . . ∧ up be a decomposable p-vector and let v = v1 ∧ . . . ∧ vq be a
decomposable q-vector with p < q. Then the subspace generated by u is contained
in the subspace generated by v iff there exists a decomposable (q − p)-vector w
such that v = u ∧ w.
This lemma makes possible the identification of the image v̄ in the projective space
P (Λp (Rd )) of a non-zero decomposable p-vector v with a p dimensional subspace of
Rd . For an integer 1 ≤ p ≤ ℓ we will denote by Lp the set of p dimensional subspaces
of Rd corresponding to Lagrangian decomposable p-vectors (See 1.4.23). For a subset
I = {i1 < i2 < . . . < ip} of {1, . . . , ℓ} we define the flag manifold:
\[ L_I = \{ (b_{i_1}, \ldots, b_{i_p}) \mid b_{i_k} \in L_{i_k},\ b_{i_k} \subset b_{i_{k+1}} \} \]
It is known that LI is a homogeneous compact space under the action of G or K.
Let {e1 . . . ed } be the canonical basis of Rd and define eI = ei1 ∧ . . . ∧ eip . Then we
have LI = Ge¯I = K e¯I . The flag manifold L{1...ℓ} is called the maximal boundary or
Furstenberg boundary and one can remark that it is isomorphic to G/H where H is
the “parabolic” sub-group of G which fixes the element ē{1...ℓ} . For any I the space LI
admits a unique probability measure which is invariant by all the rotations of K. It is
called the Cauchy measure and will be denoted by mI . This measure is quasi-invariant
under the action of G (that is, gm_I has the same null sets as m_I) and the associated
Radon-Nikodym cocycle, say rI , is called the Poisson kernel of LI . This Cauchy measure
is well known for L_1 which is the projective space P(R^d). Up to a submanifold of lower
dimension this manifold is isomorphic to R^{d−1} and the Cauchy measure has a density
with respect to the Lebesgue measure proportional to $(1 + \|x\|^2)^{-d/2}$. We will see
below that L_ℓ, once a submanifold of lower dimension is removed, is isomorphic to the
set M(ℓ) of symmetric matrices of order ℓ and the Cauchy measure admits a density
proportional to $\det(I_\ell + M^2)^{-(\ell+1)/2}$ with respect to the Lebesgue measure on M(ℓ).
Proposition 1.6.2 Let µ be a probability measure on G such that for p = 1 . . . ℓ the
measure Λ^p µ is contractive and Lag(p)-strongly irreducible. Then
(i) There exists a unique invariant probability measure ν_I on each L_I and
(ii)
\[ \gamma_1 + \ldots + \gamma_p = \int \log\frac{\|gv\|}{\|v\|}\, d\mu(g)\, d\nu_p(\bar v) \]
Proof:
For each p the set of Lagrangian p-vectors is stable under the action of G; this implies
that the unique invariant measure νp is supported by this set and hence that the random
variable Zp defined in Proposition 1.4.15 has its range in this set. Applying the classical
martingale argument we obtain that if ν is an invariant measure on LI then ν has to be
the law of (Z1 , . . . , Zp ). Conclusion (ii) follows from the fact that the upper Lyapunov
exponent on Λp (Rd ) is the sum γ1 + . . . + γp .
A cocycle ρ on L_I is said to be K-invariant if ρ(g, kb) = ρ(g, b) for all b ∈ L_I and k ∈ K,
and we denote by ρ_p the cocycle defined on L_p by $\rho_p(g, \bar v) = \|gv\|\,\|v\|^{-1}$ where the norm is
taken in Λ^p(R^d).
To carry out the computation of an explicit formula for the Poisson kernel we will need
the following theorem proved by Furstenberg and Tzkoni in [33].
Proposition 1.6.3 Any K-invariant cocycle on L_I is of the form:
\[ \rho(g, b_I) = \prod_{k\in I} \rho_k(g, b_k)^{\lambda_k} \]
where λ_k, k ∈ I, are real numbers.
It follows from this proposition that the computation of a K-invariant cocycle reduces
to the computation of a finite number of exponents λ_k. Let θ be a σ-finite measure
on L_I which is equivalent to m_I and define $f(b) = \frac{dm_I}{d\theta}(b)$. Then we have
\[ r_I(g, b) = \frac{dg^{-1}\theta}{d\theta}(b)\, \frac{f(gb)}{f(b)} \]
Choosing for θ the Lebesgue measure in a system of coordinates near a point b with b = gb,
one has only to compute a Jacobian to obtain r_I. Hence we are led to exhibit a
parametrization of the manifold L_I. Actually we will only perform this computation for
the boundaries L_p and L_{ℓ−1,ℓ} which give the simplest results. See [73] for further details.
Proposition 1.6.4 For 1 ≤ p ≤ ℓ the Poisson kernel of L_p is $r_p = \rho_p^{-(d-p+1)}$.
Proof:
To each Lagrangian p-vector u = u1 ∧ u2 ∧ . . . ∧ up we associate the d × p matrix of
coordinates of u1 , u2 , . . . , up whose transpose has the form (a′ , c′ , b′ , d′ ) where a and b
are (ℓ − p) × p matrices, and c and d are square matrices of order p. The mapping Πp
defined by :
Πp (ū) = (ad−1 , bd−1 , cd−1 − d′−1 a′ bd−1 )
is one-to-one from the subset of Lp with d invertible onto the product space S of
(ℓ − p) × p matrices times the space of (ℓ − p) × p matrices times the space of p × p
symmetric matrices (hence the dimension of L_p is 2p(ℓ − p) + p(p + 1)/2). Let ω_0
be the point in L_p with coordinates (0, 0, I). Then the matrix d is invertible near ω_0,
thus the parametrization Π_p is well defined in a neighborhood of this point. Let g_0 be the matrix
\[ g_0 = \begin{pmatrix} \Delta & 0 \\ 0 & \Delta^{-1} \end{pmatrix} \]
where ∆ is a diagonal matrix with diagonal entries (1, . . . , 1, α) with α > 0. It is easy to
check that g_0 ω_0 = ω_0 and that the mapping of L_p defined by ū ↦ g_0 ū near ω_0 can be
read in the chart Π_p as the linear mapping (r, s, N) ↦ (rδ, sδ, δNδ) of S, where δ is a
p × p diagonal matrix of the same form as ∆. The Jacobian of this mapping is equal to
$\alpha^{d-p+1}$ and this yields the result since $\rho_p(g_0, \omega_0) = \alpha^{-1}$.
In the case p = ℓ, the Poisson kernel is well known since Lℓ is the boundary of the
symmetric space G/K which is isomorphic to the Siegel upper half-plane. We also
remark that the parametrization Π_ℓ maps the elements of L_ℓ of the form $\binom{c}{d}$ with d
invertible onto the set of symmetric matrices of order ℓ by $\binom{c}{d} \mapsto cd^{-1}$.
Proposition 1.6.5 The Poisson kernel of L_{ℓ−1,ℓ} is $r_{\ell-1,\ell} = \rho_{\ell-1}^{-\ell}\,\rho_\ell^{-2}$.
Proof:
We associate to each pair of Lagrangian vectors $x_1 = u_1\wedge\ldots\wedge u_{\ell-1}$ and $x_2 = x_1\wedge u_\ell$
the d × ℓ matrix of coordinates of (u_1, . . . , u_ℓ) of the form $\binom{a}{b}$ where a and b are square
matrices of order ℓ. When b is invertible we denote by t the upper left entry of $ab^{-1}$. The
mapping Π̃ defined by $\tilde\Pi(\bar x, \bar y) = (\Pi_{\ell-1}(\bar x), t)$ from the subset of L_{ℓ−1,ℓ} with b invertible is
one to one and onto the manifold $R^{\ell-1}\times R^{\ell-1}\times R^{\ell(\ell-1)/2}\times R$. Hence the dimension of
L_{ℓ−1,ℓ} is equal to 2(ℓ − 1) + ℓ(ℓ − 1)/2 + 1. If we choose (ω_1, ω_2) ∈ L_{ℓ−1,ℓ} with coordinates
(0, 0, I, 1) then the parametrization Π̃ is well defined near this point. Let now g_0 be a
diagonal matrix with diagonal entries (β, 1, . . . , 1, α). Then (ω_1, ω_2) is preserved by g_0
and, using the chart Π̃, one can see that the mapping $(\bar x_1, \bar x_2) \mapsto g_0(\bar x_1, \bar x_2)$ is locally the
linear mapping $(r, s, N, t) \mapsto (\beta r\delta, \beta^{-1}s\delta, \delta N\delta, \beta^2 t)$ where δ is the diagonal matrix of
order ℓ − 1 with entries (1, . . . , 1, α). The Jacobian of this mapping is equal to $\alpha^{\ell+2}\beta^2$
and this yields the result since $\rho_\ell(g_0, \omega_2) = \alpha^{-1}\beta^{-1}$ and $\rho_{\ell-1}(g_0, \omega_1) = \alpha^{-1}$.
In the case of Schrödinger matrices one can apply these relations to compute the Lyapunov exponents. We now assume that the measure µ satisfies the hypothesis of Proposition 1.6.2 and moreover that there exists a positive real τ such that $\int \|g\|^\tau d\mu(g)$ is
finite. Using Proposition 1.6.4, the boundary L_ℓ, up to a projective submanifold, is isomorphic to the set M(ℓ) of symmetric matrices of order ℓ. Hence the proper invariant
measure ν_ℓ can be viewed as a measure ν̃_ℓ on M(ℓ).
Proposition 1.6.6 The sum of the nonnegative Lyapunov exponents is given by :
\[ \gamma_1 + \ldots + \gamma_\ell = \int_{\mathcal M(\ell)} \log|\det M|\, d\tilde\nu_\ell(M) \]
Proof:
We know from Proposition 1.6.2 that the sum of nonnegative Lyapunov exponents is
given by the integral of the cocycle ρℓ with respect to µ⊗νℓ . Because of Proposition 1.6.4
we can replace log ρℓ by −1/(ℓ + 1) log rℓ . If f (b) is the density of the Cauchy measure
with respect to the Lebesgue measure θ associated to the parametrization Πℓ , then
Proposition 1.4.21 implies that this function is νℓ -integrable. The cocycle relation yields
that the sum of the nonnegative Lyapunov exponents is the integral of $-\frac{1}{\ell+1}\log\frac{dg^{-1}\theta}{d\theta}$.
For a Schrödinger matrix
\[ g = \begin{pmatrix} Q & -I_\ell \\ I_\ell & 0 \end{pmatrix} \]
the action on M(ℓ) can be written $M \mapsto Q - M^{-1}$ and the Jacobian of this mapping is
equal to $|\det M|^{-(\ell+1)}$. This completes the proof.
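As an illustration of the action M ↦ Q − M^{−1} appearing in this proof, the following sketch estimates γ1 + · · · + γℓ by an ergodic time average of log|det M| along the orbit of the induced Markov chain on M(ℓ). The i.i.d. uniform potentials on the diagonal of Q are an arbitrary example, and the convergence of the time average to the integral against ν̃ℓ is taken for granted under the hypotheses of Proposition 1.6.2.

    import numpy as np

    rng = np.random.default_rng(0)
    ell = 2
    N_burn, N = 1000, 200000

    def random_Q():
        # example distribution: i.i.d. potentials on the diagonal, -1 on the off-diagonals
        v = rng.uniform(-1.0, 1.0, size=ell)
        return np.diag(v) + np.diag(-np.ones(ell - 1), 1) + np.diag(-np.ones(ell - 1), -1)

    # Markov chain on M(ell): M_{n+1} = Q_{n+1} - M_n^{-1}
    M = np.eye(ell)
    total = 0.0
    for n in range(N_burn + N):
        M = random_Q() - np.linalg.inv(M)
        if n >= N_burn:
            total += np.log(abs(np.linalg.det(M)))

    print("estimate of gamma_1 + ... + gamma_ell:", total / N)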
Let us also notice that using the parametrization Πℓ−1 the proper invariant measure
νℓ−1 can be viewed as a measure ν̃ℓ−1 on the manifold N (ℓ) = Rℓ−1 × Rℓ−1 × Rℓ(ℓ−1)/2 .
Proposition 1.6.7 With the notations of Proposition 1.6.5 the sum of the ℓ − 1 first
Lyapunov exponents is given by :
\[ \gamma_1 + \ldots + \gamma_{\ell-1} = \int_{\mathcal N(\ell)} \log|\det(N + rs')|\, d\tilde\nu_{\ell-1}(r, s, N) \]
Proof:
The Poisson kernel r_{ℓ−1} is not very tractable. In the present situation it is more
convenient to introduce the cocycle $\sigma = r_{\ell-1,\ell}\, r_{\ell-1}^{-1} = \rho_{\ell-1}^{2}\,\rho_\ell^{-2}$. Then, using again the
cocycle property of the Poisson kernels, one obtains $\gamma_\ell = -\frac12 \int \log g'(t)\, d\nu_{\ell-1,\ell}\otimes d\mu$
where g(t) is the action on the coordinate t of a matrix g ∈ SP(ℓ, R). For a point of
coordinates (r, s, N, t) we write
\[ M = ab^{-1} = \begin{pmatrix} t & w' \\ w & S \end{pmatrix}, \qquad g = \begin{pmatrix} Q & -I \\ I & 0 \end{pmatrix}, \qquad Q = \begin{pmatrix} x & e' \\ e & R \end{pmatrix} \]
Then one has $g(t) = x - \frac{\det S}{\det M}$ with w = r − ts and S = N + tss′. With some
elementary algebra one checks $g'(t) = \left[\frac{\det(N + rs')}{\det M}\right]^2$ and the formula follows from the
above considerations.
We remark that the classical double integral for a Lyapunov exponent reduces to a single
integral with respect to the invariant measure. If µ is associated to a product of Cauchy
laws on R it is possible to obtain an explicit formula for the sum of the nonnegative
exponents using Proposition 1.6.6 (See [73]). Unfortunately the computation using
Proposition 1.6.7 seems to be more complicated.
1.7 Notes and Comments
The theory of products of random matrices, initiated by Bellman, has been fully
developed by P.Bougerol, H. Furstenberg, Y. Guivarc’h, H. Kesten, E. Le Page and A.
Raugi. We have tried to find the shortest way to obtain the essential results needed in
the spectral theory of one and quasi-one dimensional Schrödinger operators. Most of
the results and proofs for the i.i.d. case can also be found in the book of P. Bougerol
and J. Lacroix [9].
The subadditive Ergodic Theorem is used in conjunction with the deterministic Oseledec’s theorem to prove the existence of Lyapunov exponents. Some other approaches
to the random Oseledec's theorem are available; in particular M. Raghunathan [91] gave
a self-contained proof of this fundamental theorem. We followed Ledrappier [77] to
show that this theorem yields immediately the almost sure growth of the norm of the
column vectors of the product matrix with an exponential speed given by the upper
Lyapunov exponent. The same method also yields a representation formula for this exponent involving some invariant measure on the projective space. An earlier approach
due to Furstenberg and Kesten [31] and Furstenberg [30], based on a refinement of the
classical ergodic theorem for Markov chains, gives the same results. These properties also
follow from the basic work of Guivarc’h and Raugi [44] which we will present below.
We have preferred to present the first method since it can be applied to the general case
of Markovian products with minor changes.
The theory of Fourier-Laplace operators associated to a Markov chain goes back to
Doeblin and Fortet and has been developed in the framework of random walks by the
Russian school; see for example Nagaev, Virtser [102] and Tutubalin [100], [101]. These
authors essentially assume some Doeblin condition ensuring an exponential speed of
convergence of the iterates of the Markov kernel to the invariant probability distribution. The spectral theory of the Laplace operators on the space of continuous functions,
assuming that the probability distribution has a density, is classical and more information can be found for example in Lacroix [73]. Brunel and Revuz [12] have studied the
same question in the case of a "spread-out" distribution and have established the close
connection with the notion of quasi-compact operators. Derriennic and Guivarc'h have
obtained in [27] a very powerful result, relating the spectral radius of the operator T1
acting on L2 (m) (m is the Cauchy measure on the projective space) to the amenability
of the group generated by the support of the probability distribution. The theory
of the Laplace transform on spaces of Hölder continuous functions is essentially borrowed from Le Page [79]. The fundamental result is the exponential convergence on
this space of the iterates of the Markov kernel to the invariant distribution without any
assumption of regularity on the distribution of the matrices.
The presentation of the theory of i.i.d. products in Section 3 is entirely due to Guivarc’h
and Raugi [44]. We have only removed the algebraic framework of their general theory of
random products with values in semi-simple Lie groups since we will only use subgroups of
SL(d, R) and essentially the symplectic group SP (ℓ, R). The simplicity of the Lyapunov
spectrum was proved earlier by Virtser [103], assuming the existence of a density for
the distribution of the matrices. The proof of the celebrated theorem of Furstenberg
[30] ensuring the positivity of the upper Lyapunov exponent (or equivalently the nontriviality of the spectrum) is only sketched since we prove a much stronger result in this
direction (but also with stronger assumptions...). We have not tried to study in detail
the properties of the invariant measure. In particular the proof of the integrability
of the function $\vartheta \mapsto |\sin(\vartheta - \vartheta_0)|^\alpha$ for some negative α with respect to the invariant
measure ν(dϑ) on the torus T is out of the scope of this book. This property is very
commonly used to obtain a simple representation formula for the Lyapunov exponent
in the Schrödinger case, and also to apply the ergodic theorem to the ν integrable
function log(|x|) (now considering the invariant measure on the projective line). The
interested reader may find this proof (and much more) in the work of Guivarc'h and
Raugi cited above. The criterion of strong irreducibility and contractivity using the
notion of Zariski’s closure of the group generated by the distribution of the matrices,
given by Goldsheid and Margulis in [39], has been considerably extended by Guivarc'h
and Raugi in [45]. In particular they give a complete description of the multiplicity
of the Lyapunov spectrum. The proof of the absolute continuity of some power of
convolution of the probability distribution on the symplectic group SP (ℓ, R) induced
by a Schrödinger matrix is borrowed from Lacroix [72]. Some computations with the
same flavor, but on a group equal to the product of a finite number of copies of SL(2, R),
have been performed by Le Page in [80]. The criterion 1.4.37 given by Goldsheid and
Margulis is very useful. It asserts that the Zariski closure of the set of Schrödinger
matrices in a strip, generated for example by independent Bernoulli distributions on the
diagonal of the ℓ × ℓ matrix Q, is equal to SP (ℓ, R).
Section 5, devoted to Markovian multiplicative systems, was strongly influenced
by the work of Bougerol [7], [8]. The positivity of the upper Lyapunov exponent in
this situation was already investigated by Royer [92] and a general method to prove
the simplicity of the spectrum has been given by Guivarc’h in [42]. A lot of interesting
extensions of the theory of the semi-groups generated by the range of a Markov process
have been studied by Arnold, Wihstutz, Crauel, Kliemann, Kunita, etc., in connection
with control theory.
The last section deals with the boundaries of the symplectic group which is a particular case of the general theory of boundaries of semi-simple Lie groups initiated by
Furstenberg. A lot of details can be found in Chapter IV of Bougerol & Lacroix [9].
Our construction of the Poisson kernels of the Lagrangian boundaries is borrowed from
Lacroix [74] and the key ideas can be found in the basic paper of Furstenberg and
Tzkoni [33].
It is worth mentioning that the material presented here is only a small part of the theory
of random walks on matrix groups. In particular we said nothing about the reducible
case studied by Hennion [47], Kifer [56], Furstenberg and Kifer [32] and of the case of
nonnegative matrices (see Furstenberg and Kesten [31]). The theory of the Fourier-Laplace
transform is a basic tool in the study of central limit type theorems, large deviations,
etc. The interested reader can find a lot of information in Le Page [79], Bougerol &
Lacroix [9] Chapter V, Bougerol [7], Guivarc’h and Raugi[43]. In this last paper the
theory of random products is studied under the fairly general assumptions of proximality (generalizing the notion of contractivity) and strong irreducibility. Moreover the
classical limit theorems of probability theory are obtained in the general framework
of irreducible Markov kernels satisfying some kind of Doeblin condition.
In some particular cases some explicit computations of the Lyapunov exponents have
been carried out, for example by Cohen and Newman [20] for matrices with normal
entries or Lacroix [75] for the Lloyd model on the strip.
Chapter 2
Spectral Theory of (Non-Random) Schrödinger Operators in a Strip
Contents:
1. General Definitions and Properties
(a) The discrete case
(b) The continuous case
(c) The Green’s Kernel
2. Approximations of the Spectral Measures
3. Nature of the Spectrum
(a) Hyperbolic behavior
(b) Localization criteria
This chapter contains the essential tools needed to investigate the spectral properties
of Schrödinger operators, especially the computation of approximations of the spectral
measure giving rise to the criteria for localization used in the random case.
Let ℓ be a positive integer and d = 2ℓ. As usual we denote by T the parameter set which
will be R or Z. We define the strip of width ℓ as the set {(i, t); i = 1 . . . ℓ, t ∈ T } that
is the union of ℓ copies of the one dimensional set T . Then the Schrödinger operator
on the strip can be viewed as ℓ coupled one dimensional operators. We also define the
“slice” of index t as the subset {(i, t); i = 1 . . . ℓ}.
Let V1 (t), · · · , Vℓ (t) be ℓ real valued ”potential” functions defined on the set T . Then
one defines the ℓ × ℓ potential matrix V(t) by:
\[ V_{ij}(t) = \begin{cases} V_i(t) & \text{if } i = j \\ -1 & \text{if } |i-j| = 1 \\ 0 & \text{if } |i-j| > 1 \end{cases} \]
In order to prove spectral properties of the Schrödinger operator in a strip we will often look at
its restriction to “boxes”. Then we will use a limiting procedure called “thermodynamic
limit”, letting these boxes grow to the whole space. Let a and b be two elements of
T . The box Λ is defined as the set Λ = {(i, t); i = 1 . . . ℓ, t ∈ [a, b]} in the continuous
case, and Λ = {(i, t); i = 1 . . . ℓ, t ∈ [a, b − 1]} in the discrete case.
2.1 General Definitions and Properties
2.1.1 The Discrete Case
The Schrödinger operator H on the strip of width ℓ acts on the vector sequences ψ(n)
with ℓ complex components (ψ1 (n), · · · , ψℓ (n)) by the formula:
[Hψ](n) = −ψ(n + 1) − ψ(n − 1) + V (n)ψ(n).
We remark that this operator restricted to a “slice” with Dirichlet boundary conditions
is nothing else than the one dimensional Schrödinger operator on the box [1, . . . , ℓ].
Let λ be a complex number; we define the transfer matrix $g_\lambda(n) \in SP(\ell, R)$ by:
\[ g_\lambda(n) = \begin{pmatrix} V(n) - \lambda I_\ell & -I_\ell \\ I_\ell & 0 \end{pmatrix} \]
where Iℓ is the identity matrix of order ℓ. The propagator matrix Uλ (n) is then given
by the products:
\[ U_\lambda(n) = \begin{cases} g_\lambda(n-1)\cdots g_\lambda(0) & \text{if } n \ge 0 \\ g_\lambda^{-1}(n)\cdots g_\lambda^{-1}(-1) & \text{if } n < 0. \end{cases} \]
We will also consider the propagator matrix with two time parameters defined by
$U(m, n) = U(m)U^{-1}(n)$. If we denote by ψ̃(n) the d-dimensional vector
\[ \tilde\psi(n) = \begin{pmatrix} \psi(n) \\ \psi(n-1) \end{pmatrix} \]
it is easy to check that any solution of the “eigenvalue” equation Hψ = λψ
can be expressed by the formula:
\[ \tilde\psi(n) = U(n)\tilde\psi(0) \]
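A short numerical check of this propagation formula (a sketch only; the random potentials are an arbitrary illustration):

    import numpy as np

    rng = np.random.default_rng(1)
    ell, lam, N = 3, 0.3, 20

    V = [np.diag(rng.uniform(-1, 1, ell))
         + np.diag(-np.ones(ell - 1), 1) + np.diag(-np.ones(ell - 1), -1)
         for _ in range(N)]

    def g(n):
        # transfer matrix g_lambda(n)
        return np.block([[V[n] - lam * np.eye(ell), -np.eye(ell)],
                         [np.eye(ell), np.zeros((ell, ell))]])

    # initial data psi(0), psi(-1), propagated with U(n) = g(n-1)...g(0)
    psi = {0: rng.standard_normal(ell), -1: rng.standard_normal(ell)}
    U = np.eye(2 * ell)
    for n in range(N):
        # eigenvalue equation: psi(n+1) = (V(n) - lam) psi(n) - psi(n-1)
        psi[n + 1] = (V[n] - lam * np.eye(ell)) @ psi[n] - psi[n - 1]
        U = g(n) @ U
        # check psi~(n+1) = U(n+1) psi~(0)
        lhs = np.concatenate([psi[n + 1], psi[n]])
        rhs = U @ np.concatenate([psi[0], psi[-1]])
        assert np.allclose(lhs, rhs)
    print("propagation formula verified up to n =", N)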
We will often omit the parameter λ to avoid complicated notations. One can also
consider the operator H acting on sequences of ℓ × ℓ matrices by the formula
(HM )n = −Mn−1 − Mn+1 + V (n)Mn
Let us denote by Pn (λ) and Qn (λ) the solutions of the equation HM = λM satisfying
the initial conditions P0 = Q−1 = Iℓ and P−1 = Q0 = 0. It is readily seen that the
propagator matrix U(n) can be written in the form:
\[ U(n) = \begin{pmatrix} P_n & Q_n \\ P_{n-1} & Q_{n-1} \end{pmatrix} \]
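The block decomposition of U(n) can be verified in the same way from the matrix recursion defining P_n and Q_n (again a sketch with an arbitrary random potential):

    import numpy as np

    rng = np.random.default_rng(2)
    ell, lam, N = 2, -0.5, 15
    V = [np.diag(rng.uniform(-1, 1, ell))
         + np.diag(-np.ones(ell - 1), 1) + np.diag(-np.ones(ell - 1), -1)
         for _ in range(N)]

    # matrix solutions of HM = lam M: M(n+1) = (V(n) - lam) M(n) - M(n-1)
    P = {0: np.eye(ell), -1: np.zeros((ell, ell))}
    Q = {0: np.zeros((ell, ell)), -1: np.eye(ell)}
    U = np.eye(2 * ell)
    for n in range(N):
        for M in (P, Q):
            M[n + 1] = (V[n] - lam * np.eye(ell)) @ M[n] - M[n - 1]
        g = np.block([[V[n] - lam * np.eye(ell), -np.eye(ell)],
                      [np.eye(ell), np.zeros((ell, ell))]])
        U = g @ U
        # U(n+1) = [[P_{n+1}, Q_{n+1}], [P_n, Q_n]]
        assert np.allclose(U, np.block([[P[n + 1], Q[n + 1]], [P[n], Q[n]]]))
    print("block decomposition verified")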
Let H be the Hilbert space of square summable sequences ψ. It follows from the above
general discussion that H is a self-adjoint operator on H, since H = H0 + V with H0
bounded, where [H0 ψ](n) = −ψ(n − 1) − ψ(n + 1). Let us denote by Et the spectral
resolution of the identity of H and by {ǫin ; i = 1 · · · ℓ, n ∈ Z} the canonical basis of H.
One defines an ℓ-system u as a sequence (u1 , · · · , uℓ ) of vectors in Cℓ . An ℓ × ℓ matrix
M with operator entries acts on the set of ℓ-systems by the formula
\[ (Mu)_i = \sum_{j=1}^{\ell} M_{ij}u_j. \]
The entries of the matrices Pn (λ) and Qn (λ) are polynomial in λ thus the operator
matrices Pn (H) and Qn (H) are well defined. Let ǫn be the ℓ-system (ǫ1n , · · · , ǫℓn ).
Proposition 2.1.1 With the previous notations we have:
Pn (H)ǫ0 + Qn (H)ǫ−1 = ǫn
n∈Z
It follows that the two ℓ-systems (ǫ0 , ǫ−1 ) form a basis of a generating subspace.
Proof:
This equality is obvious for n = 0 and n = −1. It is then proved by induction using
the relation $H\epsilon_n = -\epsilon_{n-1} - \epsilon_{n+1} + V(n)\epsilon_n$. The last conclusion is an easy consequence
of this equality since it implies that the vector subspace generated by the powers
of H applied to the ℓ-systems ǫ0 and ǫ−1 is dense in H.
Let σm,n be the ℓ×ℓ matrix valued measure whose (i, j)-th entry is the complex measure
< E(.)ǫi,m , ǫj,n >. The spectral matrix measure M is then defined as the d × d matrix
\[ \mathcal M = \begin{pmatrix} \sigma_{0,0} & \sigma_{0,-1} \\ \sigma_{-1,0} & \sigma_{-1,-1} \end{pmatrix} \]
and we denote by σ the trace of M . The matrix measure M is nonnegative and the
spectral type of the operator H is given by σ in the sense that for any ψ ∈ H the
spectral measure σψ is absolutely continuous with respect to σ.
Let now a and b be two integers with a < 0 < b and let Λ be the associated box. For
two real symmetric matrices of order ℓ, say α and β, the operator $H^\Lambda_{\alpha\beta}$ is defined in the
box Λ by the equations:
\[ (H^\Lambda_{\alpha\beta}\psi)(a) = -\psi(a+1) + (V(a) - \alpha)\psi(a) \]
\[ (H^\Lambda_{\alpha\beta}\psi)(n) = (H\psi)(n) \qquad \text{for } a < n < b-1 \]
\[ (H^\Lambda_{\alpha\beta}\psi)(b-1) = -\psi(b-2) + (V(b-1) - \beta)\psi(b-1). \]
It is easy to check that $H^\Lambda_{\alpha,\beta}$ is defined by a real symmetric matrix on the orthonormal
basis {ǫin ; i = 1 · · · ℓ, n = a · · · b − 1} and thus is self-adjoint. If one defines the spectral
matrix $M^\Lambda_{\alpha,\beta}$ in the same way as for the whole strip, then the sequence $M^\Lambda_{\alpha,\beta}$ converges
weakly to M when Λ increases to the whole strip. The spectral type of the
operator $H^\Lambda_{\alpha,\beta}$ is given by the trace of the matrix $M^\Lambda_{\alpha,\beta}$, which we denote by $\sigma^\Lambda_{\alpha,\beta}$.
We saw in Chapter I Section 6 that the set of real symmetric matrices of order ℓ, denoted
by M(ℓ), is isomorphic to the boundary L_ℓ up to a projective submanifold of lower
dimension. To obtain this identification one can write the Lagrangian ℓ-dimensional
subspace associated to a symmetric matrix γ as the subspace generated by the columns
of the d × ℓ matrix $\binom{\gamma}{I}$ or $\binom{I}{\gamma}$. We also denote by γ the associated Lagrangian subspace.
If we choose the first mapping to represent the matrix α and the second to represent
the matrix β, then any solution ψ of the equation $H^\Lambda_{\alpha\beta}\psi = \lambda\psi$ is the restriction to the
box Λ of a solution of Hψ = λψ with boundary conditions ψ̃(a) ∈ α and ψ̃(b) ∈ β. It
follows that λ is an eigenvalue of $H^\Lambda_{\alpha\beta}$ iff there exists a non-zero vector v ∈ α such that
U(b, a)v ∈ β.
2.1.2 The Continuous Case
We define formally the operator H acting on a locally integrable vector function ψ(t)
with ℓ complex coordinates (ψ1 (t), · · · , ψℓ (t)) by the formula:
[Hψ](t) = −ψ̈(t) + V (t)ψ(t)
The derivatives are to be understood in the sense of distributions and V (t) is the ℓ × ℓ
potential matrix.
If we assume that for i = 1, · · · , ℓ the functions Vi (t) are locally square integrable, then
one can define the propagator U (t) (which is a matrix in SP (ℓ, R)) as the solution of
the equation:
\[ \dot U_\lambda(t) = \begin{pmatrix} 0 & I_\ell \\ V(t) - \lambda I_\ell & 0 \end{pmatrix} U_\lambda(t) \]
with the initial value $U_\lambda(0) = I_d$. We will also consider the propagator with two time
parameters defined by $U(s, t) = U(s)U^{-1}(t)$.
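Numerically the propagator can be obtained by integrating this matrix differential equation; the sketch below uses an arbitrary smooth potential matrix for illustration and checks that U_λ(t) remains symplectic, U′JU = J (with the convention J = ((0, −I), (I, 0)) assumed here).

    import numpy as np
    from scipy.integrate import solve_ivp

    ell, lam = 2, 1.0
    d = 2 * ell

    def V(t):
        # arbitrary smooth illustrative potential matrix
        return np.diag([np.cos(t), np.sin(2 * t)]) + np.diag([-1.0], 1) + np.diag([-1.0], -1)

    def rhs(t, y):
        U = y.reshape(d, d)
        A = np.block([[np.zeros((ell, ell)), np.eye(ell)],
                      [V(t) - lam * np.eye(ell), np.zeros((ell, ell))]])
        return (A @ U).ravel()

    sol = solve_ivp(rhs, (0.0, 5.0), np.eye(d).ravel(), rtol=1e-10, atol=1e-12)
    U_T = sol.y[:, -1].reshape(d, d)

    J = np.block([[np.zeros((ell, ell)), -np.eye(ell)],
                  [np.eye(ell), np.zeros((ell, ell))]])
    print(np.allclose(U_T.T @ J @ U_T, J, atol=1e-6))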
Let Pt (λ) and Qt (λ) be the ℓ × ℓ matrices solutions of the equation:
−M̈t + V (t)Mt = λ Mt
with the boundary conditions P0 = Q̇0 = Iℓ and P˙0 = Q0 = 0.
It is readily seen that the propagator U(t) can be written as:
\[ U(t) = \begin{pmatrix} P_t & Q_t \\ \dot P_t & \dot Q_t \end{pmatrix} \]
Let us define the d-dimensional vector $\tilde\psi(t) = \binom{\psi(t)}{\dot\psi(t)}$; then any solution of the equation
Hψ = λψ can be expressed by means of the propagator $U_t$ as ψ̃(t) = U_t ψ̃(0). If we
assume that the potential function V(t) is such that the operator H is essentially self-adjoint on the Hilbert space H = L2 (R, dt; Cℓ) of square integrable vector functions
on R (for instance if there exists a positive constant c such that Vi (t) ≥ −c(1 + t2 ) for
i = 1, . . . , ℓ), then one can construct a spectral matrix for H exactly in the same way
as in the one dimensional case.
For two real numbers a < 0 ≤ b let Λ be the associated box and let α and β be two
Lagrangian subspaces of dimension ℓ in Rd. Then we define the domain
$D^\Lambda_{\alpha,\beta}$ = {ψ ∈ L2(Λ); ψ is differentiable, ψ̇ is absolutely continuous, −ψ̈ + V ψ is
in L2(Λ), ψ̃(a) ∈ α, ψ̃(b) ∈ β}.
The operator $H^\Lambda_{\alpha,\beta}$ defined on the domain $D^\Lambda_{\alpha,\beta}$ is self-adjoint and has a compact resolvent. Let λ1 ≤ λ2 ≤ . . . be the countable set of eigenvalues of $H^\Lambda_{\alpha\beta}$ repeated with their
multiplicity (which is at most ℓ) and let $^1\psi, ^2\psi, \cdots$ be a complete orthonormal set of
corresponding eigenfunctions. Then each eigenfunction satisfies:
\[ {}^k\tilde\psi(t) = U_t(\lambda_k)\, {}^k\tilde\psi(0) \]
The spectral matrix $M^\Lambda_{\alpha\beta}$ is defined as the d × d matrix valued measure whose entry
(i, j) is given by:
\[ \sum_{k=1}^{\infty} {}^k\tilde\psi_i(0)\, {}^k\tilde\psi_j(0)\, \delta_{\lambda_k} \]
The spectral type of the operator $H^\Lambda_{\alpha,\beta}$ is then given by the trace of the matrix $M^\Lambda_{\alpha,\beta}$,
which we denote by $\sigma^\Lambda_{\alpha,\beta}$.
Each entry of the matrix $M^\Lambda_{\alpha,\beta}$ converges weakly when a → −∞ and b → +∞ to a
limit independent of the choice of the boundary conditions α and β. If we call M the
limit matrix, then the spectral type of H is given by the measure σ = trace(M).
2.1.3 The Green Kernel
We will now handle at the same time both the continuous and discrete case, the formulas
being quite the same, even if the proofs can be a bit different.
If α is an ℓ-dimensional subspace of R^d and u a d × ℓ matrix we will write u ∼ α if the
range of u is equal to α. It is well known that the range of a d × ℓ matrix $h = \binom{A}{B}$ is
a Lagrangian subspace of R^d iff the ℓ × ℓ matrix A′B is symmetric, and in this case the
matrix gh has the same property for any g ∈ SP(ℓ, R).
Proposition 2.1.2 Assume that the ranges of the d × ℓ matrices $h = \binom{A}{B}$ and $p = \binom{C}{D}$
are two ℓ-dimensional Lagrangian subspaces of R^d and let W be the ℓ × ℓ matrix W =
D′A − C′B. Then we have:
• The ranges of h and p have a common non-zero vector iff W is singular.
• If we assume W non-singular then:
\[ AW^{-1}C' = CW'^{-1}A' \qquad \text{and} \qquad DW'^{-1}A' - BW^{-1}C' = I_\ell \]
Proof:
One can write W = p′ Jh and a non-zero vector u in the range of h can be written
u = hv where v is an ℓ-dimensional non-zero vector. If u is also in the range of p
we obtain immediately that Wv = 0. Conversely, if Wv = 0 for a non-zero
ℓ-dimensional vector v then the subspace generated by the range of p and the vector hv
is Lagrangian. Since there do not exist Lagrangian subspaces of dimension greater
than ℓ the vector hv is in the range of p. Furthermore the vector hv is non-zero since
the columns of h are linearly independent. The equalities given in this proposition are
the result of an easy computation.
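These identities are easy to test numerically; in the sketch below the symplectic form is taken as J = ((0, −I), (I, 0)) and the random Lagrangian subspaces are produced by applying a random symplectic matrix to subspaces of the form (I; S) with S symmetric — all of this is an illustrative choice, not the text's construction.

    import numpy as np
    from scipy.linalg import expm

    rng = np.random.default_rng(7)
    ell = 3
    d = 2 * ell
    J = np.block([[np.zeros((ell, ell)), -np.eye(ell)],
                  [np.eye(ell), np.zeros((ell, ell))]])

    def random_symmetric():
        S = rng.standard_normal((ell, ell))
        return S + S.T

    def random_symplectic():
        # exp of a random element of the symplectic Lie algebra (X = J S, S symmetric)
        A = rng.standard_normal((d, d))
        return expm(0.3 * (J @ (A + A.T) / 2))

    g = random_symplectic()
    h = g @ np.vstack([np.eye(ell), random_symmetric()])   # Lagrangian range
    p = g @ np.vstack([np.eye(ell), random_symmetric()])   # Lagrangian range
    A, B = h[:ell], h[ell:]
    C, D = p[:ell], p[ell:]
    W = D.T @ A - C.T @ B

    print(np.allclose(A @ np.linalg.inv(W) @ C.T, C @ np.linalg.inv(W.T) @ A.T))
    print(np.allclose(D @ np.linalg.inv(W.T) @ A.T - B @ np.linalg.inv(W) @ C.T, np.eye(ell)))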
For a complex number λ with ℑm(λ) > 0 we denote by $G^\Lambda_{\alpha,\beta}(\lambda, x, y)$ the ℓ × ℓ symmetric
matrix kernel of the Green function of $H^\Lambda_{\alpha,\beta}$. This means that the entry (i, j) of this
matrix is the classical Green kernel computed at the points (i, x) and (j, y). One can
state and prove exactly in the same way as in the one dimensional case:
Proposition 2.1.3 The Stieltjes transform of the spectral measure $\sigma^\Lambda_{\alpha\beta}$ has the following expression:
\[ \int \frac{d\sigma^\Lambda_{\alpha\beta}(t)}{t-\lambda} = \begin{cases} \mathrm{tr}\left[ G^\Lambda_{\alpha\beta}(\lambda, 0, 0) + G^\Lambda_{\alpha\beta}(\lambda, -1, -1) \right] & \text{discrete case} \\[4pt] \mathrm{tr}\left[ G^\Lambda_{\alpha\beta}(\lambda, 0, 0) + \frac{\partial^2}{\partial x\,\partial y} G^\Lambda_{\alpha\beta}(\lambda, 0, 0) \right] & \text{continuous case} \end{cases} \]
The definition of the Wronskian is very close to the one given in the one-dimensional
case but now one has to take care of the order of the factors and the transposes in
products of matrices. For a function Ψ(t) from T to the set of ℓ × ℓ complex matrices
we denote by Ψ̃(t) the d × ℓ matrix $\binom{\Psi(t)}{\Psi(t-1)}$ in the discrete case and $\binom{\Psi(t)}{\dot\Psi(t)}$ in the
continuous case.
Definition 2.1.4 Wronskian
• For two sequences u and v with values in the set of square matrices of order ℓ
with complex entries, the Wronskian W(u, v) is defined as the matrix
valued sequence [W(u, v)](n) = v′(n − 1)u(n) − v′(n)u(n − 1) = ṽ′(n)J ũ(n)
• For two differentiable complex functions u and v with values in the set of square
complex matrices of order ℓ the Wronskian W (u, v) is defined as the matrix valued
function [W (u, v)](t) = v̇ ′ (t)u(t) − v ′ (t)u̇(t) = ṽ ′ (t)J ũ(t)
Proposition 2.1.5 If u and v are two ℓ×ℓ matrices solution of the equation HΨ = λΨ
then the Wronskian of u and v is constant. Moreover if ũ(a) ∼ α and ṽ(b) ∼ β then
this constant matrix is singular iff λ is an eigenvalue of $H^\Lambda_{\alpha\beta}$.
Proof:
For the constancy of the Wronskian we remark that in the set of d × d matrices we have
the relation:
ũ(t) = U (t)ũ(0) and ṽ(t) = U (t)ṽ(0)
The result now follows from the relation U ′ (t)JU (t) = J which expresses that U (t) ∈
SP (ℓ, R).
From Proposition 2.1.2 we know that the Wronskian W(u, v) is singular at some point
t iff the ranges of the matrices ũ(t) and ṽ(t) have a common non-zero vector in R^d.
If we evaluate this Wronskian at the point b we obtain that it is singular iff
there exists a non-zero vector v ∈ α such that U(b, a)v ∈ β, and this last condition is
equivalent to the fact that λ is an eigenvalue of $H^\Lambda_{\alpha\beta}$.
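A small numerical illustration of the constancy of the Wronskian in the discrete case (random symmetric potential matrices and random matrix initial data, purely for illustration):

    import numpy as np

    rng = np.random.default_rng(3)
    ell, lam, N = 2, 0.1, 12
    V = [np.diag(rng.uniform(-1, 1, ell))
         + np.diag(-np.ones(ell - 1), 1) + np.diag(-np.ones(ell - 1), -1)
         for _ in range(N)]

    u = {0: rng.standard_normal((ell, ell)), -1: rng.standard_normal((ell, ell))}
    v = {0: rng.standard_normal((ell, ell)), -1: rng.standard_normal((ell, ell))}

    def wronskian(u, v, n):
        # [W(u,v)](n) = v'(n-1) u(n) - v'(n) u(n-1)
        return v[n - 1].T @ u[n] - v[n].T @ u[n - 1]

    W0 = wronskian(u, v, 0)
    for n in range(N):
        # HM = lam M  <=>  M(n+1) = (V(n) - lam) M(n) - M(n-1)
        u[n + 1] = (V[n] - lam * np.eye(ell)) @ u[n] - u[n - 1]
        v[n + 1] = (V[n] - lam * np.eye(ell)) @ v[n] - v[n - 1]
        assert np.allclose(wronskian(u, v, n + 1), W0)
    print("Wronskian constant along the solutions")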
Proposition 2.1.6 Let u and v be two solutions of the matrix equation HΨ = λΨ
satisfying the boundary conditions ũ(a) ∼ α and ṽ(b) ∼ β. Then we have :
\[ G^\Lambda_{\alpha\beta}(\lambda, x, y) = \begin{cases} u(x)\, W^{-1}(u, v)\, v'(y) & \text{if } x \le y \\ v(x)\, W^{-1}(v, u)\, u'(y) & \text{if } x \ge y \end{cases} \]
Proof:
Taking into account the conclusions of Proposition 2.1.2, the conclusion follows from an
easy computation. We first remark that the definition is consistent for x = y and that
the exchange of x and y corresponds to a transpose of the Green matrix. In the discrete
case for a fixed y ∈ Λ one checks that:
\[ \left[(H^\Lambda_{\alpha,\beta} - \lambda I)\, G^\Lambda_{\alpha,\beta}(\lambda, \cdot, y)\right](x) = \begin{cases} 0 & \text{if } x \ne y \\ I_\ell & \text{if } x = y \end{cases} \]
This equality is obvious for x < y or x > y. In the case x = y the left hand side of the
above equality is then equal to:
\[ -u(x-1)W^{-1}v'(x) + (V(x)-\lambda I)v(x)W'^{-1}u'(x) - v(x+1)W'^{-1}u'(x) = -u(x-1)W^{-1}v'(x) + v(x-1)W'^{-1}u'(x) = I_\ell \]
In the continuous case, if ψ(t) is a continuous function we write:
\[ G^\Lambda_{\alpha,\beta}\psi(x) = v(x)\,W'^{-1}(u, v)\int_a^x u'(t)\psi(t)\,dt + u(x)\,W^{-1}(u, v)\int_x^b v'(t)\psi(t)\,dt \]
and a computation of derivatives yields the result.
Let φ and ψ be two functions defined on Λ with values in Cℓ. We define the dot product
restricted to the box Λ by the formula:
\[ <\psi, \phi>_\Lambda = \begin{cases} \sum_{n=a}^{b-1} \psi'(n)\phi(n) & \text{in the discrete case} \\ \int_a^b \psi'(t)\phi(t)\, dt & \text{in the continuous case} \end{cases} \]
Proposition 2.1.7 Let φ and ψ be two functions defined on Λ with values in Cℓ. Assuming the square integrability of φ and ψ in the continuous case we have:
< Hψ, φ >Λ − < ψ, Hφ >Λ = ψ̃ ′ (a)J φ̃(a) − ψ̃ ′ (b)J φ̃(b)
If ψ is a solution of the equation Hψ = λψ then
2iℑm(λ)kψk2Λ = ψ̃ ′ (a)J ψ̃(a) − ψ̃ ′ (b)J ψ̃(b)
Proof:
In the discrete case this is a result of an easy computation. In the continuous case it is
enough to apply the formula of integration by parts. We remark that the first relation
proves again that $H^\Lambda_{\alpha,\beta}$ is a symmetric operator.
2.2 Approximation of the Spectral Measure
Let H be the Schrödinger operator in a strip of width ℓ and a and b be two elements
of T with 0 ∈ [a, b] (we assume that b ≥ 1 in the discrete case). Recall that the
box Λ is defined as the set Λ = {(i, t); i = 1 . . . ℓ, t ∈ [a, b]} in the continuous case,
and Λ = {(i, t); i = 1 . . . ℓ, t ∈ [a, b − 1]} in the discrete case. For two ℓ-dimensional
Lagrangian subspaces α and β we denote by $H^\Lambda_{\alpha\beta}$ the operator H restricted to the box
Λ with boundary conditions α and β. We have already pointed out that the spectral
measure σ of the Schrödinger operator can be approximated by the spectral measures
$\sigma^\Lambda_{\alpha\beta}$ computed in boxes with arbitrary boundary conditions α and β. It turns out that
the measures $\sigma^\Lambda_{\alpha\beta}$ are not very tractable and we will rather use their averaged value
with respect to one or two boundary conditions. At first sight this procedure seems
surprising if we recall that our goal is to show that σ is pure point: we replace
the approximating measures $\sigma^\Lambda_{\alpha\beta}$, which are pure point, by averaged measures which are
absolutely continuous! The choice of the Cauchy distribution as averaging measure is
arbitrary but the essential reason for this choice is that integration with respect to
the Cauchy measure reduces to the mean value property of harmonic functions with
respect to the Poisson kernel. It is not excluded that other averaging distributions can
yield interesting results but it seems that they will certainly give rise to much more
complicated formulas.
We identify the ℓ-dimensional Lagrangian subspace β of R^d with a symmetric matrix
of order ℓ, which we also denote by β, such that the decomposable ℓ-vector given by
the columns of the d × ℓ matrix $\binom{\beta}{I}$ has the same direction as this subspace. This
last parametrization does not work for the subset of Lagrangian ℓ-vectors with a zero
coordinate on the basis vector $e_{\ell+1}\wedge\ldots\wedge e_d$, but this will not be a problem if we want
to integrate a function of β with respect to a proper measure on L_ℓ, that is a measure
for which any proper projective subspace is negligible. The Cauchy measure on L_ℓ is
mapped onto the usual Cauchy law on M(ℓ) by this parametrization. For a square
matrix M we denote by M* its adjoint, that is the conjugate of the transpose.
The following result about the Poisson kernel of the Siegel upper-half plane can be
found for example in the book of Hua[49]. For k = 0, . . . , ℓ we set:
Sk = {Z = X + iY ; X, Y ∈ M(ℓ) , Y ≥ 0 , rank(Y ) = k}
The Siegel half-plane is defined as the set Sℓ and the union for k = 0 . . . ℓ of the sets
Sk is called the closed Siegel half-plane. Let F (Z) be a function which is harmonic in
each Sk for k = 1 . . . ℓ and which can be continuously extended to S0 ∪ {∞}. Then if
m is the Cauchy measure on M(ℓ) we have the Poisson formula:
\[ \int F(X)\, dm(X) = F(iI_\ell) \]
For a complex number λ with ℑm(λ) > 0 we denote by $G^\Lambda_{\alpha,\beta}(\lambda, x, y)$ the ℓ × ℓ symmetric
matrix kernel of the Green function of $H^\Lambda_{\alpha,\beta}$. This means that the entry (i, j) of this
matrix is the classical Green kernel computed at the points (i, x) and (j, y).
Proposition 2.2.1 Let u be a non-zero square matrix of order ℓ solution of the equation Hu = λu with ℑm(λ) > 0 and ũ(a) real. Then in the discrete case the matrix
u(b) − Zu(b − 1) is non-singular for Z in the closed Siegel half-plane, and the same
conclusion holds for the matrix u(b) − Z u̇(b) in
Proof:
We give the proof in the discrete case, the proof being exactly the same in the continuous
case. If the conclusion was false there would exist a non-zero vector v ∈ Cℓ such that
if we define ψ(n) = u(n)v then ψ is solution of Hψ = λψ and ψ(b) = Zψ(b − 1). It
follows from Proposition 2.1.7 that
2iℑm(λ)kψk2Λ = ψ̃ ′ (a)J ψ̃(a) − ψ̃ ′ (b)J ψ̃(b)
Since ũ(a) is real then ψ̃ ′ (a)J ψ̃(a) = 0 and if we set Z = X + iY then we have:
ℑm(λ)kuk2Λ = −u(b − 1)′ Y u(b − 1)
The left part of this equatity is strictly positive and the right part nonpositive thus we
get a contradiction.
Theorem 2.2.2 Let $\sigma^\Lambda_{\alpha\beta}$ be the spectral measure of $H^\Lambda_{\alpha\beta}$, u be the square matrix of
order ℓ solution of the equation Hu = λu with the boundary condition ũ(a) ∼ α, and
m be the Cauchy measure on the Lagrangian boundary L_ℓ. Then the averaged spectral
measure $\sigma^\Lambda_\alpha = \int \sigma^\Lambda_{\alpha\beta}\, dm(\beta)$ has a density with respect to the Lebesgue measure on R
given by the formula:
\[ \frac{1}{\pi}\, \mathrm{tr}\left( [\tilde u'(0)\tilde u(0)]\,[\tilde u'(b)\tilde u(b)]^{-1} \right) \]
Proof:
The value of the Wronskian appearing in Proposition 2.1.6 in the Green kernel can be
computed at any point. The computation at the point b yields W (u, v) = u(b)−βu(b−1)
in the discrete case and W (u, v) = u(b) − β u̇(b) in the continuous case. It follows from
Proposition 2.2.1 that this Wronskian viewed as a function of β has a non-vanishing
extension to the closed Siegel half-plane. Let v be a square matrix valued function
solution of Hv = λv with ṽ(b) ∼ β. It follows from Proposition 2.1.3 that the Stieltjes
transform of $\sigma^\Lambda_{\alpha,\beta}$ can be written:
\[ \begin{aligned} &\mathrm{tr}\left[ u(0)W^{-1}v'(0) + u(-1)W^{-1}v'(-1) \right] && \text{in the discrete case} \\ &\mathrm{tr}\left[ u(0)W^{-1}v'(0) + \dot u(0)W^{-1}\dot v'(0) \right] && \text{in the continuous case} \\ &\mathrm{tr}\left[ W^{-1}\,\tilde v'(0)\,\tilde u(0) \right] && \text{in both cases} \end{aligned} \]
This function of β is the restriction to the set S_0 = M(ℓ) of a bounded holomorphic
function in the closed Siegel half-plane defined by:
\[ F(Z) = \mathrm{tr}[W^{-1}(Z)N(Z)] \]
where $N(Z) = \tilde Z'\, U_b'^{-1}\,\tilde u(0)$, $W(Z) = \tilde Z'\, U_b'^{-1}\, J\,\tilde u(0)$, $\tilde Z = \binom{Z}{I_\ell}$
(in this formula the Wronskian has been evaluated at the origin). The integration
of F(β) with respect to the Cauchy measure m is then immediate from the Poisson
formula and this yields:
\[ \int \frac{d\sigma^\Lambda_\alpha(t)}{t-\lambda} = F(iI_\ell). \]
If we denote by $I_d$ the identity matrix of order d we obtain:
\[ W^*(iI_\ell)\,N(iI_\ell) = \tilde u^*(0)\,J'\,U^{-1}(b)\,(I_d + J)\,U'^{-1}(b)\,\tilde u(0) \]
\[ W^*(iI_\ell)\,W(iI_\ell) = \tilde u^*(0)\,J'\,U^{-1}(b)\,(I_d + J)\,U'^{-1}(b)\,J\,\tilde u(0) \]
Taking into account that the matrix U(b) is symplectic one easily checks that:
\[ \lim_{\Im m(\lambda)\to 0} J'U^{-1}(b)\,U'^{-1}(b)\,J = U'(b)U(b), \qquad \lim_{\Im m(\lambda)\to 0} J'U^{-1}(b)\,J\,U'^{-1}(b)\,J = J \]
It follows that:
\[ \lim_{\Im m(\lambda)\to 0} \Im m\!\left( W^*(iI_\ell)N(iI_\ell) \right) = \tilde u'(0)\tilde u(0), \qquad \lim_{\Im m(\lambda)\to 0} W^*(iI_\ell)W(iI_\ell) = \tilde u'(0)\,U'(b)U(b)\,\tilde u(0) \]
The de la Vallée Poussin theorem now yields the result.
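Once u has been propagated from a real Lagrangian initial condition at a, the density of Theorem 2.2.2 is straightforward to evaluate; in the sketch below the boundary subspace α is represented, as an illustrative choice only, by the d × ℓ matrix with blocks (α; I), and the potential is an arbitrary random example.

    import numpy as np

    rng = np.random.default_rng(4)
    ell, a, b = 2, -15, 15

    V = {n: np.diag(rng.uniform(-1, 1, ell))
            + np.diag(-np.ones(ell - 1), 1) + np.diag(-np.ones(ell - 1), -1)
         for n in range(a, b + 1)}
    alpha = np.zeros((ell, ell))      # an arbitrary real symmetric boundary matrix

    def density(lam):
        # propagate the matrix solution u of Hu = lam u with u(a) = alpha, u(a-1) = I
        u_prev, u_cur = np.eye(ell), alpha.copy()
        tilde = {a: np.vstack([u_cur, u_prev])}
        for n in range(a, b):
            u_next = (V[n] - lam * np.eye(ell)) @ u_cur - u_prev
            u_prev, u_cur = u_cur, u_next
            tilde[n + 1] = np.vstack([u_cur, u_prev])
        G0 = tilde[0].T @ tilde[0]
        Gb = tilde[b].T @ tilde[b]
        # density of the averaged measure at energy lam
        return np.trace(G0 @ np.linalg.inv(Gb)) / np.pi

    print(density(0.5))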
One can integrate with respect to α rather than to β to obtain the same kind of formula
with a “time reversal”. One can also integrate with respect to the two boundary
conditions to obtain another type of averaged spectral measure.
Proposition 2.2.3 Let us set $\sigma^\Lambda = \int \sigma^\Lambda_{\alpha,\beta}\, dm(\alpha)\, dm(\beta)$. Then the measure $\sigma^\Lambda$ has a
density with respect to the Lebesgue measure on R given by:
\[ \frac{1}{\pi} \int_{L_\ell} \left( \frac{\|v\|}{\|U(b)v\|} \right)^{\ell+1} \mathrm{tr}\left[ v'v\,\left(v'U'(a)U(a)v\right)^{-1} \right] dm(\bar v) \]
In this formula the norm is computed in Λ^ℓ(R^d).
Proof:
Let v be a Lagrangian decomposable ℓ-vector with direction v̄ = β and let f(v) be the
function defined on L_ℓ by $f(v) = \mathrm{tr}\left[ v'v\,(v'U'(a)U(a)v)^{-1} \right]$. Then if we first perform an
integration with respect to the boundary condition at the left end of the box, one has:
\[
\begin{aligned}
\int \sigma^\Lambda_\beta\, dm(\beta) &= \int \mathrm{tr}\left[ v'U'^{-1}(b)U^{-1}(b)v\,\left(v'U'(a,b)U(b,a)v\right)^{-1} \right] dm(v) \\
&= \int f(U^{-1}(b)v)\, dm(v) \\
&= \int f(v)\, d(U^{-1}(b)m)(v) \\
&= \int f(v)\left( \frac{\|v\|}{\|U(b)v\|} \right)^{\ell+1} dm(v) \\
&= \int \left( \frac{\|v\|}{\|U(b)v\|} \right)^{\ell+1} \mathrm{tr}\left[ v'v\,(v'U'(a)U(a)v)^{-1} \right] dm(\bar v)
\end{aligned}
\]
The third equality has been obtained using the result of the computation of the Poisson
kernel of the boundary L_ℓ given in the chapter “Products of Random Matrices”, section 6.
This formula has the advantage of separating the contribution of [a, 0] and [0, b] in the
construction of the spectral measure.
Theorem 2.2.4 Let φ be a continuous function with compact support on R. Then:
1. The integral $\int \phi(\lambda)\, d\sigma^\Lambda(\lambda)$ is equal to
\[ \frac{1}{\pi} \int \phi(\lambda) \int \left( \frac{\|v\|}{\|U(b)v\|} \right)^{\ell+1} \mathrm{tr}\left[ v'v\,(v'U'(a)U(a)v)^{-1} \right] dm(\bar v)\, d\lambda \]
and converges to $\int \phi(\lambda)\, d\sigma(\lambda)$ when a → −∞ and b → +∞.
2. For any a < 0 one has the formula:
\[ \int \mathrm{tr}\left[ v'v\,(v'U'(a)U(a)v)^{-1} \right] dm(\bar v) = \ell \]
Proof:
The property 1. is a direct consequence of the formula obtained for $\sigma^\Lambda$ in Theorem 2.2.2
and of the uniform weak convergence, with respect to the boundary conditions, of
$\int \phi\, d\sigma^\Lambda_{\alpha\beta}$ to $\int \phi\, d\sigma$.
To prove 2. we have only to remark that the formula obtained in 1. has to be symmetric
with respect to a and b since we can exchange the order of integration with respect
to the boundary conditions α and β. If we write this equality for b = 0 then one has
U(b) = I and we obtain:
\[ \int \mathrm{tr}\left[ v'v\,(v'U'(a)U(a)v)^{-1} \right] dm(\bar v) = \ell \int \left( \frac{\|v\|}{\|U(a)v\|} \right)^{\ell+1} dm(\bar v) = \ell \int d(U^{-1}(a)m)(\bar v) = \ell \]
One can remark that we have used the spectral measure to prove this formula but
actually this result is more general and the formula remains true if one replaces the
matrix U (a) by any matrix of SP (ℓ, R).
In the discrete case let us denote by $E^\Lambda_{\alpha,\beta}$ the spectral resolution of the operator $H^\Lambda_{\alpha,\beta}$
and, for x, y ∈ Λ, by $\sigma^\Lambda_{(\alpha,\beta),(x,y)}$ the square matrix valued measure of order ℓ whose
(i, j)-th entry is the complex measure $< E^\Lambda_{\alpha,\beta}(\cdot)\epsilon_{i,x}, \epsilon_{j,y} >$.
Theorem 2.2.5 Let u be the square matrix of order ℓ solution of the equation Hu = λu
with the boundary condition ũ(a) ∼ α and m be the Cauchy measure on the Lagrangian
boundary L_ℓ. Then for x, y ∈ Λ the averaged spectral measure $\sigma^\Lambda_{\alpha,(x,y)} = \int \sigma^\Lambda_{(\alpha\beta),(x,y)}\, dm(\beta)$
has a density with respect to the Lebesgue measure on R given by the formula:
\[ \frac{1}{\pi}\, u'(x)\,[\tilde u'(b)\tilde u(b)]^{-1}\, u(y) \]
Proof:
Using the relation:
\[ \int \frac{d\sigma^\Lambda_{(\alpha\beta),(x,y)}(t)}{t-\lambda} = G^\Lambda_{\alpha\beta}(\lambda, x, y) = \begin{cases} u(x)\, W^{-1}(u, v)\, v'(y) & \text{if } x \le y \\ v(x)\, W^{-1}(v, u)\, u'(y) & \text{if } x \ge y \end{cases} \]
one has only to repeat the proof of Theorem 2.2.2 but without taking traces.
One can prove a theorem of the same kind in the continuous case if one properly defines
the spectral kernels $\sigma^\Lambda_{(\alpha,\beta),(x,y)}$ and their relationship with the Green kernel.
2.3 Nature of the Spectrum
The approximations of the spectral measures computed in the preceding section allow
us to obtain a very simple criterion ensuring the absolute continuity of the spectrum. One
only has to check that the sequence of densities computed in Theorem 2.2.5, namely:
\[ \frac{1}{\pi}\, u'(x)\,[\tilde u'(b)\tilde u(b)]^{-1}\, u(y) \]
remains locally bounded (for fixed x,y,α) on the spectrum of H when a → −∞ and
b → +∞. But this will never be the case in the “random situation” since the hyperbolic
behavior defined below will imply that the propagator is exponentially growing, hence
that this sequence goes to infinity. On the contrary the spectrum is singular in most of
the cases studied in the sequel.
2.3.1 Hyperbolic Behavior
We now introduce the concept of hyperbolic behavior of the propagator Uλ (t) associated
to a Schrödinger operator in a strip of width ℓ. Recall that in the discrete case the
operator H is always self-adjoint. In the continuous case we assume that there exists
a constant c such that |Vi (t)| ≤ c(1 + t2 ) for i = 1, . . . , ℓ. Then the operator H is
self-adjoint.
Moreover we also assume that for i = 1, . . . , ℓ we have :
• $\lim_{n\to\pm\infty} \frac{1}{|n|} \log(1 + |V_i(n)|) = 0$ in the discrete case
• $\lim_{n\to\pm\infty} \frac{1}{|n|} \int_n^{n+1} |V_i(t)|\, dt = 0$ in the continuous case.
In the continuous case the transfer matrices are defined by the same formulas as for
the one dimensional operator.
It follows from the above assumptions that for any real or complex number λ we have :
• $\lim_{n\to\pm\infty} \frac{1}{|n|} \log \|g_\lambda(n)\| = 0$
• if we set $\breve U_\lambda(n) = \sup_{t\in[0,1]} \log \|U_\lambda(n+t, n)\|$ then one has $\lim_{n\to\pm\infty} \frac{1}{|n|}\, \breve U_\lambda(n) = 0$
Definition 2.3.1 Let us denote by {Uλ (t); t ∈ T, λ ∈ R} the propagator of the Schrödinger
operator H in a strip of width ℓ. The real number λ is said to be hyperbolic if there
exist real numbers $\gamma_1^\pm(\lambda) \ge \gamma_2^\pm(\lambda) \ge \cdots \ge \gamma_\ell^\pm(\lambda) > 0$ such that for p = 1 · · · ℓ:
\[ \lim_{n\to\pm\infty} \frac{1}{|n|} \log \|\Lambda^p U_\lambda(n)\| = \gamma_1^\pm(\lambda) + \cdots + \gamma_p^\pm(\lambda) \]
The set of hyperbolic values of λ will be denoted by hyp(H).
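In practice the exponents γ_p^+(λ) can be estimated from the propagator itself, since ‖Λ^p U_λ(n)‖ is the product of the p largest singular values of U_λ(n); the sketch below uses the standard QR (Benettin) iteration to avoid overflow, with an i.i.d. random potential as an arbitrary example.

    import numpy as np

    rng = np.random.default_rng(5)
    ell, lam, N = 2, 0.0, 20000
    d = 2 * ell

    def transfer(n):
        Vn = np.diag(rng.uniform(-1, 1, ell)) \
            + np.diag(-np.ones(ell - 1), 1) + np.diag(-np.ones(ell - 1), -1)
        return np.block([[Vn - lam * np.eye(ell), -np.eye(ell)],
                         [np.eye(ell), np.zeros((ell, ell))]])

    # Benettin-style QR iteration: the averaged logs of the diagonal of R
    # estimate the Lyapunov exponents gamma_1 >= ... >= gamma_d
    Qmat = np.eye(d)
    sums = np.zeros(d)
    for n in range(N):
        Qmat, R = np.linalg.qr(transfer(n) @ Qmat)
        sums += np.log(np.abs(np.diag(R)))

    gammas = sums / N
    print("estimated Lyapunov spectrum:", np.sort(gammas)[::-1])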
Proposition 2.3.2 If λ ∈ hyp(H) then there exist two ℓ-dimensional subspaces of R^d,
say $V_\lambda^+$ and $V_\lambda^-$, such that for a non-zero vector v in R^d we have:
\[ v \in V_\lambda^\pm \iff \lim_{t\to\pm\infty} \frac{1}{|t|} \log\|U_\lambda(t)v\| \le -\gamma_\ell^\pm(\lambda), \qquad v \notin V_\lambda^\pm \iff \lim_{t\to\pm\infty} \frac{1}{|t|} \log\|U_\lambda(t)v\| \ge \gamma_\ell^\pm(\lambda). \]
Proof:
For integer values of t this is just a consequence of the deterministic Oseledec theorem
stated in Proposition 1.2.4. For continuous values of the parameter we write, for n ≤ t ≤ n + 1,
\[ \left| \log\|U(t)v\| - \log\|U(n)v\| \right| = \left| \log\frac{\|U(t)v\|}{\|U(n)v\|} \right| \le \breve U(n) \]
and this yields the result for a continuous parameter t.
Proposition 2.3.3 Let λ ∈ hyp(H); then λ is an eigenvalue of H iff $V_\lambda^+ \cap V_\lambda^- \ne \{0\}$.
Proof:
Let λ be in hyp(H) and ψ be a non-zero solution of Hψ = λψ. Then there exist positive
numbers α and t_0 such that:
\[ \tilde\psi(0) \in V^+(\lambda)\cap V^-(\lambda) \;\Rightarrow\; \|\tilde\psi(t)\| \le e^{-|t|\alpha} \quad \text{for } |t| \ge t_0 \]
\[ \tilde\psi(0) \notin V^+(\lambda)\cap V^-(\lambda) \;\Rightarrow\; \|\tilde\psi(t)\| \ge e^{t\alpha} \ \text{for } t \ge t_0 \quad \text{or} \quad \|\tilde\psi(t)\| \ge e^{-t\alpha} \ \text{for } t \le -t_0 \]
It is obvious that λ is an eigenvalue in the first alternative and it remains to prove that
it is not an eigenvalue in the second. The conclusion is immediate in the discrete case
since
\[ \sum_{n=-\infty}^{+\infty} \|\tilde\psi(n)\|^2 = 2\sum_{n=-\infty}^{+\infty} \|\psi(n)\|^2 \]
In the continuous time case it is enough to remark that
\[ \dot\psi(t) = \dot\psi(0) + \int_0^t \ddot\psi(s)\, ds = \dot\psi(0) + \int_0^t (V(s) - \lambda)\psi(s)\, ds \]
Thus if we suppose that ψ is a square integrable function, the Cauchy-Schwarz inequality added to the polynomial bound on the potential imply a polynomial bound on ψ̇.
It follows that ψ is exponentially increasing in at least one direction of R and we get a
contradiction.
Theorem 2.3.4 Let m be a nonnegative continuous measure on R which is carried by
hyp(H). Then m is orthogonal to the spectral measure σ of the operator H.
Proof:
The measure m is continuous and the set of eigenvalues of H is at most countable.
Thus we can conclude that for m-almost all λ we have $V^+(\lambda) \cap V^-(\lambda) = \{0\}$. It follows
that for m-almost all λ and for any non-zero solution of Hψ = λψ, ‖ψ̃(t)‖ is growing
exponentially fast in at least one direction of R. But we know from Theorem II.4.5
of [19] that for σ-almost all λ there exists a non-zero solution of Hψ = λψ which is
bounded by a polynomial in t. The formula
\[ \dot\psi(t) = \dot\psi(0) + \int_0^t \ddot\psi(s)\, ds = \dot\psi(0) + \int_0^t (V(s) - \lambda)\psi(s)\, ds \]
gives a polynomial bound on ‖ψ̃(t)‖ because of our assumption on the potential. It
follows that the complement of the set of eigenvalues of H in hyp(H) is of zero σ-measure
and one can conclude that σ is orthogonal to m.
Theorem 2.3.5 Let A be a Borel subset of the real line. If the restriction to A of the
spectral measure σ of H is carried by hyp(H), then σ is pure point on A. Moreover
any eigenfunction ψ(λ) corresponding to an eigenvalue λ ∈ A satisfies:
\[ \lim_{t\to\pm\infty} \frac{1}{|t|} \log\|\tilde\psi_t(\lambda)\| \le -\gamma_\ell^\pm(\lambda). \]
Proof:
Let us write σ = σc + σpp where σpp is the pure point part of σ and σc the continuous
part. Then Theorem 2.3.4 implies that σ is orthogonal to σc thus σ = σpp . Moreover
each eigenvalue λ is hyperbolic and the conclusion follows from Proposition 2.3.3.
It seems difficult to believe that the hypothesis of Theorem 2.3.5 can be satisfied for
a “generic” Schrödinger operator. In fact for random Schrödinger operators H(ω)
constructed from an ergodic process, for each energy λ with γℓ (λ) > 0 then λ ∈ hyp(Hω )
and + Vω ∩− Vω (λ) = {0} for almost all ω. It follows from Proposition 2.3.3 that such a
fixed λ is not an eigenvalue of Hω for almost all ω! Nevertheless it happens quite often
that for almost all ω, the spectral measures of H(ω) are carried by hyp(Hω ) and thus
these measures have to be pure point !!
It is readily seen that the dimension of an eigenspace is less or equal to d. Actually
using the properties of symplectic matrices one can prove that this dimension is less or
equal to ℓ. Let 1 ψ, . . . ,r ψ be r linearly independent eigenfunctions and let us denote
A
by Ψt = Bt the d × r matrix with columns vectors 1 ψ̃(t), . . . ,r ψ̃(t). The relations
t
Ψt = Ut Ψ0 and Ut′ JUt = J imply that A′t Bt −Bt′ At does not depend on t. In the discrete
case At goes to zero at infinity and one can conclude that A′0 B0 − B0′ A0 = 0. To obtain
the same conclusion in the continuous case we use the fact that if ψ is an eigenfunction
2.3. NATURE OF THE SPECTRUM
87
then ψ(t) and t−1 ψ̇(t) are square integrable at infinity. The Cauchy-Swartz inequality
then implies the integrability at infinity of each entry of the matrix t−1 (A′t Bt −Bt′ At ) and
this yields the desired conclusion. It follows that the decomposable r-vector 1 ψ̃∧. . .∧r ψ̃
is Lagrangian and thus r ≤ ℓ.
If λ is a hyperbolic eigenvalue then the dimension of its eigenspace is equal to the
dimension of + Vλ ∩− Vλ and we will prove in the sequel that this dimension is “often”
equal to 1.
2.3.2
Localization Criterions
We now turn our attention to a quite different kind of criterion insuring the pure
point spectrum property that does not refer directly to the Oseledec theorem and the
slow-growth of generalized eigenfunctions. Roughly speaking the basic argument is
the following: If the integrals of a sequence of integrable functions are exponentially
decreasing then almost surely this sequence of functions is pointwise exponentially
decreasing.
The one dimensional case
In the following the unit ball of R2 is denoted by B1 .
Proposition 2.3.6 Let I be an open bounded interval of the real line such that there
exists 0 < δ < 1 with
Z
inf
+∞
X
I v∈B1 n=−∞
kUλ (n)vkδ−|n| dσ(λ) < ∞
Then σ is pure point on I and for an eigenvalue λ ∈ I the eigenfunction ψ decays
exponentially fast. More precisely for any eigenvalue λ ∈ I and δ < ρ < 1 there exists
C(λ) < ∞ such that we have :
|ψ̃(t)| ≤ C(λ)ρ|t|
Proof:
First we remark that if we set
DN (λ) = inf
v∈B1
N
X
n=−N
kUλ (n)vkδ−|n|
then DN (λ) is an increasing sequence of continuous functions and the limit D(λ) is
measurable. An integrable function is almost everywhere finite thus one can deduce
88CHAPTER 2. SPECTRAL THEORY OF (NON-RANDOM) SCHRÖDINGER OPERATORS IN A STRIP
from the hypothesis that for σ-almost all λ ∈ I there exists a unit vector vλ ∈ R2 such
that :
+∞
X
kUλ (n)vλ kδ−|n| = C(λ) < +∞
n=−∞
The proof is complete in the discrete case. In the continuous time case we write for
n≤t≤n+1 :
˘
kU (t)vk = kU (t, n)U (n)vk ≤ U (n)kU
(n)vk
The hypothesis on the sequence U˘n implies that for any positive ǫ there exists an integer
˘ ≤ exp(|n|ǫ) for |n| ≥ N and this yields the conclusion.
N such that U (n)
Using the approximations of the spectral measure obtained by restriction of the operator
H to boxes it is possible to give a more tractable criterion. Let Λ be a box and σ Λ be
a positive measure on the real line converging weakly to the spectral measure σ when
the box Λ converges to the whole space.
Corollary 2.3.7 Let I be an open bounded interval of the real line such that there
exist 0 < ρ < 1, C < ∞ such that for each box Λ one can find a measurable mapping
λ ֒→ v Λ (λ) satisfying :
Z
kUλ (n)v Λ (λ)k dσ Λ (λ) ≤ Cρ|n|
f or n ∈ Λ
I
Then the hypothesis of Proposition 2.3.6 are fulfilled for any ρ < δ < 1.
Proof:
Let ρ < δ < 1, using the notations of the proof of Proposition 2.3.6 one obtains that
there exists a finite constant D such that for [−N N ] ⊂ Λ then
Z
DN (λ) dσ Λ (λ) ≤ D
I
The function λ ֒→ 1I (λ)DN (λ) is lower semi-continuous thus the Fatou lemma implies
that
Z
Z
Z
DN (λ) dσ(λ) ≤ D
D(λ) dσ(λ) = lim ↑
DN (λ) dσ(λ) ≤ D and
I
I
N →∞
I
and this yields the result.
Actually the choice of the vector v Λ (λ) is not arbitrary and is imposed by the construction of the Oseledec contractive vector. Namely for a given boundary condition α at
the left end point a of the box Λ we will choose :
v Λ (λ) =
U −1 (a)α
kU −1 (a)αk
2.3. NATURE OF THE SPECTRUM
89
Then if we use the approximations of the spectral measure proved in Theorem 2.2.4 we
finally obtain the following criterion:
Theorem 2.3.8 Let I be an open bounded interval of the real line and α ∈ [0, π] be a
fixed angle. Assume that there exist 0 < ρ < 1, C < ∞ such that for each box Λ if we
denote by u the solution of Hu = λu with ũ(a) = α one has :
Z
kũ(n)k kũ(0)k kũ(b)k−2 dλ ≤ Cρ|n|
f or n ∈ Λ
I
Then σ is pure point on I and for any ρ < δ < 1 and any eigenvalue λ ∈ I there exists
C(λ) < ∞ such that the associated eigenfunction ψ satisfies :
|ψ̃(t)| ≤ C(λ)δ|t|
It turns out that this criterion is well adapted to the proof of localization in the Markovian random case since it reduces the problem to the spectral properties of operators
defining the Laplace Transform.
The discrete strip case
The one dimensional criterion given in Proposition 2.3.6 can hardly be extended to the
strip in an easy way. In this situation (at least in the discrete case) we will use an other
approach to obtain the pure point property of the spectral measure.
Lemma 2.3.9 The matrix valued measures σm,n are related to the spectral matrix M
by the formula :
′ Pm
σn,m = (Pn , Qn )M
Q′m
Proof:
This formula is a direct consequence of Proposition 2.1.1 and the definition of the
spectral matrix.
Let us denote by M (λ) a matrix of densities of M with respect to its trace σ. In general
M (λ) is defined only σ almost surely but if λ is an eigenvalue then M (λ) is uniquely
defined.
Lemma 2.3.10 Let λ be an eigenvalue of the operator H of multiplicity r. Then the
rank of M (λ) is equal to r and the eigenspace associated to λ is generated by the sequences {U (n)v; n ∈ Z} where v is any column vector of the matrix M (λ). Furthermore
the dimension of the eigenspace is at most ℓ.
90CHAPTER 2. SPECTRAL THEORY OF (NON-RANDOM) SCHRÖDINGER OPERATORS IN A STRIP
Proof:
Let 1 ψ . . .r ψ be an orthonormal basis of the eigenspace associated to λ and denote by
Tn the ℓ × r matrix whose columns are 1 ψ(n) . . .r ψ(n). It is readily seen that
1
M (λ) =
σ(λ)
T0
T−1
′
(T0′ , T−1
)
T0
The matrix
is of rank r thus M (λ) is also of rank r. We have already seen
T−1
that from the Wronskian property we have r ≤ ℓ and the proof is complete.
Proposition 2.3.11 Let I be an open bounded interval of the real line such that there
exist 0 < δ < 1 and C < ∞ satisfying the inequality
ℓ
X
i=1,j=1
|σ(i,n),(j,m) |(I) ≤ Cδ|n|
for m = −1 and m = 0. Then σ is pure point on I and for an eigenvalue λ ∈ I any
eigenfunction ψ decays exponentially fast. More precisely for any eigenvalue λ ∈ I and
δ < ρ < 1 there exists C(λ) < ∞ such that we have :
|ψ̃(n)| ≤ C(λ)ρ|n|
Proof:
A(λ) B(λ)
Let M (λ) =
. Using Lemma 2.3.9 one sees immediately that the
C(λ) D(λ)
hypothesis implies that σ-almost surely each column of the matrix M (λ) can be taken
as the initial vector ψ̃(0) of an exponentially decreasing solution of Hψ = λψ for one
can write :
σn,0 = (Pn A + Qn C)σ and σn,−1 = (Pn B + Qn D)σ
There exists certainly a column of the matrix M (λ) which is not zero since the trace
of M (λ) is equal to d. Then one can conclude that σ is carried by the eigenvalues of H
and is thus pure point. The claim about the exponential decrease of any eigenfunctions
is a direct consequence of Lemma 2.3.10
As usual we will use approximations of the measure σm,n computed in boxes to check
the hypothesis of Proposition 2.3.11.
Theorem 2.3.12 Let I be an open bounded interval of the real line and α be a fixed
Lagrangian subspace of Rd . Assume that there exist 0 < ρ < 1, C < ∞ such that for
each box Λ if we denote by u the square matrix of order ℓ solution of Hu = λu with
2.3. NATURE OF THE SPECTRUM
91
ũ(a) ∼ α and by Γ the symmetric positive matrix ũ′ (b)ũ(b) then one has:
Z p
tr (ũ′ (0)ũ(0)Γ−1 ) tr (ũ′ (n)ũ(n)Γ−1 ) dλ ≤ Cρ|n|
I
Then σ is pure point on I. Moreover for an eigenvalue λ ∈ I and ρ < δ < 1 then there
exists C(λ) < ∞ such that for any associated eigenfunction ψ one has :
|ψ̃(n)| ≤ C(λ)δ|n|
Proof:
If we use the approximations constructed in Proposition 2.2.2 then the Fatou’s lemma
implies that it is enough to have for m = −1 and m = 0:
Z
ℓ
X
I i=1,j=1
| u′ (m)Γ−1 u(n) i,j | dλ ≤ Cρ|n|
f or n ∈ Λ
If A, B, C are square matrices of order ℓ with C = AB then we have:
ℓ
X
i=1,j=1
|Ci,j | ≤ ℓ
p
tr(A′ A) + tr(B ′ B)
√
√
One can choose A = u′ (m) Γ−1 and B = Γ−1 u(n). Applying the above inequality
for m = −1 and m = 0 the Cauchy-Swartz inequality yields the result.
It is certainly possible to extend this theorem to the continuous strip case exactly in
the same form. Moreover one remarks that restricting the above formula to the one dimensional case one obtains exactly the same criterion as the one given in Theorem 2.3.7
!
92CHAPTER 2. SPECTRAL THEORY OF (NON-RANDOM) SCHRÖDINGER OPERATORS IN A STRIP
Chapter 3
Spectral Theory of Random
Schrödinger Operators
in a
Strip
Contents:
1. Singularity of the Spectrum
2. Localization for a.c. Potentials
(a) The one dimensional discrete case
(b) The one dimensional Markov case
(c) The discrete strip case
3. Localization for Singular Potentials
This chapter is essentially devoted to the proof of localization when the distribution of
potentials is absolutely continuous using the so called “Operator Theory”. A simple
proof of localization for singular potentials is still missing and in this situation one has
to refer to the multiscale analysis developed in the multidimensional case by Fröhlich
& Spencer and by Dreifus & Klein. This suggests that the subject is far to be closed. . .
We now assume that the potentials Vi (t) are random. This means that there exists an
ergodic dynamical system (Ω, F, θt , P) (T equals R or Z) such that Vi (t) = Vi (0) ◦ θt for
i = 1 . . . ℓ. In the discrete case the associated Schrödinger operators H(ω) are always
93
94CHAPTER 3. SPECTRAL THEORY OF RANDOM SCHRÖDINGER OPERATORS
self adjoint but this property is not granted in the continous case. In this last situation
we assume in example that E{|Vi (0)|k } < ∞ for some k > 5/2. (See [19], Chapter V,
for a complete discussion about essential self-adjointness).
For such ergodic families of random Schrödinger operators it is known that the spectrum
(as a subset of R) is almost surely not random. More precisely (See [19], Chapter V):
Proposition 3.0.13 There exist closed subsets Σ, Σpp , Σac and Σsc of R such that for
P-almost all ω ∈ Ω we have Σ(H(ω)) = Σ, Σpp (H(ω)) = Σpp , Σac (H(ω)) = Σac and
Σsc (H(ω)) = Σsc .
The two following models are of particular interest:
• The independent case: The time parameter set T is equal to Z and the set
{Vi (n) ; i = 1 . . . ℓ , n ∈ Z} form an independent family of random variables.
(Actually, in most of thre proofs, it is enough to assume the independence of the
set of columns vectors in Rℓ ).
• The Markov case: There exists a stationnary Markov chain (or a Markov
process) {X(t) ; t ∈ T } with state space M , and a measurable function h from
M to Rℓ such that (V1 (t), . . . , Vℓ (t)) = h ◦ X(t).
3.1
Singularity of the Spectrum
The goal of this section is to prove the singularity of the spectrum for ergodic Schrödinger
operators in a strip as soon as the lowest nonnegative Lyapunov exponent is positive.
The proposition below is the first half of a complete characterization of the absolutely
continuous spectrum of random Schrödinger operators . It is known as the PasturIshii result and a converse has been obtained in the one dimensional case by Kotani
[65]. The simplicity of the proof made it one of the first striking results of the theory.
Also, its statement can be formulated in an abstract form in terms of orthogonality of
measures on the line and this extra twist shed some interesting light on the result. See
the remark following the proof.
We will assume throughout this subsection that the integrability condition:
ℓ
X
i=1
E{log(1 + |Vi (0)|)} < ∞
(3.1)
is satisfied P
in the case of the discrete strip of width ℓ. In the continuous case one need
to assume ℓi=1 E{|Vi (0)|} < ∞ but this is implied by our previous condition about
self-adjointness.
IN A STRIP
3.1. SINGULARITY OF THE SPECTRUM
95
Proposition 3.1.1 Let m be a positive measure on the real line such that the lowest
nonnegative Lyapunov exponent γℓ (λ) is strictly positive m-almost surely. Then for
P-almost all ω the spectral measure σω is orthogonal to the measure m.
Proof:
Let Ω1 be the subset of ω ∈ Ω such that one has for i = 1 . . . ℓ:
1
log(1 + |Vi (n, ω)|) −→ 0
|n|
in the discrete case and:
1
|n|
Z
n+1
n
|Vi (t, ω)|dt −→ 0
in the continuous case. The above assumptions imply that Ω1 is of full P-measure. Let
us define the subset W ⊂ Ω × R by:
W = {(ω, λ) ; ω ∈ Ω1 , λ ∈ hyp(H(ω))}.
The set hyp(H(ω)) of hyperbolic values of λ was defined in Chapter II. The function
Uλ (n, ω) is continuous with respect to the variable λ and measurable with respect to ω.
This implies that the set W is measurable for the product sigma algebra F ⊗ B where B
is the Borel sigma algebra of R. It follows from the Oseledec theorem and the positivity
of γℓ (λ) that for m-almost all λ one has P(Wλ ) = 1 where Wλ is the section of W for a
given value of λ. The Fubini’s theorem implies that for P-almost all ω the measure m
is carried by hyp(H(ω)) and the result follows from Proposition 2.3.4 if we first assume
that the measure m is continuous. The continuity of the measure m was needed in
order to insure that the countable set of eigenvalues of H(ω) was m negligible. In the
random case we know from Proposition 1.2.8 that a given λ is P-almost surely not an
eigenvalue of H(ω) and consequently we can assume without any loss of generality that
the measure m is continuous.
Remark 3.1.2 The above result has the following striking consequence: if one picks
ω0 in Ω then for P almost all ω the spectral measure σ(ω0 ) is orthogonal to the spectral
measure σ(ω) in the sense that they are carried by disjoint Borel subsets of R. This
suggests that the mapping ω ֒→ σ(ω) is somewhat chaotic!
This Proposition has a direct application to the independent and Markov models since
we stated in Chapter I a very simple condition insuring that γℓ is positive in such
models.
Theorem 3.1.3 Assume that
96CHAPTER 3. SPECTRAL THEORY OF RANDOM SCHRÖDINGER OPERATORS
• In the independent case, the potentials are not constant.
• In the Markov case, that the support of the distribution of h(X(0)) contains a
subset A of Rℓ of the form A1 × A2 × . . . × Aℓ such that each Ai contains more
than one point.
Then for P almost all ω the spectrum of H(ω) has no absolutely continuous part.
Proof:
Direct application of the above Proposition and the positivity of γℓ implied by Theorems
1.4.37 and 1.5.13. (Obviously the Goldsheid-Margulis theory is not necessary in the
one dimensional situation studied by Ishii and Pastur).
3.2
Localization for a.c. Potentials
For any value λ of the energy we have already defined the propagator Uλ (t) in both the
discrete and continuous cases and also the associated Laplace operator Tλ in the independent and Markov models. The spectral properties of this operator have been studied
in the chapter I but one now needs some uniformity with respect to the parameter λ.
We first prove the following general result:
Proposition 3.2.1 Let λ ֒→ Tλ be a continuous mapping from the compact metric set
I in the set of bounded operators on a Banach space and denote by r(λ) the spectral
radius of the operator Tλ . Then we have:
(i) The function λ ֒→ r(λ) is upper semi-continuous on I.
(ii) If r(λ) < δ for any λ ∈ I then there exist ρ < δ and a constant C such that:
kTλn k ≤ Cρn
for n = 1, 2, · · · , λ ∈ I
Proof:
Let us set rn (λ) = kTλn k, then rn (λ) is a continuous function and the sequence log rn (λ)
is subadditive. It follows that log r(λ) = lim n1 log rn (λ) = inf n n1 log rn (λ) is upper
semi-continuous and this proves (i). The upper semicontinuous function r attains its
maximum on the compact set I hence there exists ρ′ < δ such that r(λ) ≤ ρ′ on I.
From ρ′ = max(ρ′ , r(λ)) one deduces that ρ′ is the pointwise limit of the decreasing
sequence max(ρ′ , inf 1≤k≤n rk (λ)). The Dini’s theorem implies that this limit is uniform
on the compact set I, hence for ρ′ < ρ < δ there exists an integer N such that
inf rk (λ) ≤ ρ
1≤k≤N
for all λ ∈ I
The conclusion now follows from the subadditivity of the sequence log rn (λ)
IN A STRIP
3.2. LOCALIZATION FOR A.C. POTENTIALS
97
Proposition 3.2.2 Let T be a compact operator of spectral radius one on some Banach
space. Then the sequence kT n k is bounded if and only if for any eigenvalue λ of modulus
1 one has ker(T − λI) = ker(T − λI)2 .
Proof:
One can write T n = Qn + N n where the spectral radius of Q is strictly less than 1 and
N is the restriction of T to the sum of the characteristic spaces corresponding to the
eigenvalues of modulus one. The Jordan form of N allows immediately to say that N n
is bounded iff N is diagonalizable.
For a fixed Schrödinger operator H we have proved in Chapter II that localization
occurs as soon as one can prove the exponential decay of some real sequence. In the
random case we have to handle a sequence of measurable functions of ω. Roughly
speaking the following lemma says that if the expectations of a sequence of function of
ω form a real exponentially decreasing sequence then the sequence of functions is itself
almost surely exponentially decreasing.
Lemma 3.2.3 Let fn (ω) be a sequence a measurable positive functions such that for
some finite constants C and ρ one has:
E{fn } ≤ Cρn
Then for any δ > ρ there exists an P-almost everywhere finite random variable C(ω)
such that
fn (ω) ≤ C(ω)δn
Proof:
P
−n } < +∞ and we define the random variable C(ω) by the formula
One has P
E{ ∞
n=0 fn δ
∞
C(ω) = n=0 fn (ω)δ−n . This random variable is integrable hence P-almost everywhere
finite and the proof is complete.
For the reader convenience we will give separate proofs of localization in the three
following models. First we investigate the simplest case of i.i.d. potentials on the one
dimensional lattice. Secondly, using the same techniques, we prove localization in the
one dimensional Markov model (discrete and continuous). Finally we extend these
proofs to the discrete model in the strip with i.i.d. potentials.
3.2.1
The discrete i.i.d. model
We assume in this subsection that the common distribution of the potentials is absolutely continuous with respect to the Lebesgue measure on R and has a second order
moment.
98CHAPTER 3. SPECTRAL THEORY OF RANDOM SCHRÖDINGER OPERATORS
Recall that we define the probability measure µλ on the group SL(2, R) as the law of
the matrix
V (n) − λ −1
1
0
where V (n) is the potential at the site n. If χ is a µλ integrable cocycle on the projective
line B then the Laplace transform Tχ,λ is defined on the set of continuous functions on
B by the formula:
Z
Tχ,λ f (x̄) =
f (gx̄)χ(g, x̄)dµλ (g)
Let m be the Cauchy measure on B, and denote by χ the Radon Nikodym cocycle:
χ(g, v̄) =
kvk 2
dg−1 m
(v̄) = (
)
dm
kgvk
This cocycle will be fixed in all this subsection, hence for a real δ we will use the notation
Tδ,λ for the Laplace transform associated to the cocycle χδ/2 . Since χ(g, v̄) ≤ kgk2 the
existence of a second order moment for the law of the potentials implies that Tδ,λ is a
bounded operator on the space C(B) for any λ ∈ R and 0 ≤ δ ≤ 2.
Proposition 3.2.4 For any δ ∈ [0 , 2] the function λ ֒→ Tδ,λ maps continuously the
real line in the Banach space of bounded operators on C(B).
Proof:
If we denote by ψ(s) the density of the law of the potentials one can write:
Z +∞
(1 + s2 )δ/2 |ψ(s − λ) − ψ(s − λ′ )|ds
kTδ,λ′ f − Tδ,λ f k ≤ kf k
−∞
and the result is a consequence of the Lebesgue theorem since we assume the existence
of a second order moment.
The following theorem summarizes the essential results about the Laplace transform.
Most of them have been previously proved in Chapter I.
Theorem 3.2.5 For any real energy λ and 0 ≤ δ ≤ 2 let us denote by r(δ, λ) the
3 is a compact operator on C(B) and we
spectral radius of the operator Tδ,λ . Then Tδ,λ
have the decompositions:
3n
f
T0,λ
= νλ (f ) + Qn0,λ (f )
3n
T2,λ
f
= m(f )φ̆λ + Qn2,λ (f )
where νλ is the unique µλ invariant probability measure on B, φ̆λ is the density of ν̆λ
with respect to m and the operators Q0,λ , Q2,λ have a spectral radius strictly less than
one. Moreover r(1, λ) < 1
IN A STRIP
3.2. LOCALIZATION FOR A.C. POTENTIALS
99
Proof:
We first remark that if the law of the potentials has a density with respect to the
Lebesgue measure then µ3λ has a density on the group SL(2, R) with respect to the
Haar measure. This can be deduced from the general result of Proposition 1.4.36 but
in this situation, a simple computation of Jacobian gives the result since one has:
x −1
y −1
z −1
xyz − z − x −yz + 1
=
1 0
1 0
1 0
xy − 1
−y
a b
→ (b, c, d) (which works for d 6= 0) then the Jacobian
c d
of the mapping (x, y, z) → (b, c, d) of R3 into itself, is equal to y 2 . The image of an
absolutely continuous measure by this mapping is still absolutely continuous. This
yields the answer since the Haar measure on SL(2, R) is equivalent to the Lebesgue
2 is a
measure on the three parameters (b, c, d). (Actually a direct proof shows that Tδ,λ
compact operator on C(B)).
3 is a compact aperiodic operator
It follows from Propositions 1.3.9 and 1.3.14 that Tδ,λ
on C(B). The decompositions have been obtained in Proposition 1.3.12. To prove the
lasr assertion we use the fact that δ ֒→ log r(δ, λ) is a convex function on [0, 2] which
is nul for δ = 0 and δ = 2. Moreover Proposition 1.3.15 implies that for any λ there
exists a positive number α with r(α, λ) < 1 thus one can conclude that r(1, λ) < 1.
If one chooses the chart
Corollary 3.2.6 Let I be a compact subset of the real line. Then there exist constants
C1 , C2 and ρ < 1 such that for n ∈ N and λ ∈ I we have :
n
k ≤ C1 ρn
kT1,λ
,
n
k ≤ C2
kT2,λ
Proof:
Using Propositions 3.2.1 and 3.2.4 it is enough to prove that for each λ the spectral
radius of T1,λ is strictly less than 1. But this is a direct consequence of Proposition 1.3.13
and the positivity of the Lyapunov exponent.
Let λn be a sequence converging to λ, then any weak limit point of the sequence νλn
will be µλ invariant and thus equal to νλ from the unicity of the invariant measure.
This implies the weak continuity of the mapping λ ֒→ νλ from the real line to the set
of Radon measures B.
This result also implies the continuity of the mapping λ ֒→ φ̆λ from the real line to C(B).
From these facts and the continuity of the mapping λ ֒→ Tδ,λ proved in Proposition 3.2.4
it follows that the mappings λ ֒→ Q0,λ and λ ֒→ Q2,λ are continuous. (In fact these
properties can be obtained directly from the theory of continuous perturbations of
operators, see Kato[54]). The second uniform bound is then a direct consequence of
Proposition 3.2.5.
100CHAPTER 3. SPECTRAL THEORY OF RANDOM SCHRÖDINGER OPERATORS
Theorem 3.2.7 Let {V (n) ; n ∈ Z} be an i.i.d. sequence of real random variables
whose common law is absolutely continuous and has a second order moment. Then for
P almost all ω the operator H(ω) has a pure point spectrum.
Proof:
Taking in account the criterion 2.3.8 and the lemma 3.2.3 one has only to show that
for a fixed angle α and any open bounded set I, there exist a finite constant C and a
positive number ρ < 1 such that for any box Λ = [−b, b − 1] one has for n ∈ Λ :
Z
kUλ (0, −b)αk2
} dλ ≤ Cρ|n|
E{kUλ (n, −b)αk
2
kU
(b,
−b)αk
λ
I
Let us first consider the case n ≥ 0. If we compute the expectation by successive
integration with respect to the sites in [n, b − 1], [0, n − 1] and [−b, −1], the left hand
side of the above inequality can be rewritten as:
Z b−n
b
n
T0,λ
T1,λ
T2,λ
1 (α) dλ
I
where 1 is the function identically equal to one.
For n ≤ 0 the computation of the expectation by successive integration with respect to
the sites [0, b − 1], [n, −1] and [−b, n − 1] implies that the left side can be rewritten as:
Z b−|n| |n| b
T0,λ T1,λ T2,λ
1 (α) dλ
I
Taking in account that T0, λ is a Markov operator and the uniform bounds proved in
Proposition 3.2.6 one obtains the result.
Remark 3.2.8 It is possible to prove localization using the operators Tδ,λ acting on
the space L2 (m) rather than on C(B). Up to now we have fixed the angle α to apply
Theorem 2.3.8. It is easy to see, using the Fatous’lemma, that one can integrate in
the variable α with respect to the Cauchy measure m in the formula appearing in this
Theorem and then obtain a new form of this criterion:
Let I be an open bounded interval of the real line. Assume that there exist two constants
0 < ρ < 1 and C < ∞ such that for each box Λ = [−b, b − 1] and n ∈ Λ one has :
Z Z
−2
kU (n, −b)αk kU (0, −b)αk kU (b, −b)αk dm(α) dλ ≤ Cρ|n|
I
Then σ is pure point on I and for any ρ < δ < 1 and any eigenvalue λ ∈ I there exists
C(λ) < ∞ such that the associated eigenfunction ψ satisfies :
|ψ̃(t)| ≤ C(λ)δ|t|
IN A STRIP
3.2. LOCALIZATION FOR A.C. POTENTIALS
101
Hence in the proof of Theorem 3.2.7 it is enough to prove that one has (in example for
n ≥ 0):
Z Z b−n
b
n
T0,λ T1,λ T2,λ 1 (α) dm(α) dλ ≤ Cρn
I
B
where 1 is the function identically equal to one.
Under this last form one sees immediately that one can use the same estimates that in
Corollary 3.2.6 but with the L2 (m) norm that we will denote by k k2 rather than with
the C(B) norm that we will now denote by k k∞ .
Using the duality relation in Proposition 1.3.8 (i) that is
< T̆δ,λ f , h >m =< T2−δ,λ h , f >m
we obtain that kT2,λ k2 = kT̆0,λ k2 Moreover for f ∈ L2 (m) one can write:
Z
2
|T0,λ f (α)| ≤
|f |2 (gα) dm(α) dµλ (g)
Z
=
|f |2 (α)χ(g, α) dµ̆λ (g) dm(α)
≤ kf k22 kT̆2,λ k∞
It follows from the estimates of Corollary 3.2.6 that on a compact set I of energy then
n and T n are uniformly bounded.
the L2 norms of T0,λ
2,λ
On the other hand it is proved in [27] that the spectral radius of T1,λ acting on L2 (m)
is equal to 1 iff the closed subgroup Gµλ generated by the support of µλ in SL(2, R) is
amenable. In our situation this subgroup is the whole of SL(2, R) (See Lemma 1.4.32),
which is not amenable. An other way to obtain that the spectral radius is strictly less
than one is to check the compactness of the operator T1,λ acting on L2 (m) and that its
eigenfunctions are actually continuous. Hence its spectral radius on L2 (m) and C(B)
n follows from our estimates on C(B).
are the same, and the exponential decrease of T1,λ
Remark 3.2.9 In the proof of localization by Kunz & Souillard in [70] and Delyon,
Kunz & Souillard in [23] the authors use the same kind of estimates than above but
applied to some operators Tδ,λ defined as follows:
Tδ,λ f (v̄) =
Z
f (gv̄)ζ(g, v̄)δ dµλ (g) with ζ(g, v̄) =
| < v , e1 > |
| < gv , e1 > |
where e1 is the first basis vector of R2 . If we denote by r the density of the distribution
of potential and if the projective line minus the point at infinity is identified to the real
102CHAPTER 3. SPECTRAL THEORY OF RANDOM SCHRÖDINGER OPERATORS
line by the mapping v =
a
b
֒→ x =
Tδ,λ f (x) =
a
b
Z
this operator takes the form:
1
1
f (t)r(λ + t + ) dt
δ
|t|
x
These operators also enjoy a “duality” property of the following form: Let U be the
unitary operator of L2 (dx) associated to the mapping x ֒→ x1 . If one define the operators
Z
1
1
f (t)r(λ + x + ) dt
T̆δ,λ f (x) =
δ
|t|
t
a simple change of variables yields
U ∗ Tδ,λ U = T̆2−δ,λ
The “pseudo-cocycle” ζ is singular and is no longer rotation-invariant, hence the corresponding “pseudo-Laplace transform” cannot be easily studied on C(B). Nevertheless
the same estimates that we obtained in this section can be proved for these operators
acting on the space L2 (R, dx) if we identify the projective line minus infinity to the real
line.
We end the study of the one-dimensional i.i.d. case by a formula relating the density of
states and the eigenfunctions of the Laplace operator. The “distribution of states” N
studied in Chapter IV can be defined as the averaged spectral measure N = E{σn } =
1
1
2 E{σ}. (We put the factor 2 in order to obtain a probability measure).
Proposition 3.2.10 Under the hypothesis of Theorem 3.2.7 the probability measure N
has a continuous density with respect to the Lebesgue measure given by the formula:
Z
1
dN
=
φλ (b)φ̆λ (b)dm(b)
dλ
2π B
where φλ and φ̆λ are the densities of νλ and ν̆λ with respect to the Cauchy measure m
on B.
Proof:
Let h be a real continuous function with compact support. The Lebesgue theorem
and the approximation of the spectral measure computed in Proposition 2.2.2 imply
immediately that for any nonzero vector v ∈ R2 one has:
Z
Z
kUλ (0, −b)vk2
1
dλ
h(λ)E{
h(λ)dN (λ) = lim
b→+∞ 2π
kUλ (b, −b)vk2
Z
1
b
b
1 (v̄) dλ
T2,λ
h(λ) T0,λ
= lim
b→+∞ 2π
IN A STRIP
3.2. LOCALIZATION FOR A.C. POTENTIALS
103
n and T n given in Proposition 2.2.2
It only remains to use the decompositions of T2,λ
0,λ
to obtain :
b
b
b
φ̆λ + Qb2,λ 1
1 = T0,λ
T2,λ
T0,λ
= νλ (φ̆λ ) + νλ Qb2,λ 1 + Qb0,λ φ̆λ + Qb0,λ Qb2,λ 1
It has been proved in Proposition 3.2.6 that the mappings λ ֒→ Q2,λ and λ ֒→ Q0,λ
are continuous. Since the spectral radius of these operators is strictly less than one the
uniform bound proved in Lemma 3.2.1 yields the formula.
The continuity of the density of N is a consequence of the continuity of the mapping
λ ֒→ φλ and λ ֒→ φ̆(λ) proved in Proposition 3.2.6.
3.2.2
The Markov model
In the one dimensional Markov case (both discrete and continuous) the proof of localization is essentially the same as for independent potentials since this proof also reduces
to check some spectral properties of the Laplace transform.
For T = R or Z let {X(t); t ∈ T } be an ergodic“invertible” Markov process with values
in the compact space M satisfying all the assumptions of Section 1.5. The potentials
are of the form V (t) = F (X(t)) for F a real function defined on M (this is a slight
modification of the notations of Section 1.5 in which F was a SL(2, R) valued function).
It has been proved in [19] Proposition V.3.5. that if F has a lower bound equal to zero
then the spectrum of the associated ergodic family of Schrödinger operators is P-almost
surely equal to [0 , ∞).
Denoting by B the the projective line we have already constructed the Markov process
{(X(t), Uλ (t)b); t ∈ T } in (M × B).
For a cocycle χ on B , t ∈ T + , f ∈ C(M × B) the Laplace transform is defined by:
t
f (x, b) = Ex {f (X(t), Uλ (t)b)χ(Uλ (t), b)}
Tχ,λ
t
f (x, b) = Ex {f (X(−t), Uλ (−t)b)χ(Uλ (−t), b)}
T̆χ,λ
For t = 1 we drop the superscript 1. Let m be the Cauchy measure on B, and let us
choose for χ the usual Radon Nikodym cocycle:
χ(g, v̄) =
kvk 2
dg−1 m
(v̄) = (
)
dm
kgvk
The cocycle χ will be fixed in all this subsection. For a real δ we set Tδ,λ for the Laplace
transform associated to the cocycle χδ/2
We will choose at times to represent the projective line as the torus T = [0, π] to be in
accordance with earlier papers on the subject. As we said above the proof of localization
104CHAPTER 3. SPECTRAL THEORY OF RANDOM SCHRÖDINGER OPERATORS
will works for a large class of Markovian models as soon as the operators associated to
the Laplace transform satisfy some conditions introduced in the following definition.
t on the Banach space C(M × B).
Definition 3.2.11 Let us consider the operators Tδ,λ
We denote by P the set of following properties :
1. For any real λ and δ ∈ [0, 2], Tδ,λ is a compact operator and the mapping λ ֒→ Tδ,λ
is continuous.
2. For any positive λ, the operator T0,λ admits a unique invariant probability measure
on M × B say νλ and moreover T0,λ is aperiodic.
3. For any real λ there exists δ > 0 such that the spectral radius of Tδ,λ is strictly
less than 1.
We call dual properties the same assumptions on the dual operators T̆δ,λ .
These properties are easily obtained in the particular case of a Brownian motion X(t)
on the torus M assuming that F is a Morse function, that is:
• F is C ∞ function and inf x∈M F (x) = 0.
k
• For any point x in M there exists an integer k with d Fk (x) 6= 0
dx
Proposition 3.2.12 Let X(t) be a Brownian motion on the torus M . Then one has:
1. The infinitesimal generator of the process (X(t), U (t)ϑ) is given by:
1
1 ∂2
∂
L = X2 + Y = −
+ (cos2 ϑ + (λ − F (x) sin2 ϑ)
2
2 ∂x2
∂ϑ
2. If F is a Morse function then the Lie Algebra generated by the vector fields X
and Y is two dimensional at any point of M × B.
Proof:
The Prüffer variable ϑ(t) = Uλ (t)ϑ introduced in Chapter III satisfies the following first
order differential equation:
ϑ̇(t) = cos2 ϑ(t) + (λ − V (t)) sin2 ϑ(t)
This implies immediately the first formula, since for a C 2 function f with compact
support on M × B one can write:
f (X(t), ϑ(t)) − f (x, ϑ) = f (X(t), ϑ(t)) − f (x, ϑ(t)) + f (x, ϑ(t)) − f (x, ϑ)
IN A STRIP
3.2. LOCALIZATION FOR A.C. POTENTIALS
105
Applying the taylor formula at the second order to the first part of the difference, the
relations
1
1
lim E{(X(t) − x)} = 0 , lim E{(X(t) − x)2 } = 1
t→0 t
t→0 t
yield immediately that
∂2
1
f (x, ϑ)
lim (f (X(t), ϑ(t)) − f (x, ϑ(t))) =
t→0 t
∂x2
In the same way one can write that
Z t
1
1
lim E{(ϑ(t) − ϑ)} = lim E{ ϑ̇(s)ds}
t→0 t
t→0 t
0
= ϑ̇(0) = cos2 ϑ + (λ − F (x) sin2 ϑ
To prove 2. we remark that if we compute k successive Lie brakets one obtains:
[X, [X, [. . . [X, Y ]]] . . .] = − sin2 ϑ
dk F
∂
(x)
k
dx
∂ϑ
∂
and the
Thus the result is clear if sin ϑ 6= 0 and for sin ϑ = 0 we remark that Y = ± ∂ϑ
conclusion follows.
Proposition 3.2.13 Let us assume that X(t) is a Brownian motion on the torus M
and that F is a Morse function. Then the property P and its dual, in Definition 3.2.11,
hold.
Proof:
Proposition 3.2.12 and the Hörmander’theorem imply that the infinitesimal generat ; t ≥ 0} has
tor L is hypoelliptic. It follows from [50] that the semigroup {T0,λ
a density ptλ ((x, ϑ), (x′ , ϑ′ )) which is jointly continuous with respect to the variables
(t, λ, x, ϑ, x′ , ϑ′ ) (and C ∞ with respect to (x, ϑ, x′ , ϑ′ )). This property implies immet since these operators have continuous
diately the compactness of the operators Tδ,λ
t .
kernels and also the continuity of the mapping λ ֒→ Tδ,λ
It is also proved in [50] that the Markov chain associated to the operator T0,λ is Harris
recurrent on M ×B with respect to the Riemannian measure. This yields the uniqueness
of the invariant measure and the aperiodicity of the transition operator.
The uniqueness of the invariant measure is also a consequence of Proposition 1.5.15
which also implies that the Lyapunov exponent is positive, hence condition 3. in property P by the standart argument of Proposition 1.5.17.
Actually these properties hold in the general case of a Brownian motion on a connected
compact Riemannian manifold M .
106CHAPTER 3. SPECTRAL THEORY OF RANDOM SCHRÖDINGER OPERATORS
Proposition 3.2.14 Assuming the property P and its dual one has the decompositions:
n
T0,λ
f
= νλ (f ) + Qnλ,0 f
n
f
T2,λ
= (π ⊗ m)(f )ψλ + Qnλ,2 f
In these formulas m is the normalized Lebesgue measure on [0, π], ψλ is the density
with respect to π ⊗ m of the invariant probability measure ν̆λ of the operator T̆0,λ and
and the operators Q0,λ , Q2,λ have a spectral radius strictly less than 1.
Proof:
Direct consequences of Proposition 1.5.21.
Proposition 3.2.15 Let us assume that the property P and its dual hold, then for a
bounded interval I there exist finite constants C1 , C2 and ρ < 1 such that for n ∈
N , λ ∈ I one has:
(P1 ):
n k ≤ C ρn
kT1,λ
1
(P2 ):
n k≤C
kT2,λ
2
Proof:
The proof is exactly the same as in Proposition 3.2.6.
Proposition 3.2.16 Let us assume that the properties (P1 ) and (P2 ) of Proposition 3.2.15 hold and let I be an open bounded interval. Then for P-almost all ω the
spectrum of H(ω) is pure point on I with exponentially decaying eigenfunctions.
Proof:
The proof is quite the same as in the independent case if one replaces the successive
integrations by conditional expectations. If we consider the Markov process Zt+ = Zt−b
then the Markov property can be written:
s
+
f (Zt+ )
)|F−b,t } = kUλ (t, −b)αkδ Tδ,λ
Ex {kUλ (t + s, −b)αkδ f (Zt+s
Recall (Proposition 2.3.8) that a sufficient condition for exponential localization on the
open interval I is the following :
For a fixed angle α there exist a finite constant C and ρ < 1 such that for any box
Λ = [−b , b] and n ∈ Λ one has:
Z
I
E{kUλ (n, −b)αk
kUλ (0, −b)αk2
} dλ ≤ Cρn
kUλ (b, −b)αk2
IN A STRIP
3.2. LOCALIZATION FOR A.C. POTENTIALS
107
Computing successive conditional expectations rather than expectations the same procedure as in the independent case allows to rewrite the right hand of the above inequality
as:
Z b−|n| |n| b
1 (x, α) dπ(x)dλ
f or − b ≤ n ≤ 0
T0,λ T1,λ T2,λ
I
Z b−n
n
b
T1,λ
T2,λ
1 (x, α) dπ(x)dλ
f or 0 ≤ n ≤ b
T0,λ
I
n k and kT n k on the interval I yield the result.
The uniform bounds of kTλ,2
λ,1
It is again possible to build a proof using the Laplace transform acting on L2 (M ×
B, dx ⊗ dm) rather than on C(M × B). This is what is done by Royer in [93]
Assuming the same hypotheses, one can obtain an explicit formula for the averaged
spectral measure.
Proposition 3.2.17 Let us assume that the property P and its dual hold, then the averaged spectral measure N = E{σ} has a continuous density with respect to the Lebesgue
measure given by:
Z
1 π
dN
=
ψλ (x, α)ψ̆(x, α)dπ(x)dα
dλ
π 0
Proof:
Let φ be a continuous function with compact support, and Λ be the box [−b, b]. One
has:
lim E{σαΛ (φ)}
Z
= lim E φ(λ)kUλ (0, −b)αk2 kUλ (b, −b)α−2 k dλ
b→+∞
Z
b
b
= lim
1 (x, α)dπ(x)dλ
φ(λ) T0,λ
T2,λ
N (φ) =
b→+∞
b→+∞
The proof is now the same that in the independent case.
It is interesting to discuss the assumptions made in these two subsections and their
relationship with the existence of a density for the distribution of potentials.
Localization occurs in the Markovian model as soon as properties P1 , P2 appearing in
Proposition 3.2.15 hold. In fact the argument given below shows that P2 is the most
important requirement. First we remark that one can replace in Proposition 2.3.8 the
term kUλ (n)k by kUλ (n)ks for a parameter 0 < s < 2 without changing the proof. It
108CHAPTER 3. SPECTRAL THEORY OF RANDOM SCHRÖDINGER OPERATORS
n by the
follows than we can replace in the above proof of localization the operator T1,λ
n
operator Ts,λ . If we look to Propositions 1.5.17 and Theorem 1.5.10 which require only
strong irreducibility and contractivity (and this is true in the one dimensional case as
soon as the potentials are not constant !) then we see that for a compact interval I
there exist a real s > 0, a finite constant C and ρ < 1 such that for n ∈ N and λ ∈ I
one has:
n
kTs,λ
k ≤ Cρn
The uniformity of the constants with respect to λ in a compact set can be obtained by
improving the uniform convergence stated in Theorem 1.5.10 (ii).
It follows that the real strong assumption in this subsection is the uniform bound of
n k.
kT2,λ
3.2.3
The discrete i.i.d. model on the strip
The proof of localization on the strip is a bit more involved than in the one dimensional
case. This is due to the fact that matrices are not so easily tractable as real numbers.
Nevertheless, the strategy of the proof is the same, namely localization occurs as soon
as the the spectral radius of operators defining the Laplace transforms satisfy some
requirements.
In this subsection we assume that the potentials {Vi (n); i = 1 · · · ℓ, n ∈ Z} form an
i.i.d. family of random real variables whose law is absolutely continuous on R and has
a moment of order ℓ(ℓ + 1). Recall that in this situation we define the probability
measure µλ on the symplectic group SP (ℓ, R) as the law of the 2ℓ × 2ℓ matrix
V (n) − λIℓ −Iℓ
Iℓ
0
where V (n) is the ℓ × ℓ potential matrix associated to the Schrödinger operator on a
strip of width ℓ. If χ is a µλ integrable cocycle on a boundary B then the Laplace
transform Tχ,λ is then defined on the set of continuous function on B by the formula:
Z
Tχ,λ f (b) = f (gb)χ(g, b) dµλ (g)
We have already computed the Poisson kernel of the boundary Lℓ−1 say rℓ−1 and it is
given by:
kyk ℓ+2
dg−1 mℓ−1
(ȳ) = (
)
rℓ−1 (g, ȳ) =
dmℓ−1
kgyk
The Poisson kernel of the boundary Lℓ−1,ℓ say rℓ−1,ℓ is given by:
rℓ−1,ℓ (g, (ȳ, x̄)) = (
kyk ℓ kxk 2
)(
)
kgyk kgxk
IN A STRIP
3.2. LOCALIZATION FOR A.C. POTENTIALS
109
We will consider the cocycle χ(g, (x̄, ȳ)) on Lℓ−1,ℓ defined by:
−1
χ(g, (ȳ, x̄)) = rℓ−1,ℓ (g, (ȳ, x̄))rℓ−1
(g, x̄) = (
2
kyk −2 kxk 2
) (
)
kgyk
kgxk
2
One has r̄ℓ−1 (g) ≤ kgkℓ +ℓ−2 , rℓ−1,ℓ (g) ≤ kgkℓ +ℓ and χ̄(g) ≤ kgk4ℓ−2 thus if the distribution of the potentials has a moment or order ℓ(ℓ + 1) all these cocycles are integrable.
We define the operator Tδ,λ as the Laplace operator on C(Lℓ−1,ℓ ) associated to the
cocycle χδ/2
In the same way than in the preceding subsection, we introduce the following definition:
Definition 3.2.18 For a fixed interval I we consider the following properties:
(P1 )
n k ≤ C ρn
kT1,λ
1
λ ∈ I, n ∈ N
(P2 )
n k≤C
kT2,λ
2
λ ∈ I, n ∈ N
In order to work with the cocycle χ we need to introduce the following notations. Let
(e1 , · · · , e2ℓ ) be the canonical basis of Rd then we set:
ui = e1 ∧ · · · ∧ ei−1 ∧ ei+1 · · · eℓ , u = e1 ∧ · · · ∧ eℓ
We will also denote by u the d × ℓ matrix I0ℓ . Anymatrix g ∈SP (ℓ, R) can be written
A
B
g = g̃s where g̃ is orthogonal and s is of the form
0 A′−1
Lemma 3.2.19 Let g and h be two matrices in the group SP (ℓ, R). Then one has:
P
(i) tr((u′ g′ gu)−1 ) = ℓi=1 χ(g, (ūi , ū))
P
(ii) tr((u′ h′ hu)(u′ h′ g′ ghu)−1 ) = ℓi=1 χ(g, h̃(ūi , ū))
Proof:
The formula (i) is a simple consequence of the definition of the cocycle χ and of the
computation of the inverse of a matrix by means of cofactors.
To prove (ii) we remark that hu = h̃uA, then
u′ h′ hu = A′ A , u′ h′ g′ ghu = A′ u′ h̃′ g′ gh̃uA
The invariance of the trace by exchange of the factors imply that the left part of (ii)
is equal to tr((u′ h̃′ g′ gh̃u)−1 ) and it remains to apply the formula (i) to the matrix gh̃
taking in account the cocycle relation χ(g1 g2 , b) = χ(g1 , g2 b)χ(g2 , b) and the identity
χ(k, (ūi , ū)) = 1 for any orthogonal matrix k.
110CHAPTER 3. SPECTRAL THEORY OF RANDOM SCHRÖDINGER OPERATORS
Theorem 3.2.20 Let I be a bounded open interval and let us assume that the properties P1 and P2 hold on I. Then for P-almost all ω the spectrum of H(ω) is pure point
on I with exponentially decreasing eigenfunctions.
Proof:
For a positive integer b and for n ∈ [−b, b] let us set:
Γn = u′ Uλ′ (n, −b)Uλ (n, −b)u
Applying the criterion of exponential localization given in Proposition 2.3.12 we have
only to to prove that there exists a finite constant C and ρ < 1 such that for any box
Λ = [−b , b] and n ∈ Λ one has:
Z
q
|n|
−1
E{ tr(Γ0 Γ−1
(∗)
b )tr(Γn Γb )} dλ ≤ Cρ
I
For notational convenience we will set:
X = Uλ (n, −b) , Y = Uλ (0, n) , Z = Uλ (b, 0)
X = Uλ (0, −b) , Y = Uλ (n, 0) , Z = Uλ (b, n)
From the Proposition 3.2.19 we obtain that:
Pℓ
χ(ZY, X̃(ūi , ū))
−1
tr(Γ0 Γb ) =
Pℓi=1
˜
i=1 χ(Z, Y X(ūi , ū))
Pℓ
˜
i=1 χ(Z, Y X(ūi , ū))
tr(Γn Γ−1
)
=
P
ℓ
b
i=1 χ(ZY, X̃(ūi , ū))
if − b ≤ n ≤ 0
if 0 ≤ n ≤ b
if n ≥ 0
if n ≤ 0
if n ≥ 0
if n ≤ 0
It follows, using the Cauchy-Schwarz inequality that for any n :
q
−1
tr(Γ0 Γ−1
b )tr(Γn Γb )
≤
=
ℓ
X
χ1/2 (Z, Y ˜X(ūi , ū))χ1/2 (ZY, X̃(ūj , ū))
i,j=1
ℓ
X
χ1/2 (Z, Y ˜X(ūi , ū))χ1/2 (Z, Y X̃(ūj , ū))χ1/2 (Y, X̃(ūj , ū))
i,j=1
Integrating only with respect to Z and using again the Cauchy-Schwarz inequality one
obtains:
E{χ1/2 (Z, Y ˜X(ūi , ū))χ1/2 (Z, Y X̃(ūj , ū))}
≤ E{χ(Z, Y ˜X(ūi , ū))}E{χ(Z, Y X̃(ūj , ū))}
(
a−n
k
if n ≥ 0
kT2,λ
≤
b
kT2,λ k
if n ≤ 0
IN A STRIP
3.2. LOCALIZATION FOR A.C. POTENTIALS
111
A new integration with respect to Y yields :
|n|
E{χ1/2 (Y, X̃(ūj , ū))} ≤ kT1,λ k
The result is now a direct consequence of the hypothesis of the theorem.
Theorem 3.2.21 Let {V (i, n) ; n ∈ Z ; i = 1 . . . ℓ} be an i.i.d. sequence of real random
variables whose common law is absolutely continuous and has a moment of order ℓ(ℓ+1).
Then for P almost all ω the operator H(ω) has a pure point spectrum on the strip of
width ℓ.
Proof:
It only remains to prove that under the above hypothesis then the properties P1 and
P2 hold for any compact set I. Since we have assumed that the law of the potentials
has a density if follows from Proposition 1.4.36 that for any λ the measure µ2ℓ+1
has
λ
ℓ(ℓ+1)
a density on SP (ℓ, R). Moreover using the integrability of kgk
it is easy to prove
in the same way than in Proposition 3.2.4 that the function λ ֒→ Tδ,λ is a continuous
maping from the real line to the Banach space of bounded operators on the space of
continuous functions on Lℓ−1,ℓ . Hence one can suppose that µλ has a density since we
2ℓ+1
in the proof of P1 and P2 . From the theory developped in
can replace Tδ,λ by Tδ,λ
Chapter IV we know that the operator T0,λ has a unique invariant measure say νλ on
Lℓ−1,ℓ . Moreover one has:
Z
log χ1/2 (g, (ȳ, x̄)) dµλ (g)dνλ (x̄, ȳ)
Z
kgyk
kgxk
− log
) dµλ (g)dνλ (x̄, ȳ)
=
(log
kxk
kyk
= (γ1 (λ) + . . . + γℓ−1 (λ)) − (γ1 (λ) + . . . + γℓ (λ))
= −γℓ (λ) < 0
The second equality follows from the fact that the invariant measure νλ projects on the
boundaries Lℓ−1 and Lℓ according to the respective invariant measures and the formula
giving the partial sums of Lyapunov exponents in Proposition 1.6.2. Proposition 1.3.5
then implies that for each λ there exists a strictly positive real α such that the spectral
radius of Tα,λ be strictly less than one.
On the boundary Lℓ−1 let us denote by Xλ the transition operator of the Markov chain
ȳ ֒→ gȳ and by Yλ the Laplace operator associated to the Radon-Nikodym cocycle
rℓ−1 . We denote by Zλ the Laplace operator on Lℓ−1,ℓ associated to the cocycle rℓ−1 .
It follows from the general theory of compact Laplace operators that Xλ and Y̆λ have
the same spectral properties, and the same conclusion holds for Z̆λ and T2,λ since
−1
χ = rℓ−1
rℓ−1,ℓ . Moreover it is readily seen that for any n then kY̆ n k = kZ̆ n k and
we can conclude that Xλ , Y̆λ , Z̆λ , T2λ have the same spectral radius which is obviously
112CHAPTER 3. SPECTRAL THEORY OF RANDOM SCHRÖDINGER OPERATORS
equal to one for the operator Xλ . If we denote by rλ (δ) the spectral radius of Tδ,λ
,we know that this function is convex, that rλ (0) = rλ (0) = 1 and that there exists a
positive α with rλ (α) < 1. One can conclude that rλ (1) < 1 and the Proposition 3.2.1
yields the property P1 .
The property P2 is more difficult to check since one cannot use directly the theory of
Radon-Nikodym cocycles as in the one dimensional case. Nevertheless we will prove in
Chapter IV (The Distribution of States) that 1 is the only eigenvalue of modulus one
of T2,λ and that this eigenvalue is simple. We observe that these properties are true for
Xλ hence for Y̆λ . It follows that P2 is true for these operators hence for the operator
Z̆λ . Then Proposition 3.2.2 and the fact that Z̆λ and T2,λ have the same characteristic
n k is a bounded sequence but this does not imply
spaces imply that for each λ then kT2,λ
any uniform bound in λ. Hence we have to refer to the result proved in the next chapter
to conclude.
3.3
Localization for Singular Potentials
For singular potentials , in example with a Bernoulli distribution, it is hopeless to try
n . Moreover, up to
to prove any uniform bound on the norm of the of the sequence T2,λ
now, no extension of the so called “Kotani’s trick” is working in this situation. The
only available proof is given in Annex (A) and the result is the following:
Theorem 3.3.1 Let H(ω) be a random Schrödinger operator in a discrete strip associated with an i.i.d. family of potentials such that:
1. The potentials are not constant.
2. There exists a positive α such that E{|V |α } < ∞.
Then for P almost all ω the spectrum of H(ω) is pure point with exponentially decaying
eigenfunctions.
IN A STRIP
Chapter 4
The Distribution of States in the
Strip
Contents:
1. The General Ergodic case.
2. The case of a.c. potentials
(a) Existence of a density
(b) A representation Formula for the Density of States.
(c) Smoothness
3. The case of singular potentials
1 Local Hölder continuity
2 The Super-Symmetric Method
The distribution of states of states is of physical importance for it can be measured
experimentally in some cases. It is a poor predictor of the spectral properties of random Schrödinger operators since it is known that there exist models with the same
distribution of states having a pure point spectrum or an absolutely continous spectrum ! Nevertheless many proofs of technical results crucial to the study of the spectral
properties of the random Hamiltonians, in particular for the localization in the case of
singular potentials, rely very heavily on estimates on the distribution of states.
113
114
CHAPTER 4. THE DISTRIBUTION OF STATES IN THE STRIP
In this chapter we only study the case of discrete ergodic random Schrödinger on a
strip and most of the regularity properties are in fact proved in the independent case.
4.1
The General Ergodic Case
We will assume throughout this chapter that there exists an ergodic dynamical (Ω, F, θt , P)
with T = Z and such that one has for i = 1 . . . ℓ
• The stationnarity condition Vi (n) = Vi (0) ◦ θn
• The integrability condition:
ℓ
X
i=1
E{log(1 + |Vi (0)|)} < ∞
(4.1)
Let N Λ be the the empirical probability distribution of the eigenvalues of H Λ in the box
Λ. It is not very difficult to prove (using the Birkhoff ergodic theorem, see Fukushima
[29]) that there exists a nonrandom probability distribution N called distribution of
states, such that:
1. the sequence N Λ of probability distributions converges weakly almost surely to
N.
2. One has N = 1ℓ E{σ(.)} and for almost all ω the spectrum of H(ω) is equal the
topological support of N .
3. The probability distribution N is continuous.
We begin with some useful integrability results in order to prove the so called “Thouless
formula”.
Proposition 4.1.1 Under the integrability condition (4.1), the function hλ (t) = log |t−
λ| is N integrable for any λ Rwith ℑmλ 6= 0. Moreover h+
λ (t) is N integrable for any
real λ and the function λ ֒→ hλ (t) dN (t) is a subharmonic in the complex plane.
Proof:
For a nonreal λ it is enough to prove that the function t ֒→ log(1 + |t|) is N integrable.
Let ± H(ω) be the diagonal operators ±2I + V and ± Nω be the associated distributions
4.1. THE GENERAL ERGODIC CASE
115
of states. Then it is readily seen that one has the following relations:
−
H(ω) ≤ H(ω) ≤+ H(ω)
+
Z
N (λ) ≤ N (λ) ≤− N (λ)
h(t) d± N (t) = E{h(±2 + V (0))|}
The last relation is a direct consequence of the Birkhoff ergodic theorem and holds for
any nonnegative Borel measurable function h with E{h(±2 + V (0))|} < ∞. It follows
from the second inequality that for an increasing function g one has N (g) ≤+ N (g)
and that for a decreasing function f one has N (f ) ≤− N (f ). If one chooses g(t) =
log(1 + t)1{t≥0} and f (t) = log(1 − t)1{t≤0} then we have:
Z
log(1 + |t|) dN (t) ≤+ N (g) +− N (f ) = 2E{log(3 + |V (0)|)}
and this yields the desired result.
For a real λ one has:
h+
λ (t) = 1{|t−λ|≥1} hλ (t) ≤ 1{|t−λ|≥1} log |t − (λ + iǫ)|
This implies immediately the quasi-integrability of hλ and the subharmonicity is a
consequence of the Fatou’s theorem and the subharmonicity of the function λ ֒→ log |t−
λ|.
Remark that this result is obvious in the case of bounded potentials since in this case
the probability measure N is compactly supported. We are now ready to prove the
so-called Thouless formula in the lattice case. We first prove it in the one dimensional
case and then in the strip.
Proposition 4.1.2 Let H(ω) be an ergodic family of Schrödinger operators on the one
dimensional lattice satisfying the integrability condition
E{log(1 + |V (0)|)} < ∞
Then for each real or complex number λ one has:
Z
γ(λ) = Log|t − λ| dN (t)
Proof:
Let Λ be a box of length n. One can restrict to the box Λ the argument of Proposition 4.1.1. Let NωΛ be the empirical probability distribution of eigenvalues of the
operator H Λ (ω). One has again to take care of the fact that for unbounded potentials
116
CHAPTER 4. THE DISTRIBUTION OF STATES IN THE STRIP
the measures E{N Λ } are not carried by a fixed compact set. Hence the convergence
of the integrals of the unbounded continuous function t ֒→ log |t − λ| for ℑmλ 6= 0
is not granted by the weak convergence of these measures to the distribution N . Let
± H Λ (ω) be the diagonal operators ±2I + V . We denote by ± N Λ the associated emω
pirical distribution of eigenvalues. Then it is readily seen that one has the following
relations:
−
H Λ (ω) ≤ H Λ (ω) ≤+ H Λ (ω)
NωΛ (λ) ≤ NωΛ (λ) ≤− NωΛ (λ)
Z
n−1
1X
f (V (k, ω) ± 2)
f (λ) d± NλΛ (λ) =
n
+
k=0
The last relation holds for any function f and obviously the two last relations remain
valid after taking expectations. Let m be an integer greater than 2, applying the above
inequalities to the decreasing function fm (λ) = log(1−λ)1{λ≤−m} and to the increasing
function gm (λ) = log(1 + λ)1{λ≥m} one obtains:
Z
Z
Λ
log(3 + |V (0, ω)|) dP(ω)
log(1 + |λ|) d(E{N })(λ) ≤ 2
|λ|≥m
|V (0,ω)|≥m−2
The crucial point in this inequality is the uniform integrability of the function log(1 +
|λ|) with respect to the averaged distributions E{N Λ } granted by the integrability of
log(1 + |V (0)|).
Let now λ be a complex number with ℑmλ 6= 0 and an integer n > 1. The functions
pλ (n) and qλ (n) are polynomials in λ with a leading coefficient of norm 1. Moreover
their roots are the eigenvalues of the operator H restricted to the boxes Λ = [0 , n − 1]
and Λ′ = [1 , n] respectively. It follows that:
Z
1
log |pλ (n)|
log |t − λ| dN Λ (t) =
n
Z
′
1
log |t − λ| dN Λ (t) =
log |qλ (n)|
n−1
Taking into account the above uniform integrability one obtains:
Z
1
1
lim E{log |pλ (n)|} = lim E{log |qλ (n)|} = log |t − λ| dN (t)
n→∞ n
n→∞ n
Hence we have proved that the expectation of the logarithm of theabsolute value
p(n + 1) q(n + 1)
of each entry of the propagator matrix U (n) =
has the same
p(n)
q(n)
asymptotic behavior.
1
Since we know that
R limn→∞ n log kUλ (n)k = γ(λ) we can conclude that for ℑmλ 6= 0
one has γ(λ) = log |t − λ| dN (t). Each hand side in this formula is a subharmonic
4.1. THE GENERAL ERGODIC CASE
117
function on the set of complex numbers. For the Lyapunov exponentR γ(λ) this is just
the result of Proposition V.4.5 in [19] and the subharmonicity of λ ֒→ Log|t−λ|dN (t)
is stated in Proposition 4.1.1. Since two subharmonic functions which are equal almost
everywhere with respect to the Lebesgue measure of the complex plane are equal (see
Proposition V.4.4 of [19]), we obtain the desired result.
In a similar way one can prove the “Thouless Formula” for the strip.
Proposition 4.1.3 Let H(ω) be an ergodic random Schrödinger operator on the strip
of width ℓ satisfying the integrability condition (4.1). Then for each real or complex
number λ one has:
Z
γ1 (λ) + . . . + γℓ (λ) = ℓ log |t − λ| dN (t)
Proof:
The only difference with the one dimensional proof is that we have now to deal with
the matrix Λℓ (Uλ (n)). Let α and β be two ℓ dimensional Lagrangian subspaces of Rd
and as usual let us denote by α and β two d × ℓ matrices whose column vectors form
a basis of these subspaces. We know from Proposition 2.1.2 that λ is an eigenvalue of
Λ if and only if one has:
Hα,β
det(β ′ Uλ (n) α) = 0.
Let λ be a complex number with ℑmλ 6= 0. Repeating the argument of Proposition 4.1.2
one obtains that
Z
1
′
E{log |det(β Uλ (n) α)|} = log |t − λ| dN (t)
lim
n→∞ nℓ
for any couple of Lagrangian vectors α and β.
Let e1 , . . . , ed be the canonical basis of Rd . For a subset I = {i1 , . . . , iℓ } of {1 . . . d} we
denote by eI the decomposable ℓ-vector ei1 ∧ . . . ∧ eiℓ . The above formula implies that
1
E{log |(Λℓ Uλ (n))I,J |} =
lim
n→∞ nℓ
Z
log |t − λ| dN (t)
for any couple (I, J) such that eI and eJ are Lagrangian. Unfortunately not all the
entries of the matrix Λℓ (Uλ (n)) are of this type. Nevertheless the Lagrangian linear
space Lag(ℓ) is generated by the Lagrangian decomposable ℓ-vectors and this linear
space is left invariant by any matrix of SP (ℓ, R). Repeating the argument of Proposition 1.4.24 one can conclude that the norm of Λℓ (g) computed on Λℓ (Rd ) is equal to the
norm computed in Lag(ℓ). Each entry of the matrix of the linear mapping Λℓ (Uλ (n))
restricted to Lag(ℓ) is of the form det(β ′ Uλ (n) α) for two Lagrangian ℓ-vectors α and
118
CHAPTER 4. THE DISTRIBUTION OF STATES IN THE STRIP
β. Hence each entry of this matrix has the same behavior and we can conclude that:
$$\gamma_1(\lambda) + \cdots + \gamma_\ell(\lambda) = \lim_{n\to\infty} \frac{1}{n}\, E\{\log\|\Lambda^\ell U_\lambda(n)\|\} = \ell \int \log|t-\lambda|\, dN(t)$$
As before we use the subharmonicity to extend this formula to a real λ.
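As a concrete check of Proposition 4.1.3, the following sketch (a strip of width 3, uniform potentials and a non-real energy, all chosen only for this illustration) computes $\gamma_1 + \cdots + \gamma_\ell$ as the exponential growth rate of the $\ell$-volume of a frame under the transfer matrices, accumulated through QR factorizations, and compares it with $\ell \int \log|t-\lambda|\, dN(t)$ obtained from the eigenvalues of $H$ in a box.

```python
import numpy as np

rng = np.random.default_rng(1)
l, lam = 3, 0.7 + 0.5j                      # width of the strip and a non-real energy (illustration)
n_lyap, n_box = 20000, 1000                 # lengths used for the two sides of the formula
V = rng.uniform(-1.0, 1.0, (n_lyap, l))     # i.i.d. potentials, uniform law chosen for the example

def block(n):
    # A_n = transverse Laplacian plus the potentials of column n
    return np.diag(V[n]) - np.diag(np.ones(l - 1), 1) - np.diag(np.ones(l - 1), -1)

# gamma_1 + ... + gamma_l as the growth rate of the l-volume of a frame,
# accumulated through QR factorizations of the symplectic transfer matrices.
Q = np.vstack([np.eye(l), np.zeros((l, l))]).astype(complex)
acc = 0.0
for n in range(n_lyap):
    T = np.block([[block(n) - lam * np.eye(l), -np.eye(l)],
                  [np.eye(l), np.zeros((l, l))]])
    Q, R = np.linalg.qr(T @ Q)
    acc += np.log(np.abs(np.diag(R))).sum()
lhs = acc / n_lyap

# l * integral of log|t - lam| dN(t), with N approximated by the eigenvalues of H in a box
H = np.zeros((n_box * l, n_box * l))
for n in range(n_box):
    H[n * l:(n + 1) * l, n * l:(n + 1) * l] = block(n)
    if n + 1 < n_box:
        H[n * l:(n + 1) * l, (n + 1) * l:(n + 2) * l] = -np.eye(l)
        H[(n + 1) * l:(n + 2) * l, n * l:(n + 1) * l] = -np.eye(l)
rhs = l * np.mean(np.log(np.abs(np.linalg.eigvalsh(H) - lam)))
print(lhs, rhs)                             # the two quantities should be close
```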
4.2 The case of a.c. potentials
4.2.1 Existence of the Density of States
In the independent case, when the distribution of the potentials has a bounded density,
a simple functional analytic argument shows that the integrated density of states is
actually absolutely continuous and its density is bounded. This old result is due to Wegner (see [104]) and is valid for the general Anderson model. It plays a crucial role
in the proof of localization for multidimensional systems for it will be the only way to
control the probability that a given energy λ is near the spectrum of the Hamiltonian
restricted to a given box.
Proposition 4.2.1 If the random potentials are i.i.d. with a distribution admitting a
bounded density, then the distribution of states is absolutely continuous and its density
is bounded.
Proof:
Let I be a bounded open interval. Then:
$$\nu(I) \le \liminf_{\Lambda \to \mathbb{Z}^d} E\{N^\Lambda(I)\}$$
so that the proof reduces to showing that the right hand side is bounded by the length
of the interval I times a constant independent of the box Λ. This is an immediate
consequence of Corollary 4.2.3 below which shows that when one takes the expectation,
if one integrates first with respect to the potential at one site, then the spectral measures
gain a bounded density, the bound depending only on the bound on the density of the
potential at one site.
Let $e_1, \cdots, e_d$ be an orthonormal basis of $\mathbb{R}^d$ and let $X$ be a symmetric real matrix of order $d$. Then each spectral measure $\sigma_k^X = \sigma_{e_k,e_k}^X$ of $X$ can be viewed as a function of the $d(d+1)/2$ entries of the matrix $X$. Let us denote by $x_1, x_2, \cdots, x_d$ the diagonal entries of the matrix $X$.
Lemma 4.2.2 Let $m$ be a probability measure on $\mathbb{R}$ with a density with respect to the Lebesgue measure, bounded by some constant $C$. Then for $k = 1, \cdots, d$ the averaged probability measure $\int \sigma_k^X\, dm(x_k)$ also admits a density with respect to the Lebesgue measure on $\mathbb{R}$ bounded by the same constant $C$.
Proof:
There is no loss of generality to suppose $k = 1$. One can write $X = \begin{pmatrix} x_1 & b' \\ b & S \end{pmatrix}$ where $b$ is a $(d-1)$ dimensional vector and $S$ is a $(d-1) \times (d-1)$ symmetric matrix. Let $\lambda$ be a complex number with $\Im m(\lambda) > 0$ and let us set:
$$u = (X - \lambda I)^{-1} e_1 = \begin{pmatrix} u_1 \\ v \end{pmatrix} \qquad \text{where } u_1 \text{ is a scalar}$$
Then from the spectral theorem one has:
$$\int \frac{d\sigma_1^X(t)}{t-\lambda} = u_1$$
If we write the set of equations satisfied by the vector $u$ one obtains:
$$x_1 u_1 + b'v = 1 + \lambda u_1 \qquad \text{and} \qquad u_1 b + Sv = \lambda v.$$
Let us denote by $h$ the $(d-1)$ dimensional vector $h = (S - \lambda I)^{-1} b$; then one has:
$$v = -u_1 h \qquad \text{and} \qquad u_1 = (x_1 - b'h - \lambda)^{-1}.$$
The spectral theorem applied to the matrix S yields:
$$b'h = \int \frac{d\sigma_{b,b}^S(t)}{t-\lambda}$$
and it follows that the imaginary part of b′ h is positive. If we denote by ϕ the bounded
density of the law m one can conclude that:
$$\frac{1}{\pi}\, \Im m \int\!\!\int \frac{\varphi(x_1)\, dx_1\, d\sigma_1^X(t)}{t-\lambda} = \frac{1}{\pi}\, \Im m \int \frac{\varphi(x_1)\, dx_1}{(x_1 - b'h - \lambda)} \le \|\varphi\|
The de la Vallée Poussin theorem yields the desired result.
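A quick Monte Carlo illustration of the lemma (the matrix size, the uniform law and the smoothing scale are arbitrary choices made for this sketch): averaging the spectral measure $\sigma_1^X$, smoothed by a Poisson kernel, over a first diagonal entry with density bounded by $C = 1/2$ should give values below $C$ at every energy, up to sampling error.

```python
import numpy as np

rng = np.random.default_rng(2)
d, C = 6, 0.5                              # C = bound on the density of the Uniform(-1,1) law
S = rng.normal(size=(d, d)); S = (S + S.T) / 2   # a fixed symmetric background matrix
energies = np.linspace(-4.0, 4.0, 81)
eps, n_samples = 0.05, 20000               # Poisson smoothing scale and Monte Carlo size

smoothed = np.zeros_like(energies)
for _ in range(n_samples):
    X = S.copy()
    X[0, 0] = rng.uniform(-1.0, 1.0)       # only the first diagonal entry is averaged out
    mu, vecs = np.linalg.eigh(X)
    w = vecs[0, :] ** 2                    # weights of the spectral measure sigma_1^X
    smoothed += (w * eps / ((energies[:, None] - mu) ** 2 + eps ** 2)).sum(axis=1)
smoothed /= np.pi * n_samples

print(smoothed.max(), "<= about", C)       # up to Monte Carlo error
```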
Corollary 4.2.3 If the random potentials are i.i.d. with a distribution admitting a density bounded by some constant $C$, then the averaged distribution of states $E\{N^\Lambda\}$ in a box $\Lambda$ is absolutely continuous and its density is bounded by the same constant $C$.
Proof:
Denoting as usual by $\sigma_x^\Lambda$ the spectral measure of the operator $H^\Lambda$ at the site $x$, one has:
$$N^\Lambda(I) = \frac{1}{|\Lambda|} \sum_{x \in \Lambda} \sigma_x^\Lambda(I).$$
For each $x \in \Lambda$ one can compute the expectation of $\sigma_x^\Lambda$ by integrating first with respect to the potential at the site $x$, and Lemma 4.2.2 implies that if the density of the distribution of the potentials is bounded by $C$ then one has:
$$E\{\sigma_x^\Lambda(I)\} \le C\, m(I)$$
where $m$ is the Lebesgue measure. The desired result follows from this inequality.
4.2.2 A Representation Formula for the Density of States
Another way to obtain the existence of a density for the distribution of states, already presented in Chapter III in the one dimensional case (for both discrete and
continuous time, see Proposition 3.2.10 and Proposition 3.2.17) uses the machinery
of approximations of the spectral measures already used to prove localization. This
formula also yields the continuity of this density.
4.2.3 Smoothness
Actually, in the independent case and for a.c. potentials, the density of states is infinitely smooth. In the one dimensional case this follows immediately from a result of
Le Page which asserts that the upper Lyapunov exponent has this property (Theorem
V.4.16 of [19]), the Thouless formula and Proposition 4.3.2. This result does not extend immediately to the strip since the proof of Le Page (see [81]) requires that the
probability distribution of the transfer matrices has a density on the group in order to
claim that the upper Lyapunov exponent is infinitely smooth. This is no longer true for the $\ell$ exterior power $\mu_\ell$ on $SP(\ell,\mathbb{R})$, but it is possible to rewrite a direct proof
following the lines of the original one and the results proved in Annex (A). We only
give below the essential steps.
Let $L_\ell$ be the Lagrangian boundary of order $\ell$, let $\mu_\lambda$ be the distribution of the transfer matrices, and for a rotation invariant cocycle $\chi$ on $L_\ell$ let us define the Laplace operator
$$T^{\chi,\lambda} f(\bar x) = \int \chi(g, \bar x)\, f(g\bar x)\, d\mu_\lambda(g)$$
for a complex continuous function $f$ on $L_\ell$, and let us assume that $E\{|V|^\beta\}$ is finite for any positive $\beta$.
Then one proves successively:
1. The operator $(T^{\chi,\lambda})^{2\ell+1}$ acting on the Banach space $C^p(L_\ell)$ of $p$ times differentiable functions is a compact operator for any nonnegative integer $p$.
2. The Markov operator
$$T^\lambda f(\bar x) = \int f(g\bar x)\, d\mu_\lambda(g)$$
has a unique invariant measure $\nu_\lambda$ with a $C^\infty$ density with respect to the Cauchy measure $m$ on $L_\ell$. (This is an immediate consequence of the general theory of the Laplace transform developed in Chapter I, taking the Poisson kernel of $L_\ell$ as cocycle $\chi$.)
3. Let $I$ be a fixed bounded open interval. It is proved in annex (A), Propositions 3.4 and 3.5, that there exist a positive $\alpha$ and $0 < c < 1$ such that for any $\lambda \in I$ the spectrum of $T^\lambda$ acting on $L_\alpha$ splits into the eigenvalue 1 and a part contained in the disc of radius $c < 1$. Compactness of the operator $(T^\lambda)^{2\ell+1}$ implies the same spectral property for $T^\lambda$ acting on $C^p(L_\ell)$ for any $p$. It follows that for any $p$ one can find a circle $\Gamma$ of center 1 in the complex plane, such that any $z \in \Gamma$ lies in the resolvent set of $T^\lambda$ acting on $C^p(L_\ell)$ for any $\lambda \in I$. Denoting by $R_{\lambda,z}$ the resolvent of $T^\lambda$ acting on $C^p(L_\ell)$, one has (see the finite dimensional sketch below):
$$\nu_\lambda(f) = \frac{1}{2\pi i} \int_\Gamma R_{\lambda,z} f\, dz \qquad f \in C^p(L_\ell)$$
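The contour integral above is the usual Riesz projection onto the invariant subspace. As a purely illustrative finite dimensional analogue (a hypothetical $5 \times 5$ stochastic matrix standing in for $T^\lambda$), one can check numerically that integrating the resolvent over a small circle around the eigenvalue 1 recovers the projector $f \mapsto \nu(f)\,\mathbf{1}$:

```python
import numpy as np

rng = np.random.default_rng(3)
# A small "Markov-like" matrix with a spectral gap: the eigenvalue 1 is simple and
# the rest of the spectrum lies in a disc of radius < 1 (true for this seed).
P = rng.random((5, 5)); T = P / P.sum(axis=1, keepdims=True)   # row-stochastic

# Riesz projector Pi = (1/2*pi*i) * integral over Gamma of (zI - T)^{-1} dz,
# Gamma being a small counterclockwise circle around 1.
radius, nodes = 0.1, 400
theta = 2j * np.pi * np.arange(nodes) / nodes
z = 1.0 + radius * np.exp(theta)
dz = 1j * radius * np.exp(theta) * (2 * np.pi / nodes)
Pi = np.zeros((5, 5), dtype=complex)
for zk, dzk in zip(z, dz):
    Pi += np.linalg.inv(zk * np.eye(5) - T) * dzk
Pi /= 2j * np.pi

# For a stochastic matrix this projector is (constant vector 1) x (invariant probability nu),
# i.e. Pi f = nu(f) * 1, which is how nu_lambda(f) is recovered above.
w, V = np.linalg.eig(T.T)
nu = np.real(V[:, np.argmin(np.abs(w - 1))]); nu /= nu.sum()
print(np.abs(Pi - np.outer(np.ones(5), nu)).max())             # should be close to 0
```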
Following the lines of [81] one obtains that:
Theorem 4.2.4 Let H(ω) be a random Schrödinger operator on the strip constructed
from an i.i.d. sequence for which there exists a positive β with E{|V (0)|β } < ∞ and
such that the distribution of V (0) is absolutely continuous. Then the integrated density
of states is a C ∞ function.
4.3 Smoothness for Singular Potentials
4.3.1 Local Hölder Continuity
In fact, in the independent case, the distribution of states is generally slightly better than continuous: it is locally Hölder continuous. This result follows from the
Thouless formula and the local Hölder continuity of the sum of nonnegative Lyapunov
exponents proved in the one dimensional case by Le Page [80] and for the strip in
annex (A). (The same result has been also reproved by Le Page [81] using a slightly
different method). In order to prove this important result we begin with some technical
propositions.
Definition 4.3.1 A real function γ defined on R is said to be locally Hölder continuous
if for any compact interval I there exist a finite constant C and a strictly positive real
number α such that :
$$|\gamma(\lambda) - \gamma(\lambda')| \le C\,|\lambda - \lambda'|^\alpha \qquad \lambda, \lambda' \in I$$
Proposition 4.3.2 Let $N$ be a continuous probability distribution on $\mathbb{R}$ such that the function $t \mapsto \log|t-\lambda|$ is $N$-integrable for any real $\lambda$, and denote by $F(\lambda)$ its integral.
1. If the function F (λ) is Hölder continuous then the function N (λ) has the same
property.
2. If the function F (λ) is C ∞ then the function N (λ) has the same property.
Proof:
The two conclusions are in fact a direct application of the general theory of Hilbert
transform that we summarize below (the proofs can be found in [85]). The Hilbert
transform of a square integrable function ψ is defined by the formula :
$$T\psi(x) = \lim_{\epsilon \downarrow 0} \frac{1}{\pi} \int_{|x-t|>\epsilon} \frac{\psi(t)}{x-t}\, dt$$
Then one has:
1. T ψ is a square integrable function and T 2 ψ = −ψ almost everywhere with respect
to the Lebesgue measure.
2. If ψ is Hölder continuous on some interval [−a , a] then T ψ is Hölder continuous
on [−a/2 , a/2].
3. If ψ is C ∞ on some interval [−a , a] then T ψ is C ∞ on [−a/2 , a/2].
One can add that if ψ is analytic on some interval (a , b) then T ψ has the same property.
This result is proved in [81] but we will not use it. Let $a$ be a positive number and let us define the function $\psi(t) = N(t)\,\mathbf{1}_{\{|t| \le 4a\}}$. A simple integration by parts yields:
$$\pi T\psi(x) = -\psi(4a)\log|4a - x| + \psi(-4a)\log|4a + x| + F(x) - \int_{|t|>4a} \log|x-t|\, dN(t)$$
If $F(x)$ is locally Hölder continuous (respectively $C^\infty$) then $T\psi$ has the same property on the interval $[-2a, 2a]$, hence $T^2\psi$ has the same property on the interval $[-a, a]$. The continuity of the function $N$ together with the equality $T^2\psi = -N$ almost everywhere on $[-a, a]$ give the desired result.
The Thouless formula makes it possible to transfer the regularity properties of the
Lyapunov exponents to the distribution of states. In particular it can be used to prove
the local Hölder continuity of the distribution of states in the case of the strip.
Corollary 4.3.3 Let H(ω) be a random Schrödinger operator in the strip of width ℓ
with an i.i.d. sequence of potentials and such that there exists a positive α with
$$\sum_{i=1}^{\ell} E\{|V_i(0)|^\alpha\} < \infty$$
Assuming that the potentials are not constant, the distribution of states is locally Hölder continuous.
Proof:
This is just a consequence of Theorem V.4.15 of [19] which asserts the local Hölder continuity of the first $p$ Lyapunov exponents as soon as the distribution $\mu_\lambda$ is strongly irreducible and contractive on the Lagrangian linear spaces $Lag(k)$ for $k = 1 \ldots p$. This
last condition is certainly fulfilled with p = ℓ in view of Theorem 1.4.37.
4.3.2 The Super-Symmetric Method
We would like to emphasize that the smoothness results we gave are not the best one can get. We proved them using the smoothness of the Lyapunov exponent, but there are other methods of approaching this problem. Reformulating the problem using the so-called replica trick and with some functional analytic estimates from harmonic analysis
one can get a better result.
Proposition 4.3.4 Let H(ω) be a random Schrödinger operator on the strip of width
ℓ constructed from an i.i.d. sequence for which the Fourier transform ĥ of the one-site
distribution of the potential is of class C (k) with bounded derivatives which satisfy:
$$\lim_{|p|\to\infty} \hat h^{(j)}(p) = 0 \qquad (4.2)$$
for $j = 1, \ldots, k$. Then the integrated density of states is of class $C^{([(k+1)/2])}$.
See annex (B) for a proof.
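For instance, if the one-site law is a centered Gaussian with variance $\sigma^2$, then $\hat h(p) = e^{-\sigma^2 p^2/2}$: every derivative $\hat h^{(j)}$ is a polynomial times $e^{-\sigma^2 p^2/2}$, hence bounded and vanishing at infinity, so condition (4.2) holds for every $k$ and the integrated density of states is $C^\infty$, in accordance with Theorem 4.2.4.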
Chapter 5
UNE FORMULE POUR LA REPRÉSENTATION DE LA DENSITÉ D'ÉTATS SUR LE RUBAN
Nous nous proposons ici de prouver une formule pour la représentation de la densité d'états dans le cas de potentiels indépendants et absolument continus. En dimension 1 une formule du même genre figure dans le Chapitre III, Proposition 3.2.10. Pour ce faire nous rappelons que la distribution d'états $N$ peut être obtenue comme limite de la moyenne arithmétique des espérances des mesures spectrales $\sigma^\Lambda_{x,x}$ dans une boite $\Lambda = \{x = (i,n),\ i = 1 \ldots \ell,\ n = -b \ldots b\}$. Pour $-b \le n \le b$ on note par $P_n(\lambda)$ la matrice $\ell \times \ell$ solution de l'équation aux valeurs propres avec conditions initiales $P_{-b-1} = 0$, $P_{-b} = I$. En intégrant par rapport à une loi de Cauchy sur les conditions au bord gauche on a obtenu dans le chapitre II que, pour $n$ fixé, la somme partielle sur $i$ de ces mesures spectrales, soit $\sigma^\Lambda_n$, a une densité donnée par la formule:
$$\frac{1}{\pi}\, \mathrm{trace}\; P_n(\lambda)\, [u'\, U_\lambda'(b,-b)\, U_\lambda(b,-b)\, u]^{-1}\, P_n'(\lambda)$$
où $u$ désigne la matrice $\begin{pmatrix} I \\ 0 \end{pmatrix}$ d'ordre $2\ell \times \ell$. Il s'en suit que la somme de $\sigma^\Lambda_n$ et $\sigma^\Lambda_{n+1}$ a une densité donnée par:
$$\frac{1}{\pi}\, \mathrm{trace}\; u'\, U_\lambda'(n,-b)\, U_\lambda(n,-b)\, u\, [u'\, U_\lambda'(b,-b)\, U_\lambda(b,-b)\, u]^{-1}$$
En fait cette formule n'est valable que pour $-b+1 \le n \le b-1$ mais on ne modifie pas la limite des moyennes arithmétiques en rajoutant les deux termes extrêmes. En écrivant le produit de matrices de transfert $g_b \ldots g_{-b}$ sous la forme $X_n Y_n$ avec $X_n = g_b \ldots g_{n+1}$ et en tenant compte des notations et du résultat du Lemme 3.2.19 du Chapitre 3, on obtient que la moyenne arithmétique de ces mesures spectrales dans une boite possède une densité donnée par:
$$\frac{1}{2(2b+1)\pi\ell} \sum_{i=1}^{\ell} \sum_{n=-b}^{b} \chi(X_n, \tilde Y_n\,(\bar u_i, \bar u)) \qquad (5.1)$$
où $\tilde Y$ représente la partie compacte de $Y \in SP(\ell,\mathbb{R})$ dans la décomposition d'Iwasawa. En considérant de nouveau l'opérateur de Fourier Laplace $T_{t,\lambda}$ sur la Lagrangienne $L_{\ell-1,\ell}$ associé au cocycle $\chi^{1/2}$ introduit au chapitre III, l'espérance de cette quantité peut s'écrire:
$$\frac{1}{2(2b+1)\pi\ell} \sum_{i=1}^{\ell} \sum_{n=-b}^{b} E\{T_{2,\lambda}^{\,b-n}\, \mathbf{1}(\tilde Y_n\,(\bar u_i, \bar u))\}$$
où 1 est la fonction identiquement égale à 1 sur Lℓ−1,ℓ . Toujours d’après les résultats
du chapitre 3 on sait que T2,λ est un opérateur compact, de rayon spectral égal à 1 sur
l’espace de Banach des fonctions continues de Lℓ−1,ℓ . De plus 1 est valeur propre et
pour toute valeur propre de module 1 le sous espace caractéristique est égal au sous
espace propre. Il s’en suit que si l’on désigne par Πλ le projecteur sur le sous espace
invariant et par Πλ,j , j = 1 . . . r les projecteurs sur les autres sous espaces propres
correspondant à des valeurs propres ǫj de module 1, on peut écrire pour tout entier k:
$$T_{2,\lambda}^{k} = \Pi_\lambda + \sum_{j=1}^{r} \epsilon_j^k\, \Pi_{\lambda,j} + Q_\lambda^k$$
où le rayon spectral de $Q_\lambda$ est strictement plus petit que 1. La continuité de $\lambda \mapsto T_{2,\lambda}$ implique la même propriété sur chaque élément de la décomposition. On en conclut que $\|Q_\lambda^k \mathbf{1}\|$ converge vers zéro uniformément sur tout compact en $\lambda$. Il reste donc à étudier la limite pour $b \to \infty$ des sommes de Cesàro:
$$\frac{1}{2(2b+1)} \sum_{n=-b}^{b} E\Bigl\{\Bigl(\Pi_\lambda + \sum_{j=1}^{r} \epsilon_j^n\, \Pi_{\lambda,j}\Bigr) \mathbf{1}(\tilde S_n.(\bar u_i, \bar u))\Bigr\}$$
où $S_n$ désigne le produit de $n$ matrices de transfert (qui dépend bien sûr de $\lambda$). Pour ceci on se rappelle qu'en notant par $\nu_\lambda$ la mesure invariante sur la frontière maximale, alors pour toute fonction continue $f$ sur cette frontière on a:
$$\lim_{n\to\infty} \sup_{\bar x} |E\{f(S_n.\bar x)\} - \nu_\lambda(f)| = 0$$
uniformément sur tout compact en $\lambda$. Le seul problème est que l'on a affaire ici à $\tilde S_n$ plutôt qu'à $S_n$ lui-même. Pour surmonter cette difficulté on désigne par $\bar e_i$ le drapeau obtenu en échangeant dans le drapeau canonique $\bar e$ les vecteurs de base $e_i$ et $e_\ell$. On a alors:
Proposition 5.0.5 Avec les notations ci-dessus, pour chaque $i = 1 \ldots \ell$ il existe une unique application $F_i$ de la frontière maximale dans elle-même vérifiant:
1. Fi (k.x) = kFi (x) pour toute matrice orthogonale k de SP (ℓ, R).
2. Fi (ē) = ēi
Proof:
La preuve est immédiate en remarquant que le sous groupe orthogonal de SP (ℓ, R) opère
transitivement sur la frontière maximale et que si l’on a k.ē = k′ .ē alors k = k′ m, la
matrice m étant diagonale avec des +1 ou des −1 dans la diagonale et donc k.ēi = k′ .ēi .
Corollary 5.0.6 Soit f une fonction continue sur Lℓ−1,ℓ alors
$$\lim_{n\to\infty} E\{f(\tilde S_n.(\bar u_i, \bar u))\} = \nu_\lambda(f \circ F_i)$$
uniformément sur tout compact en λ.
Proof:
Il suffit de constater que $f(\tilde S_n.(\bar u_i, \bar u)) = f \circ F_i(S_n.\bar e)$ car $S_n$ s'écrit comme le produit de $\tilde S_n$ et d'une matrice triangulaire supérieure laissant $\bar e$ invariant, puis d'appliquer les résultats ci-dessus.
On conclut donc de toutes ces propriétés que la densité de l'espérance des approximations (5.1) converge uniformément sur tout compact d'énergie vers la densité d'états.
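Avant d'énoncer le résultat, illustrons sur une matrice finie, choisie arbitrairement pour l'exemple, le rôle des moyennes de Cesàro dans la décomposition spectrale ci-dessus : elles éliminent les valeurs propres de module 1 autres que 1, alors que les puissances de l'opérateur ne convergent pas.

```python
import numpy as np

# Exemple purement illustratif : une matrice T de valeurs propres 1, -1 et 0.5.
V = np.array([[1.0, 1.0, 0.0], [1.0, -1.0, 1.0], [1.0, 0.0, -1.0]])
T = V @ np.diag([1.0, -1.0, 0.5]) @ np.linalg.inv(V)
Pi = V @ np.diag([1.0, 0.0, 0.0]) @ np.linalg.inv(V)      # projecteur sur le sous espace invariant

K = 2001
cesaro = sum(np.linalg.matrix_power(T, k) for k in range(K)) / K
print(np.abs(cesaro - Pi).max())                           # petit : la moyenne tue -1 et 0.5
print(np.abs(np.linalg.matrix_power(T, K) - Pi).max())     # reste d'ordre 1 : oscillation due a -1
```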
Theorem 5.0.7 Dans le cas de potentiels i.i.d. et absolument continus sur le ruban,
possédant un moment d’ordre ℓ(ℓ + 1), la densité d’états est donnée par la fonction
continue:
$$\frac{1}{2\pi\ell} \sum_{i=1}^{\ell} \nu_\lambda(\Pi_\lambda(\mathbf{1}) \circ F_i)$$
Il serait avantageux de remplacer la projection Πλ (1) par une fonction propre comme
en dimension 1. Ceci est possible si l’on sait que l’opérateur T2,λ possède une fonction
propre unique. Nous nous proposons donc d’étudier ce problème ci-dessous. Pour ce
faire nous commençons par quelques préliminaires algébriques.
5.1 Décomposition d'Iwasawa de $SP(\ell,\mathbb{R})$
L'algèbre de Lie $\mathcal{G}$ de $SP(\ell,\mathbb{R})$ est constituée des matrices $\begin{pmatrix} X_1 & X_2 \\ X_3 & -X_1' \end{pmatrix}$ avec $X_2$ et $X_3$ symétriques de dimension $\ell$. On peut écrire une décomposition de Cartan $\mathcal{G} = \mathcal{K} + \mathcal{P}$
avec:
$$\mathcal{K} = \left\{ \begin{pmatrix} X_1 & X_2 \\ -X_2 & X_1 \end{pmatrix} \ \text{avec $X_2$ symétrique et $X_1$ antisymétrique} \right\}$$
$$\mathcal{P} = \left\{ \begin{pmatrix} X_1 & X_2 \\ X_2 & -X_1 \end{pmatrix} \ \text{avec $X_2$ et $X_1$ symétriques} \right\}$$
$$\mathcal{A} = \left\{ \begin{pmatrix} X & 0 \\ 0 & -X \end{pmatrix} \ \text{avec } X = \mathrm{diag}(x_1, \ldots, x_\ell) \right\}$$
où $\mathcal{A}$ est une sous algèbre abélienne maximale de $\mathcal{P}$.
Si l'on choisit la chambre de Weyl
$$\mathcal{A}^+ = \left\{ \begin{pmatrix} X & 0 \\ 0 & -X \end{pmatrix} \ \text{avec } x_1 > \ldots > x_\ell \right\}$$
on obtient le système de racines positives:
$$\Delta^+ = \{\phi_{i,j},\ j > i\} \cup \{\psi_{i,j},\ j \ge i\}$$
Ces formes linéaires opèrent sur $\mathcal{A}$ par:
$$\phi_{i,j}(X) = x_i - x_j, \qquad \psi_{i,j}(X) = x_i + x_j$$
La sous algèbre nilpotente $\mathcal{N}$ est définie comme la somme des sous espaces propres correspondant aux racines positives. On en déduit la décomposition d'Iwasawa $G = KAN$ avec:
$$K = \left\{ \begin{pmatrix} X & -Y \\ Y & X \end{pmatrix} \ \text{avec } X + iY \text{ unitaire} \right\}$$
$$A = \left\{ \begin{pmatrix} X & 0 \\ 0 & X^{-1} \end{pmatrix} \ \text{avec } X = \mathrm{diag}(x_1, \ldots, x_\ell),\ x_i > 0 \right\}$$
$$N = \left\{ \begin{pmatrix} X & Y \\ 0 & (X^{-1})' \end{pmatrix} \ \text{avec $XY'$ symétrique et $X$ triangulaire supérieure avec des 1 dans la diagonale} \right\}$$
$$M = \left\{ \begin{pmatrix} X & 0 \\ 0 & X \end{pmatrix} \ \text{avec } X = \mathrm{diag}(\epsilon_1, \ldots, \epsilon_\ell),\ \epsilon_i = \pm 1 \right\}$$
On remarque que $K$ est isomorphe au groupe unitaire $U(\ell)$ et que $M$ est le stabilisateur de $A$ dans $K$. Si on désigne par $H$ le sous groupe parabolique $H = MAN$, la frontière maximale s'identifie alors à $G/H \sim K/M$.
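À titre de vérification numérique, purement illustrative, du fait que $K$ est isomorphe à $U(\ell)$ : le plongement $X + iY \mapsto \begin{pmatrix} X & -Y \\ Y & X \end{pmatrix}$ d'une matrice unitaire fournit une matrice à la fois orthogonale et symplectique (la taille $\ell = 4$ est arbitraire).

```python
import numpy as np

rng = np.random.default_rng(4)
l = 4
# Une matrice unitaire X + iY obtenue par decomposition QR d'une matrice gaussienne complexe
Z = rng.normal(size=(l, l)) + 1j * rng.normal(size=(l, l))
Q, _ = np.linalg.qr(Z)
X, Y = Q.real, Q.imag

# Plongement k = [[X, -Y], [Y, X]] et matrice J de la forme symplectique
k = np.block([[X, -Y], [Y, X]])
J = np.block([[np.zeros((l, l)), np.eye(l)], [-np.eye(l), np.zeros((l, l))]])

print(np.abs(k.T @ k - np.eye(2 * l)).max())   # k est orthogonale
print(np.abs(k.T @ J @ k - J).max())           # k est symplectique
```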
On peut exhiber un système de $\ell$ racines primitives:
$$\Sigma = \{\phi_{i,i+1},\ i = 1, \ldots, \ell-1\} \cup \{\psi_{\ell,\ell}\}$$
et l'on sait que toute frontière partielle peut être construite algébriquement de la façon suivante: on choisit une partie $\Theta \subset \Sigma$ et l'on désigne par $\mathcal{A}_\Theta$ le sous espace de $\mathcal{A}$ sur lequel les éléments de $\Theta$ s'annulent. On note alors par $M_\Theta$ le centralisateur de $\mathcal{A}_\Theta$ dans $K$. Alors toute frontière partielle est isomorphe à $K/M_\Theta$ pour un certain choix de $\Theta$. On remarque que $M_\varnothing = M$ et $M_\Sigma = K$. Nous allons donc commencer par identifier les parties $\Theta$ correspondant aux frontières “utiles” $L_\ell$, $L_{\ell-1}$ et $L_{\ell-1,\ell}$.
Proposition 5.1.1 Si on note par:
$$\Theta_\ell = \Sigma \setminus \{\psi_{\ell,\ell}\}, \qquad \Theta_{\ell-1} = \Sigma \setminus \{\phi_{\ell-1,\ell}\}, \qquad \Theta_{\ell-1,\ell} = \Sigma \setminus \{\phi_{\ell-1,\ell}, \psi_{\ell,\ell}\}$$
et $M_\ell$, $M_{\ell-1}$ et $M_{\ell-1,\ell}$ les centralisateurs associés, on a, en reprenant la notation des matrices de $k \in K$:
$$M_\ell = \{k \in K \ \text{avec } X \in O(\ell),\ Y = 0\}$$
$$M_{\ell-1} = \left\{ k \in K \ \text{avec } X = \begin{pmatrix} R & 0 \\ 0 & \cos(\vartheta) \end{pmatrix},\ Y = \begin{pmatrix} 0 & 0 \\ 0 & \sin(\vartheta) \end{pmatrix},\ R \in O(\ell-1) \right\}$$
$$M_{\ell-1,\ell} = \{k \in M_{\ell-1} \ \text{avec } \vartheta = 0 \text{ ou } \pi\}$$
De plus on a les isomorphismes:
$$L_\ell \sim K/M_\ell, \qquad L_{\ell-1} \sim K/M_{\ell-1}, \qquad L_{\ell-1,\ell} \sim K/M_{\ell-1,\ell}$$
Preuve :
L’identification des stabilisateurs résulte d’un calcul facile. Nous allons par contre
préciser la forme des isomorphismes. On peut convenir de repérer un représentant d'un élément de $L_\ell$ par une matrice de taille $2\ell \times \ell$ de type $\begin{pmatrix} X \\ Y \end{pmatrix}$, de telle façon que les vecteurs colonne forment une base orthonormée. Il revient au même de choisir un élément de $K$. Deux représentants définissent le même “$\ell$ plan” Lagrangien si et seulement si il existe une rotation $R \in O(\ell)$ telle que $X_2 = X_1 R$ et $Y_2 = Y_1 R$, ce qui revient à dire que les deux éléments de $K$ sont dans la même classe modulo $M_\ell$. On définit une carte $\Pi_\ell$ sur l'ouvert dense de $L_\ell$ tel que $X$ soit inversible, à valeurs dans les matrices symétriques d'ordre $\ell$, par $\Pi_\ell \begin{pmatrix} X \\ Y \end{pmatrix} = YX^{-1}$.
Un calcul d’algèbre élémentaire montre que si l’on associe à une matrice de K le “ℓ − 1
plan” Lagrangien engendré par ses ℓ − 1 premières colonnes alors deux matrices de K
définissent le même élément de Lℓ−1 si et seulement si elles sont dans la même classe
modulo $M_{\ell-1}$. On remarque aussi que les vecteurs colonnes d'ordre $\ell$ de deux matrices $k_1$ et $k_2$ dans la même classe, soit $v_1$ et $v_2$, vérifient la relation $v_2 = v_1 \cos(\vartheta) + Jv_1 \sin(\vartheta)$ où $J$ est la matrice $J = \begin{pmatrix} 0 & I \\ -I & 0 \end{pmatrix} \in K$. La dernière identification résulte aisément de cette remarque.
Proposition 5.1.2 On a l'identification:
$$L_{\ell-1,\ell} = L_{\ell-1} \times \widehat{SO(2)}$$
où $\widehat{SO(2)}$ est le quotient de $SO(2)$ par $\{I, -I\}$, c'est à dire la frontière maximale de $SL(2,\mathbb{R})$. De plus il existe un cocycle $\xi(g, \bar y)$ défini sur $SP(\ell,\mathbb{R}) \times L_{\ell-1}$ et à valeurs dans $GL(2,\mathbb{R})$ tel que l'action de $SP(\ell,\mathbb{R})$ sur $L_{\ell-1,\ell}$ s'écrive:
$$g.(\bar y, \bar x) = g.(\bar y, \vartheta) = (g.\bar y,\ \xi(g, \bar y).\vartheta)$$
Preuve :
La première affirmation est évidente puisque l'on a $M_{\ell-1}/M_{\ell-1,\ell} = \widehat{SO(2)}$. Pour prouver la seconde on choisit une section $s$ de $L_{\ell-1}$ dans $L_{\ell-1,\ell}$ définie par $\bar y \mapsto (\bar y, s(\bar y))$ où $s(\bar y)$ est un vecteur normé choisi de telle sorte que $s(\bar y)$ et $Js(\bar y)$ soient orthogonaux au $\ell-1$ plan $\bar y$. Tout vecteur $v$ dont la réunion avec $\bar y$ engendre un $\ell$ plan Lagrangien s'écrit comme une combinaison linéaire de $s(\bar y)$ et $Js(\bar y)$. Dans ces conditions un élément $(\bar y, \vartheta)$ s'identifie au couple $(\bar y,\ v = s(\bar y)\cos(\vartheta) + Js(\bar y)\sin(\vartheta))$. Si pour un vecteur $w \in \mathbb{R}^{2\ell}$ on note $\Upsilon_{\bar y}\, w$ la partie de $w$ orthogonale au $\ell-1$ plan $\bar y$, on a alors:
$$g.(\bar y, \vartheta) = g.(\bar y,\ v = s(\bar y)\cos(\vartheta) + Js(\bar y)\sin(\vartheta)) = \Bigl(g.\bar y,\ \frac{\Upsilon_{\bar y}\, gv}{\|\Upsilon_{\bar y}\, gv\|}\Bigr) = (g.\bar y,\ s(g.\bar y)\cos(\vartheta') + Js(g.\bar y)\sin(\vartheta'))$$
Si donc on note par $a$ et $c$ les coordonnées de $\Upsilon_{\bar y}\, g s(\bar y)$ sur la base orthonormée $s(g.\bar y)$, $Js(g.\bar y)$ et $b$ et $d$ les coordonnées de $\Upsilon_{\bar y}\, g Js(\bar y)$ dans la même base, on constate qu'en posant $\xi(g, \bar y) = \begin{pmatrix} a & c \\ b & d \end{pmatrix}$ on a la relation $g.(\bar y, \vartheta) = (g.\bar y,\ \xi(g, \bar y).\vartheta)$. La relation de cocycle est évidente et il reste à vérifier que $\xi$ est une matrice inversible. Si ce n'était pas le cas, $(\bar y, v)$ et $(\bar y, Jv)$ représenteraient le même $(\ell-1, \ell)$ plan, ce qui est impossible.
Nous allons maintenant appliquer ces préliminaires algébriques à l'étude spectrale des opérateurs de Fourier Laplace.
5.2 Existence d'une fonction propre strictement positive
Nous nous plaçons maintenant dans le cas d'opérateurs de Fourier Laplace associés à une mesure absolument continue sur $SP(\ell,\mathbb{R})$. Ces opérateurs, agissant sur l'espace des fonctions continues sur la Lagrangienne $L_{\ell-1,\ell}$, sont compacts positifs et dans notre cas nous nous intéressons à la famille d'opérateurs:
$$T_t f(\bar y, \bar x) = \int f(g.(\bar y, \bar x))\, (\chi(g, (\bar y, \bar x)))^{t/2}\, d\mu(g)$$
le cocycle $\chi$ étant défini par:
$$\chi(g, (\bar y, \bar x)) = r_{\ell-1,\ell}(g, (\bar y, \bar x))\, r_{\ell-1}^{-1}(g, \bar x) = \Bigl(\frac{\|y\|}{\|gy\|}\Bigr)^{-2} \Bigl(\frac{\|x\|}{\|gx\|}\Bigr)^{2}$$
Voir chapitre III pour les notations.
Il résulte d’un théorème de Guivarc’h & Raugi [44] que la valeur propre positive de
plus grand module possède une fonction propre strictement positive dès qu’un certain
mineur est intégrable pour la mesure de Cauchy. Ceci implique alors, par relativisation
par rapport à cette fonction propre, que le sous espace propre est de dimension 1 et qu’il
n'existe pas d'autres valeurs propres de même module. Si l'on s'intéresse au cas $t > 0$, c'est l'intégrabilité du mineur associé au cocycle $\rho_\ell(g, \bar x) = \frac{\|x\|}{\|gx\|}$ sur la Lagrangienne $L_\ell$ qui va intervenir. Pour pouvoir appliquer le résultat mentionné plus haut il faut tout d'abord réécrire ce cocycle sous forme algébrique. En effet l'on sait que tout cocycle $K$ invariant $\rho$ sur une frontière $K/M_\Theta$ est associé à une racine $\alpha_\rho$ de l'algèbre de Lie. Ce cocycle s'écrit:
$$\rho(g, kM_\Theta) = \exp(\alpha_\rho(H(gk)))$$
où $H(g)$ est le logarithme de la partie abélienne de $g$ dans la décomposition d'Iwasawa. Soit $\alpha_\ell$ la racine définie sur une matrice diagonale $X = \mathrm{diag}(x_1, \ldots, x_\ell)$ par $\alpha_\ell(X) = x_1 + \ldots + x_\ell$.
Proposition 5.2.1 Le cocycle ρℓ est associé à la racine −αℓ .
Preuve :
La preuve résulte du calcul de la décomposition d’Iwasawa d’une matrice de SP (ℓ, R).
En effet en notant:
$$g = \begin{pmatrix} A & B \\ C & D \end{pmatrix} = kan = \begin{pmatrix} U & -V \\ V & U \end{pmatrix} \begin{pmatrix} X & 0 \\ 0 & X^{-1} \end{pmatrix} \begin{pmatrix} T & Y \\ 0 & (T')^{-1} \end{pmatrix}$$
on a $A'A + C'C = T'X^2 T$ et donc $\det(A'A + C'C) = (\det(X))^2$. Or si $\bar x$ est associé à une matrice $2\ell \times \ell$, soit $x = \begin{pmatrix} A \\ C \end{pmatrix}$, on a $\|x\|^2 = \det(A'A + C'C)$.
Si l'on se réfère au théorème de Guivarc'h & Raugi, l'opérateur $T_t$, $t \ge 0$, va posséder une fonction propre strictement positive dès que le mineur $\Delta_\ell(k) = \det(U)$ de la matrice $k = \begin{pmatrix} U & -V \\ V & U \end{pmatrix}$ (en fait ce mineur est une fonction sur $K/M_\ell$) possède une puissance d'ordre $-t$ intégrable par rapport à la loi de Cauchy sur $L_\ell$.
Proposition 5.2.2 Le mineur $\Delta_\ell^{-t}$ est intégrable pour la loi de Cauchy sur $L_\ell$ si et seulement si $t < 1$.
Preuve :
La mesure de Cauchy sur les matrices symétriques d'ordre $\ell$ est donnée à une constante près par la formule:
$$\det(I + M^2)^{-(\ell+1)/2}\, dM$$
où $dM$ est la mesure de Lebesgue sur les $\ell(\ell+1)/2$ coefficients indépendants de $M$. Cette mesure peut aussi s'écrire
$$dK \otimes \prod_{i<j} (\lambda_i - \lambda_j)\, d\lambda_i\, d\lambda_j$$
où $dK$ est la mesure de Haar sur $SO(\ell)$ et les $\lambda_i$ sont les valeurs propres de $M$ rangées en ordre croissant. D'autre part si l'on utilise la carte $\Pi_\ell$ définie dans la preuve de (5.1.1) on peut écrire que $\Delta_\ell(k) = (\det(I + \Pi_\ell^2(k)))^{-1/2}$ et donc tout revient à trouver pour quelles valeurs de $t$ la fonction $M \mapsto (\det(I + M^2))^{\beta/2}$ est $dM$ intégrable avec $\beta = t - (\ell+1)$. En utilisant l'invariance du déterminant de $(I + M^2)$ par transformation orthogonale et la formule ci-dessus on voit facilement qu'une telle intégrale n'existe que pour $\beta < -\ell$, soit $t < 1$.
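Une vérification numérique élémentaire de la Proposition 5.2.2 dans le cas $\ell = 1$ (simulation dont les paramètres sont purement indicatifs) : sous la loi de Cauchy, la moyenne empirique de $(1+m^2)^{t/2}$, c'est à dire de $\Delta_1^{-t}$ dans la carte $\Pi_1$, se stabilise pour $t = 0.5 < 1$ et diverge pour $t = 1.5 > 1$.

```python
import numpy as np

rng = np.random.default_rng(5)
m = rng.standard_cauchy(10**6)          # loi de Cauchy = mesure de Cauchy sur L_1 (cas l = 1)

for t in (0.5, 1.5):
    f = (1.0 + m**2) ** (t / 2)         # Delta_1^{-t} dans la carte Pi_1
    for n in (10**3, 10**4, 10**5, 10**6):
        print(t, n, f[:n].mean())       # stable pour t = 0.5, croissance pour t = 1.5
```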
On ne peut donc affirmer que l’opérateur Tt possède une fonction propre strictement
positive que pour 0 ≤ t < 1 ce qui ne règle donc pas le cas de T2 . Nous allons donc
tenter une étude directe du sous espace invariant de T2 .
5.3 Fonctions Invariantes de l'opérateur $T_2$
Toujours sous l'hypothèse d'existence d'une densité pour la loi sur le groupe $SP(\ell,\mathbb{R})$, on sait que $T_2$ possède les mêmes propriétés spectrales que l'opérateur noté $\tilde Y$ dans le chapitre III, qui est en fait l'opérateur “remonté” sur la frontière $L_{\ell-1,\ell}$ de l'opérateur $Y$ sur $L_{\ell-1}$, ce dernier étant associé au noyau de Poisson de $L_{\ell-1}$ que nous noterons maintenant $r(g, \bar y)$. L'opérateur $Y$ a des propriétés spectrales bien connues, en particulier 1 est sa seule valeur propre de module 1 et elle est simple. Il est donc tentant de prouver que l'on ne gagne rien sur les fonctions invariantes en remontant $Y$ sur $L_{\ell-1,\ell}$ et que toute fonction invariante pour $\tilde Y$ est en fait invariante pour $Y$, c'est à dire ne dépend pas de la seconde coordonnée $\vartheta$ dans l'isomorphisme établi en 5.1.2. Ceci devrait être une conséquence de la propriété de contraction de l'action de $SP(\ell,\mathbb{R})$ sur la composante $SO(2)$ que nous allons établir maintenant. Dans ce but nous introduisons les notations suivantes:
$$U_n = \begin{cases} g_n g_{n-1} \ldots g_1 & \text{si } n \ge 1 \\ I & \text{si } n = 0 \end{cases}$$
$$M_n(\bar y) = \xi(g_n, U_{n-1}.\bar y), \qquad S_n(\bar y) = M_n(\bar y) M_{n-1}(\bar y) \ldots M_1(\bar y)$$
On a donc $\xi(U_n, \bar y) = S_n(\bar y)$, si bien que le produit $S_n$ apparaît comme un produit de matrices en dépendance Markovienne, la chaîne de Markov à valeurs dans $G \times L_{\ell-1}$ étant définie par $\{(g_n, U_{n-1}.\bar y),\ n \ge 1\}$.
Proposition 5.3.1 On a les propriétés suivantes:
1. Pour tout $\bar y \in L_{\ell-1}$ on a $\lim_{n\to\infty} \frac{1}{n} \log(|\det(S_n(\bar y))|) = 0$ presque sûrement.
2. Pour tout $\bar y \in L_{\ell-1}$ et tout $\vartheta$ on a $\lim_{n\to\infty} \frac{1}{n} \log(\|S_n(\bar y).\vartheta\|) = \gamma_\ell$ presque sûrement.
Preuve:
Si on considère le parallélépipède $B$ de dimension $\ell+1$ engendré par un sous espace Lagrangien $y$ de dimension $\ell-1$ et les vecteurs unitaires $v$ et $Jv$ orthogonaux à $y$, le volume de celui-ci croît exponentiellement sous l'action de $U_n$ comme la somme des $\ell+1$ premiers exposants, qui est aussi égale à la somme des $\ell-1$ premiers. Mais le volume de $U_n B$ est égal au volume de $U_n y$ multiplié par le déterminant de $S_n$, et le volume de $U_n y$ croît comme la somme des $\ell-1$ premiers exposants, d'où le premier résultat. Ceci permettra donc, dans la suite, d'opérer comme si les matrices $M_n$ étaient de déterminant $\pm 1$. Pour le second point on remarque que le cocycle $\chi$ est associé à l'exposant $-2\gamma_\ell$ et que:
$$\chi(g, (\bar y, \vartheta)) = \|\xi(g, \bar y).\vartheta\|^{-2}$$
d'où la conclusion.
Ce dernier résultat, joint à la stricte positivité de $\gamma_\ell$, implique que l'action de $S_n$ sur $SO(2)$ est contractante. En appliquant un résultat purement déterministe établi dans [9], Proposition 2.7.1, on en déduit:
Corollary 5.3.2 Il existe une variable aléatoire $Z(\omega, \bar y)$ à valeurs dans $SO(2)$ telle que:
1. Pour tout $\bar y \in L_{\ell-1}$ et tout point $\vartheta \in SO(2)$ (sauf peut-être un seul, dépendant de $\omega$ et de $\bar y$) on a:
$$\lim_{n\to\infty} M_1(\bar y) M_2(\bar y) \ldots M_n(\bar y).\vartheta = Z(\omega, \bar y) \qquad \text{presque sûrement.}$$
2. Pour tout $\bar y \in L_{\ell-1}$ et toute mesure continue $m$ sur $SO(2)$ on a:
$$\lim_{n\to\infty} M_1(\bar y) M_2(\bar y) \ldots M_n(\bar y).m = \delta_{Z(\omega, \bar y)} \qquad \text{presque sûrement.}$$
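Pour illustrer numériquement ce phénomène de contraction (avec des matrices de type matrices de transfert choisies arbitrairement, et non les matrices $M_n(\bar y)$ elles-mêmes), on constate que les produits «à l'envers» $g_1 g_2 \ldots g_N$ envoient toutes les directions initiales sur un même point aléatoire de la droite projective:

```python
import numpy as np

rng = np.random.default_rng(6)
n = 60
gs = [np.array([[rng.uniform(-2.0, 2.0), -1.0], [1.0, 0.0]]) for _ in range(n)]  # matrices i.i.d.

def angle(v):
    return np.arctan2(v[1], v[0]) % np.pi          # point de la droite projective (angle modulo pi)

def proj_dist(a, b):
    return min(abs(a - b), np.pi - abs(a - b))     # distance projective entre deux angles

thetas = np.linspace(0.0, np.pi, 7, endpoint=False)
for N in (10, 20, 40, 60):
    prod = np.eye(2)
    for g in gs[:N]:
        prod = prod @ g                            # produit "a l'envers" g_1 g_2 ... g_N
    pts = [angle(prod @ np.array([np.cos(t), np.sin(t)])) for t in thetas]
    print(N, max(proj_dist(a, b) for a in pts for b in pts))   # le diametre projectif decroit vers 0
```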
Bibliography
[1] M. A. Akcoglu and U. Krengel (1981): Ergodic theorems for superadditive processes. J. Reine Angew. Math. 323, 53-67.
[2] Ph. Anderson (1958): Absence of diffusion in certain random lattices. Physical Review 109, 1492-1505.
[3] F. V. Atkinson (1964): Discrete and continuous boundary problems. Mathematics
in science and engineering Vol 8, Academic Press, New York.
[4] M. M. Benderskii and L. A. Pastur (1970): On the spectrum of the one dimensional Schrödinger equation with a random potential. Math. Sb. 82, 245-256.
[5] J. M. Berezanskii (1968): Expansion in eigenfunctions of self-adjoint operators.
Transl. Math. Monographs Vol. 17. Amer. Math. Soc. Providence, R. I.
[6] R. E. Borland (1963): The nature of the electronic states in disordered onedimensional systems. Proc. Royal Soc. London ser. A, A274, 529-545
[7] P. Bougerol (1988): Théorèmes limites pour les systèmes linéaires à coefficients
markoviens. Probab. Th. Rel. Fields, vol 78, 193-221
[8] P. Bougerol (1988): Comparaison des exposants de Lyapounov des processus
markoviens multiplicatifs. Ann. Inst. Henri Poincaré 24, 439-489
[9] P. Bougerol and J. Lacroix (1985): Products of random matrices with applications to random Schrödinger operators. Birkhäuser, Boston, Massachusetts.
[10] L. Breiman (1988): Probability. Addison Wesley. Reading Massachussets.
[11] J. Brossard (1983); Perturbations Aléatoires de Potentiels Périodiques. (unpublished)
[12] A. Brunel, D. Revuz (1974): Quelques applications probabilistes de la quasicompacité. Ann. Inst. Henri Poincaré, 10, 301-337.
[13] M. Campanino and A. Klein (1986): A supersymmetric transfer matrix and differentiability of the density of states in the one dimensional Anderson model.
Comm. Math. Phys. 104, 227-241.
[14] R. Carmona (1982), Exponential localization in one dimensional disordered systems. Duke Math. J. 49, 191-213.
[15] R. Carmona (1983), One dimensional Schrödinger operators with random or deterministic potentials: new spectral types. J. Functional Anal. 51 229-258.
[16] R. Carmona (1984a): One dimensional Schrödinger operators with Random Potentials. Physica 124A, 131-188.
[17] R. Carmona (1984): Random Schrödinger operators, in Ecole d’Ete de probabilities XIV-Saint Flour. Lect. Notes in Math. No. 1070. Springer-Verlag, New York,
N.Y.
[18] R. Carmona, A. Klein and F. Martinelli (1987): Anderson localization for Bernoulli and other singular potentials. Comm. Math. Phys. 108, 41-66.
[19] R. Carmona, J. Lacroix (1990): Spectral Theory of Random Schrödinger Operators. Probability and its Applications. Birkhauser, Boston, Massachussets.
[20] J.E. Cohen and C. Newman (1984): The stability of large random matrices and
their products. Ann. Proba. 12, 283-310.
[21] F. Constantinescu, J. Fröhlich and T. Spencer (1983): Analyticity of the density of states and replica method for random Schrödinger operators on a lattice. J. Stat. Phys. 34, 571-596.
[22] W. Craig and B. Simon (1983): Log-Hölder Continuity of the Integrated Density
of States for Stochastic Jacobi Matrices. Comm. Math. Phys. 90, 207-218.
[23] F. Delyon, H. Kunz and B. Souillard (1983): One dimensional wave equations in
random media. J. Phys. A16, 25-42.
[24] F. Delyon, Y. Levy and B. Souillard (1985): Anderson localization for one and
quasi-one dimensional systems. J. Stat. Phys. 41, 375-388.
[25] F. Delyon and B. Souillard (1984): Remark on the continuity of the density of states of ergodic finite difference operators. Comm. Math. Phys. 94, 289-291.
[26] Y. Derriennic (1975): Sur le théorème ergodique sous additif. C. R. Acad. Sci.
Paris ser A 282, 985-988.
[27] Y. Derriennic and Y. Guivarch (1974): Théorème de renouvellement pour les groupes non moyennables. C. R. Acad. Sci. Paris 277, 613-615.
[28] H. von Dreifus and A. Klein (1989): A new proof of localization in the Anderson
tight binding model. Comm. Math. Phys. 124, 285-299.
[29] M. Fukushima (1981): On asymptotics of spectra of Schrödinger operators.
In “Aspects Statistiques et Aspects Physiques des Processus Gaussiens,” ed.
C.N.R.S.
[30] H. Furstenberg (1963): Non commuting random products. Trans. Amer. Soc.
108, 377-428.
[31] H. Furstenberg and H. Kesten (1960): Products of random matrices. Ann. Math.
Stat. 31, 457-469.
[32] H. Furstenberg, Y. Kifer (1983): Random matrix products and measures on
projective spaces. Israël j. of Math. 46 12-32.
[33] H. Furstenberg, I. Tzkoni (1971): Spherical functions and integral geometry. Israel J. Math. 10, 327-338.
[34] C. Glaffig (1988): Smoothness of the integrated density of states on discrete strip
lattices, i.i.d. and non i.i.d. cases. (preprint)
[35] I. Ja. Goldsheid (1980): Asymptotic properties of the product of random matrices
depending on a parameter. In “Multicomponent Systems”, advances in Probability, 8, 239-283.
[36] I. Ja. Golsheid (1980): The structure of the spectrum of the Schrödinger random
difference operator. Sov. Math. Dokl. 22, 670-674.
[37] I. Ja. Golsheid and S. A. Molcanov (1976): On Mott’s Problem. Soviet Math.
Dokl. 17, 1369-1373.
[38] I. Ja. Golsheid, S. A. Molcanov and L. A. Pastur (1977): A pure point spectrum
of the one dimensional Schrödinger operator. Funct. Anal. Appl. 11, 1-10.
[39] I. Ja. Goldsheid and A. G. Margulis (1987): A condition for simplicity of the
spectrum of Lyapunov exponents Sov. Math. Dokl. 35 309-313.
[40] I. Ja. Goldsheid and A. G. Margulis (1990): Lyapunov indices of a product of
random matrices. Russian Math. Surveys 44-5, 11-71.
[41] Y. Guivarch (1980): Quelques propriétés asymptotiques des produits de matrices
aléatoires. In Ecole d’Eté de Probabilités de Saint-Flour VIII, 1978, ed. P. L.
Hennequin, Lect. Notes in Math. # 774, 177-249.
[42] Y. Guivarc’h (1984): Exposants caractéristiques des produits de matrices
aléatoires en dépendance markovienne. Probability measures on Groups. Lect.
Notes in Math. # 1064, Springer Verlag, New York.
[43] Y. Guivarc’h and A. Raugi (1986): Products of random matrices : Convergence
theorems. Contemporary Math. 50, 31-53
[44] Y. Guivarc’h and A. Raugi (1987): Frontière de Furstenberg, propriétés de contraction et théorèmes de convergence. Z. für Wahrscheinlichkeitstheorie verw.
Geb. 69, 187-242.
[45] Y. Guivarc’h and A. Raugi (1989): Propriétés de contraction d’un semi groupe
de matrices inversibles et coefficients de Lyapunov d’un produit de matrices
aléatoires independantes. Israel J. Math. 65, 165-196.
[46] B. Halperin (1967): Properties of a particle in a one dimensional random potential. Adv. Chem. Phys. 13, 123-
[47] H. Hennion (1984): Loi des grands nombres et perturbations pour des produits réductibles de matrices aléatoires indépendantes. Z. Wahrsch. Verw. Gebiete 67, 265-278.
[48] K. Hewitt and A. Ross (1963): Abstract Harmonic Analysis 1. Springer Verlag,
Berlin, Heidelberg, New York.
[49] L. K. Hua (1963): Harmonic analysis of several complex variables in the classical
domains. A.M.S. Providence.
[50] K. Ichihara and H. Kunita (1974): A classification of the Second Order Degenerate Elliptic Operators and its Probabilistic Characterization. Zeit. für Wahrsch. Verw. Gebiete, vol 30, 235-254.
[51] K. Ishii and H. Matsuda (1970): Localization of normal modes and energy transport in the disordered harmonic chain. Suppl. Progr. Theor. Phys. 45, 56-86.
[52] K. Ishii (1973): Localization of eigenstates and transport phenomena in the one
dimensional disordered system. Supp. Progr. Theor. Phys. 53, 77-138.
[53] R. Johnston and H. Kunz (1983): The conductance of a disordered wire. J. Phys. C 16, 3895-3912.
[54] T. Kato (1966): Perturbation Theory for Linear Operators. Springer Verlag, New
York.
[55] Y. Katznelson and B. Weiss (1982): A Simple proof of some ergodic theorems.
Israel J. Math. 42, 291-296.
[56] Y. Kifer (1986): Ergodic Theory of Random Transformations. Birkhauser,
Boston, Basel, Stuttgart.
[57] Y. Kifer (1989): Random Perturbations of Dynamical Systems. Birkhauser,
Boston.
[58] J.F.C. Kingman (1976): Subadditive processes. In Ecole d’Eté de Probabilités de
Saint-Flour V, 1975, ed. P.L. Hennequin, Lect. Notes in Math. # 539, 167-223.
[59] A. Klein, J. Lacroix and A. Speis (1989): Regularity of the density of states in the
Anderson model on a strip for potentials with singular continuous distributions.
J. Statist. Phys. 57, 65-88
[60] A. Klein, J. Lacroix and A. Speis (1989): Localization for the Anderson model
on a strip for singular potentials. J. Functional Anal. (to appear)
[61] A. Klein, F. Martinelli and F. Perez (1986): A rigorous replica trick approach to Anderson localization in one dimension. Comm. Math. Phys. 106, 623-
[62] A. Klein and A. Speis (1988): Smoothness of the density of states in the Anderson model on a one-dimensional strip. Ann. of Phys. 183, 352-398.
[63] A Klein and A. Speis (1989): Regularity of the invariant measure and of the
density of states in the one dimensional Anderson model. J. Functional Anal. (to
appear)
[64] S. Kotani (1976): On asymptotic behavior of the spectra of a one dimensional
Hamiltonian with a certain random coefficient. Publ. Res. Inst. Math. Sci. Kyoto
Univ. 12, 447-492.
[65] S. Kotani (1983): Ljapunov indices determine absolute continuous spectra of stationary one dimensional Schrödinger operators. in Proc. Taneguchi Intern. Symp.
on Stochastic Analysis. Katata and Kyoto (1982), ed. K. Ito. North Holland, 225247.
[66] S. Kotani (1983): Limit Theorems of Hypoelliptic Diffusion Processes. Probability theory and Mathematical Statistics. Lect. Notes in Math, No 1021, Springer
Verlag, New York.
[67] S. Kotani (1985): Support theorems for random Schrödinger operators. Comm.
Math. Phys. 97, 443-452
[68] S. Kotani (1984): Lyapunov exponents and spectra for one-dimensional random
Schrödinger operators. Proc. Conf. on Random Matrices and their Applications.
Contemporary Math. Amer. Math. Soc. Providence R. I.
[69] S. Kotani and B. Simon (1988): Stochastic Schrödinger Matrices on the Strip.
Commun. Math. Phys. 119, 403-429.
[70] H. Kunz and B. Souillard (1980): Sur le spectre des opérateurs aux différences
finies aléatoires. Comm. Math. Phys. 78, 201-246.
[71] J. Lacroix (1982): Problèmes probabilistes liés a l’étude des opérateurs aux
différences aléatoires. Ann. Inst. Elie Cartan 7, Marches aléatoires et processus
stochastiques sur les groupes de Lie, 80-95.
[72] J. Lacroix (1983): Singularité du spectre de l’opérateur de Schrödinger aléatoire
dans un ruban ou un demi ruban. Ann. Inst. H. Poincaré ser A38, 385-399.
[73] J. Lacroix (1984): Localisation pour l’opérateur de Schrödinger aléatoire dans un
ruban. Ann. Inst. H. Poincaré ser A40, 97-116.
[74] J. Lacroix (1984): The Schrödinger operator in a strip. Probability measures on
groups VII, Proceedings Oberwolfach, Lect. Notes in Math. 1064 280-297.
[75] J. Lacroix (1984): Computation of the sum of positive Lyapunov exponents in a
strip. Lect. Notes in Math. 1186 280-297.
[76] F. Ledrappier et G. Royer (1980): Croissance exponentielle de certains produits
aléatoires de matrices. C. R. Acad. Sci. Paris ser. A290, 513-514.
[77] F. Ledrappier (1984): Quelques propriétés des exposants caractéristiques. Ecole
d’Eté de Probabilités XII, Saint Flour 1982. Lect. Notes in Math. # 1097.
[78] F. Ledrappier (1985): Positivity of the exponent for stationary sequences of matrices (preprint).
[79] E. Lepage (1982): Théorèmes limites pour les produits de matrices aléatoires.
Lect. Notes in Math # 1064, 258-303.
[80] E. Le Page (1983): Répartition d’état d’un opérateur de Schrödinger aléatoire.
Probability measures on groups VII, Proceedings Oberwolfach, Lect. Notes in
Math. # 1064, 309-367.
[81] E. Le Page (1989): Régularité du plus grand exposant caractéristique de matrices
aléatoires indépendantes et applications. Ann. Inst. Henri Poincaré 25 (2), 109-142.
[82] F. Martinelli and L. Micheli (1986): On the large coupling constant behavior of
the Lyapunov exponent of a binary alloy. (preprint).
[83] F. Martinelli and E. Scoppola (1987): Introduction to the Mathematical theory
of Anderson localization. La Rivista del Nuovo Cimento, 10, 3.
[84] N. F. Mott and W. D. Twose (1961): The theory of impurity conduction. Adv.
Phys. 10, 107-
[85] U. Neri (1971): Singular integrals. Lect. Notes in Math. 200, Springer Verlag,
New York, N.Y.
[86] V. Oseledec (1968): A multiplicative ergodic theorem, Ljapunov characteristic
numbers for dynamical systems. Trans. Moscow Math. Soc. 19, 197-231.
[87] L. A. Pastur (1972): On the distribution of the eigenvalues of the Schrödinger
equation with a random potential. Funct. Anal. Appl. 6, 163-165.
[88] L. A. Pastur (1973): Spectra of random self adjoint operators. Russ. Math. Surveys 28, 1-67.
[89] L. A. Pastur (1974): On the spectrum of the random Jacobi matrices and
the Schrödinger equation on the whole axis with random potential. (preprint),
Kharkov, (russian).
[90] L. A. Pastur (1980): Spectral properties of disordered systems in the one body
approximation. Comm. Math. Phys. 75, 167-196.
[91] M. Raghunathan (1979): A proof of Oseledec's multiplicative ergodic theorem. Israel J. Math. 32, 356-362.
[92] G. Royer (1980): Croissance exponentielle de produits de matrices aléatoires.
Ann. Inst. H. Poincaré ser. B, 16, 49-62.
[93] G. Royer (1982): Etude des opérateurs de Schrödinger à potentiel aléatoire en
dimension un. Bull. Soc. Math. France 110, 27-48.
[94] B. Simon (1985): Localization in general one-dimensional random system, I. Jacobi Matrices. Comm. Math. Phys. 102, 327-336.
[95] B. Simon and M. Taylor (1985): Harmonic analysis on SL(2, R) and smoothness
of the density of states in the Anderson model. Comm. Math. Phys. 101, 1-19.
[96] B. Simon and T. Wolff (1986): Singular continuous spectrum under rank one
perturbation and localization for random Hamiltonians. Comm. Pure Appl. Math.
39, 75-90.
[97] T. Spencer (1985): The Schrödinger equation with a random potential, a mathematical review. In Les Houches Summer School of Critical Phenomena 1984. ed.
K. Osterwalder and R. Stora.
[98] J.M. Steele (1989): Kingman's subadditive ergodic Theorem. Ann. Inst. Henri Poincaré, ser. B 25, 93-98.
[99] D. J. Thouless (1972): A relation between the density of states and the range of
localization for one dimensional random systems. J. Phys. C. 5, 77-???
[100] V.N. Tutubalin (1965): On limit theorems for products of Random Matrices.
Theor. Proba. Appl. 10, 15-27.
[101] V.N. Tutubalin (1969): Some Theorems of the type of the strong law of large
numbers. Theor. Proba. Appl. 14, 313-319.
[102] A. D. Virtser (1979): On products of random matrices and operators. Theory
Proba. Appl. 24, 367-377.
[103] A. D. Virtser (1983): On the simplicity of the spectrum of the Ljapunov characteristic indices of a product of random matrices. Theory Proba. Appl. 26, 122-136.
[104] F. Wegner (1981): Bounds on the density of States in Disordered Systems. Z.
Phys. B. Condensed Matter. 44, 9-15.
[105] Y. Yoshioka (1973): On the singularity of the spectral measures of a semi-infinite
random system. Proc. Japan Acad. 49, 665-668.