# Weeks 14 to 16

## Transcription

http://statwww.epfl.ch
5. Several Random Variables
5.1: Definitions. Joint density and distribution functions. Marginal
and conditional density and distribution functions.
5.2: Independent random variables. Random sample.
5.3: Joint and conditional moments. Covariance, correlation.
5.4: New random variables from old. Change of variables formulae.
5.5: Order statistics.
References: Ross (Chapter 6); Ben Arous notes (IV.2, IV.4–IV.6,
V.1, V.2).
Exercises: 89, 94–102, 114, 115 of Recueil d’exercices, and the
exercises in the text below.
Probabilité et Statistique I — Chapter 5
### Petit Vocabulaire Probabiliste

| Mathematics | English | Français |
|---|---|---|
| $E(X)$ | expected value/expectation of $X$ | l'espérance de $X$ |
| $E(X^r)$ | $r$th moment of $X$ | $r$ième moment de $X$ |
| $\operatorname{var}(X)$ | variance of $X$ | la variance de $X$ |
| $M_X(t)$ | moment generating function of $X$, or the Laplace transform of $f_X(x)$ | la fonction génératrice des moments, ou la transformée de Laplace de $f_X(x)$ |
| $f_{X,Y}(x, y)$ | joint density/mass function | densité/fonction de masse conjointe |
| $F_{X,Y}(x, y)$ | joint (cumulative) distribution function | fonction de répartition conjointe |
| $f_{X\mid Y}(x \mid y)$ | conditional density function | densité conditionnelle |
| $f_{X,Y}(x, y) = f_X(x) f_Y(y)$ | $X$, $Y$ independent | $X$, $Y$ indépendantes |
| $X_1, \dots, X_n \overset{\text{iid}}{\sim} F$ | random sample from $F$ | un échantillon aléatoire |
| $E(X^r Y^s)$ | joint moment | un moment conjoint |
| $\operatorname{cov}(X, Y)$ | covariance of $X$ and $Y$ | la covariance de $X$ et $Y$ |
| $\operatorname{corr}(X, Y)$ | correlation of $X$ and $Y$ | la corrélation de $X$ et $Y$ |
| $E(X \mid Y = y)$ | conditional expectation of $X$ | l'espérance conditionnelle de $X$ |
| $\operatorname{var}(X \mid Y = y)$ | conditional variance of $X$ | la variance conditionnelle de $X$ |
| $X_{(r)}$ | $r$th order statistic | $r$ième statistique d'ordre |
### 5.1 Basic Ideas

Often we consider how several variables vary simultaneously. Some examples:

Example 5.1: Consider the distribution of (height, weight) for EPFL students. •

Example 5.2: $N$ people vote for political parties, choosing among (left, centre, right). •

Example 5.3: Consider marks for a probability test and a probability exam, $(T, P)$, with $0 \le T, P \le 6$. How are these likely to be related? Given the test results, what can we say about the likely value of $P$? •

Our previous definitions generalize in a natural way to this situation.
### Bivariate Discrete Random Variables

Definition: Let $(X, Y)$ be a discrete random variable: the set
$$D = \{(x, y) \in \mathbb{R}^2 : P\{(X, Y) = (x, y)\} > 0\}$$
is countable. The joint probability mass function of $(X, Y)$ is
$$f_{X,Y}(x, y) = P\{(X, Y) = (x, y)\}, \quad (x, y) \in \mathbb{R}^2,$$
and the joint cumulative distribution function of $(X, Y)$ is
$$F_{X,Y}(x, y) = P(X \le x, Y \le y), \quad (x, y) \in \mathbb{R}^2.$$

Example 5.4: One 1SFr and two 5SFr coins are tossed. Let $X$ denote the total number of heads, and $Y$ the number of heads showing on the 5SFr coins. Find the joint probability mass function of $(X, Y)$, and give $P(X \le 2, Y \le 1)$ and $P(X \le 2, 1 \le Y \le 2)$. •
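The slides leave Example 5.4 as an exercise, but the sample space has only eight equally likely outcomes, so the joint pmf can be checked by brute-force enumeration. A minimal sketch in plain Python (exact arithmetic via `Fraction`; not part of the original notes):

```python
from fractions import Fraction
from itertools import product

# Enumerate the 8 equally likely outcomes of tossing one 1SFr coin and
# two 5SFr coins (1 = heads). X = total heads, Y = heads on the 5SFr coins.
pmf = {}
for c1, c5a, c5b in product([0, 1], repeat=3):
    x, y = c1 + c5a + c5b, c5a + c5b
    pmf[(x, y)] = pmf.get((x, y), Fraction(0)) + Fraction(1, 8)

# The two probabilities asked for in Example 5.4.
p1 = sum(p for (x, y), p in pmf.items() if x <= 2 and y <= 1)
p2 = sum(p for (x, y), p in pmf.items() if x <= 2 and 1 <= y <= 2)
```

Summing the entries of `pmf` over the two events gives $P(X \le 2, Y \le 1) = 3/4$ and $P(X \le 2, 1 \le Y \le 2) = 5/8$.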
### Bivariate Continuous Random Variables

Definition: The random variable $(X, Y)$ is called (jointly) continuous if there exists a function $f_{X,Y}(x, y)$ such that
$$P\{(X, Y) \in A\} = \iint_{(u,v) \in A} f_{X,Y}(u, v)\, du\, dv$$
for any $A \subset \mathbb{R}^2$. Then $f_{X,Y}(x, y)$ is called the joint probability density function of $(X, Y)$. •

On setting $A = \{(u, v) : u \le x, v \le y\}$, we see that the joint cumulative distribution function of $(X, Y)$ may be written
$$F_{X,Y}(x, y) = P(X \le x, Y \le y) = \int_{-\infty}^{x} \int_{-\infty}^{y} f_{X,Y}(u, v)\, du\, dv, \quad (x, y) \in \mathbb{R}^2,$$
and this implies that
$$f_{X,Y}(x, y) = \frac{\partial^2}{\partial x\, \partial y} F_{X,Y}(x, y).$$

Exercise: If $x_1 < x_2$ and $y_1 < y_2$, show that
$$P(x_1 < X \le x_2,\, y_1 < Y \le y_2) = F(x_2, y_2) - F(x_1, y_2) - F(x_2, y_1) + F(x_1, y_1).$$

Example 5.5: Find the joint cumulative distribution function and $P(X \le 1, Y > 2)$ when
$$f_{X,Y}(x, y) \propto \begin{cases} e^{-3x-2y}, & x, y > 0,\\ 0, & \text{otherwise.} \end{cases}$$

Example 5.6: Find the joint cumulative distribution function and $P(X \le 1, Y > 2)$ when
$$f_{X,Y}(x, y) \propto \begin{cases} e^{-x-y}, & y > x > 0,\\ 0, & \text{otherwise.} \end{cases}$$
### Marginal and Conditional Distributions

Definition: The marginal probability mass/density function for $X$ is
$$f_X(x) = \begin{cases} \sum_y f_{X,Y}(x, y), & \text{discrete case},\\ \int_{-\infty}^{\infty} f_{X,Y}(x, y)\, dy, & \text{continuous case}, \end{cases} \quad x \in \mathbb{R}.$$
The conditional probability mass/density function for $Y$ given $X$ is
$$f_{Y|X}(y \mid x) = \frac{f_{X,Y}(x, y)}{f_X(x)}, \quad y \in \mathbb{R},$$
provided $f_X(x) > 0$. When $(X, Y)$ is discrete,
$$f_X(x) = P(X = x), \quad f_{Y|X}(y \mid x) = P(Y = y \mid X = x).$$
Analogous definitions hold for $f_Y(y)$, $f_{X|Y}(x \mid y)$, and for the conditional distribution functions $F_{X|Y}(x \mid y)$, $F_{Y|X}(y \mid x)$. The definitions extend to several dimensions by letting $X$, $Y$ be vectors. •
Example 5.7: Find the conditional and marginal probability mass functions in Example 5.4. •

Exercise: Recompute Examples 5.4 and 5.7 with three 1SFr and two 5SFr coins. •

Example 5.8: The number of eggs laid by a beetle has a Poisson distribution with mean $\lambda$. Each egg hatches independently with probability $p$. Find the distribution of the total number of eggs that hatch. Given that $x$ eggs have hatched, what is the distribution of the number of eggs that were laid? •

Example 5.9: Find the conditional and marginal density functions in Example 5.6. •
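Example 5.8 can be probed numerically: if $N \sim \mathrm{Poisson}(\lambda)$ eggs are laid and each hatches independently with probability $p$, the claim to verify is that the number hatched is Poisson with mean $\lambda p$. A sketch, truncating the sum over $N$ (the values $\lambda = 3$, $p = 0.4$ are mine, purely illustrative):

```python
from math import exp, factorial, isclose

lam, p = 3.0, 0.4  # illustrative values, not from the notes

def poisson(k, mu):
    return exp(-mu) * mu**k / factorial(k)

def binom_pmf(k, n, q):
    return factorial(n) / (factorial(k) * factorial(n - k)) * q**k * (1 - q)**(n - k)

# P(K = k) = sum over n of P(N = n) P(K = k | N = n); the Poisson(lam)
# tail beyond n = 80 is negligible, so we truncate there.
pmf_hatched = [sum(poisson(n, lam) * binom_pmf(k, n, p) for n in range(k, 80))
               for k in range(10)]
```

Each entry of `pmf_hatched` agrees with the Poisson($\lambda p$) pmf to within the truncation error.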
### Multivariate Random Variables

Definition: Let $X_1, \dots, X_n$ be random variables defined on the same probability space. Their joint cumulative distribution function is
$$F_{X_1,\dots,X_n}(x_1, \dots, x_n) = P(X_1 \le x_1, \dots, X_n \le x_n)$$
and their joint probability mass/density function is
$$f_{X_1,\dots,X_n}(x_1, \dots, x_n) = \begin{cases} P(X_1 = x_1, \dots, X_n = x_n), & \text{discrete case},\\ \dfrac{\partial^n F_{X_1,\dots,X_n}(x_1, \dots, x_n)}{\partial x_1 \cdots \partial x_n}, & \text{continuous case}. \end{cases}$$
Marginal and conditional density and distribution functions are defined analogously to the bivariate case, by replacing $(X, Y)$ with $X = X_1$, $Y = (X_2, \dots, X_n)$.
All the subsequent discussion can be generalised to $n$ variables in an obvious way, but as the notation becomes heavy we mostly stick to the bivariate case.

Example 5.10: $n$ students vote for the three candidates for president of their union. Let $X_1, X_2, X_3$ be the corresponding numbers of votes, and suppose that all $n$ students vote independently with probabilities $p_1 = 0.45$, $p_2 = 0.4$, and $p_3 = 0.15$. Show that
$$f_{X_1,X_2,X_3}(x_1, x_2, x_3) = \frac{n!}{x_1!\, x_2!\, x_3!}\, p_1^{x_1} p_2^{x_2} p_3^{x_3}, \quad x_1, x_2, x_3 \in \{0, \dots, n\},$$
where $x_1 + x_2 + x_3 = n$. Find the marginal distribution of $X_3$, and the conditional distribution of $X_1$ given $X_3 = m$. •
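A numerical sanity check of the marginal asked for in Example 5.10: summing the multinomial pmf over the compatible $(x_1, x_2)$ should reproduce a binomial pmf for $X_3$. Sketch with an illustrative $n = 10$ (the notes leave $n$ general):

```python
from math import comb, factorial, isclose

n = 10
p1, p2, p3 = 0.45, 0.40, 0.15

def multinomial(x1, x2, x3):
    # Joint pmf from Example 5.10: (n! / (x1! x2! x3!)) p1^x1 p2^x2 p3^x3.
    return (factorial(n) / (factorial(x1) * factorial(x2) * factorial(x3))
            * p1**x1 * p2**x2 * p3**x3)

# Marginal of X3: sum the joint pmf over all (x1, x2) with x1 + x2 = n - x3.
marginal_x3 = [sum(multinomial(x1, n - x3 - x1, x3) for x1 in range(n - x3 + 1))
               for x3 in range(n + 1)]
binomial_x3 = [comb(n, x3) * p3**x3 * (1 - p3)**(n - x3) for x3 in range(n + 1)]
```

The two lists agree term by term, consistent with $X_3 \sim B(n, p_3)$.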
### 5.2 Independent Random Variables

Definition: Two random variables $X$, $Y$ defined on the same probability space are independent if for any subsets $A, B \subset \mathbb{R}$,
$$P(X \in A, Y \in B) = P(X \in A)\, P(Y \in B).$$
This implies that the events $E_A = \{X \in A\}$ and $E_B = \{Y \in B\}$ are independent for any sets $A, B \subset \mathbb{R}$.

Setting $A = (-\infty, x]$ and $B = (-\infty, y]$, we have in particular
$$F_{X,Y}(x, y) = P(X \le x, Y \le y) = P(X \le x)\, P(Y \le y) = F_X(x) F_Y(y), \quad -\infty < x, y < \infty.$$
This implies the equivalent condition
$$f_{X,Y}(x, y) = f_X(x) f_Y(y), \quad -\infty < x, y < \infty,$$
which will be our criterion of independence.

Note: $X$, $Y$ are independent if and only if this holds for all $x, y \in \mathbb{R}$: it is a condition on the functions $f_{X,Y}(x, y)$, $f_X(x)$, $f_Y(y)$.

Note: If $X$, $Y$ are independent, then for any $x$ for which $f_X(x) > 0$,
$$f_{Y|X}(y \mid x) = \frac{f_{X,Y}(x, y)}{f_X(x)} = \frac{f_X(x) f_Y(y)}{f_X(x)} = f_Y(y), \quad y \in \mathbb{R}.$$
Thus knowledge of the value taken by $X$ does not affect the density of $Y$: this is an obvious meaning of independence. By symmetry we also have that $f_{X|Y}(x \mid y) = f_X(x)$ for any $y$ for which $f_Y(y) > 0$.

Note: If $X$ and $Y$ are not independent, we say they are dependent.
Example 5.11: Are $(X, Y)$ independent in Example 5.4? •

Example 5.12: Are $(X, Y)$ independent in Example 5.5? •

Example 5.13: Are $(X, Y)$ independent in Example 5.6? •

Example 5.14: If the density of $(X, Y)$ is uniform on the disk $\{(x, y) : x^2 + y^2 \le a\}$, then (a) without computing the density, say if they are independent; (b) find the conditional density of $Y$ given $X$. •

Exercise: Let $\rho$ be a constant in the range $-1 < \rho < 1$. When are the variables with joint density
$$f_{X,Y}(x, y) = \frac{1}{2\pi(1-\rho^2)^{1/2}} \exp\left\{ -\frac{x^2 - 2\rho x y + y^2}{2(1-\rho^2)} \right\}, \quad -\infty < x, y < \infty,$$
independent? What are then the densities of $X$ and $Y$? •
### Random Sample

Definition: A random sample of size $n$ from a distribution $F$ with density $f$ is a set of $n$ independent random variables all with distribution $F$. We then write $X_1, \dots, X_n \overset{\text{iid}}{\sim} F$ or $X_1, \dots, X_n \overset{\text{iid}}{\sim} f$.

The joint probability density of $X_1, \dots, X_n \overset{\text{iid}}{\sim} f$ is
$$f_{X_1,\dots,X_n}(x_1, \dots, x_n) = \prod_{j=1}^{n} f_X(x_j).$$

Example 5.15: If $X_1, X_2 \overset{\text{iid}}{\sim} \exp(\lambda)$, give their joint density. •

Exercise: Write down the joint density of $Z_1, Z_2, Z_3 \overset{\text{iid}}{\sim} N(0, 1)$, and show that it depends only on $R = (Z_1^2 + Z_2^2 + Z_3^2)^{1/2}$. •
### 5.3 Joint and Conditional Moments

Definition: Let $X$, $Y$ be random variables with probability density function $f_{X,Y}(x, y)$. Then the expectation of $g(X, Y)$ is
$$E\{g(X, Y)\} = \begin{cases} \sum_{x,y} g(x, y) f_{X,Y}(x, y), & \text{discrete case},\\ \iint g(x, y) f_{X,Y}(x, y)\, dx\, dy, & \text{continuous case}, \end{cases}$$
provided $E\{|g(X, Y)|\} < \infty$ (so that $E\{g(X, Y)\}$ has a unique value).

In particular we define joint moments and joint central moments
$$E(X^r Y^s), \quad E\left[\{X - E(X)\}^r \{Y - E(Y)\}^s\right], \quad r, s \in \mathbb{N}.$$
The most important of these is the covariance of $X$ and $Y$,
$$\operatorname{cov}(X, Y) = E\left[\{X - E(X)\}\{Y - E(Y)\}\right] = E(XY) - E(X)E(Y).$$
### Properties of Covariance

Theorem: Let $X, Y, Z$ be random variables and $a, b, c, d$ scalar constants. Covariance satisfies:
$$\begin{aligned}
\operatorname{cov}(X, X) &= \operatorname{var}(X);\\
\operatorname{cov}(a, X) &= 0;\\
\operatorname{cov}(X, Y) &= \operatorname{cov}(Y, X), && \text{(symmetry)};\\
\operatorname{cov}(a + bX + cY, Z) &= b \operatorname{cov}(X, Z) + c \operatorname{cov}(Y, Z), && \text{(bilinearity)};\\
\operatorname{cov}(a + bX, c + dY) &= bd \operatorname{cov}(X, Y);\\
\operatorname{var}(a + bX + cY) &= b^2 \operatorname{var}(X) + 2bc \operatorname{cov}(X, Y) + c^2 \operatorname{var}(Y);\\
\operatorname{cov}(X, Y)^2 &\le \operatorname{var}(X) \operatorname{var}(Y), && \text{(Cauchy–Schwarz inequality)}.
\end{aligned}$$
Use the definition of covariance to prove these. For the last, note that var(X + aY) is a quadratic function of a with at most one real root.
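These identities can be checked exactly on a small discrete joint distribution, since every expectation there is a finite rational sum. A sketch (the pmf below is mine, chosen arbitrarily so that it sums to 1):

```python
from fractions import Fraction as F

# An arbitrary small joint pmf for (X, Y) on {0,1,2} x {0,1} (sums to 1).
pmf = {(0, 0): F(1, 8), (0, 1): F(1, 8), (1, 0): F(1, 4),
       (1, 1): F(1, 8), (2, 0): F(1, 8), (2, 1): F(1, 4)}

def E(g):
    # E{g(X, Y)} as a finite sum over the support.
    return sum(p * g(x, y) for (x, y), p in pmf.items())

def cov(g, h):
    # cov of two functions of (X, Y), via E(gh) - E(g)E(h).
    return E(lambda x, y: g(x, y) * h(x, y)) - E(g) * E(h)

X = lambda x, y: F(x)
Y = lambda x, y: F(y)

a, b, c, d = F(3), F(-2), F(5), F(7)
lhs = cov(lambda x, y: a + b * x, lambda x, y: c + d * y)
varS = cov(lambda x, y: a + b * x + c * y, lambda x, y: a + b * x + c * y)
```

With exact `Fraction` arithmetic, `lhs` equals $bd\operatorname{cov}(X,Y)$ and `varS` expands as the theorem states, and the Cauchy–Schwarz bound holds.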
### Independence and Covariance

If $X$ and $Y$ are independent and $g(X)$, $h(Y)$ are functions whose expectations exist, then (in the continuous case)
$$\begin{aligned}
E\{g(X)h(Y)\} &= \iint g(x) h(y) f_{X,Y}(x, y)\, dx\, dy\\
&= \iint g(x) h(y) f_X(x) f_Y(y)\, dx\, dy\\
&= \int g(x) f_X(x)\, dx \int h(y) f_Y(y)\, dy\\
&= E\{g(X)\}\, E\{h(Y)\}.
\end{aligned}$$
Setting $g(X) = X - E(X)$ and $h(Y) = Y - E(Y)$, we see that if $X$ and $Y$ are independent, then
$$\operatorname{cov}(X, Y) = E\left[\{X - E(X)\}\{Y - E(Y)\}\right] = E\{X - E(X)\}\, E\{Y - E(Y)\} = 0.$$
### Independent Variables

Note: In general it is not true that $\operatorname{cov}(X, Y) = 0$ implies independence of $X$ and $Y$.

Exercise: Let $X \sim N(0, 1)$ and set $Y = X^2 - 1$. What is the conditional distribution of $Y$ given $X = x$? Are they dependent? Show that $E(X^r) = 0$ for any odd $r$. Deduce that $\operatorname{cov}(X, Y) = 0$. •
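A discrete analogue of this exercise makes the point with exact arithmetic: take $X$ uniform on $\{-1, 0, 1\}$ and $Y = X^2 - 1$, so that $Y$ is a deterministic function of $X$ (hence clearly dependent), yet the covariance vanishes because the odd moments of $X$ are zero. (This substitutes a three-point distribution for the $N(0, 1)$ of the exercise, the mechanism being the same.)

```python
from fractions import Fraction as F

# X uniform on {-1, 0, 1}, Y = X^2 - 1: dependent but uncorrelated.
support = [-1, 0, 1]
EX = sum(F(x, 3) for x in support)                  # E(X) = 0
EY = sum(F(x * x - 1, 3) for x in support)          # E(Y) = -1/3
EXY = sum(F(x * (x * x - 1), 3) for x in support)   # E(XY) = E(X^3) = 0
cov_XY = EXY - EX * EY
```

Here `cov_XY` is exactly zero, even though knowing $X$ fixes $Y$ completely.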
Example 5.16: Let $Z_1, Z_2, Z_3$ be independent exponential variables with parameters $\lambda_1, \lambda_2, \lambda_3$. Let $X = Z_1 + Z_2$ and $Y = Z_1 + Z_3$. Find $\operatorname{cov}(X, Y)$ and $\operatorname{cov}(2 + 3X, 4Y)$. •

Example 5.17: Let $X_1 \sim N(\mu_1, \sigma_1^2)$ and $X_2 \sim N(\mu_2, \sigma_2^2)$ be independent. Find the moment-generating functions of $X_1$ and of $X_1 + X_2$. What is the distribution of $X_1 + X_2$? •
### Linear Combinations of Random Variables

Let $X_1, \dots, X_n$ be random variables and $a, b_1, \dots, b_n$ constants. Then the properties of expectation $E(\cdot)$ and of covariance $\operatorname{cov}(\cdot, \cdot)$ imply
$$E(a + b_1 X_1 + \cdots + b_n X_n) = a + \sum_{j=1}^{n} b_j E(X_j),$$
$$\operatorname{var}(a + b_1 X_1 + \cdots + b_n X_n) = \sum_{j=1}^{n} b_j^2 \operatorname{var}(X_j) + \sum_{j \ne k} b_j b_k \operatorname{cov}(X_j, X_k).$$
If $X_1, \dots, X_n$ are independent, then $\operatorname{cov}(X_j, X_k) = 0$ for $j \ne k$, and so
$$\operatorname{var}(a + b_1 X_1 + \cdots + b_n X_n) = \sum_{j=1}^{n} b_j^2 \operatorname{var}(X_j).$$

Example 5.18: If $X_1, X_2$ are independent variables with means 1, 2, and variances 3, 4, find the mean and variance of $5X_1 + 6X_2 - 16$. •
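The two displayed rules can be wrapped in a small helper (the function name is mine) and applied to the numbers of Example 5.18:

```python
# Mean and variance of a + b1*X1 + ... + bn*Xn for INDEPENDENT X's,
# directly implementing the two rules above.
def linear_combo_moments(a, bs, means, variances):
    mean = a + sum(b * m for b, m in zip(bs, means))
    var = sum(b * b * v for b, v in zip(bs, variances))  # no covariance terms
    return mean, var

# Example 5.18: a = -16, b = (5, 6), means (1, 2), variances (3, 4).
mean, var = linear_combo_moments(-16, [5, 6], [1, 2], [3, 4])
```

For Example 5.18 this gives mean $-16 + 5 \cdot 1 + 6 \cdot 2 = 1$ and variance $25 \cdot 3 + 36 \cdot 4 = 219$.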
### Correlation

Covariance is a poor measure of dependence between two quantities, because it depends on their units of measurement.

Definition: The correlation of $X$, $Y$ is defined as
$$\operatorname{corr}(X, Y) = \frac{\operatorname{cov}(X, Y)}{\{\operatorname{var}(X) \operatorname{var}(Y)\}^{1/2}}.$$

Note: This measures linear dependence between $X$ and $Y$. If $\operatorname{corr}(X, Y) = \pm 1$ then constants $a, b, c$ exist such that $aX + bY = c$ with probability one: $X$ and $Y$ are then perfectly linearly dependent. If independent, they are uncorrelated: $\operatorname{corr}(X, Y) = 0$.

Note: In all cases $-1 \le \operatorname{corr}(X, Y) \le 1$.

Note: Mapping $(X, Y) \mapsto (a + bX, c + dY)$ changes $\operatorname{corr}(X, Y)$ to $\operatorname{sign}(bd) \operatorname{corr}(X, Y)$: at most the sign of the correlation changes.
Example 5.19: Find $\operatorname{corr}(X, Y)$ in Example 5.16. •

Exercise: Let $Z_1, Z_2, Z_3$ be independent Poisson variables with common mean $\lambda$. Let $X = Z_1 + 2Z_2$ and $Y = 2Z_1 + Z_3$. Find $\operatorname{cov}(X, Y)$ and $\operatorname{corr}(X, Y)$. •
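This exercise can also be checked by brute force: for moderate $\lambda$ the Poisson tail is negligible, so the moments of $X$ and $Y$ can be computed to high accuracy by truncated summation. A sketch (illustrative $\lambda = 1.5$; the analytic route uses the bilinearity of covariance):

```python
from math import exp, factorial, isclose

lam, N = 1.5, 40  # truncate the Poisson(lam) support at N; tail mass is negligible
pois = [exp(-lam) * lam**k / factorial(k) for k in range(N)]

# Moments of X = Z1 + 2*Z2 and Y = 2*Z1 + Z3 by summing over (z1, z2, z3).
EX = EY = EXY = EX2 = EY2 = 0.0
for z1 in range(N):
    for z2 in range(N):
        for z3 in range(N):
            p = pois[z1] * pois[z2] * pois[z3]
            x, y = z1 + 2 * z2, 2 * z1 + z3
            EX += p * x; EY += p * y; EXY += p * x * y
            EX2 += p * x * x; EY2 += p * y * y

cov = EXY - EX * EY
corr = cov / ((EX2 - EX**2) * (EY2 - EY**2)) ** 0.5
```

The sums reproduce $\operatorname{cov}(X, Y) = 2\operatorname{var}(Z_1) = 2\lambda$ and $\operatorname{corr}(X, Y) = 2\lambda/(5\lambda) = 0.4$, independent of $\lambda$.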
### Multivariate Normal Distribution

Definition: Let $\mu = (\mu_1, \dots, \mu_n)^T \in \mathbb{R}^n$, and let $\Omega$ be an $n \times n$ positive definite matrix with elements $\omega_{jk}$. Then the vector random variable $X = (X_1, \dots, X_n)^T$ with probability density
$$f(x) = \frac{1}{(2\pi)^{n/2} |\Omega|^{1/2}} \exp\left\{ -\tfrac{1}{2} (x - \mu)^T \Omega^{-1} (x - \mu) \right\}, \quad x \in \mathbb{R}^n,$$
is said to have the multivariate normal distribution with mean vector $\mu$ and covariance matrix $\Omega$; we write $X \sim N_n(\mu, \Omega)$. This implies that
$$E(X_j) = \mu_j, \quad \operatorname{cov}(X_j, X_k) = \omega_{jk}.$$
If $\operatorname{cov}(X_j, X_k) = 0$, then the variables $X_j, X_k$ are independent.

Here are plots with $n = 2$, zero mean ($\mu_1 = \mu_2 = 0$), unit variance ($\omega_{11} = \omega_{22} = 1$), and correlation $\rho = \omega_{12}/(\omega_{11}\omega_{22})^{1/2}$.
### Bivariate Normal Densities

[Figure: perspective and contour plots of the bivariate normal density over $(x_1, x_2) \in [-2, 2]^2$, for $\rho = 0.0$, $\rho = 0.3$, and $\rho = 0.9$.]
### Conditional Expectation

Definition: Let $g(X, Y)$ be a function of a random vector $(X, Y)$. Its conditional expectation given $X = x$ is
$$E\{g(X, Y) \mid X = x\} = \begin{cases} \sum_y g(x, y) f_{Y|X}(y \mid x), & \text{discrete case},\\ \int_{-\infty}^{\infty} g(x, y) f_{Y|X}(y \mid x)\, dy, & \text{continuous case}, \end{cases}$$
provided $f_X(x) > 0$ and $E\{|g(X, Y)| \mid X = x\} < \infty$. Note that this is a function of $x$.

Example 5.20: Compute $E(Y \mid X = x)$ and $E(X^4 Y \mid X = x)$ in Example 5.5. •

Exercise: In Example 5.8, compute the expected number of eggs that hatch when $n$ eggs have been laid. Also compute the expected number of eggs laid given that $m$ eggs have hatched. •
### Expectation and Conditioning

In some cases it is easier to compute $E\{g(X, Y)\}$ in stages, as follows:

Theorem: If the required expectations exist, then
$$E\{g(X, Y)\} = E_X\left[ E\{g(X, Y) \mid X = x\} \right],$$
$$\operatorname{var}\{g(X, Y)\} = E_X\left[ \operatorname{var}\{g(X, Y) \mid X = x\} \right] + \operatorname{var}_X\left[ E\{g(X, Y) \mid X = x\} \right],$$
where $E_X$ and $\operatorname{var}_X$ denote the expectation and variance with respect to the distribution of $X$. •
Example 5.21: $n = 200$ people pass a street performer on a given day. Each of them decides independently to give him money with probability $p = 0.05$. The amounts of money received are independent, with mean $\mu = 2$ \$ and variance $\sigma^2 = 1$ \$². What are the expectation and variance of the amount of money he receives? •
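Example 5.21 is a direct application of the two-stage theorem above, with the number of donors $N \sim B(n, p)$ and, given $N$, a sum of $N$ iid amounts. A sketch of the arithmetic (in code only so it can be checked mechanically):

```python
from math import isclose

# Total S = sum of N amounts, N ~ Binomial(n, p), amounts iid with
# mean mu and variance sigma2, independent of N. Conditioning on N:
#   E(S)   = E_N[E(S|N)]                  with E(S|N) = N*mu,
#   var(S) = E_N[var(S|N)] + var_N[E(S|N)] with var(S|N) = N*sigma2.
n, p, mu, sigma2 = 200, 0.05, 2.0, 1.0
EN, varN = n * p, n * p * (1 - p)
ES = EN * mu
varS = EN * sigma2 + varN * mu**2
```

With the numbers of the example this gives $E(S) = 20$ \$ and $\operatorname{var}(S) = 10 + 38 = 48$ \$².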
Exercise: A student takes an exam with $n = 6$ questions. To pass, he must score at least 60 points in total. The marks for the different questions are independent. He knows that he has probability $p = 0.1$ of being unable to start a question. If he can start a question, however, its mark has density
$$f(x) = \begin{cases} x/200, & 0 \le x \le 20,\\ 0, & \text{otherwise.} \end{cases}$$
(a) What is the probability that his total mark is zero?
(b) What are the mean and variance of his total mark?
(c) Use a normal approximation to estimate the probability that he passes the exam. •
### 5.4 New Random Variables from Old

We often want to compute the distributions of random variables built from other random variables. Here is how:

Theorem: Let $Z = g(X, Y)$ be a function of the random variables $(X, Y)$, whose joint density is $f_{X,Y}(x, y)$. Then
$$F_Z(z) = P\{g(X, Y) \le z\} = \begin{cases} \sum_{(x,y) \in A_z} f_{X,Y}(x, y), & \text{discrete case},\\ \iint_{A_z} f_{X,Y}(x, y)\, dx\, dy, & \text{continuous case}, \end{cases}$$
where $A_z = \{(x, y) : g(x, y) \le z\}$.

Example 5.22: Let $X, Y \overset{\text{iid}}{\sim} \exp(\lambda)$; find the distributions of $X + Y$ and of $Y - X$. •

Example 5.23: Let $X_1$ and $X_2$ be the results of independent throws of two fair dice. Find the distribution of $X_1 + X_2$. •
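Example 5.23 is the discrete case of the theorem: $A_z$ is just a finite set of outcome pairs, so the whole distribution can be tabulated by enumerating the 36 equally likely outcomes:

```python
from fractions import Fraction
from itertools import product

# Distribution of X1 + X2 for two independent fair dice, by direct
# enumeration of the 36 equally likely outcomes.
law = {}
for d1, d2 in product(range(1, 7), repeat=2):
    s = d1 + d2
    law[s] = law.get(s, Fraction(0)) + Fraction(1, 36)
```

This recovers the familiar triangular pmf on $\{2, \dots, 12\}$, peaking at $P(X_1 + X_2 = 7) = 1/6$.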
### Transformations of a Continuous Joint Density

Theorem: Let $(X_1, X_2)$ be a two-dimensional random vector with continuous density, and let $Y_1 = g_1(X_1, X_2)$ and $Y_2 = g_2(X_1, X_2)$, where:

(a) the system of equations $y_1 = g_1(x_1, x_2)$, $y_2 = g_2(x_1, x_2)$ can be solved for all $(y_1, y_2)$, giving the solutions $x_1 = h_1(y_1, y_2)$, $x_2 = h_2(y_1, y_2)$; and

(b) $g_1$ and $g_2$ are continuously differentiable, with Jacobian
$$J(x_1, x_2) = \begin{vmatrix} \dfrac{\partial g_1}{\partial x_1} & \dfrac{\partial g_1}{\partial x_2}\\[1ex] \dfrac{\partial g_2}{\partial x_1} & \dfrac{\partial g_2}{\partial x_2} \end{vmatrix}$$
which is positive wherever $f_{X_1,X_2}(x_1, x_2) > 0$.

Then
$$f_{Y_1,Y_2}(y_1, y_2) = f_{X_1,X_2}(x_1, x_2)\, |J(x_1, x_2)|^{-1} \Big|_{x_1 = h_1(y_1, y_2),\, x_2 = h_2(y_1, y_2)}.$$
Example 5.24: Find the joint density of $Y_1 = X_1 + X_2$ and $Y_2 = X_1 - X_2$ when $X_1, X_2 \overset{\text{iid}}{\sim} N(0, 1)$. •

Example 5.25: Find the joint density of $X_1 + X_2$ and $X_1/(X_1 + X_2)$ when $X_1, X_2 \overset{\text{iid}}{\sim} \exp(\lambda)$. •

Example 5.26: If $X_1, X_2 \overset{\text{iid}}{\sim} N(0, 1)$, find the density of $X_2/X_1$. •

Exercise: If the density of $(X_1, X_2)$ is uniform on the unit disk $\{(x_1, x_2) : x_1^2 + x_2^2 \le 1\}$, find the density of $X_1^2 + X_2^2$. (Hint: use polar coordinates.) •
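For Example 5.24 the transformation theorem can be verified pointwise: the inverse is $x_1 = (y_1 + y_2)/2$, $x_2 = (y_1 - y_2)/2$ with $|J| = 2$, and the resulting density should factorise as a product of two $N(0, 2)$ densities. A numerical sketch:

```python
from math import exp, pi, sqrt, isclose

phi = lambda x: exp(-x * x / 2) / sqrt(2 * pi)   # N(0, 1) density
n02 = lambda y: exp(-y * y / 4) / sqrt(4 * pi)   # N(0, 2) density

# Transformation theorem for Y1 = X1 + X2, Y2 = X1 - X2:
# f_{Y1,Y2}(y1, y2) = phi(x1) * phi(x2) / |J| with |J| = 2.
def f_Y(y1, y2):
    x1, x2 = (y1 + y2) / 2, (y1 - y2) / 2
    return phi(x1) * phi(x2) / 2

checks = [(0.0, 0.0), (1.0, -0.5), (2.3, 1.7), (-1.2, 0.4)]
```

At each test point `f_Y(y1, y2)` coincides with `n02(y1) * n02(y2)`, i.e. $Y_1$ and $Y_2$ are independent $N(0, 2)$ variables.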
### The Multivariate Case

The theorem above extends to random vectors with continuous density:
$$(X_1, \dots, X_n) \mapsto (Y_1 = g_1(X_1, \dots, X_n), \dots, Y_n = g_n(X_1, \dots, X_n)).$$
Provided that the inverse transformation exists and has Jacobian
$$J(x_1, \dots, x_n) = \begin{vmatrix} \dfrac{\partial g_1}{\partial x_1} & \cdots & \dfrac{\partial g_1}{\partial x_n}\\ \vdots & \ddots & \vdots\\ \dfrac{\partial g_n}{\partial x_1} & \cdots & \dfrac{\partial g_n}{\partial x_n} \end{vmatrix},$$
we find that
$$f_{Y_1,\dots,Y_n}(y_1, \dots, y_n) = f_{X_1,\dots,X_n}(x_1, \dots, x_n)\, |J(x_1, \dots, x_n)|^{-1},$$
evaluated at $x_1 = h_1(y_1, \dots, y_n), \dots, x_n = h_n(y_1, \dots, y_n)$.
### Moment Generating Functions (recap)

The moment generating function (MGF) of $X$ is defined as $M_X(t) = E(e^{tX})$, for $t \in \mathbb{R}$ such that $M_X(t) < \infty$. It summarises the distribution of $X$, to which it is equivalent. Its key properties are:
$$M_X(0) = 1; \qquad M_{a+bX}(t) = e^{at} M_X(bt); \qquad E(X^r) = \left. \frac{\partial^r M_X(t)}{\partial t^r} \right|_{t=0};$$
$$M_X'(0) = E(X); \qquad M_X''(0) - M_X'(0)^2 = \operatorname{var}(X).$$
There is a bijection between the cumulative distribution function and the moment generating function.
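These properties can be illustrated numerically: for $X \sim \exp(\lambda)$ the MGF is $M_X(t) = \lambda/(\lambda - t)$ for $t < \lambda$, and finite differences of $M_X$ at $t = 0$ should recover $E(X) = 1/\lambda$ and $\operatorname{var}(X) = 1/\lambda^2$. A sketch with the illustrative choice $\lambda = 2$:

```python
from math import isclose

lam = 2.0
M = lambda t: lam / (lam - t)   # MGF of X ~ exp(lam), valid for t < lam

# Recover E(X) and var(X) from the MGF by central finite differences at t = 0.
h = 1e-4
EX = (M(h) - M(-h)) / (2 * h)              # ~ M'(0)  = 1/lam
EX2 = (M(h) - 2 * M(0) + M(-h)) / h**2     # ~ M''(0) = 2/lam^2
varX = EX2 - EX**2
```

With $\lambda = 2$ this returns $E(X) \approx 0.5$ and $\operatorname{var}(X) \approx 0.25$, matching $1/\lambda$ and $1/\lambda^2$.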
### Linear Combinations

Theorem: Let $a, b_1, \dots, b_n$ be constants and $X_1, \dots, X_n$ independent variables whose MGFs exist. Then $Y = a + b_1 X_1 + \cdots + b_n X_n$ has MGF
$$M_Y(t) = E(e^{tY}) = E\{e^{t(a + b_1 X_1 + \cdots + b_n X_n)}\} = e^{at} E(e^{t b_1 X_1}) \times \cdots \times E(e^{t b_n X_n}) = e^{ta} \prod_{j=1}^{n} M_{X_j}(t b_j).$$
In particular, if $X_1, \dots, X_n$ is a random sample, then $S = X_1 + \cdots + X_n$ has MGF
$$M_S(t) = M_X(t)^n.$$
### Using Moment Generating Functions

Example 5.27: Let $Z \sim N(0, 1)$; show that $M_Z(t) = e^{t^2/2}$. Deduce that $X = \mu + \sigma Z$ has MGF $M_X(t) = e^{t\mu + t^2\sigma^2/2}$. •

Example 5.28: Suppose that $X_1, \dots, X_n$ are independent, with $X_j \sim N(\mu_j, \sigma_j^2)$. Show that
$$Y = a + b_1 X_1 + \cdots + b_n X_n \sim N(a + b_1\mu_1 + \cdots + b_n\mu_n,\; b_1^2\sigma_1^2 + \cdots + b_n^2\sigma_n^2):$$
thus, a linear combination of normal variables is normal. •

Example 5.29: Let $X_1, \dots, X_n \overset{\text{iid}}{\sim} \exp(\lambda)$; show that $S = X_1 + \cdots + X_n$ has a gamma distribution. •

Example 5.30: Let $X_1, X_2 \overset{\text{iid}}{\sim} \exp(\lambda)$; show that $W = X_1 - X_2$ has a Laplace distribution. •
### 5.5 Order Statistics

Definition: The order statistics of the random variables $X_1, \dots, X_n$ are the ordered values
$$X_{(1)} \le X_{(2)} \le \cdots \le X_{(n-1)} \le X_{(n)}.$$
If $X_1, \dots, X_n$ are continuous, then their values differ with probability 1 and
$$X_{(1)} < X_{(2)} < \cdots < X_{(n-1)} < X_{(n)}.$$

Definition: The sample minimum is $X_{(1)}$.

Definition: The sample maximum is $X_{(n)}$.

Definition: The sample median of $X_1, \dots, X_n$ is $X_{(m+1)}$ if $n = 2m + 1$ is odd, and $\frac{1}{2}(X_{(m)} + X_{(m+1)})$ if $n = 2m$ is even. The median captures the 'centre' of the data.
Example 5.31: If $x_1 = 6$, $x_2 = 3$, $x_3 = 4$, the order statistics are $x_{(1)} = 3$, $x_{(2)} = 4$, $x_{(3)} = 6$. The sample minimum, median, and maximum are 3, 4, and 6 respectively. •

Theorem: Let $X_1, \dots, X_n$ be a random sample from a continuous distribution with density $f$ and cumulative distribution function $F$. Then:
$$P(X_{(n)} \le x) = F(x)^n; \qquad P(X_{(1)} \le x) = 1 - \{1 - F(x)\}^n;$$
$$f_{X_{(r)}}(x) = \frac{n!}{(r-1)!\,(n-r)!}\, F(x)^{r-1} f(x) \{1 - F(x)\}^{n-r}, \quad r = 1, \dots, n.$$

Example 5.32: Let $X_1, X_2, X_3 \overset{\text{iid}}{\sim} \exp(\lambda)$. What are the marginal densities of $X_{(1)}$, $X_{(2)}$, and $X_{(3)}$? •
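A quick consistency check of the theorem, with $\lambda = 1$ as a stand-in for Example 5.32: integrating the stated density of $X_{(n)}$ from $0$ to $x_0$ should reproduce $F(x_0)^n$. A sketch with a crude trapezoidal rule:

```python
from math import exp, isclose

# X1, X2, X3 iid exp(1): F(x) = 1 - e^{-x}, and the theorem gives
# f_{X_(3)}(x) = 3 F(x)^2 f(x) for the maximum (r = n = 3).
f = lambda x: exp(-x)
F = lambda x: 1 - exp(-x)
f_max = lambda x: 3 * F(x) ** 2 * f(x)

def integrate(g, a, b, steps=20000):
    # Simple trapezoidal rule, accurate enough for this smooth integrand.
    h = (b - a) / steps
    return h * (g(a) / 2 + sum(g(a + k * h) for k in range(1, steps)) + g(b) / 2)

x0 = 1.2
lhs = integrate(f_max, 0.0, x0)   # integral of the order-statistic density
rhs = F(x0) ** 3                  # P(X_(3) <= x0) from the theorem
```

The two sides agree to within the quadrature error.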
Example 5.33: A student takes an exam with 5 questions, which are marked independently. The marks have density
$$f(x) = \begin{cases} x/200, & 0 \le x \le 20,\\ 0, & \text{otherwise.} \end{cases}$$
Find the probability that his lowest mark is below 5. Compute the expectations of the median mark and of the highest mark. •

Exercise: Let $X_1, \dots, X_n \overset{\text{iid}}{\sim} F$ be a random sample; show that $P(X_{(1)} > x, X_{(n)} \le y) = \{F(y) - F(x)\}^n$. If $F$ is continuous, use the fact that
$$P(X_{(n)} \le y) = P(X_{(1)} > x, X_{(n)} \le y) + P(X_{(1)} \le x, X_{(n)} \le y)$$
to show that the joint density of $X_{(1)}, X_{(n)}$ is
$$f_{X_{(1)},X_{(n)}}(x, y) = n(n-1) f(x) f(y) \{F(y) - F(x)\}^{n-2}, \quad x < y.$$
Find this density for Example 5.32. •
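The first part of Example 5.33 follows directly from $P(X_{(1)} \le x) = 1 - \{1 - F(x)\}^n$ with $F(x) = x^2/400$ on $[0, 20]$ and $n = 5$; exact arithmetic:

```python
from fractions import Fraction

# F(x) = x^2 / 400 on [0, 20] (the density is f(x) = x/200), n = 5 questions.
# P(lowest mark < 5) = P(X_(1) <= 5) = 1 - {1 - F(5)}^5.
F5 = Fraction(5**2, 400)              # F(5) = 25/400 = 1/16
p_min_below_5 = 1 - (1 - F5) ** 5
```

So the lowest mark falls below 5 with probability $1 - (15/16)^5 \approx 0.276$.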