Jump-adapted discretization schemes for Lévy-driven SDEs
Arturo Kohatsu-Higa
Osaka University
Graduate School of Engineering Sciences
Machikaneyama cho 1-3, Osaka 560-8531, Japan
E-mail: [email protected]
Peter Tankov (corresponding author)
Ecole Polytechnique
Centre de Mathématiques Appliquées
91128 Palaiseau Cedex France
Email: [email protected]
Abstract
We present new algorithms for weak approximation of stochastic differential equations driven by pure jump Lévy processes. The method is
built upon adaptive non-uniform discretization based on the jump times of
the driving process coupled with suitable approximations of the solutions
between these jump times. Our technique avoids the costly simulation
of the increments of the Lévy process and in many cases achieves better
convergence rates than the traditional schemes with equal time steps. To
illustrate the method, we consider applications to simulation of portfolio
strategies and option pricing in the Libor market model with jumps.
Key words: Lévy-driven stochastic differential equation, Euler scheme, jump-adapted discretization, weak approximation, Libor market model with jumps.
2000 Mathematics Subject Classification: Primary 60H35, Secondary 65C05,
60G51.
1 Introduction
Let Z be a d-dimensional Lévy process without diffusion component, that is,
$$Z_t = \gamma t + \int_0^t\int_{|y|\leq 1} y\,\hat N(dy, ds) + \int_0^t\int_{|y|>1} y\,N(dy, ds), \qquad t \in [0, 1].$$
Here γ ∈ R^d, N is a Poisson random measure on R^d × [0, ∞) with intensity ν satisfying ∫ (1 ∧ |y|²) ν(dy) < ∞, and N̂(dy, ds) = N(dy, ds) − ν(dy)ds denotes the compensated version of N. Further, let X be an R^n-valued adapted stochastic process, the unique solution of the stochastic differential equation
$$X_t = X_0 + \int_0^t h(X_{s-})\,dZ_s, \qquad t \in [0, 1], \quad (1)$$
where h is an n × d matrix-valued function.
The traditional uniformly-spaced discretization schemes for (1) [15, 10] suffer
from two difficulties: first, in general, we do not know how to simulate the
driving Lévy process and second, a large jump of Z occurring between two
discretization points can lead to a significant discretization error. On the other
hand, in many cases of interest, the Lévy measure of Z is known and one can
easily simulate the jumps of Z bigger in absolute value than a certain ε > 0.
As our benchmark example, consider the one-dimensional tempered stable Lévy
process [5] which has the Lévy density
$$\nu(x) = \frac{C_- e^{-\lambda_-|x|}}{|x|^{1+\alpha_-}}\,1_{x<0} + \frac{C_+ e^{-\lambda_+|x|}}{|x|^{1+\alpha_+}}\,1_{x>0}. \quad (2)$$
This process, also known as the CGMY model [3] when C− = C+ and α− = α+ ,
is a popular representation for the logarithm of stock price in financial markets.
No easy algorithm is available for simulating the increments of this process
(cf [12, 14]), however, it is straightforward to simulate random variables with
density
$$\frac{\nu(x)\,1_{|x|>\varepsilon}}{\int_{|x|>\varepsilon}\nu(x)\,dx} \quad (3)$$
using the rejection method [5, example 6.9]. Sometimes, other approximations
of Z by a compound Poisson process, such as the series representations [16] lead
to simpler simulation algorithms.
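To make the rejection method concrete, here is a minimal Python sketch (our own illustration, not part of the original text) for the positive half of the density (3) when ν is the tempered stable density (2); the function name and parameters are ours. For the two-sided case one would first draw the sign of the jump with probabilities proportional to the two tail intensities ∫_ε^∞ C_± e^{−λ_± x} x^{−1−α_±} dx.

```python
import numpy as np

def sample_large_jump(eps, alpha, lam, rng=np.random.default_rng()):
    """One jump of size > eps from the positive part of the tempered stable
    density (2), i.e. proportional to exp(-lam*x) / x**(1+alpha) on (eps, inf).

    Proposal: Pareto tail x = eps * U**(-1/alpha) (the un-tempered stable tail);
    acceptance probability exp(-lam*(x - eps)) <= 1. The constant C cancels.
    """
    while True:
        x = eps * rng.uniform() ** (-1.0 / alpha)
        if rng.uniform() <= np.exp(-lam * (x - eps)):
            return x
```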
A natural idea due to Rubenthaler [17] (in the context of finite-intensity
jump processes, this idea appears also in [2, 13]), is to replace the process Z
with a suitable compound Poisson approximation and place the discretization
points at the jump times of the compound Poisson process. When the jumps
of Z are highly concentrated around zero, however, this approximation is too
rough and can lead to convergence rates which are arbitrarily slow.
In this paper we are interested in approximating the solution of (1) in the
weak sense. We use the idea of Asmussen and Rosiński [1] (see also [4]) and
replace the small jumps of Z with a Brownian motion between the jump times
of the compound Poisson approximation. We then replace the solution of the
continuous SDE between the jump times by a suitable approximation, which
is explicitly computable. Combined, these two steps lead to a much lower discretization error than in [17]. If N is the average number of operations required
to simulate a single trajectory, the discretization error of our method converges to zero faster than 1/√N in all cases and faster than 1/N if the Lévy measure of Z is locally symmetric near zero (for the process (2) this corresponds to having α+ = α− and C+ = C−). In practice, the convergence rates are always better than these worst-case bounds: for example, if α− = α+ = 1.8 in (2), our method has the theoretical convergence rate of N^{−1.22} (compared to only N^{−0.11} without the Brownian approximation) and when α− = α+ = 0.5, the rate of our method is N^{−7}.
Consider a family of measurable functions (χ_ε)_{ε>0} : R^d → [0, 1] such that ∫_{R^d} χ_ε(y) ν(dy) < ∞ and ∫_{|y|>1} |y|² (1 − χ_ε)(y) ν(dy) < ∞ for all ε > 0 and lim_{ε↓0} χ_ε(y) = 1 for all y ≠ 0. The Lévy measure ν will be approximated by the finite measures χ_ε ν. The simplest such approximation is χ_ε(y) := 1_{|y|>ε}, but others can also be useful (see Example 7 below). We denote by N_ε a Poisson random measure with intensity χ_ε ν × ds and by $\hat N_\varepsilon$ its compensated Poisson random measure. Similarly, we denote by $\hat{\bar N}_\varepsilon$ the compensated Poisson random measure with intensity χ̄_ε ν × ds, where χ̄_ε := 1 − χ_ε. The process Z can then be represented in law as follows:
$$Z_t \overset{d}{=} \gamma_\varepsilon t + Z^\varepsilon_t + R^\varepsilon_t,$$
$$\gamma_\varepsilon = \gamma - \int_{|y|\leq 1} y\,\chi_\varepsilon\nu(dy) + \int_{|y|>1} y\,\bar\chi_\varepsilon\nu(dy),$$
$$Z^\varepsilon_t = \int_0^t\int_{\mathbb R^d} y\,N_\varepsilon(dy, ds), \qquad R^\varepsilon_t = \int_0^t\int_{\mathbb R^d} y\,\hat{\bar N}_\varepsilon(dy, ds).$$
We denote by λ_ε = ∫_{R^d} χ_ε ν(dy) the intensity of Z^ε, by (N_t^ε) the Poisson process which has the same jump times as Z^ε, and by T_i^ε, i ∈ N, the jump times of Z^ε.
Furthermore, we denote by Σ^ε the variance-covariance matrix of R_1^ε:
$$\Sigma^\varepsilon_{ij} = \int_{\mathbb R^d} y_i y_j\,\bar\chi_\varepsilon\nu(dy),$$
and in the one-dimensional case we set σ_ε² := Σ^ε_{11}. Sometimes we shall use the following technical assumption on (χ_ε):

(A) ∀n ≥ 2, ∃C such that
$$\int_{\mathbb R^d}|z|^{n+1}\,\bar\chi_\varepsilon\nu(dz) \leq C\int_{\mathbb R^d}|z|^{n}\,\bar\chi_\varepsilon\nu(dz)$$
for ε sufficiently small.
This assumption, roughly, means that the approximation (χ_ε) does not remove small jumps faster than big jumps, and it is clearly satisfied by χ_ε(y) = 1_{|y|>ε}.
Supposing that no further discretization is done between the jump times of Z^ε, the computational complexity of simulating a single approximate trajectory is proportional to the random number N_1^ε. To compare our method to the traditional equally-spaced discretizations, we measure instead the computational complexity by the average number of jumps of Z^ε on [0, 1], equal to λ_ε. We say that the approximation X̂^ε converges weakly with order α > 0 if, for a sufficiently smooth function f,
$$|E[f(X_1)] - E[f(\hat X_1)]| \leq K\lambda_\varepsilon^{-\alpha}$$
for some K > 0 and all ε sufficiently small.
In this paper, we provide two versions of the jump-adapted discretization
scheme for pure jump Lévy processes. The scheme presented in Section 2 is
easier to use and implement but only works in the case d = n = 1. A fully general
scheme is then presented in Section 3. Section 4 presents two detailed examples
where our method is applied to the problems of long term portfolio simulation
and of option pricing in the Libor market model with jumps.
2 One-dimensional SDE
For our first scheme we take d = n = 1 and consider the ordinary differential
equation
$$dX_t = h(X_t)\,dt, \qquad X_0 = x. \quad (4)$$
In the one-dimensional case, the solution of this equation can always be written
$$X_t := \theta(t; x) = F^{-1}(t + F(x)),$$
where F is the primitive of 1/h(x). Alternatively, a high-order discretization scheme (such as Runge-Kutta) can be used and is easy to construct (see also Remark 4).
We define inductively X̂(0) = X_0 and, for i ≥ 0 (with T^ε_0 := 0),
$$\hat X(T^\varepsilon_{i+1}-) = \theta\Big(\gamma_\varepsilon(T^\varepsilon_{i+1} - T^\varepsilon_i) + \sigma_\varepsilon\big(W(T^\varepsilon_{i+1}) - W(T^\varepsilon_i)\big) - \tfrac12 h'(\hat X(T^\varepsilon_i))\,\sigma_\varepsilon^2\,(T^\varepsilon_{i+1} - T^\varepsilon_i);\ \hat X(T^\varepsilon_i)\Big), \quad (5)$$
$$\hat X(T^\varepsilon_i) = \hat X(T^\varepsilon_i-) + h(\hat X(T^\varepsilon_i-))\,\Delta Z(T^\varepsilon_i). \quad (6)$$
Similarly, for any other point t ∈ (T^ε_i, T^ε_{i+1}), we define
$$\hat X(t) = \theta\Big(\gamma_\varepsilon(t - \eta_t) + \sigma_\varepsilon\big(W(t) - W(\eta_t)\big) - \tfrac12 h'(\hat X(\eta_t))\,\sigma_\varepsilon^2\,(t - \eta_t);\ \hat X(\eta_t)\Big), \quad (7)$$
where we set η_t := sup{T^ε_i : T^ε_i ≤ t}. Here W denotes a one-dimensional Brownian motion.
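As an illustration of how (5)-(6) translate into code, the following Python sketch (ours, with hypothetical names) simulates one trajectory on [0, 1] for the truncation χ_ε(y) = 1_{|y|>ε}; the flow θ, the coefficient h and its derivative h', the constants γ_ε, σ_ε, λ_ε and a sampler for the jumps of Z^ε are supplied by the user.

```python
import numpy as np

def jump_adapted_path(theta, h, dh, gamma_eps, sigma_eps, lam_eps,
                      sample_jump, x0, T=1.0, rng=np.random.default_rng()):
    """One trajectory of the jump-adapted scheme (5)-(6).

    theta(t, x) -- flow of the ODE dX = h(X) dt started at x (exact or Runge-Kutta)
    h, dh       -- coefficient of the SDE and its derivative h'
    sample_jump -- returns one jump of Z^eps (e.g. by rejection from (3))
    Returns the jump times (plus 0 and T) and the values of X-hat at these times.
    """
    t, x = 0.0, x0
    times, values = [t], [x]
    while True:
        dt = min(rng.exponential(1.0 / lam_eps), T - t)   # next inter-jump time
        dw = np.sqrt(dt) * rng.standard_normal()          # Brownian increment over (t, t+dt)
        # equation (5): evolve along the approximate flow up to the next jump time
        x = theta(gamma_eps * dt + sigma_eps * dw - 0.5 * dh(x) * sigma_eps**2 * dt, x)
        t += dt
        if t >= T:
            times.append(T); values.append(x)
            return np.array(times), np.array(values)
        x = x + h(x) * sample_jump()                      # equation (6): apply the jump
        times.append(t); values.append(x)
```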
Therefore, the idea is to replace the original equation with an Asmussen-Rosiński type approximation which is explicitly solvable between the times of big jumps and is exact for all h if the driving process is deterministic, and for
affine h in all cases. The purpose of this construction becomes clear from the
following lemma.
Lemma 1. Let h ∈ C¹. The process (X̂(t)) defined by (5)-(7) is the solution of the stochastic differential equation
$$d\hat X_t = h(\hat X_{t-})\Big(dZ^\varepsilon_t + \sigma_\varepsilon dW_t + \gamma_\varepsilon dt + \tfrac12\big(h'(\hat X_t) - h'(\hat X_{\eta(t)})\big)\sigma_\varepsilon^2\,dt\Big).$$

Proof. It is enough to show that the process
$$Y_t := \theta\big(\gamma_\varepsilon t + \sigma_\varepsilon W_t - \tfrac12 h'(x)\sigma_\varepsilon^2 t;\ x\big) \quad (8)$$
is the solution of the continuous SDE
$$dY_t = h(Y_t)\Big(\sigma_\varepsilon dW_t + \gamma_\varepsilon dt + \tfrac12\big(h'(Y_t) - h'(x)\big)\sigma_\varepsilon^2\,dt\Big).$$
This follows by an application of the Itô formula to (8) using
$$\frac{\partial\theta(t, z)}{\partial t} = h(\theta(t, z)), \qquad \frac{\partial^2\theta(t, z)}{\partial t^2} = h'(\theta(t, z))\,h(\theta(t, z)).$$
For the convergence analysis, we introduce two sets of conditions (parameterized by an integer n):

(H_n) f ∈ C^n, h ∈ C^n, f^{(k)} and h^{(k)} are bounded for 1 ≤ k ≤ n and ∫ |z|^{2n} ν(dz) < ∞.

(H′_n) f ∈ C^n, h ∈ C^n, h^{(k)} are bounded for 1 ≤ k ≤ n, f^{(k)} have at most polynomial growth for 1 ≤ k ≤ n and ∫ |z|^k ν(dz) < ∞ for all k ≥ 1.
Theorem 2. (i) Assume (H_3) or (H′_3) + (A). Then
$$|E[f(\hat X_1) - f(X_1)]| \leq C\left(\frac{\sigma_\varepsilon^2}{\lambda_\varepsilon}\,(\sigma_\varepsilon^2 + |\gamma_\varepsilon|) + \int_{\mathbb R}|y|^3\,\bar\chi_\varepsilon\nu(dy)\right).$$
(ii) Assume (H_4) or (H′_4) + (A), let γ_ε be bounded and let the measure ν satisfy
$$\left|\int_{\mathbb R} y^3\,\bar\chi_\varepsilon\nu(dy)\right| \leq \int_{\mathbb R}|y|^4\,\bar\chi_\varepsilon\nu_0(dy) \quad (9)$$
for some measure ν_0. Then
$$|E[f(\hat X_1) - f(X_1)]| \leq C\left(\frac{\sigma_\varepsilon^2}{\lambda_\varepsilon}\,(\sigma_\varepsilon^2 + |\gamma_\varepsilon|) + \int_{\mathbb R}|y|^4\,\bar\chi_\varepsilon(\nu_0 + \nu)(dy)\right)$$
and in particular,
$$|E[f(\hat X_1) - f(X_1)]| \leq C\left(\frac{\sigma_\varepsilon^2}{\lambda_\varepsilon} + \int_{\mathbb R}|y|^4\,\bar\chi_\varepsilon(\nu_0 + \nu)(dy)\right).$$
Remark 3. (i) Under the polynomial growth assumptions (H′_3) or (H′_4), the
constants in the above theorem may depend on the initial value x.
(ii) Condition (9) is satisfied, for example, if χε (y) = χε (−y) for all y and ε
and if ν is locally symmetric near zero. That is, ν(dy) = (1 + ξ(y))ν0 (dy), where
ν0 is a symmetric measure and ξ(y) = O(y) for y → 0.
Remark 4. The convergence rates given in Theorem 2 are obtained under the
assumption that the equation (4) is solved explicitly. If a discretization scheme
is used for this equation as well, the total error will be bounded from below by
the error of this scheme. However, usually this constraint is not very important
since it is easy to construct a discretization scheme for (4) with a high convergence order; for example, the classical Runge-Kutta scheme has convergence order 4.
Corollary 5 (Worst-case convergence rates). Let χ_ε(x) = 1_{|x|>ε}. Then, under the conditions of part (i) of Theorem 2,
$$|E[f(\hat X_1) - f(X_1)]| \leq o(\lambda_\varepsilon^{-1/2}),$$
and under the conditions of part (ii) of Theorem 2,
$$|E[f(\hat X_1) - f(X_1)]| \leq o(\lambda_\varepsilon^{-1}).$$

Proof. These worst-case bounds follow from the following estimates. First, for every Lévy process, σ_ε → 0 as ε → 0. By the dominated convergence theorem, ε²λ_ε → 0 as ε → 0. This implies
$$\sqrt{\lambda_\varepsilon}\int_{|y|\leq\varepsilon}|y|^3\nu(dy) \leq \sqrt{\varepsilon^2\lambda_\varepsilon}\,\sigma_\varepsilon^2 \to 0 \quad (\varepsilon\to 0)$$
and similarly
$$\lambda_\varepsilon\int_{|y|\leq\varepsilon}|y|^4\nu(dy) \leq \varepsilon^2\lambda_\varepsilon\,\sigma_\varepsilon^2 \to 0 \quad (\varepsilon\to 0).$$
Example 6 (Stable-like behavior). Once again, let χ_ε(x) = 1_{|x|>ε}, and assume that the Lévy measure has an α-stable-like behavior near zero, meaning that ν has a density ν(z) satisfying
$$\nu(z) = \frac{g(z)}{|z|^{1+\alpha}},$$
where g has finite nonzero right and left limits at zero as, for example, for the tempered stable process (CGMY) [5]. Then σ_ε² = O(ε^{2−α}), λ_ε = O(ε^{−α}),
$$\int_{|y|\leq\varepsilon}|y|^3\nu(dy) = O(\varepsilon^{3-\alpha}) \quad\text{and}\quad \int_{|y|\leq\varepsilon}|y|^4\nu(dy) = O(\varepsilon^{4-\alpha}),$$
so that in general for α ∈ (0, 2),
$$|E[f(\hat X_1) - f(X_1)]| \leq O\big(\lambda_\varepsilon^{1-\frac{3}{\alpha}}\big),$$
and if the Lévy measure is locally symmetric near zero and α ∈ (0, 1),
$$|E[f(\hat X_1) - f(X_1)]| \leq O\big(\lambda_\varepsilon^{1-\frac{4}{\alpha}}\big).$$
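These asymptotics are easy to check numerically for the benchmark density (2). The short script below (our own illustration, with C_± = 1 and λ_± = 1) computes λ_ε and σ_ε² by quadrature; the printed ratios settle towards constants as ε decreases, consistent with λ_ε = O(ε^{−α}) and σ_ε² = O(ε^{2−α}).

```python
import numpy as np
from scipy.integrate import quad

def nu(x, alpha):
    # tempered stable density (2) with C_+ = C_- = 1 and lambda_+ = lambda_- = 1
    return np.exp(-abs(x)) / abs(x) ** (1.0 + alpha)

for alpha in (0.5, 1.8):
    for eps in (1e-1, 1e-2, 1e-3):
        # intensity of the big-jump part (both signs) and variance of the residual small jumps
        lam_eps = 2 * (quad(nu, eps, 1.0, args=(alpha,))[0] + quad(nu, 1.0, np.inf, args=(alpha,))[0])
        sig2_eps = 2 * quad(lambda x: x**2 * nu(x, alpha), 0.0, eps)[0]
        print(f"alpha={alpha} eps={eps:.0e}  lam*eps^a={lam_eps * eps**alpha:.3f}  "
              f"sig2/eps^(2-a)={sig2_eps / eps**(2 - alpha):.3f}")
```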
Example 7 (Simulation using series representation). In this example we explain why it can be useful to make other choices of truncation than χ_ε(x) = 1_{|x|>ε}. The gamma process has Lévy density ν(z) = (c e^{−λz}/z) 1_{z>0}. If one uses the truncation function χ_ε(x) = 1_{|x|>ε}, one will need to simulate random variables with law (c e^{−λz}/(z ν((ε, ∞)))) 1_{z>ε}, which may be costly. Instead, one can use one of the many series representations for the gamma process [16]. Maybe the most convenient one is
$$X_t = \sum_{i=1}^\infty \lambda^{-1} e^{-\Gamma_i/c}\,V_i\,1_{U_i\leq t},$$
where (Γ_i) is a sequence of jump times of a standard Poisson process, (U_i) is an independent sequence of independent random variables uniformly distributed on [0, 1] and (V_i) is an independent sequence of independent standard exponential random variables. For every τ > 0, the truncated sum
$$X_t = \sum_{i:\,\Gamma_i < \tau} \lambda^{-1} e^{-\Gamma_i/c}\,V_i\,1_{U_i\leq t}$$
defines a compound Poisson process with Lévy density
$$\nu_\tau(x) = \frac{c}{x}\Big[e^{-\lambda x} - e^{-\lambda x e^{\tau/c}}\Big]\,1_{x>0},$$
and therefore this series representation corresponds to
$$\chi_\varepsilon(x) = 1 - e^{-\lambda x(e^{\tau/c} - 1)},$$
where ε can be linked to τ, for example, by setting τ = 1/ε.
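A minimal sketch of this construction (our own illustration): it generates the jump times and sizes of the truncated series on the horizon [0, 1], which is exactly what the jump-adapted schemes need as input.

```python
import numpy as np

def gamma_series_jumps(c, lam, tau, rng=np.random.default_rng()):
    """Jump times and sizes on [0, 1] of the truncated series of Example 7,
    i.e. the terms with Gamma_i < tau; the i-th jump has size
    lam**-1 * exp(-Gamma_i / c) * V_i and occurs at the uniform time U_i."""
    gammas = []
    g = 0.0
    while True:                      # jump times of a standard Poisson process
        g += rng.exponential(1.0)
        if g >= tau:
            break
        gammas.append(g)
    gammas = np.array(gammas)
    times = rng.uniform(0.0, 1.0, size=len(gammas))                               # U_i
    sizes = np.exp(-gammas / c) * rng.exponential(1.0, size=len(gammas)) / lam    # scaled V_i
    order = np.argsort(times)
    return times[order], sizes[order]
```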
Theorem 2 will be proved after a series of lemmas.
Lemma 8 (Bounds on moments of X and X̂). Assume
$$\int_{\mathbb R}|z|^p\nu(dz) < \infty \ \text{ for some } p \geq 2,$$
h ∈ C¹(R) and
$$|h(x)| \leq K(1 + |x|) \quad\text{and}\quad |h'(x)| \leq K \quad\text{for some } K < \infty.$$
Then there exists a constant C > 0 (which may depend on p but not on ε) such that
$$E\Big[\sup_{0\leq s\leq 1}|X_s|^p\Big] \leq C(1 + |x|^p), \quad (10)$$
$$E\Big[\sup_{0\leq s\leq 1}|\hat X_s|^p\Big] \leq C(1 + |x|^p). \quad (11)$$
Proof. We shall concentrate on the bound (11); the other one will follow by making ε go to zero. We have
$$\hat X_t = x + \int_0^t h(\hat X_s)\,\tilde\gamma_s\,ds + \int_0^t h(\hat X_{s-})\,d\tilde Z_s$$
with
$$\tilde\gamma_t = \gamma + \int_{|z|>1} z\,\nu(dz) + \frac12\sigma_\varepsilon^2\big\{h'(\hat X_t) - h'(\hat X_{\eta(t)})\big\}, \qquad \tilde Z_t = \sigma_\varepsilon W_t + \int_0^t\int_{\mathbb R} z\,\hat N_\varepsilon(dz, ds).$$
Observe that γ̃_t is bounded uniformly in t and in ε. Using a predictable version of the Burkholder-Davis-Gundy inequality [6, lemma 2.1], we then obtain for t ≤ 1 and for some constant C < ∞ which may change from line to line
$$E\Big[\sup_{0\leq s\leq t}|\hat X_s|^p\Big] \leq CE\Bigg[|x|^p + \int_0^t|h(\hat X_s)|^p|\tilde\gamma_s|^p ds + \left(\int_0^t h^2(\hat X_s)\Big(\sigma_\varepsilon^2 + \int_{\mathbb R} z^2\chi_\varepsilon\nu(dz)\Big)ds\right)^{p/2} + \int_0^t|h(\hat X_s)|^p ds\int_{\mathbb R}|z|^p\chi_\varepsilon\nu(dz)\Bigg]$$
$$\leq CE\left[|x|^p + \int_0^t|h(\hat X_s)|^p ds\right] \leq CE\left[1 + |x|^p + \int_0^t|\hat X_s|^p ds\right].$$
The bound (11) now follows from Gronwall's inequality.
Lemma 9 (Derivatives of the flow). Let p ≥ 2 and for an integer n ≥ 1 assume
$$\int_{\mathbb R}|z|^{np}\nu(dz) < \infty,$$
h ∈ C^n(R) and
$$|h(x)| \leq K(1 + |x|) \quad\text{and}\quad |h^{(k)}(x)| \leq K$$
for some K < ∞ and all k with 1 ≤ k ≤ n. Then
$$E\left|\frac{\partial^k}{\partial x^k}X_T^{(t,x)}\right|^p < \infty$$
for all k with 1 ≤ k ≤ n.

Proof. See the proof of lemma 4.2 in [15].
Lemma 10. Let u(t, x) := E^{(t,x)}[f(X_T)].

(i) Assume (H_n) with n ≥ 2. Then u ∈ C^{1,n}([0, 1] × R), the derivatives ∂^k u/∂x^k are uniformly bounded for 1 ≤ k ≤ n and u is a solution of the equation
$$\frac{\partial u}{\partial t}(t, x) + \gamma\frac{\partial u}{\partial x}(t, x)h(x) + \int_{|y|\leq 1}\Big(u(t, x + h(x)y) - u(t, x) - \frac{\partial u}{\partial x}(t, x)h(x)y\Big)\nu(dy)$$
$$+ \int_{|y|>1}\big(u(t, x + h(x)y) - u(t, x)\big)\nu(dy) = 0, \quad (12)$$
$$u(1, x) = f(x).$$

(ii) Assume (H′_n) with n ≥ 2. Then u ∈ C^{1,n}([0, 1] × R), u is a solution of equation (12) and there exist C < ∞ and p > 0 with
$$\left|\frac{\partial^k u(t, x)}{\partial x^k}\right| \leq C(1 + |x|^p)$$
for all t ∈ [0, 1], x ∈ R and 1 ≤ k ≤ n.
Proof. The derivative ∂u/∂x satisfies
$$\frac{\partial u(t, x)}{\partial x} = E\left[f'(X_T^{(t,x)})\,\frac{\partial}{\partial x}X_T^{(t,x)}\right].$$
The interchange of the derivative and the expectation is justified using lemma 9. The boundedness under (H_n) or the polynomial growth under (H′_n) then follow from lemmas 8 and 9. The other derivatives with respect to x are obtained by successive differentiations under the expectation. The derivative with respect to t is obtained from Itô's formula applied to f(X_T^{(t,x)}). In fact, note that since Z is a Lévy process and h does not depend on t, E[f(X_T^{(t,x)})] = E[f(X_{T-t}^{(0,x)})] and hence it is sufficient to study the derivative ∂E[f(X_t^{(0,x)})]/∂t. The Itô formula yields
$$E[f(X_t)] = f(x) + E\int_0^t f'(X_{s-})h(X_{s-})\,dZ_s + E\int_0^t\int_{\mathbb R}\big\{f(X_{s-} + h(X_{s-})z) - f(X_{s-}) - f'(X_{s-})h(X_{s-})z\big\}\,N(dz, ds).$$
Denoting by Z̃ the martingale part of Z and by γ̃ := γ + ∫_{|z|>1} zν(dz) the residual drift, and using lemma 8, we get
$$E[f(X_t)] = f(x) + \int_0^t ds\,\tilde\gamma\,E[f'(X_s)h(X_s)] + \int_0^t ds\,E\left[\int_{\mathbb R}\big\{f(X_s + h(X_s)z) - f(X_s) - f'(X_s)h(X_s)z\big\}\nu(dz)\right],$$
and therefore
$$\frac{\partial E[f(X_t)]}{\partial t} = \tilde\gamma E[f'(X_t)h(X_t)] + E\left[\int_{\mathbb R}\big\{f(X_t + h(X_t)z) - f(X_t) - f'(X_t)h(X_t)z\big\}\nu(dz)\right]$$
$$= \tilde\gamma E[f'(X_t)h(X_t)] + \int_0^1 d\theta\,E\left[\int_{\mathbb R}(1-\theta)h^2(X_t)z^2\,\frac{\partial^2 f(X_t + \theta z h(X_t))}{\partial x^2}\,\nu(dz)\right].$$
Now, once again, lemma 8 allows us to prove the finiteness of this expression. Finally, equation (12) is a consequence of Itô's formula applied, this time, to u(t, X_t).
Lemma 11. Let f : [0, ∞) → [0, ∞) be an increasing function. Then
$$E\int_0^1 f(t - \eta(t))\,dt \leq E[f(\tau)],$$
where τ is an exponential random variable with intensity λ_ε. In particular,
$$E\int_0^1(t - \eta(t))^{p+1}\,dt \leq \frac{(p+1)!}{\lambda_\varepsilon^{p+1}}.$$
Proof. Let k_ε = sup{k : T^ε_k < 1}. Then
$$E\int_0^1 f(t - \eta(t))\,dt = E\left[\sum_{i=1}^{k_\varepsilon}\int_{T^\varepsilon_{i-1}}^{T^\varepsilon_i} f(t - T^\varepsilon_{i-1})\,dt + \int_{T^\varepsilon_{k_\varepsilon}}^1 f(t - T^\varepsilon_{k_\varepsilon})\,dt\right]$$
$$= E\left[\sum_{i=1}^{k_\varepsilon}\int_{T^\varepsilon_{i-1}}^{T^\varepsilon_i} f(T^\varepsilon_i - t)\,dt + \int_{T^\varepsilon_{k_\varepsilon}}^1 f(1 - t)\,dt\right]$$
$$\leq E\left[\sum_{i=1}^{k_\varepsilon}\int_{T^\varepsilon_{i-1}}^{T^\varepsilon_i} f(T^\varepsilon_i - t)\,dt + \int_{T^\varepsilon_{k_\varepsilon}}^1 f(T^\varepsilon_{k_\varepsilon+1} - t)\,dt\right]$$
$$= E\int_0^1 f\big(\inf\{T^\varepsilon_i : T^\varepsilon_i > t\} - t\big)\,dt = E[f(\tau)].$$
The second statement of the lemma is a direct consequence of the first one.
Proof of theorem 2. From Itô's formula and lemmas 8 and 10,
$$E[f(\hat X_1) - f(X_1)] = E[u(1, \hat X_1) - u(0, X_0)]$$
$$= \int_0^1 dt\,E\left[\frac12\frac{\partial^2 u(t, \hat X_t)}{\partial x^2}\,\sigma_\varepsilon^2 h^2(\hat X_t) - \int_{\mathbb R}\bar\chi_\varepsilon\nu(dy)\Big(u(t, \hat X_t + h(\hat X_t)y) - u(t, \hat X_t) - \frac{\partial u(t, \hat X_t)}{\partial x}h(\hat X_t)y\Big)\right] \quad (13)$$
$$+ \int_0^1 dt\,E\left[\frac12\sigma_\varepsilon^2\,\frac{\partial u(t, \hat X_t)}{\partial x}\,h(\hat X_t)\big(h'(\hat X_t) - h'(\hat X_{\eta(t)})\big)\right]. \quad (14)$$
Denote the expectation in (13) by A_t and the one in (14) by B_t. From lemmas 10 and 8, we then have, under the hypothesis (H_3),
$$|A_t| \leq E\left[\int_{\mathbb R}\frac12\,y^3 h^3(\hat X_t)\,\bar\chi_\varepsilon\int_0^1(1-\theta)^2\,\frac{\partial^3 u(t, \hat X_t + \theta y h(\hat X_t))}{\partial x^3}\,d\theta\,\nu(dy)\right]$$
$$\leq CE\big[1 + |h|^3(\hat X_t)\big]\int_{\mathbb R}|y|^3\bar\chi_\varepsilon\nu(dy) \leq C(1 + |x|^3)\int_{\mathbb R}|y|^3\bar\chi_\varepsilon\nu(dy).$$
If, instead, the polynomial growth condition (H′_3) is satisfied, then for some p > 0,
$$A_t \leq CE\left[\int_{\mathbb R}|y|^3\big(1 + |h|^3(\hat X_t)\big)\big(1 + |\hat X_t|^p + |y h(\hat X_t)|^p\big)\,\bar\chi_\varepsilon\nu(dy)\right]$$
and once again, we use lemma 8 together with the assumption (A), because all moments of the Lévy measure are now assumed finite. Under the condition of part (ii) and the hypothesis (H_4),
$$|A_t| \leq E\left[\left|\frac{\partial^3 u(t, \hat X_t)}{\partial x^3}\,h^3(\hat X_t)\right|\right]\left|\int_{\mathbb R} y^3\,\bar\chi_\varepsilon\nu(dy)\right| + \frac{1}{24}\,E\left[\int_{\mathbb R} y^4 h^4(\hat X_t)\int_0^1(1-\theta)^3\,\frac{\partial^4 u(t, \hat X_t + \theta y h(\hat X_t))}{\partial x^4}\,d\theta\,\bar\chi_\varepsilon\nu(dy)\right]$$
$$\leq C(1 + |x|^4)\int_{\mathbb R}|y|^4\,\bar\chi_\varepsilon(\nu_0 + \nu)(dy).$$
The case of (H′_4) is treated as above. To analyze the term B_t, define
$$H(t, x) = \frac{\partial u(t, x)}{\partial x}\,h(x)$$
and assume, to fix the notation, that (H_3) is satisfied. Then, once again by lemmas 10 and 8, the Taylor formula and the Cauchy-Schwarz inequality,
$$|B_t| \leq \frac12\sigma_\varepsilon^2\Big|E\big[(H(t, \hat X_t) - H(t, \hat X_{\eta(t)}))(h'(\hat X_t) - h'(\hat X_{\eta(t)}))\big]\Big| + \frac12\sigma_\varepsilon^2\Big|E\big[H(t, \hat X_{\eta(t)})(h'(\hat X_t) - h'(\hat X_{\eta(t)}))\big]\Big|$$
$$\leq \frac12\sigma_\varepsilon^2\,E\left[(\hat X_t - \hat X_{\eta(t)})^2\left|\int_0^1 d\theta\,\frac{\partial H}{\partial x}\big(\hat X_{\eta(t)} + \theta(\hat X_t - \hat X_{\eta(t)})\big)\right|\times\left|\int_0^1 d\theta'\,h''\big(\hat X_{\eta(t)} + \theta'(\hat X_t - \hat X_{\eta(t)})\big)\right|\right]$$
$$+ \frac12\sigma_\varepsilon^2\Big|E\big[H(t, \hat X_{\eta(t)})\,E[(h'(\hat X_t) - h'(\hat X_{\eta(t)}))\,|\,\mathcal F_{\eta(t)}]\big]\Big|$$
$$\leq C\sigma_\varepsilon^2\,E\big[(\hat X_t - \hat X_{\eta(t)})^2(1 + |\hat X_t| + |\hat X_{\eta(t)}|)\big] + \frac12\sigma_\varepsilon^2\Big|E\big[H(t, \hat X_{\eta(t)})h''(\hat X_{\eta(t)})E[\hat X_t - \hat X_{\eta(t)}\,|\,\mathcal F_{\eta(t)}]\big]\Big|$$
$$+ \frac12\sigma_\varepsilon^2\left|E\left[H(t, \hat X_{\eta(t)})(\hat X_t - \hat X_{\eta(t)})^2\int_0^1 d\theta(1-\theta)\,h^{(3)}\big(\hat X_{\eta(t)} + \theta(\hat X_t - \hat X_{\eta(t)})\big)\right]\right|$$
$$\leq C\sigma_\varepsilon^2\,E\big[(\hat X_t - \hat X_{\eta(t)})^4\big]^{1/2} + \frac12\sigma_\varepsilon^2\Big|E\big[H(t, \hat X_{\eta(t)})h''(\hat X_{\eta(t)})E[\hat X_t - \hat X_{\eta(t)}\,|\,\mathcal F_{\eta(t)}]\big]\Big|. \quad (15)$$
Note that
$$\hat X_t - \hat X_{\eta(t)} = \int_{\eta(t)}^t h(\hat X_s)\Big(\sigma_\varepsilon dW_s + \gamma_\varepsilon ds + \tfrac12(h'(\hat X_s) - h'(\hat X_{\eta_s}))\sigma_\varepsilon^2\,ds\Big),$$
$$E[\hat X_t - \hat X_{\eta(t)}\,|\,\mathcal F_{\eta(t)}] = E\left[\int_{\eta(t)}^t h(\hat X_s)\Big(\gamma_\varepsilon ds + \tfrac12(h'(\hat X_s) - h'(\hat X_{\eta_s}))\sigma_\varepsilon^2\,ds\Big)\,\Big|\,\mathcal F_{\eta(t)}\right].$$
The Burkholder-Davis-Gundy inequality, the Cauchy-Schwarz inequality and lemma 8 then give, under the hypothesis (H_3):
$$E\big[(\hat X_t - \hat X_{\eta(t)})^4\big] \leq CE\left[\left(\int_{\eta(t)}^t h(\hat X_s)\Big(\gamma_\varepsilon ds + \tfrac12(h'(\hat X_s) - h'(\hat X_{\eta_s}))\sigma_\varepsilon^2\,ds\Big)\right)^4\right] + C\sigma_\varepsilon^4\,E\left[\left(\int_{\eta(t)}^t h^2(\hat X_s)\,ds\right)^2\right]$$
$$\leq C(|\gamma_\varepsilon| + \sigma_\varepsilon^2)^4\,E\big[(t - \eta(t))^4(1 + \sup\hat X)^4\big] + C\sigma_\varepsilon^4\,E\big[(t - \eta(t))^2(1 + \sup\hat X)^4\big]$$
$$\leq C(|\gamma_\varepsilon| + \sigma_\varepsilon^2)^4\,E\big[(t - \eta(t))^8\big]^{1/2} + C\sigma_\varepsilon^4\,E\big[(t - \eta(t))^4\big]^{1/2}.$$
Similarly, for the second term in (15), we get
$$\frac12\sigma_\varepsilon^2\Big|E\big[H(t, \hat X_{\eta(t)})h''(\hat X_{\eta(t)})E[\hat X_t - \hat X_{\eta(t)}\,|\,\mathcal F_{\eta(t)}]\big]\Big| \leq C\sigma_\varepsilon^2(|\gamma_\varepsilon| + \sigma_\varepsilon^2)\,E\big[(t - \eta(t))^2\big]^{1/2}.$$
Assembling together the estimates for the two terms in (15),
$$|B_t| \leq C\sigma_\varepsilon^2\Big((|\gamma_\varepsilon| + \sigma_\varepsilon^2)^2 E\big[(t - \eta(t))^8\big]^{1/2} + \sigma_\varepsilon^2 E\big[(t - \eta(t))^4\big]^{1/2} + (|\gamma_\varepsilon| + \sigma_\varepsilon^2)E\big[(t - \eta(t))^2\big]^{1/2}\Big).$$
Using the Jensen inequality and lemma 11, this is further reduced to
$$\int_0^1|B_t|\,dt \leq C\sigma_\varepsilon^2\left(\frac{(|\gamma_\varepsilon| + \sigma_\varepsilon^2)^2}{\lambda_\varepsilon^4} + \frac{|\gamma_\varepsilon| + \sigma_\varepsilon^2}{\lambda_\varepsilon}\right).$$
From the Cauchy-Schwarz inequality we get
$$\left(\int_{|y|\leq 1}|y|\,\chi_\varepsilon\nu(dy)\right)^2 \leq \lambda_\varepsilon\int_{|y|\leq 1} y^2\chi_\varepsilon\nu(dy) \leq C\lambda_\varepsilon,$$
which implies that |γ_ε| ≤ C√λ_ε and finally |B| ≤ Cσ_ε²(|γ_ε| + σ_ε²)/λ_ε. Assembling these estimates with the ones for A, we complete the proof under the assumptions (H_3) (or (H_4)). Under the assumptions (H′_3) or (H′_4) the proof is done in a similar fashion.
3 Approximating multidimensional SDE using expansions
In this section we propose an alternative approximation scheme, which yields
similar rates to the ones obtained in section 2 but has the advantage of being
applicable in the multidimensional case. On the other hand, it is a little more
difficult to implement. As before, we start by replacing the small jumps of Z
with a suitable Brownian motion, yielding the SDE
$$d\bar X_t = h(\bar X_{t-})\{\gamma_\varepsilon dt + dW^\varepsilon_t + dZ^\varepsilon_t\}, \quad (16)$$
where W^ε is a d-dimensional Brownian motion with covariance matrix Σ_ε. This process can also be written as
$$\bar X(t) = \bar X(\eta_t) + \int_{\eta_t}^t h(\bar X(s))\,dW^\varepsilon(s) + \int_{\eta_t}^t h(\bar X(s))\,\gamma_\varepsilon\,ds,$$
$$\bar X(T^\varepsilon_i) = \bar X(T^\varepsilon_i-) + h(\bar X(T^\varepsilon_i-))\,\Delta Z(T^\varepsilon_i).$$
The idea now is to expand the solution of (16) between the jumps of Z ε around
the solution of the deterministic dynamical system (4), treating the stochastic
term as a small random perturbation (see [8, Chapter 2]).
Assume that the coefficient h is Lipschitz and consider a family of processes (Y^α)_{0≤α≤1} defined by
$$Y^\alpha(t) = \bar X(\eta_t) + \alpha\int_{\eta_t}^t h(Y^\alpha(s))\,dW^\varepsilon(s) + \int_{\eta_t}^t h(Y^\alpha(s))\,\gamma_\varepsilon\,ds.$$
Our idea is to replace the process X̄ := Y¹ with its first-order Taylor approximation:
$$\bar X(t) \approx Y^0(t) + \frac{\partial}{\partial\alpha}Y^\alpha(t)\Big|_{\alpha=0}.$$
Therefore, the new approximation X̃ is defined by
$$\tilde X(t) = Y_0(t) + Y_1(t), \quad t > \eta_t,$$
$$\tilde X(T^\varepsilon_i) = \tilde X(T^\varepsilon_i-) + h(\tilde X(T^\varepsilon_i-))\,\Delta Z(T^\varepsilon_i),$$
$$Y_0(t) = \tilde X(\eta_t) + \int_{\eta_t}^t h(Y_0(s))\,\gamma_\varepsilon\,ds, \quad (17)$$
$$Y_1(t) = \int_{\eta_t}^t\frac{\partial h}{\partial x_i}(Y_0(s))\,Y_1^i(s)\,\gamma_\varepsilon\,ds + \int_{\eta_t}^t h(Y_0(s))\,dW^\varepsilon(s),$$
where we used the Einstein convention for summation over repeated indices.
Conditionally on the jump times (T^ε_i)_{i≥1}, the random vector Y_1(t) is Gaussian with mean zero. Applying the integration by parts formula to Y_1^i(t)Y_1^j(t), it can be shown that its covariance matrix Ω(t) satisfies the (matrix) linear equation
$$\Omega(t) = \int_{\eta_t}^t\big(\Omega(s)M(s) + M^\perp(s)\Omega^\perp(s) + N(s)\big)\,ds, \quad (18)$$
where M^⊥ denotes the transpose of the matrix M and
$$M_{ij}(t) = \frac{\partial h_{ik}(Y_0(t))}{\partial x_j}\,\gamma_\varepsilon^k \quad\text{and}\quad N(t) = h(Y_0(t))\,\Sigma_\varepsilon\,h^\perp(Y_0(t)).$$
Successive applications of Gronwall's inequality yield the following bounds for Y_0 and Ω:
$$\|Y_0(t)\| \leq \big(\|Y_0(\eta_t)\| + C\|\gamma_\varepsilon\|(t - \eta_t)\big)\,e^{C\|\gamma_\varepsilon\|(t - \eta_t)},$$
$$\|\Omega(t)\| \leq C\|\Sigma_\varepsilon\|(t - \eta_t)\big(1 + \|Y_0(\eta_t)\|^2\big)\,e^{C\|\gamma_\varepsilon\|(1 + \|\gamma_\varepsilon\|)(t - \eta_t)}. \quad (19)$$
Lemma 12. Assume ν(R^d) = ∞. Then
$$\lim_{\varepsilon\to 0}\frac{|\gamma_\varepsilon|^2}{\lambda_\varepsilon} = 0.$$

Proof. Left to the reader as an exercise.
To prove Theorem 14, we need to generalize lemmas 8, 9 and 10 to the multidimensional setting. While the generalization of the last two lemmas is straightforward, the first one requires a little work, because it needs to be adapted to
the new discretization scheme.
Lemma 13 (Bounds on moments of X̃). Assume ν(R^d) = ∞, h ∈ C_b^1(R^n) and
$$\int_{\mathbb R^d}|z|^p\nu(dz) < \infty \ \text{ for some } p \geq 2.$$
Then there exists a constant C > 0 (which may depend on p but not on ε) such that for all ε sufficiently small,
$$E\Big[\sup_{0\leq s\leq 1}|\tilde X_s|^p\Big] \leq C(1 + |x|^p). \quad (20)$$
Proof. Denote h_t := h(X̃_t) and h̃_t := h(Y_0(t)). The SDE for X̃ can be rewritten as
$$d\tilde X_t = h_{t-}\,d\tilde Z^\varepsilon_t + \tilde h_t\,dW^\varepsilon_t + h_t\,\tilde\gamma\,dt + (\tilde h_t - h_t)\,\gamma_\varepsilon\,dt + \frac{\partial h}{\partial x_i}(Y_0(t))\,Y_1^i(t)\,\gamma_\varepsilon\,dt,$$
where
$$\tilde Z_t = \int_0^t\int_{|y|>\varepsilon} y\,\hat N_\varepsilon(dy, ds), \qquad \tilde\gamma = \gamma + \int_{|y|>1} y\,\nu(dy).$$
By the predictable Burkholder-Davis-Gundy inequality [6, lemma 2.1], we then have
$$E\Big[\sup_{0\leq s\leq t}\|\tilde X_s\|^p\Big] \leq CE\Bigg[\|x\|^p + \int_0^t\|h_s\|^p|\tilde\gamma|^p ds + \left(\int_0^t\|h_s\|^2\int|z|^2\chi_\varepsilon\nu(dz)\,ds\right)^{p/2} + \int_0^t\|h_s\|^p ds\int|z|^p\chi_\varepsilon\nu(dz)$$
$$+ \left(\int_0^t\|\tilde h_s\|^2\|\Sigma_\varepsilon\|\,ds\right)^{p/2} + |\gamma_\varepsilon|^p\int_0^t\|\tilde h_s - h_s\|^p ds + |\gamma_\varepsilon|^p\int_0^t\|Y_1(s)\|^p ds\Bigg]$$
$$\leq CE\left[\|x\|^p + \int_0^t\|h_s\|^p ds + (1 + |\gamma_\varepsilon|^p)\int_0^t\|\tilde h_s - h_s\|^p ds + |\gamma_\varepsilon|^p\int_0^t\|Y_1(s)\|^p ds\right],$$
where the constant C does not depend on ε and may change from line to line. Since h' is bounded, we have
$$E\Big[\sup_{0\leq s\leq t}\|\tilde X_s\|^p\Big] \leq CE\left[\|x\|^p + \int_0^t\|h_s\|^p ds + (1 + |\gamma_\varepsilon|^p)\int_0^t\|Y_1(s)\|^p ds\right]. \quad (21)$$
Using (19), the last term can be estimated as
$$E\left[(1 + |\gamma_\varepsilon|^p)\int_0^t\|Y_1(s)\|^p ds\right] = E\left[(1 + |\gamma_\varepsilon|^p)\int_0^t E\big[\|Y_1(s)\|^p\,|\,\mathcal F_{\eta(s)}\big]\,ds\right]$$
$$\leq CE\left[(1 + |\gamma_\varepsilon|^p)\int_0^t\|\Omega(s)\|^{p/2}\,ds\right]$$
$$\leq CE\left[(1 + |\gamma_\varepsilon|^p)\int_0^t\big(1 + \|\tilde X_{\eta(s)}\|^p\big)\|\Sigma_\varepsilon\|^{p/2}(s - \eta_s)^{p/2}e^{C|\gamma_\varepsilon|(1 + |\gamma_\varepsilon|)(s - \eta_s)}\,ds\right]$$
$$\leq CE\left[(1 + |\gamma_\varepsilon|^p)\int_0^t\big(1 + \|\tilde X_{\eta(s)}\|^p\big)\|\Sigma_\varepsilon\|^{p/2}\,\tau^{p/2}e^{C|\gamma_\varepsilon|(1 + |\gamma_\varepsilon|)\tau}\,ds\right],$$
where τ is an independent exponential random variable with parameter λ_ε. Due to Lemma 12, for ε sufficiently small, the expectation with respect to τ exists, and computing it explicitly we obtain
$$E\left[(1 + |\gamma_\varepsilon|^p)\int_0^t\|Y_1(s)\|^p ds\right] \leq C_\varepsilon\,E\left[\int_0^t\big(1 + \|\tilde X_{\eta(s)}\|^p\big)\,ds\right],$$
where C_ε → 0 as ε → 0. Inequality (21) therefore becomes
$$E\Big[\sup_{0\leq s\leq t}\|\tilde X_s\|^p\Big] \leq CE\left[\|x\|^p + \int_0^t\|h_s\|^p ds + C_\varepsilon\int_0^t\big(1 + \|\tilde X_{\eta(s)}\|^p\big)\,ds\right]$$
$$\leq CE\left[\|x\|^p + \int_0^t\|h_s\|^p ds + C_\varepsilon\,t\,\big(1 + \sup_{0\leq s\leq t}\|\tilde X_s\|^p\big)\right],$$
which implies that for ε sufficiently small,
$$E\Big[\sup_{0\leq s\leq t}\|\tilde X_s\|^p\Big] \leq CE\left[1 + \|x\|^p + \int_0^t\|h_s\|^p ds\right],$$
and we get (20) by Gronwall's lemma.
Theorem 14. (i) Assume (H_3) or (H′_3) + (A). Then
$$|E[f(\tilde X_1) - f(X_1)]| \leq C\left(\frac{\|\Sigma_\varepsilon\|}{\lambda_\varepsilon}\big(\|\Sigma_\varepsilon\| + |\gamma_\varepsilon|\big) + \int_{\mathbb R^d}|y|^3\,\bar\chi_\varepsilon\nu(dy)\right).$$
(ii) Assume (H_4) or (H′_4) + (A), let γ_ε be bounded and suppose that for some measure ν_0
$$\left|\int_{\mathbb R^d} y_i y_j y_k\,\bar\chi_\varepsilon\nu(dy)\right| \leq \int_{\mathbb R^d}|y|^4\,\bar\chi_\varepsilon\nu_0(dy)$$
for all i, j, k and all ε sufficiently small. Then
$$|E[f(\tilde X_1) - f(X_1)]| \leq C\left(\frac{\|\Sigma_\varepsilon\|}{\lambda_\varepsilon} + \int_{\mathbb R^d}|y|^4\,\bar\chi_\varepsilon(\nu_0 + \nu)(dy)\right).$$
Proof. By Itô formula we have
$$E[f(\tilde X_1) - f(X_1)] = E[u(1, \tilde X_1) - u(0, X_0)]$$
$$= \int_0^1 dt\,E\Bigg[\frac{\partial u(t, \tilde X(t))}{\partial x_i}\Big(h_{ij}(Y_0(t)) + \frac{\partial h_{ij}(Y_0(t))}{\partial x_k}Y_1^k(t) - h_{ij}(\tilde X(t))\Big)\gamma_\varepsilon^j \quad (22)$$
$$+ \frac12\frac{\partial^2 u(t, \tilde X(t))}{\partial x_i\partial x_j}\,h_{ik}(Y_0(t))\,\Sigma^{kl}_\varepsilon\,h_{jl}(Y_0(t)) \quad (23)$$
$$- \int_{\mathbb R^d}\Big\{u(t, \tilde X(t) + h(\tilde X(t))y) - u(t, \tilde X(t)) - \frac{\partial u(t, \tilde X_t)}{\partial x_i}h_{ij}(\tilde X(t))y_j\Big\}\bar\chi_\varepsilon\nu(dy)\Bigg]. \quad (24)$$
Denote the expectation term in (22) by A_t and the sum of the terms in (23) and (24) by B_t. The term A_t satisfies
$$|A_t| \leq CE\left[|Y_1(t)|^2\,|\gamma_\varepsilon|\,\left\|\frac{\partial u(t, \tilde X(t))}{\partial x}\right\|\right].$$
Under the assumption (H_4) or (H_3), using (19), we have
$$|A_t| \leq CE\big[|Y_1(t)|^2\,|\gamma_\varepsilon|\big] \leq CE\big[\|\Omega(t)\|\big]\,|\gamma_\varepsilon| \leq CE\big[1 + |Y_0(\eta_t)|^4\big]^{1/2}\,\|\Sigma_\varepsilon\|\,|\gamma_\varepsilon|\,E\big[(t - \eta_t)e^{C|\gamma_\varepsilon|(1 + |\gamma_\varepsilon|)(t - \eta_t)}\big].$$
Using Lemmas 11 and 13, we then get
$$\int_0^1|A_t|\,dt \leq C(1 + |x|^2)\,\frac{\|\Sigma_\varepsilon\|\,|\gamma_\varepsilon|}{\lambda_\varepsilon}.$$
Under (H′4 ) or (H′3 ) this result can be obtained along the same lines.
Let us now turn to the term B_t. It is rewritten via
$$B_t = E\Bigg[-\int_{\mathbb R^d}\Big\{u(t, \tilde X(t) + h(\tilde X(t))y) - u(t, \tilde X(t)) - \frac{\partial u(t, \tilde X(t))}{\partial x_i}h_{ij}(\tilde X(t))y_j - \frac{\partial^2 u(t, \tilde X_t)}{\partial x_i\partial x_j}h_{ik}(\tilde X(t))h_{jl}(\tilde X(t))y_k y_l\Big\}\bar\chi_\varepsilon\nu(dy)$$
$$+ \frac{\partial^2 u(t, Y_0(t))}{\partial x_i\partial x_j}\,\Sigma^{kl}_\varepsilon\Big\{h_{ik}(Y_0(t))h_{jl}(Y_0(t)) - h_{ik}(\tilde X(t))h_{jl}(\tilde X(t))\Big\}$$
$$+ \left(\frac{\partial^2 u(t, \tilde X(t))}{\partial x_i\partial x_j} - \frac{\partial^2 u(t, Y_0(t))}{\partial x_i\partial x_j}\right)\Sigma^{kl}_\varepsilon\Big\{h_{ik}(Y_0(t))h_{jl}(Y_0(t)) - h_{ik}(\tilde X(t))h_{jl}(\tilde X(t))\Big\}\Bigg].$$
Denote the first two lines by B_1(t), the third line by B_2(t) and the fourth by B_3(t). Under the hypotheses (H_4) or (H_3), we get
$$|B_1(t)| \leq E\left|\int_{\mathbb R^d}\frac{\partial^3 u(t, \tilde X_t + \theta h(\tilde X(t))y)}{\partial x_i\partial x_j\partial x_k}\,h_{il}(\tilde X(t))h_{jm}(\tilde X_t)h_{kn}(\tilde X(t))\,y_l y_m y_n\,\bar\chi_\varepsilon\nu(dy)\right|$$
$$\leq CE\big[\|h(\tilde X_t)\|^3\big]\int_{\mathbb R^d}|y|^3\bar\chi_\varepsilon\nu(dy) \leq C(1 + |x|^3)\int_{\mathbb R^d}|y|^3\bar\chi_\varepsilon\nu(dy).$$
Let F_{ikjl}(x) := h_{ik}(x)h_{jl}(x). Using the fact that conditionally on F_{η_t}, Y_1 is a centered Gaussian process,
$$|B_2(t)| \leq E\left|\frac{\partial^2 u(t, Y_0(t))}{\partial x_i\partial x_j}\,\Sigma^{kl}_\varepsilon\,E\big[F_{ikjl}(Y_0(t)) - F_{ikjl}(\tilde X(t))\,|\,\mathcal F_{\eta_t}\big]\right|$$
$$\leq E\left|\frac{\partial^2 u(t, Y_0(t))}{\partial x_i\partial x_j}\,\Sigma^{kl}_\varepsilon\,Y_1^p(t)Y_1^q(t)\int_0^1(1-\theta)\,\frac{\partial^2 F_{ikjl}(Y_0(t) + \theta Y_1(t))}{\partial x_p\partial x_q}\,d\theta\right|$$
$$\leq CE\Big[\|\Sigma_\varepsilon\|\,|Y_1(t)|^2\big(1 + \sup_s|\tilde X_s|\big)\Big].$$
Using once again Lemmas 11 and 13, we obtain
$$\int_0^1|B_2|\,dt \leq C(1 + |x|)\,\frac{\|\Sigma_\varepsilon\|^2}{\lambda_\varepsilon}.$$
With a similar reasoning we obtain that B_3 also satisfies
$$\int_0^1|B_3|\,dt \leq C(1 + |x|)\,\frac{\|\Sigma_\varepsilon\|^2}{\lambda_\varepsilon}.$$
The proof of the first part of the theorem under assumptions (H′_4) or (H′_3) is done along the same lines.
4 Applications
Stochastic models driven by Lévy processes are gaining increasing popularity
in financial mathematics, where they offer a much more realistic description of
the underlying risks than the traditional diffusion-based models. In this context, many quantities of interest are given by solutions of stochastic differential
equations which cannot be solved explicitly. In this paper we consider two
such applications: portfolio management with constraints and the Libor market
model with jumps.
One-dimensional SDE: portfolio management with constraints. The value V_t of a portfolio in various common portfolio management strategies can often be expressed as the solution of an SDE. One example is the so-called Constant Proportion Portfolio Insurance (CPPI) strategy which aims at maintaining the value of the portfolio above some prespecified guaranteed level B_t and consists in investing m(V_t − B_t) into the risky asset S_t and the remaining amount into the risk-free bond, where B_t is the time-t value of the guarantee:
$$dV_t = m(V_{t-} - B_{t-})\,\frac{dS_t}{S_{t-}} + \big(V_{t-} - m(V_{t-} - B_{t-})\big)\,r\,dt.$$
For simplicity we shall suppose that the guarantee is the value of a zero-coupon bond expiring at time T with nominal amount N = 1: B_t = e^{−r(T−t)}N. This equation can then easily be solved analytically, but often additional constraints imposed on the portfolio make the analytic solution impossible. For instance, often leverage is not allowed, and the portfolio dynamics becomes
$$dV_t = \min\big(V_{t-}, m(V_{t-} - B_{t-})\big)\,\frac{dS_t}{S_{t-}} + \Big(V_{t-} - \min\big(V_{t-}, m(V_{t-} - B_{t-})\big)\Big)\,r\,dt. \quad (25)$$
Suppose that the stock S follows an exponential Lévy model:
$$dZ^*_t = \frac{dS_t}{S_{t-}} - r\,dt,$$
where Z^* is a pure-jump Lévy process, and let V^*_t = V_t/B_t be the value of the portfolio computed using B_t as numéraire. Equation (25) can be simplified to
$$dV^*_t = \min\big(V^*_{t-}, m(V^*_{t-} - 1)\big)\,dZ^*_t. \quad (26)$$
This equation cannot be solved explicitly, but the corresponding deterministic ODE
$$dX_t = \min\big(X_t, m(X_t - 1)\big)\,dt$$
admits a simple explicit solution for m > 1: if x ≥ m/(m−1) then
$$X_t = x e^t.$$
Otherwise, for 1 < x < m/(m−1),
$$X_t = 1 + (x - 1)e^{mt}, \quad t \leq t^*,$$
$$X_t = \frac{m}{m-1}\,\big[(x - 1)(m - 1)\big]^{1/m}\,e^{t}, \quad t \geq t^*,$$
with t^* = −(1/m) log[(x − 1)(m − 1)].
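In code, the flow θ(t; x) = F^{-1}(t + F(x)) of this ODE can be written as follows (our own sketch, for m > 1 and x > 1); unlike the piecewise formulas above, the F/F^{-1} form also handles the negative "time" arguments that occur in the scheme (5). Plugging it into the one-dimensional scheme of Section 2, for example as theta = lambda t, x: cppi_flow(t, x, m), together with a jump sampler for Z*, gives the jump-adapted simulation of (26).

```python
import numpy as np

def cppi_flow(t, x, m):
    """theta(t; x) = F^{-1}(t + F(x)) for dX = min(X, m(X-1)) dt, with m > 1, x > 1.
    F is a primitive of 1/min(x, m(x-1)) chosen continuous at x = m/(m-1)."""
    c = m / (m - 1.0)                       # level where the two regimes meet
    def F(y):
        return np.log(y) if y >= c else np.log((y - 1.0) * (m - 1.0)) / m + np.log(c)
    def Finv(u):
        return np.exp(u) if u >= np.log(c) else 1.0 + np.exp(m * (u - np.log(c))) / (m - 1.0)
    return Finv(t + F(x))
```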
Figure 1 shows the trajectories of the solution of (26) simulated in the variance gamma model. The variance gamma model is a pure-jump Lévy process with Lévy density
$$\nu(x) = \frac{Ce^{-\lambda_-|x|}}{|x|}\,1_{x<0} + \frac{Ce^{-\lambda_+ x}}{x}\,1_{x>0}.$$
The variance gamma process is thus a difference of two independent gamma processes and we simulate it using the approximation of Example 7. The "near-exact" solution was produced using the standard Euler scheme with 10000 discretization steps. With 100 jumps, the jump-adapted scheme has a greater error than the Euler scheme with 100 points, because the Euler scheme uses exact simulation of increments. However, for 200 jumps, the error of the jump-adapted scheme virtually disappears, whereas the Euler scheme still has a significant error even with 500 steps. This illustrates the exponential convergence of the jump-adapted scheme for the variance gamma process. Note that these graphs show the trajectories of the solution and hence illustrate the strong convergence. The effect of the weak approximation of small jumps by Brownian motion is therefore not visible, but even without this additional improvement, the convergence for the variance gamma process is very fast.
Multidimensional SDE: Libor market model with jumps. One important example of a non-trivial multidimensional SDE arises in the Libor market model, which describes the joint arbitrage-free dynamics of a set of forward interest rates. Libor market models with jumps were considered among others by Jamshidian [11], Glasserman and Kou [9] and Eberlein and Özkan [7]. Let T_i = T_1 + (i − 1)δ, i = 1, . . . , n + 1, be a set of dates called tenor dates. The Libor rate L^i_t is the forward interest rate, defined at date t for the period [T_i, T_{i+1}].
Figure 1: CPPI portfolio simulation with V_0^* = 1.5 and m = 2 in the variance gamma model (no diffusion component added). Left: jump-adapted scheme with 100 and 200 jumps against the near-exact solution. Right: standard Euler scheme with 100 and 500 points against the near-exact solution.

If
B_t(T) is the price at time t of a zero-coupon bond, that is, a bond which pays to its holder 1 unit at date T, the Libor rates can be computed via
$$L^i_t = \frac{1}{\delta}\left(\frac{B_t(T_i)}{B_t(T_{i+1})} - 1\right).$$
A Libor market model (LMM) is a model describing an arbitrage-free dynamics of a set of Libor rates.
In this example, we shall use a simplified version of the general semimartingale LMM given in [11], supposing that all Libor rates are driven by a d-dimensional pure jump Lévy process. In this case, following [11], an arbitrage-free dynamics of L^1_t, . . . , L^n_t can be constructed via the multi-dimensional SDE


$$\frac{dL^i_t}{L^i_{t-}} = \sigma^i(t)\,dZ_t - \int_{\mathbb R^d}\sigma^i(t)z\left(\prod_{j=i+1}^n\Big(1 + \frac{\delta L^j_t\,\sigma^j(t)z}{1 + \delta L^j_t}\Big) - 1\right)\nu(dz)\,dt. \quad (27)$$
Here Z is a d-dimensional martingale pure jump Lévy process with Lévy measure ν, and σ^i(t) are d-dimensional deterministic volatility functions.

Equation (27) gives the Libor dynamics under the so-called terminal measure, which corresponds to using the last zero-coupon bond B_t(T_{n+1}) as numéraire. This means that the price at time t of an option having payoff H = h(L^1_{T_1}, . . . , L^n_{T_1})
at time T_1 is given by
$$\pi_t(H) = B_t(T_{n+1})\,E\left[\frac{h(L^1_{T_1}, \ldots, L^n_{T_1})}{B_{T_1}(T_{n+1})}\,\Big|\,\mathcal F_t\right]$$
$$= B_t(T_{n+1})\,E\left[h(L^1_{T_1}, \ldots, L^n_{T_1})\prod_{i=1}^n(1 + \delta L^i_{T_1})\,\Big|\,\mathcal F_t\right]$$
$$= \frac{B_t(T_1)}{\prod_{i=1}^n(1 + \delta L^i_t)}\,E\left[h(L^1_{T_1}, \ldots, L^n_{T_1})\prod_{i=1}^n(1 + \delta L^i_{T_1})\,\Big|\,\mathcal F_t\right]. \quad (28)$$
The price of any such option can therefore be computed by Monte Carlo using
equation (27).
Introducing an (n + 1)-dimensional state process X_t with X^0_t ≡ t and X^i_t = L^i_t for 1 ≤ i ≤ n, a (d + 1)-dimensional driving Lévy process Z̃_t = (t, Z_t)^⊥, and an (n + 1) × (d + 1)-dimensional function h(x) defined by h_{11} = 1, h_{1j} = 0 for j = 2, . . . , d + 1, h_{i1} = f^i(x) and h_{ij} = σ^i_{j−1}(x_0) with
$$f^i(x) := -\int_{\mathbb R^d}\sigma^i(x_0)z\left(\prod_{j=i+1}^n\Big(1 + \frac{\delta x_j\,\sigma^j(x_0)z}{1 + \delta x_j}\Big) - 1\right)\nu(dz),$$
we see that the equation (27) takes the homogeneous form dX_t = h(X_{t−})dZ̃_t, to which the discretization scheme of section 3 can be readily applied.
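In the simplified setting used below (d = 1, σ^i ≡ 1), the only non-trivial ingredient of h is the drift component f^i(x), which can be evaluated by numerical quadrature. A possible sketch (our own, with hypothetical names; the truncation bounds z_min, z_max should cover the support of the simulated jumps):

```python
import numpy as np
from scipy.integrate import quad

def lmm_drift(i, x, delta, nu, n, z_min=-2.0, z_max=2.0):
    """f^i(x) for the Libor SDE (27) with d = 1 and sigma^i = 1;
    x = (x_1, ..., x_n) are the current Libor rates and nu is the Lévy density."""
    def integrand(z):
        prod = 1.0
        for j in range(i + 1, n + 1):        # product over j = i+1, ..., n
            prod *= 1.0 + delta * x[j - 1] * z / (1.0 + delta * x[j - 1])
        return z * (prod - 1.0) * nu(z)
    # split at zero, where nu typically has an (integrable) singularity
    return -(quad(integrand, z_min, 0.0)[0] + quad(integrand, 0.0, z_max)[0])
```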
For the purposes of illustration, we simplify the model even further, taking
d = 1 and supposing that the functions σ i (t) are constant: σ i (t) ≡ 1. The driving Lévy process Z is supposed to follow the CGMY model with two alternative
sets of parameters: α = 0.5 and C = 1.5 in Case 1 and α = 1.8 and C = 0.01
in Case 2. In both cases, we take λ+ = 10 and λ− = 20. The set of tenor dates
for the Libor market model is {5, 6, 7, 8, 9, 10} years, which corresponds to a
stochastic differential equation in dimension 5. The initial values of all forward Libor rates are equal to 15%. Such a large value was chosen to emphasize the non-linear effects in the simulation.
For the numerical implementation of the scheme of section 3, we solved the equations (17) and (18) simultaneously using the classical 4-th order Runge-Kutta scheme, with time step equal to min(1, T^ε_{i+1} − T^ε_i). This means that most of the time there is only one step of the Runge-Kutta scheme between two consecutive jump times, and our numerical tests have shown that even in this case the error associated with the Runge-Kutta scheme is negligible compared to the stochastic approximation error.
As a sanity check, we first compute the price of a zero-coupon bond with maturity T1 , which corresponds to taking h ≡ 1 in equation (28). By construction
of the model, if the SDE is solved correctly, we must recover the input price
of the zero-coupon bond. Figure 2 shows the ratio of the zero coupon bond
price estimated using the first-order scheme of section 3 to the input value. For
comparison, we also give the value computed using the 0-order approximation
Y^0 only.

Figure 2: Ratio of estimated to theoretical zero-coupon bond price in Case 1 (left) and Case 2 (right), as a function of log(intensity), with 1 standard deviation bounds, for the 0-th order and 1-st order schemes. 10^5 trajectories were used for all points except the three points with the biggest intensity in the right graph, where 10^4 simulations were made to save time.
this would require us to simulate the increments of the CGMY process for which
no standard algorithm is available.
The graphs in Figure 2 show that already for the intensity of 1 jump per year, the true price of the zero-coupon bond is within the Monte Carlo confidence bounds for the 1-st order scheme; on the other hand, for the 0-th order scheme the convergence is very slow, especially in Case 2. This is consistent with the theoretical convergence rates, which are of order λ^{−3} in Case 1 and λ^{−0.11} in Case 2 for the 0-th order approximation and, respectively, λ^{−7} and λ^{−1.22} for the 1-st order approximation.
Next, we use our method to compute the price of the so-called receiver swaption, which gives its holder the right but not the obligation to enter an interest rate swap with fixed rate K at date T_1. This means that its pay-off at date T_1 is equal to
$$h(L^1_{T_1}, \ldots, L^n_{T_1}) = \left(1 - B_{T_1}(T_{n+1}) - K\delta\sum_{i=1}^n B_{T_1}(T_{i+1})\right)^+$$
$$= \left(1 - \prod_{i=1}^n(1 + \delta L^i_{T_1})^{-1} - K\delta\sum_{i=1}^n\prod_{j=1}^i(1 + \delta L^j_{T_1})^{-1}\right)^+.$$
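A direct translation of this pay-off (our own sketch), written in terms of the simulated terminal Libor rates; in the Monte Carlo estimator (28) each simulated pay-off is additionally multiplied by ∏_i (1 + δL^i_{T_1}).

```python
import numpy as np

def receiver_swaption_payoff(L_T1, K, delta):
    """Pay-off h(L^1_T1, ..., L^n_T1): the discount factors B_T1(T_{i+1}) are
    rebuilt as running products of (1 + delta * L^j_T1)^(-1)."""
    disc = np.cumprod(1.0 / (1.0 + delta * np.asarray(L_T1)))  # B_T1(T_2), ..., B_T1(T_{n+1})
    return max(1.0 - disc[-1] - K * delta * disc.sum(), 0.0)
```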
Figure 3 shows the price of this product with K = 15% estimated using the
method of section 3, and compared once again to the 0-order scheme. The
theoretical value is not known in closed form in this case, but we see that
despite the fact that the pay-off function is not differentiable, the convergence
of the 1-st order scheme is achieved very quickly while the 0-th order scheme
takes a long time to converge.
Figure 3: Estimated price of an ATM receiver swaption with maturity 5 years in Case 1 (left) and Case 2 (right), as a function of log(intensity), with 1 standard deviation bounds, for the 0-th order and 1-st order schemes. The Monte Carlo simulation was performed with 10^5 trajectories.
Finally, Figure 4 shows the execution time for the 1-st order and the 0-th
order scheme as a function of the jump intensity λε (this dependence is very
similar in cases 1 and 2). These times were obtained on a standard (rather old)
Pentium-III PC using a very simple implementation of the scheme of section
3, without any code optimization or variance reduction which could accelerate
the computation. Despite the fact that for the same intensity, the 1-st order
scheme needs 5 times as much computational effort as the 0-th order scheme,
the improvement of convergence is such that even in Case 1 it is advantageous
to use the 1-st order scheme.

Figure 4: Execution times for 10^5 trajectories on a Pentium-III PC, for the 0-th order and 1-st order schemes, as a function of the jump intensity (log-log scale).
Acknowledgement
The research of Peter Tankov is supported by the Chair Financial Risks of the
Risk Foundation sponsored by Société Générale, the Chair Derivatives of the
Future sponsored by the Fédération Bancaire Française, and the Chair Finance
and Sustainable Development sponsored by EDF and Calyon.
References
[1] S. Asmussen and J. Rosiński, Approximations of small jumps of Lévy
processes with a view towards simulation, J. Appl. Probab., 38 (2001),
pp. 482–493.
[2] N. Bruti-Liberati and E. Platen, Strong approximations of stochastic
differential equations with jumps, J. Comput. Appl. Math., 205 (2007),
pp. 982–1001.
[3] P. Carr, H. Geman, D. Madan, and M. Yor, The fine structure of
asset returns: An empirical investigation, Journal of Business, 75 (2002),
pp. 305–332.
[4] S. Cohen and J. Rosiński, Gaussian approximation of multivariate
Lévy processes with applications to simulation of tempered stable processes,
Bernoulli, 13 (2007), pp. 195–210.
[5] R. Cont and P. Tankov, Financial Modelling with Jump Processes,
Chapman & Hall / CRC Press, 2004.
[6] K. Dzhaparidze and E. Valkeila, On the Hellinger type distances for
filtered experiments, Probab. Theor. Relat. Fields, 85 (1990), pp. 105–117.
[7] E. Eberlein and F. Özkan, The Lévy LIBOR model, Finance and
Stochastics, 9 (2005), pp. 327–348.
[8] M. I. Freidlin and A. D. Wentzell, Random perturbations of dynamical systems, vol. 260 of Grundlehren der Mathematischen Wissenschaften
[Fundamental Principles of Mathematical Sciences], Springer-Verlag, New
York, second ed., 1998. Translated from the 1979 Russian original by Joseph
Szücs.
[9] P. Glasserman and S. Kou, The term structure of forward rates with
jump risk, Math. Finance, 13 (2003), pp. 383–410.
[10] J. Jacod, T. G. Kurtz, S. Méléard, and P. Protter, The approximate Euler method for Lévy driven stochastic differential equations, Ann.
Inst. H. Poincaré Probab. Statist., 41 (2005), pp. 523–558.
[11] F. Jamshidian, LIBOR market models with semimartingales, Tech. Report, NetAnalytics Ltd., 1999.
[12] D. Madan and M. Yor, CGMY and Meixner subordinators are absolutely
continuous with respect to one sided stable subordinators. Prépublication
du Laboratoire de Probabilités et Modèles Aléatoires.
[13] E. Mordecki, A. Szepessy, R. Tempone, and G. E. Zouraris, Adaptive weak approximation of diffusions with jumps, SIAM J. Numer. Anal.,
46 (2008), pp. 1732–1768.
[14] J. Poirot and P. Tankov, Monte Carlo option pricing for tempered stable (CGMY) processes, Asia-Pacific Financial Markets, 13 (2006), pp. 327–
344.
[15] P. Protter and D. Talay, The Euler scheme for Lévy driven stochastic
differential equations, Ann. Probab., 25 (1997), pp. 393–423.
[16] J. Rosiński, Series representations of Lévy processes from the perspective of point processes, in Lévy Processes — Theory and Applications,
O. Barndorff-Nielsen, T. Mikosch, and S. Resnick, eds., Birkhäuser, Boston,
2001.
[17] S. Rubenthaler, Numerical simulation of the solution of a stochastic
differential equation driven by a Lévy process, Stochastic Process. Appl.,
103 (2003), pp. 311–349.