Multi-country real business cycle models:
Accuracy tests and test bench∗
Michel Juillard†
Sébastien Villemot‡
First draft: April 2010
This draft: August 26, 2010
Abstract
This paper describes the methodology used to compare the results of different solution
algorithms for a multi-country real business cycle model. It covers in detail the structure of
the model, the choice of values for the parameters, the accuracy tests used in the comparison,
and the computer program specifically developed for performing the tests.
JEL classification: C63; C68; C88; F41
Keywords: Accuracy tests; Numerical solutions; Heterogeneous agents
Introduction
In order to compare complex algorithms such as the ones necessary to compute the solution
of nonlinear dynamic stochastic general equilibrium models, one needs to define clearly the
dimensions along which the results will be compared. Furthermore, it is also important to
compare the results of the different algorithms using the same implementation of the tests so
that one can exclude differences in the administration of the testing procedure. For this reason,
separately from the implementation of each algorithm by the participants in the comparison
project, we have developed a computer program that takes as input the solution provided by
each group and subjects it to an identical set of tests.
Following previous literature (e.g. Aruoba et al., 2006; Den Haan and Marcet, 1994; Heer
and Maussner, 2008; Judd, 1992), we focus on evaluating errors of approximation: the errors
that one obtains by plugging the approximated solution back into the original problem. We
implement this approach in two different setups: the first test computes errors of approximation
at different points in the state space located on a sphere around the deterministic steady state of
the problem. This test informs us about the ability of a given solution method to provide accurate
approximations of the solution further and further away from a point that is “easy” to solve for: the
deterministic steady state. Of course, ordering points by their distance to the steady state does
not indicate how likely these points are in simulated data. This remark motivates the second
test: measuring the errors of approximation along a given simulation path. More precisely, each
solution is simulated starting from the same initial point, the deterministic steady state, and subjected to the same set of random shocks.

∗ This is the authors’ version of a work that was accepted for publication in the Journal of Economic Dynamics and Control. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms may not be reflected in this document. Changes may have been made to this work since it was submitted for publication. A definitive version was subsequently published in the Journal of Economic Dynamics and Control, volume 35, number 2, pages 178–185, February 2011, http://dx.doi.org/10.1016/j.jedc.2010.09.011
† Bank of France and CEPREMAP. E-mail: [email protected]
‡ Paris School of Economics and CEPREMAP. E-mail: [email protected]
In the literature, the test proposed by Den Haan and Marcet (1994) is often used to check whether the prediction errors implied by the numerical solutions are orthogonal to the elements of the information set. However, we did not retain it in the testing procedure used for this exercise, for reasons that we present below.
The multi-country RBC models used in this comparison exercise are introduced in Den Haan
et al. (2010). The four basic specifications range from a simple model, the first one, with fixed labor and a Cobb-Douglas production function, to more complicated setups with endogenous labor, CES production functions and asymmetries between countries.
The number of countries varies from two to ten. Altogether the comparison involves 30 different
models.
All approaches provide an approximated solution to the first-order conditions of the social
planner optimization problem. In the first section, we provide a formal presentation of the model
and derive the first-order conditions of the social planner problem. The detailed specifications
and the parameterization of the 30 models are given in section 2. The approximation errors
are defined in section 3 along with the specification of the tests retained in the comparison. In
section 4, we describe the computer program that serves as the test bench. It is this software
that is used in the comparison paper by Kollmann et al. (2010).
1 Description of the model economies

1.1 General setup
We consider a multi-country RBC model with complete asset markets. We denote by N the
number of countries. A single homogeneous good is produced, traded and consumed across countries.
Production of country $j \in \{1, \ldots, N\}$ at date $t$ is equal to $a_t^j f^j(k_t^j, \ell_t^j)$, where $f^j$ is the production function, $k_t^j$ is the beginning-of-period capital stock, $\ell_t^j$ is hours worked, and $a_t^j$ is the productivity level. The law of motion of capital is:

$k_{t+1}^j = (1 - \delta) k_t^j + i_t^j$    (1)

where $i_t^j$ is investment and $\delta$ is the depreciation rate of capital.
The law of motion of productivity is:

$\ln a_t^j = \rho \ln a_{t-1}^j + \sigma (e_t^j + e_t)$    (2)

where $e_t^j$ is a country specific shock and $e_t$ is a worldwide shock. Shocks are assumed to be independent and identically distributed Gaussian variables with zero mean and unit variance. Parameters $\rho$ and $\sigma$ determine the autocorrelation and variance of the logarithm of the productivity level, respectively.
There is an adjustment cost on the capital stock:

$\Gamma_t^j = \frac{\phi}{2} k_t^j \left( \frac{i_t^j}{k_t^j} - \delta \right)^2$    (3)

where parameter $\phi$ governs the intensity of the friction. The value of this parameter has an impact on the dimension of the state space. When $\phi > 0$, all the country-level capital stocks are state variables. In contrast, when $\phi = 0$ (i.e. when there is no adjustment cost), only the world capital stock matters, since in that case capital is perfectly mobile. In this project, we are only interested in specifications where the stock of capital of each country matters ($\phi > 0$).
Each country has a representative agent, whose instantaneous utility is $u^j(c_t^j, \ell_t^j)$, where $u^j$ is the utility function and $c_t^j$ is consumption.
The aggregate world resource constraint is:

$\sum_{j=1}^N \left( c_t^j + i_t^j - \delta k_t^j \right) \le \sum_{j=1}^N \left( a_t^j f^j(k_t^j, \ell_t^j) - \Gamma_t^j \right)$    (4)
It is a well-known result that the decentralized market equilibrium with complete asset markets is equivalent to the solution of a corresponding social planner problem, where each country has a weight $\tau^j$ (Negishi weight) in the planner's objective, the weights depending on initial endowments.
We therefore formulate the problem as a social planner problem. Since we want to evaluate algorithms used for general equilibrium problems, solution algorithms are required to use
Euler equations together with dynamic and static constraints to compute the decision rules; in
particular, this rules out algorithms that compute the value function of the social planner.
The problem of the social planner is:

$\max_{\{c_t^j, i_t^j, k_{t+1}^j, \ell_t^j\}_{j=1,\ldots,N;\, t=0,\ldots,+\infty}} \; E_0 \sum_{j=1}^N \tau^j \sum_{t=0}^{+\infty} \beta^t u^j(c_t^j, \ell_t^j)$    (5)

subject to (1), (3) and (4), where $\beta$ is the subjective time discount factor.
We call $\lambda_t$ the Lagrange multiplier of the aggregate resource constraint, which is binding in equilibrium. The first-order conditions are:

$\tau^j u_c^j(c_t^j, \ell_t^j) = \lambda_t$    (6)

$\tau^j u_\ell^j(c_t^j, \ell_t^j) = -\lambda_t\, a_t^j f_\ell^j(k_t^j, \ell_t^j)$    (7)

$\lambda_t \left[ 1 + \phi \left( \frac{i_t^j}{k_t^j} - \delta \right) \right] = \beta\, E_t \left\{ \lambda_{t+1} \left[ 1 + a_{t+1}^j f_k^j(k_{t+1}^j, \ell_{t+1}^j) + \phi \left( 1 - \delta + \frac{i_{t+1}^j}{k_{t+1}^j} - \frac{1}{2} \left( \frac{i_{t+1}^j}{k_{t+1}^j} - \delta \right) \right) \left( \frac{i_{t+1}^j}{k_{t+1}^j} - \delta \right) \right] \right\}$    (8)

$k_{t+1}^j = (1 - \delta) k_t^j + i_t^j$    (9)

$\sum_{j=1}^N \left( c_t^j + i_t^j - \delta k_t^j \right) = \sum_{j=1}^N \left( a_t^j f^j(k_t^j, \ell_t^j) - \frac{\phi}{2} k_t^j \left( \frac{i_t^j}{k_t^j} - \delta \right)^2 \right)$    (10)

$\ln a_t^j = \rho \ln a_{t-1}^j + \sigma \left( e_t + e_t^j \right)$    (11)

where $u_c^j$ ($u_\ell^j$) is the derivative of $u^j$ with respect to $c$ ($\ell$), and $f_k^j$ ($f_\ell^j$) is the derivative of $f^j$ with respect to $k$ ($\ell$).
1.2 Specifications of utility and production functions
Four specifications are used for the utility function:

1. A constant relative risk aversion (CRRA) function, incorporating only consumption (no labor):

$u^j(c_t^j, \ell_t^j) = \frac{(c_t^j)^{1 - \frac{1}{\gamma^j}}}{1 - \frac{1}{\gamma^j}}$    (12)

where $\gamma^j$ is the intertemporal elasticity of substitution.
2. A utility function separable in consumption and labor:

$u^j(c_t^j, \ell_t^j) = \frac{(c_t^j)^{1 - \frac{1}{\gamma^j}}}{1 - \frac{1}{\gamma^j}} - b^j\, \frac{(\ell_t^j)^{1 + \frac{1}{\eta^j}}}{1 + \frac{1}{\eta^j}}$    (13)

where $\eta^j$ is the Frisch elasticity of labor supply, and $b^j$ governs the relative weight of consumption and leisure.
3. A Cobb-Douglas utility function (hence with unit elasticity of substitution between consumption and leisure):

$u^j(c_t^j, \ell_t^j) = \frac{\gamma^j}{\gamma^j - 1} \left[ (c_t^j)^{\psi^j} \left( L^e - \ell_t^j \right)^{1 - \psi^j} \right]^{1 - \frac{1}{\gamma^j}}$    (14)

where $L^e$ is the labor endowment of the representative agent in each country, and $\psi^j$ governs the relative weight of consumption and leisure.
4. A constant elasticity of substitution (CES) function embedded inside a CRRA function:

$u^j(c_t^j, \ell_t^j) = \frac{\gamma^j}{\gamma^j - 1} \left[ (c_t^j)^{1 - \frac{1}{\chi^j}} + b^j \left( L^e - \ell_t^j \right)^{1 - \frac{1}{\chi^j}} \right]^{\frac{1 - \frac{1}{\gamma^j}}{1 - \frac{1}{\chi^j}}}$    (15)

where $\chi^j$ is the elasticity of substitution between consumption and leisure.
Three specifications are used for the production function:

1. A production function with decreasing returns to scale, using only capital:

$f^j(k_t^j, \ell_t^j) = A\, (k_t^j)^{\alpha}$    (16)

where $A$ is a constant and $\alpha < 1$.

2. A Cobb-Douglas specification with constant returns to scale:

$f^j(k_t^j, \ell_t^j) = A\, (k_t^j)^{\alpha} (\ell_t^j)^{1-\alpha}$    (17)

where $\alpha$ is the share of capital in the production function.

3. A constant elasticity of substitution (CES) function:

$f^j(k_t^j, \ell_t^j) = A \left[ \alpha (k_t^j)^{\mu^j} + (1 - \alpha)(\ell_t^j)^{\mu^j} \right]^{\frac{1}{\mu^j}}$    (18)

where $\frac{1}{1 - \mu^j}$ is the elasticity of substitution between capital and labor.
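To make these functional forms concrete, the following minimal C++ sketch (the test bench itself is written in C++, but these helper names and signatures are ours, not those of its actual classes) evaluates the separable utility (13) and the CES production function (18). The logarithmic limit $\gamma^j = 1$ of the CRRA term, used in specification A1, is not special-cased here.

```cpp
#include <cmath>

// Separable utility, equation (13): CRRA consumption term minus labor disutility.
// gamma: intertemporal elasticity of substitution, eta: Frisch elasticity of
// labor supply, b: weight of labor disutility.  (gamma == 1 would require the
// logarithmic limit, which this sketch does not handle.)
double utility_separable(double c, double l, double gamma, double eta, double b) {
    return std::pow(c, 1.0 - 1.0 / gamma) / (1.0 - 1.0 / gamma)
         - b * std::pow(l, 1.0 + 1.0 / eta) / (1.0 + 1.0 / eta);
}

// CES production, equation (18): the elasticity of substitution between
// capital and labor is 1 / (1 - mu).
double production_ces(double k, double l, double A, double alpha, double mu) {
    return A * std::pow(alpha * std::pow(k, mu) + (1.0 - alpha) * std::pow(l, mu),
                        1.0 / mu);
}
```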
1.3 Classification of variables and policy function
The model described above has the following characteristics:[1][2]

• There are 4N + 1 endogenous (control) variables: consumption ($c_t^j$), labor ($\ell_t^j$), capital stock ($k_{t+1}^j$), investment ($i_t^j$) and the Lagrange multiplier ($\lambda_t$).
• Consequently, there are 4N + 1 model equations, given by (6)-(10).
• There are N exogenous variables (technology, $a_t^j$), governed by equation (11).
• Among the endogenous variables, only the previous-period capital stock $k_t^j$ is a predetermined state variable.
• The state space is therefore of dimension 2N.[3]
We denote by $z_t = (c_t, \ell_t, i_t, \lambda_t)$ the vector of the 3N + 1 endogenous variables (excluding the capital stocks). The model can then be written in the following compact form:

$E_t\, H(z_{t+1}, z_t, k_{t+1}, k_t, a_{t+1}, a_t) = 0$

The model solution is given by policy functions $k_{t+1} = F(k_t, a_t)$ and $z_t = G(k_t, a_t)$, which satisfy:

$E_t\, H(G(F(k_t, a_t), a_{t+1}), G(k_t, a_t), F(k_t, a_t), k_t, a_{t+1}, a_t) = 0$
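As an illustration of how a candidate solution enters the tests, a hypothetical C++ interface for the pair of policy functions might look as follows; the class and method names are invented for this sketch and do not correspond to the actual test-bench source code.

```cpp
#include <vector>

// A candidate solution supplies F (next-period capital stocks) and G (the
// remaining endogenous variables) as functions of the current state (k_t, a_t),
// where k and a are both vectors of length N.
struct PolicyFunction {
    virtual ~PolicyFunction() = default;
    // k_{t+1} = F(k_t, a_t): returns a vector of length N.
    virtual std::vector<double> F(const std::vector<double>& k,
                                  const std::vector<double>& a) const = 0;
    // z_t = (c_t, l_t, i_t, lambda_t) = G(k_t, a_t): returns a vector of
    // length 3N + 1 (fewer in the specifications without labor).
    virtual std::vector<double> G(const std::vector<double>& k,
                                  const std::vector<double>& a) const = 0;
};
```

The accuracy tests below only ever need these two mappings; the internal representation used by each participant (polynomial coefficients, grids, etc.) is irrelevant to them.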
2 Specifications, values and deterministic steady state
We tried to select examples that are both representative of useful models and challenging from a numerical point of view. In the end, thirty different models were selected. Table 1 gives the values of the parameters which are constant across all specifications. Table 2 lists the various specifications with the functional forms of the utility and production functions, and gives the values of the parameters which vary across specifications.[4] Note that, for parameter $\phi$ (the investment adjustment cost), we choose a low value, thereby increasing the variance of the endogenous variables: in test 2, which considers accuracy along a simulated time path, this leads to considering state points further away from the steady state.
The number of countries, N, considered in each model varies, so as to check the ability of each algorithm to deal with both small and large models.
[1] In models A1 and A5, there is no labor. There are therefore only 3N + 1 endogenous variables and as many model equations.
[2] An alternative categorization of variables would include the technology level $a_t^j$ in the set of endogenous variables, thus making a total of 5N + 1 endogenous variables and as many model equations. The set of exogenous variables would then consist of the N + 1 shocks $e_t^j$ and $e_t$, and there would be 2N predetermined endogenous state variables ($k_t^j$ and $a_{t-1}^j$). We do not adopt this point of view since it artificially expands the dimension of the state space (it becomes of dimension 3N + 1).
[3] As noted above, this relies on the fact that $\phi > 0$. If there were no adjustment cost, the state space would be of dimension N + 1.
[4] In this table, for the asymmetric specifications A5–A8, the range notation [a, b] for a parameter (say $\gamma$) means that $\gamma^j = a + \frac{(j-1)(b-a)}{N-1}$. In other words, a (b) is the lowest (highest) value of $\gamma$ across countries, and the N values for $\gamma$ are evenly spread over [a, b].

Parameter   Value
β           0.99
α           0.36
δ           0.025
σ           0.01
ρ           0.95
φ           0.5

Table 1: Values of parameters common to all specifications

Model   N            u     f     γ           η         µ            χ
A1      2,4,6,8,10   (12)  (16)  1
A2      2,4,6,8      (13)  (17)  0.25        0.1
A3      2,4,6        (14)  (17)  0.25
A4      2,4,6        (15)  (18)  0.25                  -0.2         0.83
A5      2,4,6,8,10   (12)  (16)  [0.25,1]
A6      2,4,6,8      (13)  (17)  [0.25,1]    [0.1,1]
A7      2,4,6        (14)  (17)  [0.25,1]
A8      2,4,6        (15)  (18)  [0.2,0.4]             [-0.3,0.3]   [0.75,0.9]

Notes: N is the number of countries; u the equation number for the utility specification; f the equation number for the production function specification; γ, η, µ and χ, the value of the parameter when it appears in a given model specification.

Table 2: Model specifications

Some parameters still need to be specified: the Negishi weights $\tau^j$, and the parameters of the utility and production functions: $A$, $b^j$ (only for A2, A4, A6, A8), $\psi^j$ (only for A3, A7) and $L^e$ (only for A3, A4, A7, A8). These parameters are chosen so that:

• At the steady state, all the countries consume exactly their net domestic production. This implies that in the dynamic model, net foreign assets remain small, which is a rough approximation of reality.
• At the steady state, all the countries have the same size. More precisely, we assume that steady state capital and labor supply are equal to unity (which implies that the production of a country equals $A$, and therefore $\bar{c}^j = A$ given the previous assumption).
• In all models, the steady state share of capital income in production is equal to $\alpha$. This is true even in A4 and A8, because we impose $\bar{k}^j = \bar{\ell}^j = 1$.
• In models with a time endowment (A3, A4, A7, A8), the steady state ratio of labor supply to the time endowment is equal to 40%.
These assumptions lead to the following values for the remaining parameters:
$A = \frac{1-\beta}{\alpha\beta}$

$\tau^j = \frac{1}{u_c^j(\bar{c}^j, \bar{\ell}^j)} = \frac{1}{u_c^j(A, 1)}$

$b^j = (1-\alpha)\, A^{1-\frac{1}{\gamma^j}}$    (for A2, A6)

$L^e = 2.5$    (for A3, A4, A7, A8)

$b^j = (1-\alpha)\, \frac{A^{1-\frac{1}{\chi^j}}}{(L^e - \bar{\ell}^j)^{-\frac{1}{\chi^j}}} = (1-\alpha)\, \frac{A^{1-\frac{1}{\chi^j}}}{(L^e - 1)^{-\frac{1}{\chi^j}}}$    (for A4, A8)

$\psi^j = \frac{1}{L^e - \alpha(L^e - \bar{\ell}^j)} = \frac{1}{L^e - \alpha(L^e - 1)}$    (for A3, A7)

Given these restrictions on the parameters, the deterministic steady state for all the model specifications is:

$\bar{k}^j = 1$
$\bar{\ell}^j = 1$ (except A1, A5)
$\bar{a}^j = 1$
$\bar{c}^j = A$
$\bar{\imath}^j = \delta$
$\bar{\lambda} = 1$
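As a concrete illustration, the formula $A = \frac{1-\beta}{\alpha\beta}$ comes from evaluating the Euler equation (8) at the steady state, where it reduces to $1 = \beta(1 + \alpha A)$. The short C++ sketch below (the variable names are ours) evaluates the calibration for the symmetric specification A2 from the values of Tables 1 and 2.

```cpp
#include <cmath>
#include <cstdio>

int main() {
    // Common parameters (Table 1) and the A2 preference parameter (Table 2).
    const double beta = 0.99, alpha = 0.36, delta = 0.025, gamma = 0.25;

    // A = (1 - beta) / (alpha * beta): normalizes steady state capital to 1.
    const double A = (1.0 - beta) / (alpha * beta);

    // Steady state: k = l = a = 1, c = A, i = delta, lambda = 1.
    const double c_bar = A;

    // tau = 1 / u_c(A, 1); for utility (13), u_c(c, l) = c^(-1/gamma).
    const double tau = 1.0 / std::pow(c_bar, -1.0 / gamma);

    // b = (1 - alpha) * A^(1 - 1/gamma)  (formula for A2 and A6).
    const double b = (1.0 - alpha) * std::pow(A, 1.0 - 1.0 / gamma);

    std::printf("A = %g, tau = %g, b = %g, c_bar = %g, i_bar = %g\n",
                A, tau, b, c_bar, delta);
    return 0;
}
```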
3 Accuracy tests

3.1 Approximation errors
We now turn to the definition of approximation errors. For given approximated policy functions $(\hat{F}, \hat{G})$, we denote by $R_{(\hat{F},\hat{G})}(k_t, a_t)$ the vector of unit-free approximation errors at point $(k_t, a_t)$, determined as follows:
$R^j_{(\hat{F},\hat{G})}(k_t, a_t) = \frac{\tau^j u_c^j(c_t^j, \ell_t^j) - \lambda_t}{\tau^j u_c^j(c_t^j, \ell_t^j)}$

$R^{N+j}_{(\hat{F},\hat{G})}(k_t, a_t) = \frac{\tau^j u_\ell^j(c_t^j, \ell_t^j) + \lambda_t\, a_t^j f_\ell^j(k_t^j, \ell_t^j)}{\tau^j u_\ell^j(c_t^j, \ell_t^j)}$

$R^{2N+j}_{(\hat{F},\hat{G})}(k_t, a_t) = \left( \lambda_t \left[ 1 + \phi \left( \frac{i_t^j}{k_t^j} - \delta \right) \right] - \beta\, E_t \left\{ \lambda_{t+1} \left[ 1 + a_{t+1}^j f_k^j(k_{t+1}^j, \ell_{t+1}^j) + \phi \left( 1 - \delta + \frac{i_{t+1}^j}{k_{t+1}^j} - \frac{1}{2} \left( \frac{i_{t+1}^j}{k_{t+1}^j} - \delta \right) \right) \left( \frac{i_{t+1}^j}{k_{t+1}^j} - \delta \right) \right] \right\} \right) \Bigg/ \left( \lambda_t \left[ 1 + \phi \left( \frac{i_t^j}{k_t^j} - \delta \right) \right] \right)$

$R^{3N+j}_{(\hat{F},\hat{G})}(k_t, a_t) = \frac{k_{t+1}^j - (1 - \delta) k_t^j - i_t^j}{k_{t+1}^j}$

$R^{4N+1}_{(\hat{F},\hat{G})}(k_t, a_t) = \frac{\sum_{j=1}^N \left( c_t^j + i_t^j - \delta k_t^j - a_t^j f^j(k_t^j, \ell_t^j) + \frac{\phi}{2} k_t^j \left( \frac{i_t^j}{k_t^j} - \delta \right)^2 \right)}{\sum_{j=1}^N \left( c_t^j + i_t^j - \delta k_t^j \right)}$

for $j = 1, \ldots, N$, where $k_{t+1} = \hat{F}(k_t, a_t)$, $(c_t, \ell_t, i_t, \lambda_t) = \hat{G}(k_t, a_t)$ and $(c_{t+1}, \ell_{t+1}, i_{t+1}, \lambda_{t+1}) = \hat{G}(\hat{F}(k_t, a_t), a_{t+1})$.
By definition, the true policy function $(F, G)$ is such that $R_{(F,G)}(k_t, a_t) = 0$ at all points of the state space.
Note that, from a computational point of view, evaluating $R_{(\hat{F},\hat{G})}(k_t, a_t)$ given $\hat{F}$ and $\hat{G}$ is straightforward, except for the approximation error of the Euler equation, in which an expectation term enters; this requires the use of a numerical integration method over the distribution of the shocks entering equation (11), as we describe below.
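As an illustration, the two intratemporal residuals $R^j$ and $R^{N+j}$ are straightforward to evaluate; the C++ sketch below does so for specification A2 (separable utility (13), Cobb-Douglas production (17)). The helper names are ours, and the Euler-equation residual, which requires the numerical integration discussed in section 4.1, is omitted.

```cpp
#include <cmath>

// Marginal utilities for the separable utility (13).
static double u_c(double c, double gamma) { return std::pow(c, -1.0 / gamma); }
static double u_l(double l, double eta, double b) { return -b * std::pow(l, 1.0 / eta); }

// Marginal product of labor for the Cobb-Douglas technology (17).
static double f_l(double k, double l, double A, double alpha) {
    return (1.0 - alpha) * A * std::pow(k, alpha) * std::pow(l, -alpha);
}

// Unit-free residual of equation (6) for one country j.
double residual_consumption(double tau, double c, double lambda, double gamma) {
    const double lhs = tau * u_c(c, gamma);
    return (lhs - lambda) / lhs;
}

// Unit-free residual of equation (7) for one country j.
double residual_labor(double tau, double l, double lambda, double a, double k,
                      double A, double alpha, double eta, double b) {
    const double lhs = tau * u_l(l, eta, b);
    return (lhs + lambda * a * f_l(k, l, A, alpha)) / lhs;
}
```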
3.2 Accuracy on a sphere (Test 1)
The first test compares the approximation errors of the participants’ solutions on a hypersphere around the steady state of the model.
More formally, we denote by $x_t = (k_t, a_t)$ a point in the state space, $\bar{x} = (\bar{k}, \bar{a}) = (1, 1)$ the steady state, and $S_r$ the hypersphere of radius $r$ around the steady state, given by $S_r = \{x_t \in \mathbb{R}^{2N} : \|x_t - \bar{x}\| = r\}$ (where $\|\cdot\|$ is the Euclidean norm).
For the three values r = 0.01, r = 0.1 and r = 0.3, we draw 1,000 points in $S_r$, and for every approximated policy function $(\hat{F}, \hat{G})$, we compute the approximation errors $R_{(\hat{F},\hat{G})}$ on these points, and report the maximum errors (for each equation separately, and for the aggregate across equations). This is the way errors are calculated in Judd (1992).
The purpose of this test is to quantify the loss of accuracy of a solution when going away from the steady state. It is expected that projection methods will perform better than perturbation methods for large values of the radius r, but this test will indicate to what extent this is true, and whether this is counterbalanced by a greater accuracy of perturbation methods at points very close to the steady state.
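Schematically, the aggregation used in test 1 is just a maximum of absolute errors over the sampled points. A sketch is given below; `residuals` stands for a hypothetical callable returning the full vector $R_{(\hat{F},\hat{G})}$ at one state point, and the point generation itself is described in section 4.1.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Maximum absolute approximation error over a set of state points.  Each entry
// of `points` is one state (k_t, a_t) stacked in a single vector; `residuals`
// evaluates the 4N+1 unit-free errors of section 3.1 for a given solution.
template <typename ResidualFn>
double max_error_on_points(const std::vector<std::vector<double>>& points,
                           ResidualFn residuals) {
    double worst = 0.0;
    for (const auto& x : points)
        for (double r : residuals(x))
            worst = std::max(worst, std::fabs(r));
    return worst;
}
```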
3.3 Accuracy along a simulated path (Test 2)
The second test compares the approximation errors along a simulated path, starting from the
steady state of the model, and generating random draws of the shocks.
For a given approximated policy function $(\hat{F}, \hat{G})$, we generate a series of points in the state space $x_0, \ldots, x_T$, where $x_t = (k_t, a_t)$ and T = 10,200. The initial state is the steady state $x_0 = (\bar{k}, \bar{a})$, and the subsequent states are given by $k_{t+1} = \hat{F}(x_t)$ and the law of motion of $a_t$ (equation (11)). The series of shocks $(e_t^1, \ldots, e_t^N, e_t)$ for $t = 1, \ldots, T$ are generated with a pseudo-random number generator.
The first 200 points in the generated series are discarded, so that the 10,000 remaining points can be considered as representative of the ergodic distribution of the model variables. On these points, we compute the approximation errors $R_{(\hat{F},\hat{G})}$, and report the mean and maximum errors (for each equation separately, and for the aggregate across equations). This is the way errors are calculated in Jin and Judd (2002).
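A schematic version of the simulation loop, for the state variables only and with the 200-period burn-in, is sketched below. For clarity it uses the standard Mersenne-Twister from <random> rather than the test bench's own generator, and `policy_F` stands for the approximated policy $\hat{F}$.

```cpp
#include <cmath>
#include <functional>
#include <random>
#include <utility>
#include <vector>

using State = std::pair<std::vector<double>, std::vector<double>>;  // (k_t, a_t)

// Simulate T = 10,200 periods from the steady state k = a = 1, discard the
// first 200 periods, and return the remaining 10,000 states.  policy_F is the
// approximated policy k_{t+1} = F(k_t, a_t); productivity follows equation (11).
std::vector<State> simulate_states(
    int N, double rho, double sigma,
    const std::function<std::vector<double>(const std::vector<double>&,
                                            const std::vector<double>&)>& policy_F,
    unsigned seed = 0) {
    const int T = 10200, burn_in = 200;
    std::mt19937 rng(seed);
    std::normal_distribution<double> normal(0.0, 1.0);

    std::vector<double> k(N, 1.0), a(N, 1.0);      // deterministic steady state
    std::vector<State> kept;
    kept.reserve(T - burn_in);

    for (int t = 1; t <= T; ++t) {
        const double e_world = normal(rng);        // worldwide shock e_t
        std::vector<double> a_next(N);
        for (int j = 0; j < N; ++j)                // ln a_t^j = rho ln a_{t-1}^j + sigma (e_t^j + e_t)
            a_next[j] = std::exp(rho * std::log(a[j]) + sigma * (normal(rng) + e_world));
        std::vector<double> k_next = policy_F(k, a);   // k_{t+1} = F(k_t, a_t)
        k = std::move(k_next);
        a = std::move(a_next);
        if (t > burn_in) kept.emplace_back(k, a);
    }
    return kept;
}
```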
3.4 Den Haan-Marcet statistics (Test 3)
It was initially planned to use the Den Haan and Marcet (1994) statistics as a third testing
device for participants’ solutions.
The test statistic checks the orthogonality of the error residuals (as specified by R) with some arbitrary functions of the state variables (called instruments), along a simulated path (like those used in accuracy test 2). The instruments used here are a constant and the first- and second-order monomials of the state variables. For each approximated policy function, this exercise is repeated a given number of times, in order to obtain an empirical distribution of the test statistic. For the true policy function, the test statistic is distributed according to a $\chi^2$ distribution. The accuracy criterion for an approximated policy function is therefore the closeness of the empirical distribution of the statistic to the $\chi^2$ distribution.
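For completeness, the statistic can be sketched in its simplest form: for a scalar residual $u_{t+1}$ and an instrument vector $h_t$ containing just a constant and one state variable, the Den Haan-Marcet statistic is $T\, B_T' A_T^{-1} B_T$ with $B_T = \frac{1}{T}\sum_t h_t u_{t+1}$ and $A_T = \frac{1}{T}\sum_t h_t h_t' u_{t+1}^2$, asymptotically $\chi^2$ with two degrees of freedom under the null of an exact solution. The C++ sketch below is a textbook rendering of this special case, not the implementation that was considered for the test bench.

```cpp
#include <vector>

// Den Haan-Marcet statistic with instruments h_t = (1, x_t) and a scalar
// residual u_{t+1}; x[t] and u[t] are assumed to be aligned so that u[t] is
// the residual dated t+1.  The 2x2 system A^{-1} B is solved in closed form.
double den_haan_marcet(const std::vector<double>& x, const std::vector<double>& u) {
    const int T = static_cast<int>(u.size());
    double b0 = 0, b1 = 0, a00 = 0, a01 = 0, a11 = 0;
    for (int t = 0; t < T; ++t) {
        const double u2 = u[t] * u[t];
        b0  += u[t];        b1  += x[t] * u[t];
        a00 += u2;          a01 += x[t] * u2;       a11 += x[t] * x[t] * u2;
    }
    b0 /= T; b1 /= T; a00 /= T; a01 /= T; a11 /= T;
    const double det = a00 * a11 - a01 * a01;       // assumes A_T is nonsingular
    const double quad = (b0 * (a11 * b0 - a01 * b1)
                       + b1 * (a00 * b1 - a01 * b0)) / det;   // B' A^{-1} B
    return T * quad;
}
```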
It was finally decided to drop this test from the comparison project, for the following two reasons:
• The computational time for the test is extremely long for high values of N (the number of countries) and reasonable simulation lengths (T = 10,000) when all the instruments are used. Computations are only feasible with a small number of countries, shorter simulations or restricted sets of instruments.
• More importantly, the test seems unable to usefully discriminate between the participants’ solutions. For the exercises which were performed, the distribution of the test statistic was very similar across all solutions: they were either all very close to the $\chi^2$ distribution, or all very far from it, depending on the simulation length and the set of instruments. Since this property was not extensively tested, it cannot be considered a general result, but this apparent lack of discriminative power of the Den Haan-Marcet statistics for this exercise was deemed a sufficient reason to drop it.
4 Test bench
In order to test the participants’ solutions on an equal basis, we wrote a testing program which
performs the accuracy tests on all the solutions. With that test bench, one can be sure that:
• the random elements of the tests are the same across all solutions: for test 1, these random
elements are the points on the sphere around the steady state; for test 2, they are the
series of random shocks used to generate the simulated paths;
• when computing the approximation error of the Euler equation, the numerical integration
method is the same across all solutions;
• the numerical precision used in the calculations is the same across all solutions.
Here we briefly describe some algorithmic choices made in the test bench, and then give an overview of the implementation.
4.1 Algorithmic choices
Three numerical integration methods are implemented in the test bench:
• the product four-point Gauss-Hermite quadrature, as described in Abramowitz and Stegun
(1964, p. 890, eq. 25.4.46);
• the degree 5 monomial formula described in Judd (1998, p. 275, eq. 7.5.11);
• quasi-Monte Carlo integration, using a Niederreiter quasi-random low-discrepancy sequence (see Bratley et al., 1992).
The default choice of the test bench is to use the Gauss-Hermite formula up to dimension
6 (i.e. for N ≤ 5 since there are N + 1 random shocks), and the monomial formula for higher
dimensional problems (i.e. for N ≥ 6).
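To illustrate the default rule, the sketch below implements a product four-point Gauss-Hermite quadrature for an expectation over d independent standard normal shocks (here d = N + 1); the one-dimensional nodes and weights are the standard ones, and the change of variables $\varepsilon = \sqrt{2}\, x$ converts the Hermite weight $e^{-x^2}$ into the Gaussian density. This is a generic textbook version, not the test bench's own code.

```cpp
#include <cmath>
#include <functional>
#include <vector>

// Approximate E[g(eps)] for eps ~ N(0, I_d) with a product 4-point
// Gauss-Hermite rule: E[g] ~ pi^{-d/2} * sum over all 4^d node combinations
// of (product of weights) * g(sqrt(2) * nodes).
double gauss_hermite_expectation(
    int d, const std::function<double(const std::vector<double>&)>& g) {
    // One-dimensional 4-point Gauss-Hermite nodes and weights (weight e^{-x^2}).
    const double node[4]   = {-1.6506801238857846, -0.5246476232752903,
                               0.5246476232752903,  1.6506801238857846};
    const double weight[4] = { 0.08131283544724518, 0.8049140900055128,
                               0.8049140900055128,  0.08131283544724518};
    const double pi = std::acos(-1.0);

    const long total = static_cast<long>(std::pow(4.0, d));  // 4^d index tuples
    std::vector<double> eps(d);
    double sum = 0.0;
    for (long m = 0; m < total; ++m) {
        long idx = m;
        double w = 1.0;
        for (int i = 0; i < d; ++i) {
            const int r = static_cast<int>(idx % 4);
            idx /= 4;
            eps[i] = std::sqrt(2.0) * node[r];   // change of variables to N(0,1)
            w *= weight[r];
        }
        sum += w * g(eps);
    }
    return sum / std::pow(pi, d / 2.0);          // normalization pi^{-d/2}
}
```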
For test 1, the test bench generates uniformly distributed points on a hypersphere of dimension 2N using the simple algorithm described by Muller (1959). The idea is to draw points from a multivariate standard Gaussian distribution of dimension 2N, and to divide these points by their Euclidean norm: it is easy to see that the resulting points are uniformly distributed over the unit hypersphere. Points uniformly distributed over any hypersphere can then be generated by rescaling and translating. The Gaussian draws are obtained from the Sobol quasi-random low-discrepancy sequences (see Antonov and Saleev, 1979), which are transformed into Gaussian vectors by applying the inverse of the Gaussian cumulative distribution function.
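A compact version of this point generator is sketched below; for clarity it uses a standard Gaussian generator from <random> instead of the inverse-CDF transform of Sobol draws used in the actual test bench, and the function name is ours.

```cpp
#include <cmath>
#include <random>
#include <vector>

// Draw `count` points uniformly distributed on the hypersphere of radius r
// centred at `center` (Muller, 1959): draw a standard Gaussian vector,
// normalize it to unit length, then rescale by r and translate by `center`.
std::vector<std::vector<double>> points_on_sphere(const std::vector<double>& center,
                                                  double r, int count,
                                                  unsigned seed = 0) {
    const std::size_t dim = center.size();
    std::mt19937 rng(seed);
    std::normal_distribution<double> normal(0.0, 1.0);

    std::vector<std::vector<double>> points(count, std::vector<double>(dim));
    for (auto& p : points) {
        double norm2 = 0.0;
        for (double& coord : p) { coord = normal(rng); norm2 += coord * coord; }
        const double scale = r / std::sqrt(norm2);
        for (std::size_t i = 0; i < dim; ++i) p[i] = center[i] + scale * p[i];
    }
    return points;
}
```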
For test 2, the random draws for shocks are generated using a Mersenne-Twister random
number generator (see Matsumoto and Nishimura, 1998).
4.2 Implementation choices
The test bench is a standalone program written in the C++ programming language, and relies on the GNU Scientific Library[5] for linear algebra computations and random number generation. The source code of the program is freely available on the web.[6]
The program is made of two main modules, plus one module per solution to be tested. The first module implements the 30 specifications and computes the residuals.[7] The second one implements the accuracy tests.[8] Then for each of the six participants’ solutions (PER1, PER2, MRGAL, SMOL, CGA and SSA1, using the notations of Kollmann et al., 2010), a module implements the corresponding approximated policy function.[9] Note that the test bench does not contain the code to fully recompute each solution:[10] it is only able to simulate the solution, using the coefficients of the parameterized functional form used for approximating the true solution.

[5] See Galassi et al. (2003) or http://www.gnu.org/software/gsl.
[6] The source code that was used for generating the comparison results presented in Kollmann et al. (2010) is available as supplemental material to this paper, and is downloadable at: .... Future versions of the software will be available at: http://www.dynare.org/JedcTestsuiteWiki/TestBenchProblemA.
[7] See class ModelSpec and its subclasses ModelSpecA1 to ModelSpecA8.
[8] See class SolutionTester and main file tester.cc.
[9] Note that the SMOL, CGA and SSA1 approximated policy functions do not return a value for the Lagrange multiplier $\lambda_t$. The workaround is the following: for the SMOL solution, the value of $\tau^1 u_c^1(c_t^1, \ell_t^1)$ is used as a replacement for $\lambda_t$; for CGA and SSA1, the arithmetic mean of $\tau^j u_c^j(c_t^j, \ell_t^j)$ over $j = 1, \ldots, N$ is used as a replacement for $\lambda_t$.
[10] The source code for recomputing the various solutions is available as supplemental material to this issue, and is downloadable at: ....
By default, the program computes tests 1 and 2 for all the specifications (symmetric and asymmetric ones), and uses the same settings as those used in the comparison paper of Kollmann et al. (2010). It is however possible to change the tasks to be performed (computing the Den Haan-Marcet statistics, restricting the run to only some solutions or model specifications), or to change some parameters of the tests (the number of points, the seed of the random number generators, the numerical integration method, or the normalization of the residual for the aggregate resource constraint equation).
References

Abramowitz, M. and I. A. Stegun (1964): Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, New York: Dover, ninth Dover printing, tenth GPO printing ed.

Antonov, I. A. and V. M. Saleev (1979): “An economic method of computing LPτ-sequences,” USSR Computational Mathematics and Mathematical Physics, 19, 252–256.

Aruoba, S. B., J. Fernandez-Villaverde, and J. F. Rubio-Ramirez (2006): “Comparing solution methods for dynamic equilibrium economies,” Journal of Economic Dynamics and Control, 30, 2477–2508.

Bratley, P., B. L. Fox, and H. Niederreiter (1992): “Implementation and tests of low-discrepancy sequences,” ACM Transactions on Modeling and Computer Simulation, 2, 195–213.

Den Haan, W. J., K. L. Judd, and M. Juillard (2010): “Computational suite of models with heterogeneous agents: Multi-country real business cycle models,” Journal of Economic Dynamics and Control, this issue.

Den Haan, W. J. and A. Marcet (1994): “Accuracy in Simulations,” Review of Economic Studies, 61, 3–17.

Galassi, M., J. Davies, J. Theiler, B. Gough, G. Jungman, M. Booth, and F. Rossi (2003): GNU Scientific Library: Reference Manual, Network Theory Ltd.

Heer, B. and A. Maussner (2008): Dynamic General Equilibrium Modelling: Computational Methods and Applications, Berlin: Springer.

Jin, H.-H. and K. L. Judd (2002): “Perturbation methods for general dynamic stochastic models,” Unpublished manuscript, Stanford University.

Judd, K. L. (1992): “Projection methods for solving aggregate growth models,” Journal of Economic Theory, 58, 410–452.

——— (1998): Numerical Methods in Economics, MIT Press Books, The MIT Press.

Kollmann, R., S. Maliar, B. A. Malin, and P. Pichler (2010): “Comparison of numerical solutions to a suite of multi-country real business cycle models,” Journal of Economic Dynamics and Control, this issue.

Matsumoto, M. and T. Nishimura (1998): “Mersenne twister: a 623-dimensionally equidistributed uniform pseudo-random number generator,” ACM Transactions on Modeling and Computer Simulation, 8, 3–30.

Muller, M. E. (1959): “A note on a method for generating points uniformly on n-dimensional spheres,” Communications of the Association for Computing Machinery, 2, 19–20.