MFE MATLAB Function Reference
Financial Econometrics
Kevin Sheppard
5 Vector Autoregressions 61
5.1 Stationary Vector Autoregression 61
5.1.1 Vector Autoregression estimation: vectorar 61
5.1.2 Granger Causality Testing: grangercause 67
5.1.3 Impulse Response function calculation: impulseresponse 70
6 Volatility Modeling 73
6.1 GARCH Model Simulation 73
6.1.1 ARCH/GARCH/AVARCH/TARCH/ZARCH Simulation: tarch_simulate 73
6.1.2 EGARCH Simulation: egarch_simulate 76
6.1.3 APARCH Simulation: aparch_simulate 78
6.1.4 FIGARCH Simulation: figarch_simulate 81
6.2 GARCH Model Estimation 84
6.2.1 ARCH/GARCH/GJR-GARCH/TARCH/AVGARCH/ZARCH Estimation: tarch 84
6.2.2 EGARCH Estimation: egarch 89
6.2.3 APARCH Estimation: aparch 91
6.2.4 AGARCH and NAGARCH estimation: agarch 94
6.2.5 IGARCH estimation: igarch 97
6.2.6 FIGARCH estimation: figarch 100
License
This software and documentation is provided "as is", without warranty of any kind, express or implied,
including but not limited to the warranties of merchantability, fitness for a particular purpose and non-
infringement. In no event shall the authors or copyright holders be liable for any claim, damages or other
liability, whether in an action of contract, tort or otherwise, arising from, out of or in connection with the
software or the use or other dealings in the software.
Copyright
Except where explicitly noted, all contents of the toolbox and this documentation are © 2001-2009 Kevin Sheppard.
MATLAB® is a registered trademark of The MathWorks, Inc.
I welcome bug reports and feedback about the software. The best type of bug report should include the
command run that produced the errors, a description of the data used (a zipped .MAT file with the data may
be useful) and the version of MATLAB run. I am usually working on a recent version of MATLAB (currently
R2009b, 7.9) and while I try to ensure some backward compatibility, it is likely that this code will not run
flawlessly on ancient versions of MATLAB.
Please do not ask me for code, or for advice on finding code that I do not provide, unless that code is directly
related to my own original research (e.g. certain correlation models). Also, please do not ask for help with
your homework.
The toolbox comes with a large number of functions that are used to support other functions, for example
functions that are used to compute numerical Hessians. Please consult the help contained within the
function for more details.
Data Files
• mvnormloglik
MATLAB Compatibility
These functions are work-alike versions of a few functions provided by MATLAB so that the Statistics Toolbox
may not be needed in some cases. If you have the Statistics Toolbox, you should not use these functions.
• chi2cdf
• kurtosis
• iscompatible
• normcdf
• norminv
• normloglik
• normpdf
Chapter 2
2.1 Regression
Regression with both classical (homoskedastic) and White (heteroskedasticity robust) variance covariance
estimation, with an option to exclude the intercept.
$$\hat{\beta} = \left(X'X\right)^{-1}X'y$$
$$R^2_C = 1 - \frac{\hat{\varepsilon}'\hat{\varepsilon}}{\tilde{y}'\tilde{y}}$$
where $\tilde{y} = y - \bar{y}$ are the demeaned regressands and $\hat{\varepsilon} = y - X\hat{\beta}$ are the estimated residuals. If the intercept
is excluded, these are computed using uncentered estimators,
$$R^2_U = 1 - \frac{\hat{\varepsilon}'\hat{\varepsilon}}{y'y}$$
Examples
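A minimal illustration, assuming y is an n by 1 vector of dependent data and x is an n by k matrix of regressors (the same calls appear in the Comments below):
% Regression with a constant (the default)
b = ols(y,x)
% Regression without a constant
b = ols(y,x,0)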
Required Inputs
[outputs] = ols(Y,X)
• X: An n by k matrix containing the regressors. X should be full rank and should not contain a constant
column.
Optional Inputs
[outputs] = ols(Y,X,C)
Outputs
ols provides many outputs other than the estimated parameters. The full ols command can return
[B,TSTAT,S2,VCV,VCVWHITE,R2,RBAR,YHAT] = ols(inputs)
• S2: Estimated variance of the regression error. Computed using a degree of freedom adjustment (n −
k ).
Comments
Linear regression estimation with homoskedasticity and White heteroskedasticity robust standard
errors.
USAGE:
[B,TSTAT,S2,VCV,VCV_WHITE,R2,RBAR,YHAT] = ols(Y,X,C)
INPUTS:
Y - N by 1 vector of dependent data
X - N by K vector of independent data
C - 1 or 0 to indicate whether a constant should be included (1: include constant)
OUTPUTS:
B - A K(+1 if C=1) vector of parameters. If a constant is included, it is the first
parameter.
TSTAT - A K(+1) vector of t-statistics computed using White heteroskedasticity robust
standard errors.
S2 - Estimated error variance of the regression.
VCV - Variance covariance matrix of the estimated parameters. (Homoskedasticity assumed)
VCVWHITE - Heteroskedasticity robust VCV of the estimated parameters.
R2 - R-squared of the regression. Centered if C=1.
RBAR - Adjusted R-squared. Centered if C=1.
YHAT - Fit values of the dependent variable
COMMENTS:
The model estimated is Y = X*B + epsilon where Var(epsilon)=S2
EXAMPLES:
Estimate a regression with a constant
b = ols(y,x)
Estimate a regression without a constant
b = ols(y,x,0)
ARMA and ARMAX simulation using either normal innovations or user-provided residuals.
ARMA(P,Q) simulation
$$y_t = \phi_0 + \sum_{p=1}^{P}\phi_p y_{t-p} + \sum_{q=1}^{Q}\theta_q\varepsilon_{t-q} + \varepsilon_t.$$
ARMA(P,Q) simulation requires the orders for both the AR and MA portions to be defined. To simulate
an irregular AR(P) - an AR(P) with some coefficients 0 - simply simulate a regular AR(P) and insert 0 for
omitted lags.
Examples
$y_t = 1 + .9y_{t-1} + \varepsilon_t$ (3.1)
$y_t = 1 + .8\varepsilon_{t-1} + \varepsilon_t$ (3.2)
$y_t = 1 + 1.5y_{t-1} - .9y_{t-2} + .8\varepsilon_{t-1} + .4\varepsilon_{t-2} + \varepsilon_t$ (3.3)
$y_t = 1 + y_{t-1} - .8y_{t-3} + \varepsilon_t$ (3.4)
$y_t = 1 + .9y_{t-1} + \eta_t$ (3.5)
where $\varepsilon_t \overset{i.i.d.}{\sim} N(0,1)$ are standard normally distributed and $\eta_t \overset{i.i.d.}{\sim} t_6$ are Student's t distributed with 6 degrees of freedom.
% Simulates 1000 draws from an AR(1) with phi0 = 1
T=1000; phi = .9; constant = 1; ARorder = 1;
y = armaxfilter_simulate(T, constant, ARorder, phi);
% Simulates 1000 draws from an AR(1) with phi0 = 1 using Students-t innovations
e = trnd(6,1000,1);
e=e./sqrt(6/4); % Transforms the errors to have unit variance
T=1000; phi = .9; constant = 1; ARorder = 1;
y = armaxfilter_simulate(e,constant, ARorder, phi);
ARMAX(P,Q) simulation
ARMAX simulation extends standard ARMA(P,Q) simulation to include the possibility of exogenous regressors,
$x_{k,t}$ for $k = 1,\ldots,K$. An ARMAX(P,Q) model is specified
$$y_t = \phi_0 + \sum_{p=1}^{P}\phi_p y_{t-p} + \sum_{k=1}^{K}\beta_k x_{k,t-1} + \sum_{q=1}^{Q}\theta_q\varepsilon_{t-q} + \varepsilon_t$$
Note: While the $x_{k,t-1}$ terms are all written with a $t-1$ index, they can be from any time before $t$ by simply
redefining $x_{k,t-1}$ to refer to some variable at $t-j$. For example, $x_{1,t-1} = SP500_{t-1}$, $x_{2,t-1} = SP500_{t-2}$ and
so on.
Examples
The examples simulate $y_t = 1 + .9y_{t-1} + .5x_{t-1} + \varepsilon_t$ (the second example also includes $-.2x_{t-2}$),
where $\varepsilon_t \overset{i.i.d.}{\sim} N(0,1)$ are standard normally distributed and $x_t = .8x_{t-1} + \varepsilon_t$.
% First simulate x
T=1001; phi = .8; constant = 0; ARorder = 1; % 1001 needed due to
% losses in lagging
x = armaxfilter_simulate(T, constant, ARorder, phi);
% Then lags x
[x, xlags1] = newlagmatrix(x,1,0);
T=1000; phi = .9; constant = 1; ARorder = 1; Xp=.5; X=xlags1;
y = armaxfilter_simulate(T, constant, ARorder, phi, 0, [], X, Xp);
% First simulate x
T=1002; phi = .8; constant = 0; ARorder = 1; % 1002 needed due to
% losses in lagging
x = armaxfilter_simulate(T, constant, ARorder, phi);
% Then lags x
[x, xlags12] = newlagmatrix(x,2,0);
T=1000; phi = .9; constant = 1; ARorder = 1; Xp=[.5 -.2]; X=xlags12;
y = armaxfilter_simulate(T, constant, ARorder, phi, 0, [], X, Xp);
Required Inputs
[outputs] = armaxfilter_simulate(T,CONST)
• T: Either a scalar integer or a vector of random numbers. If scalar, T represents the length of the time
series to simulate. If a T by 1 vector of random numbers, these will be used to construct the simulated
time series.
• CONST: Scalar value containing the constant term in the simulated model
Optional Inputs
[outputs] = armaxfilter_simulate(T,CONST,AR,ARPARAMS,MA,MAPARAMS,X,XPARAMS)
• ARPARAMS: Column vector with AR elements containing the values of the parameters on the AR
terms, ordered from the smallest lag to the largest.
• MAPARAMS: Column vector with MA elements containing the values of the parameters on the MA
terms, ordered from the smallest lag to the largest.
Outputs
[Y,ERRORS] = armaxfilter_simulate(inputs)
Comments
ARMAX(P,Q) simulation with normal errors. Also simulates AR, MA and ARMA models.
USAGE:
AR:
[Y,ERRORS]=armaxfilter_simulate(T,CONST,AR,ARPARAMS)
MA:
[Y,ERRORS]=armaxfilter_simulate(T,CONST,0,[],MA,MAPARAMS)
ARMA:
[Y,ERRORS]=armaxfilter_simulate(T,CONST,AR,ARPARAMS,MA,MAPARAMS);
ARMAX:
[Y,ERRORS]=armaxfilter_simulate(T,CONST,AR,ARPARAMS,MA,MAPARAMS,X,XPARAMS);
INPUTS:
T - Length of data series to be simulated OR
T by 1 vector of user supplied random numbers (e.g. rand(1000,1)-0.5)
CONST - Value of the constant in the model. To omit, set to 0.
AR - Order of AR in model. To include only selected lags, for example t-1 and t-3, use 3
and set the coefficient on 2 to 0
ARPARAMS - AR by 1 vector of parameters for the AR portion of the model
MA - Order of MA in model. To include only selected lags of the error, for example t-1
and t-3, use 3 and set the coefficient on 2 to 0
MAPARAMS - MA by 1 vector of parameters for the MA portion of the model
X - T by K matrix of exogenous variables
XPARAMS - K by 1 vector of parameters on the exogenous variables
OUTPUTS:
Y - A T by 1 vector of simulated data
ERRORS - The errors used in the simulation
COMMENTS:
The ARMAX(P,Q) model simulated is:
y(t) = const + arp(1)*y(t-1) + arp(2)*y(t-2) + ... + arp(P) y(t-P) +
+ ma(1)*e(t-1) + ma(2)*e(t-2) + ... + ma(Q) e(t-Q)
+ xp(1)*x(t,1) + xp(2)*x(t,2) + ... + xp(K)x(t,K)
+ e(t)
EXAMPLES:
Simulate an AR(1) with a constant
y = armaxfilter_simulate(500, .5, 1, .9)
Simulate an AR(1) without a constant
y = armaxfilter_simulate(500, 0, 1, .9)
Simulate an ARMA(1,1) with a constant
y = armaxfilter_simulate(500, .5, 1, .95, 1, -.5)
Simulate a MA(1) with a constant
y = armaxfilter_simulate(500, .5, [], [], 1, -.5)
Simulate a seasonal MA(4) with a constant
y = armaxfilter_simulate(500, .5, [], [], 4, [.6 0 0 .2])
As special cases of an ARMAX, AR(1) and AR(P) models, both regular and irregular, can be estimated using
armaxfilter. The AR(1),
$$y_t = \phi_0 + \phi_1 y_{t-1} + \varepsilon_t,$$
can be estimated using
parameters = armaxfilter(y,1,1)
where the first argument is the time series, the second argument takes the value 1 or 0 to indicate whether a
constant should be included in the model (i.e. if it were 0, the model $y_t = \phi_1 y_{t-1} + \varepsilon_t$ would be estimated),
and the third argument contains the autoregressive lags to be included in the model. An AR(P),
$$y_t = \phi_0 + \phi_1 y_{t-1} + \ldots + \phi_P y_{t-P} + \varepsilon_t,$$
can be estimated using, for example,
parameters = armaxfilter(y,1,[1:3])
which would estimate an AR(3). The final argument in armaxfilter is [1:3] because all three lags of y,
$y_{t-1}$, $y_{t-2}$ and $y_{t-3}$, should be included (note that [1:3] = [1 2 3]). An irregular AR(3) that includes only
the first and third lag, $y_t = \phi_0 + \phi_1 y_{t-1} + \phi_3 y_{t-3} + \varepsilon_t$, can be fit using
parameters = armaxfilter(y,1,[1 3])
where the final argument changes from [1:3] to [1 3] to indicate that only lags 1 and 3 should be included.
Estimation of MA(1) and MA(Q) models is similar to estimation of AR(P) models. The commands in
armaxfilter are identical except that the AR lags are set to 0 (or empty, []) and the MA lags are specified.
Estimation of an MA(1),
$$y_t = \theta_1\varepsilon_{t-1} + \varepsilon_t,$$
uses an empty argument ([]) in the AR position to indicate that no AR terms are to be included. Parameter
estimates for an MA(Q),
$$y_t = \phi_0 + \theta_1\varepsilon_{t-1} + \ldots + \theta_Q\varepsilon_{t-Q} + \varepsilon_t,$$
can be computed using
Q=3;
parameters = armaxfilter(y,1,[],[1:Q])
and an irregular MA(3) that only includes lags 1 and 3 can be estimated by replacing the final argument,
[1:3], with [1 3].
parameters = armaxfilter(y,1,[],[1 3])
ARMA(P,Q)
Regular and Irregular ARMA(P,Q) estimation simply combines the two above steps. For example, to estimate
a regular ARMA(1,1),
$$y_t = \phi_0 + \phi_1 y_{t-1} + \theta_1\varepsilon_{t-1} + \varepsilon_t$$
call
parameters = armaxfilter(y,1,1,1)
A general ARMA(P,Q),
$$y_t = \phi_0 + \phi_1 y_{t-1} + \ldots + \phi_P y_{t-P} + \theta_1\varepsilon_{t-1} + \ldots + \theta_Q\varepsilon_{t-Q} + \varepsilon_t,$$
can be estimated using the regular arrays [1:P] and [1:Q], and irregular ARMA(P,Q) processes can be estimated
by replacing these with arrays of only the lags to be included,
parameters = armaxfilter(y,1,[1 3],[1 4])
Including exogenous variables in AR(P), MA(Q) and ARMA(P,Q) models is identical to the above save one
additional step needed to align the data. Suppose that two time series $\{y_t\}$ and $\{x_t\}$ are available and that
they are aligned, so that $x_1$ and $y_1$ are from the same point in time. To regress $y_t$ on one lag of itself and a lag
of $x_t$, it is necessary to shift x so that the element in the sth position is actually $x_{s-1}$ and thus that $y_t$ will
be coupled with $x_{t-1}$. This is simple to do using the command newlagmatrix. newlagmatrix produces
two outputs, a vector of contemporaneous values that has been adjusted to remove lags (i.e. if the original series
has T observations and newlagmatrix is requested to produce 2 lags, the new series will have T − 2 observations) and
a matrix of lags of the form $[y_{t-1}\;y_{t-2}\;\ldots\;y_{t-P}]$. To estimate an ARX(P), it is necessary to adjust both x and y
so that they line up. For example, to estimate
$$y_t = \phi_0 + \phi_1 y_{t-1} + \beta_1 x_{t-1} + \varepsilon_t,$$
call
[yadj, ylags] = newlagmatrix(y,1,0);
[xadj, xlags] = newlagmatrix(x,1,0);
% Regress the adjusted values of y on the lags of x
X = xlags;
parameters = armaxfilter(yadj,1,1,0,X);
Aside from the step needed to properly align the data, estimating ARX(P), MAX(Q) and ARMAX(P,Q) models
is identical to AR(P), MA(Q) and ARMA(P,Q). Regular models can be estimated by including 1:P or 1:Q and
irregular models can be estimated using irregular arrays (e.g. [1 3] or [1 2 4]).
The key to estimating ARMAX(P,Q) models is to lag both y and x by as many lags of x as are included in
the model. Consider the final example of an ARMAX(1,1) where 3 lags of x are to be included,
$$y_t = \phi_0 + \phi_1 y_{t-1} + \beta_1 x_{t-1} + \beta_2 x_{t-2} + \beta_3 x_{t-3} + \theta_1\varepsilon_{t-1} + \varepsilon_t.$$
Assuming that the original x and y data “line-up” - so that x(1) and y(1) occurred at the same point in
time - this model can be estimated using the following code:
[yadj, ylags] = newlagmatrix(y,3,0);
[xadj, xlags] = newlagmatrix(x,3,0);
% Regress the adjusted values of y on the lags of x
X = xlags;
parameters = armaxfilter(yadj,1,1,1,X);
Required Inputs
[outputs] = armaxfilter(Y,CONSTANT)
Note: The required inputs only estimate the (unconditional) mean, and so it will generally be necessary to
use some of the optional inputs.
Optional Inputs
[outputs] = armaxfilter(Y,CONSTANT,P,Q,X,STARTINGVALS,OPTIONS,HOLDBACK)
• X: T by k matrix of exogenous regressors. Should be aligned with Y so that the ith row of X is known
when the observation in the ith row of Y is observed.
• STARTINGVALS: Column vector containing starting values for estimation. Used only for models with
an MA component.
• HOLDBACK: Scalar integer indicating the number of observations to withhold at the start of the sample.
Useful when testing models with different lag lengths to produce comparable likelihoods, AICs and
SBICs. Should be set to the highest lag length (AR or MA) in the models studied.
Outputs
armaxfilter provides many outputs other than the estimated parameters. The full armaxfilter command
can return
[PARAMETERS, LL, ERRORS, SEREGRESSION, DIAGNOSTICS, VCVROBUST, VCV, LIKELIHOODS, SCORES]
=armaxfilter(inputs here)
• LL: The log-likelihood computed using the estimated residuals and assuming a normal distribution.
• DIAGNOSTICS: A MATLAB structure of output that may be useful. To access elements of a structure,
enter diagnostics.fieldname where fieldname is one of:
• VCVROBUST: Heteroskedasticity-robust covariance matrix for the estimated parameters. The square-
root of the ith diagonal element is the standard deviation of the ith element of PARAMETERS.
• SCORES: A T by # parameters matrix of scores of the model. These are used in some advanced tests.
Examples
See above.
Comments
ARMAX(P,Q) estimation
USAGE:
[PARAMETERS]=armaxfilter(Y,CONSTANT,P,Q)
[PARAMETERS, LL, ERRORS, SEREGRESSION, DIAGNOSTICS, VCVROBUST, VCV, LIKELIHOODS, SCORES]
=armaxfilter(Y,CONSTANT,P,Q,X,STARTINGVALS,OPTIONS,HOLDBACK)
INPUTS:
Y - A column of data
CONSTANT - Scalar variable: 1 to include a constant, 0 to exclude
P - Non-negative integer vector representing the AR orders to include in the model.
Q - Non-negative integer vector representing the MA orders to include in the model.
X - [OPTIONAL] a T by K matrix of exogenous variables. These line up exactly with
the Y’s and if they are time series, you need to shift them down by 1 place,
i.e. pad the bottom with 1 observation and cut off the top row [ T by K]. For
example, if you want to include X(t-1) as a regressor, Y(t) should line up
with X(t-1)
STARTINGVALS - [OPTIONAL] A (CONSTANT+length(P)+length(Q)+K) vector of starting values.
[constant ar(1) ... ar(P) xp(1) ... xp(K) ma(1) ... ma(Q) ]’
OPTIONS - [OPTIONAL] A user provided options structure. Default options are below.
HOLDBACK - [OPTIONAL] Scalar integer indicating the number of observations to withhold at
the start of the sample. Useful when testing models with different lag lengths
to produce comparable likelihoods, AICs and SBICs. Should be set to the highest
lag length (AR or MA) in the models studied.
OUTPUTS:
PARAMETERS - A 1+length(p)+size(X,2)+length(q) column vector of parameters with
[constant ar(1) ... ar(P) xp(1) ... xp(K) ma(1) ... ma(Q) ]’
LL - The log-likelihood of the regression
ERRORS - A T by 1 length vector of errors from the regression
SEREGRESSION - The standard error of the regressions
DIAGNOSTICS - A structure of diagnostic information containing:
P - The AR lags used in estimation
Q - The MA lags used in estimation
C - Indicator if constant was included
nX - Number of X variables in the regression
AIC - Akaike Information Criteria for the estimated model
SBIC - Bayesian (Schwartz) Information Criteria for the
estimated model
ADJT - Length of sample used for estimation after HOLDBACK adjustments
T - Number of observations
ARROOTS - The characteristic roots of the ARMA
process evaluated at the estimated parameters
ABSARROOTS - The absolute value (complex modulus if
complex) of the ARROOTS
VCVROBUST - Robust parameter covariance matrix
VCV - Non-robust standard errors (inverse Hessian)
LIKELIHOODS - A T by 1 vector of log-likelihoods
SCORES - Matrix of scores (# of params by T)
COMMENTS:
The ARMAX(P,Q) model is:
y(t) = const + arp(1)*y(t-1) + arp(2)*y(t-2) + ... + arp(P)*y(t-P)
+ xp(1)*x(t,1) + xp(2)*x(t,2) + ... + xp(K)*x(t,K)
+ ma(1)*e(t-1) + ma(2)*e(t-2) + ... + ma(Q)*e(t-Q) + e(t)
The main optimization is performed with lsqnonlin with the default options:
options = optimset(’lsqnonlin’);
options.MaxIter = 10*(maxp+maxq+constant+K);
options.Display=’iter’;
You should use the MEX file (or compile if not using Win64 Matlab) for armaxerrors.c as it
provides speed ups of approx 10 times relative to the m file version armaxerrors.m
EXAMPLE:
To fit a standard ARMA(1,1), use
parameters = armaxfilter(y,1,1,1)
To fit a standard ARMA(3,4), use
parameters = armaxfilter(y,1,[1:3],[1:4])
To fit an ARMA that includes lags 1 and 3 of y and 1 and 4 of the MA term, use
parameters = armaxfilter(y,1,[1 3],[1 4])
Estimates heterogeneous autoregressions, which are restricted parameterizations of standard ARs. A HAR
is a model of the class
$$y_t = \phi_0 + \sum_{i=1}^{P}\phi_i\bar{y}_{t-1:i} + \varepsilon_t$$
where $\bar{y}_{t-1:i} = i^{-1}\sum_{j=1}^{i} y_{t-j}$. If all lags are included from 1 to P then the HAR is just a re-parameterized Pth
order AR, and so it is generally the case that most lags are set to zero, as in the common volatility HAR,
where $\bar{y}_{t-1:1} = y_{t-1}$.
Examples
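A minimal sketch using simulated data, following the EXAMPLES in the Comments below:
% Simulate data from an AR(22) with HAR-style coefficients
y = armaxfilter_simulate(1000,1,22,[.1 .3/4*ones(1,4) .55/17*ones(1,17)]);
% Standard HAR with 1, 5 and 22 day lags (P must be a column vector)
parameters = heterogeneousar(y,1,[1 5 22]')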
Required Inputs
[outputs] = heterogeneousar(Y,CONSTANT,P)
• P: Vector or Matrix. If a vector, it must be a column vector and the values are interpreted as the number of
lags to average in each term. For example, [1 5 22] would fit the HAR with the terms $\bar{y}_{t-1:1}$, $\bar{y}_{t-1:5}$ and $\bar{y}_{t-1:22}$.
If a matrix, it must be (number of terms) by 2, where the first column indicates the start point and the
second indicates the end point of each average. The matrix equivalent to the above vector notation is
[1 1; 1 5; 1 22].
The matrix notation allows a HAR with non-overlapping intervals to be specified, such as
[1 1; 2 5; 10 22].
Optional Inputs
[outputs] = heterogeneousar(Y,CONSTANT,P,NW,SPEC)
The optional inputs are:
• NW: Number of lags to include when computing the covariance of the estimated parameters. Default
is 0.
• SPEC: String value, either ’STANDARD’ or ’MODIFIED’. ’MODIFIED’ reparameterizes the usual HAR as a
series of non-overlapping intervals, so that, for example, averages over lags 1, 1 to 5 and 1 to 22 are replaced
by averages over lags 1, 2 to 5 and 6 to 22 when estimated. The model fits are identical, and the ’MODIFIED’
version is only helpful for presentation and interpretation.
Outputs
• PARAMETERS: A vector of estimated parameters. The size of parameters is determined by whether the
constant is included and the number of lags included in the HAR.
• ERRORS: A T by 1 vector of estimated errors from the model. The first max(max(P)) are set to 0.
• DIAGNOSTICS: A MATLAB structure of output that may be useful. To access elements of a structure,
enter diagnostics.fieldname where fieldname is one of:
• VCVROBUST: Heteroskedasticity-robust covariance matrix for the estimated parameters. Also autocor-
relation robust if NW selected appropriately. The square-root of the ith diagonal element is the standard
deviation of the ith element of PARAMETERS.
Comments
USAGE:
[PARAMETERS] = heterogeneousar(Y,CONSTANT,P)
[PARAMETERS, ERRORS, SEREGRESSION, DIAGNOSTICS, VCVROBUST, VCV]
= heterogeneousar(Y,CONSTANT,P,NW,SPEC)
INPUTS:
Y - A column of data
CONSTANT - Scalar variable: 1 to include a constant, 0 to exclude
P - A column vector or a matrix.
If a vector, should include the indices to use for the lag length, such as in
the usual case for monthly volatility data P=[1; 5; 22]. This indicates that
the 1st lag, average of the first 5 lags, and the average of the first 22 lags
should be used in estimation. NOTE: When using the vector format, P MUST BE A
COLUMN VECTOR to avoid ambiguity with the matrix format. If P is a matrix, the
values indicate the start and end points of the averages. The above vector can
be equivalently expressed as P=[1 1;1 5;1 22]. The matrix notation allows for
the possibility of skipping lags, for example P=[1 1; 5 5; 1 22]; would have
the 1st lag, the 5th lag and the average of lags 1 to 22. NOTE: When using the
matrix format, P MUST be # Entries by 2.
NW - [OPTIONAL] Number of lags to use when computing the long-run variance of the
scores in VCVROBUST. Default is 0.
SPEC - [OPTIONAL] String value indicating which representation to use in parameter
estimation. May be:
’STANDARD’ - Usual representation with overlapping lags
’MODIFIED’ - Modified representation with non-overlapping lags
OUTPUTS:
PARAMETERS - A 1+length(p) column vector of parameters with
[constant har(1) ... har(P)]’
ERRORS - A T by 1 length vector of errors from the regression with 0s in first max(max(P))
places
SEREGRESSION - The standard error of the regressions
DIAGNOSTICS - A structure of diagnostic information containing:
P - List of HAR lags used in estimation
C - Indicator if constant was included
AIC - Akaike Information Criteria for the estimated model
SBIC - Bayesian (Schwartz) Information Criteria for the
estimated model
T - Number of observations
ADJT - Length of sample used for estimation
ARROOTS - The characteristic roots of the ARMA
process evaluated at the estimated parameters
ABSARROOTS - The absolute value (complex modulus if
complex) of the ARROOTS
VCVROBUST - Robust parameter covariance matrix, White if NW = 0,
Newey-West if NW>0
VCV - Non-robust standard errors (inverse Hessian)
EXAMPLES:
Simulate data from a HAR model
y = armaxfilter_simulate(1000,1,22,[.1 .3/4*ones(1,4) .55/17*ones(1,17)])
Standard HAR with 1, 5 and 22 day lags
parameters = heterogeneousar(Y,1,[1 5 22]’)
Standard HAR with 1, 5 and 22 days lags using matrix notation
parameters = heterogeneousar(Y,1,[1 1;1 5;1 22])
Standard HAR with 1, 5 and 22 day lags using the non-overlapping reparameterization
parameters = heterogeneousar(Y,1,[1 5 22]’,[],’MODIFIED’)
Standard HAR with 1, 5 and 22 day lags with Newey-West standard errors
[parameters, errors, seregression, diagnostics, vcvrobust, vcv] = ...
heterogeneousar(Y,1,[1 5 22]’,ceil(length(Y)^(1/3)))
Nonstandard HAR with lags 1, 2 and 10-22 day lags
parameters = heterogeneousar(Y,1,[1 1;2 2;10 22])
Examples
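Commands of the form below produce the figure, assuming y is a T by 1 series (the same calls appear in the EXAMPLES in the Comments):
% Estimate an ARMA(1,1) and plot the data, fit and residuals
[parameters, LL, errors] = armaxfilter(y, 1, 1, 1);
tsresidualplot(y, errors)
% The same plot with MATLAB dates on the x-axis
dates = datenum('01Jan2007') + (1:length(y));
tsresidualplot(y, errors, dates)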
The output of tsresidualplot is shown in Figure 3.1 (this was generated using the second command above):
Required Inputs
[outputs] = tsresidualplot(Y,ERRORS)
Optional Inputs
[outputs] = tsresidualplot(Y,ERRORS,DATES)
Outputs
[HAXIS,HFIG] = tsresidualplot(inputs)
Comments
Produces a plot for visualizing time series data and residuals from a time series model
USAGE:
tsresidualplot(Y,ERRORS)
[HAXIS,HFIG] = tsresidualplot(Y,ERRORS,DATES)
INPUTS:
Y - A T by 1 vector of data
Figure 3.1: The output of tsresidualplot generated using the code in the second example. The top panel shows the data and fit; the bottom panel shows the residuals.
OUTPUTS:
HAXIS - A 2 by 1 vector of axis handles to the two subplots
HFIG - A scalar containing the figure handle
COMMENTS:
HAXIS can be used to change the format of the dates on the x-axis when MATLAB dates are provided
by calling
datetick(HAXIS(j),’x’,DATEFORMAT,’keeplimits’)
where j is 1 (top) or 2 (bottom subplot) and DATEFORMAT is a numeric value between 0 and 28. See doc
datetick for more details. For example,
datetick(HAXIS(1),’x’,25,’keeplimits’)
will change the top subplot’s x-axis labels to the form yy/mm/dd.
EXAMPLES:
Estimate a model and produce a plot of fitted and residuals
[parameters, LL, errors] = armaxfilter(y, 1, 1, 1);
tsresidualplot(y, errors)
Estimate a model and produce a plot of fitted and residuals with dates
[parameters, LL, errors] = armaxfilter(y, 1, 1, 1);
dates = datenum(’01Jan2007’) + (1:length(y));
tsresidualplot(y, errors, dates)
Computes the characteristic roots (and their absolute values) of the characteristic equation that correspond
to an ARMAX(P,Q) equation. It is usually called after or during armaxfilter.
Examples
armaroots can be used with either the output of armaxfilter or with hypothetical parameters. The first
example shows how to use them with armaxfilter while the second and third demonstrate their use
with hypothetical ARMA parameters. Note that the AR and MA lag lengths are identical to those used in
armaxfilter, so a regular ARMA(P,Q) requires [1:P] and [1:Q] to be input. This allows roots of irregular
ARMA(P,Q) to be computed by including the indices of the lags used (i.e. [1 3]).
T=1000; phi = .9; constant = 1; ARorder = 1;
y = armaxfilter_simulate(T, constant, ARorder, phi);
% ARMA(1,1) with a constant;
[parameters, LL, errors] = armaxfilter(y, 1, 1, 1);
[arroots, absarroots] = armaroots(parameters, 1, 1, 1)
arroots =
0.9023
absarroots =
0.9023
% An ARMA(2,2)
phi = [1.3 -.35]; theta = [.4 .3]; parameters=[1 phi theta]’;
[arroots, absarroots] = armaroots(parameters, 1, [1 2], [1 2])
arroots =
0.9193
0.3807
absarroots =
0.9193
0.3807
% An irregular AR(3)
% Note that phi contains phi1 and phi3 and that there is no phi2
phi = [1.3 -.35]; parameters = [1 phi]’;
% There will be three roots
[arroots, absarroots] = armaroots(parameters, 1, [1 3],[])
arroots =
0.8738 + 0.1364i
0.8738 - 0.1364i
-0.4475
absarroots =
0.8843
0.8843
0.4475
Required Inputs
[outputs] = armaroots(PARAMETERS,CONSTANT,P,Q)
• PARAMETERS: A vector of parameters. The size of parameters is determined by whether the constant
is included, the number of lags included in the AR and MA portions and the number of exogenous
variables included (if any).
Optional Inputs
[outputs] = armaroots(PARAMETERS,CONSTANT,P,Q,X)
Outputs
[ARROOTS,ABSARROOTS] = armaroots(inputs)
• ARROOTS: Vector containing roots of characteristic function associated with AR. The highest lag in P
determines the number of roots.
Comments
USAGE:
[ARROOTS] = armaroots(PARAMETERS,CONSTANT,P,Q)
[ARROOTS,ABSARROOTS] = armaroots(PARAMETERS,CONSTANT,P,Q,X)
INPUTS:
PARAMETERS - A CONSTANT+length(P)+length(Q)+size(X,2) by 1 vector of parameters, usually an
output from ARMAXFILTER
CONSTANT - Scalar variable: 1 to include a constant, 0 to exclude
P - Non-negative integer vector representing the AR orders to include in the model.
Q - Non-negative integer vector representing the MA orders to include in the model.
X - [OPTIONAL] A T by K matrix of exogenous variables.
OUTPUTS:
ARROOTS - A max(P) by 1 vector containing the roots of the characteristic equation
corresponding to the ARMA model input
COMMENTS:
EXAMPLES:
Compute the AR roots of an ARMA(2,2)
phi = [1.3 -.35]; theta = [.4 .3]; parameters=[1 phi theta]’;
[arroots, absarroots] = armaroots(parameters, 1, [1 2], [1 2])
Compute the AR roots of an irregular AR(3)
phi = [1.3 -.35]; parameters = [1 phi]’;
[arroots, absarroots] = armaroots(parameters, 1, [1 3],[])
Computes the Akaike Information Criterion (AIC) and the Schwartz/Bayes Information Criterion (SBIC) for an
ARMAX(P,Q). The AIC is given by
$$AIC = \ln\hat{\sigma}^2 + \frac{2k}{T}$$
where k is the number of parameters in the model, including the constant, AR coefficients, MA coefficients
and any X variables. The SBIC is given by
$$SBIC = \ln\hat{\sigma}^2 + \frac{k\ln T}{T}.$$
Examples
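A typical sequence, assuming y is a T by 1 series and constant, p and q describe an armaxfilter specification; the errors from the fit are passed to aicsbic with the same specification (producing output of the form shown below):
[parameters, LL, errors] = armaxfilter(y, constant, p, q);
[aic, sbic] = aicsbic(errors, constant, p, q)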
sbic =
-0.0235
% ARMA(1,1)
aic =
-0.0327
sbic =
-0.0179
% If using exogenous variables,
[aic,sbic] = aicsbic(errors,constant,p,q,X)
Required Inputs
[outputs] = aicsbic(ERRORS,CONSTANT,P,Q)
Optional Inputs
[outputs] = aicsbic(ERRORS,CONSTANT,P,Q,X)
Outputs
[AIC,SBIC] = aicsbic(inputs)
Comments
Computes the Akaike and Schwartz/Bayes Information Criteria for an ARMA(P,Q) as parameterized in
ARMAXFILTER
USAGE:
[AIC] = aicsbic(ERRORS,CONSTANT,P,Q)
[AIC,SBIC] = aicsbic(ERRORS,CONSTANT,P,Q,X)
INPUTS:
ERRORS - A T by 1 length vector of errors from the regression
CONSTANT - Scalar variable: 1 to include a constant, 0 to exclude
P - Non-negative integer vector representing the AR orders to include in the model.
Q - Non-negative integer vector representing the MA orders to include in the model.
X - [OPTIONAL] a T by K matrix of exogenous variables.
OUTPUTS:
AIC - The Akaike Information Criteria
SBIC - The Schwartz/Bayes Information Criteria
COMMENTS:
This is a helper for ARMAXFILTER and uses the same inputs, CONSTANT, P, Q and X. ERRORS should
be the errors returned from a call to ARMAXFILTER with the same values of P, Q, etc.
EXAMPLES:
Compute AIC and SBIC from an ARMA
[parameters, LL, errors] = armaxfilter(y, constant, p, q);
[aic,sbic] = aicsbic(errors,constant,p,q)
Produces h-step ahead forecasts from an ARMA(P,Q) model. arma_forecaster also computes the h-step
ahead forecast standard deviation, aligns $y_{t+h}$ and $\hat{y}_{t+h|t}$ (so that they both appear at time t) and computes
forecast errors.
arma_forecaster produces $\hat{y}_{t+h|t}$, the h-step ahead forecast of y made at time t, starting at observation
R and continuing until the end of the sample. The function will return a vector containing R “NaN”
values (since there are no forecasts for the first R observations) followed by T − R elements forming the
sequence $\hat{y}_{R+h|R}, \hat{y}_{R+h+1|R+1}, \ldots, \hat{y}_{T+h|T}$. The function will also return $y_{t+h}$ shifted back h places. The first R
elements of $y_{t+h}$ will also be “NaN”. The next T − R − h will be $y_{R+h}, y_{R+h+1}, \ldots, y_T$ and the final h are
also “NaN”. The h NaNs at the end of the sample are present because $y_{T+1}, \ldots, y_{T+h}$ are not available (since
by construction the series ends at observation T). The function also produces the forecast errors, which are
simply $\hat{e}_{t+h|t} = y_{t+h} - \hat{y}_{t+h|t}$, with the error from the forecast computed at time t placed in the tth element
of the vector. The final output of this function is the forecast standard deviation, which is computed
assuming homoskedasticity.
Examples
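A minimal sketch, assuming y is a T by 1 series with T > 500; the model is fit on the first 500 observations (R = 500) and 1-step ahead forecasts are produced for the remainder of the sample:
% Estimate an ARMA(1,1) on the estimation sample
[parameters, LL, errors, seregression] = armaxfilter(y(1:500), 1, 1, 1);
% 1-step ahead forecasts, aligned data, forecast errors and forecast standard deviations
[yhattph, ytph, forerr, ystd] = arma_forecaster(y, parameters, 1, 1, 1, 500, 1, seregression);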
Comments
Produces h-step ahead forecasts from ARMA(P,Q) models starting at some point
in the sample, R, and ending at the end of the sample. Also shifts the
data to align y(t+h) with y(t+h|t) in slot t, computes the theoretical
forecast standard deviation (assuming homoskedasticity) and the forecast
errors.
USAGE:
[YHATTPH] = arma_forecaster(Y,PARAMETERS,CONSTANT,P,Q,R,H)
[YHATTPH,YTPH,FORERR,YSTD] =
arma_forecaster(Y,PARAMETERS,CONSTANT,P,Q,R,H,SEREGRESSION)
INPUTS:
Y - A column of data
CONSTANT - Scalar variable: 1 if the model includes a constant, 0 to exclude
P - Non-negative integer vector representing the AR orders
included in the model.
Q - Non-negative integer vector representing the MA orders
included in the model.
R - Length of sample used in estimation. Sample is split
up between R and P, where the first R (regression) are
used for estimating the model and the remainder are
used for prediction (P) so that R+P=T.
H - The forecast horizon
SEREGRESSION - [OPTIONAL] The standard error of the regression. Used
to compute confidence intervals. If omitted,
SEREGRESSION is set to 1.
OUTPUTS:
YHATTPH - h-step ahead forecasts of Y. The element in position t
of YHATTPH is the time t forecast of Y(t+h). The
first R elements of YHATTPH are NaN. The next T-R-H
are pseudo in-sample forecasts while the final H are
out-of-sample.
YTPH - Value of original data at time t+h shifted to position
t. The first R elements of YTPH are NaN. The next
T-R-H are the values y(R+H),...,y(T), and the final H
are NaN since there is no data available for comparing
to the final H forecasts.
FORERR - The forecast errors, YHATTPH-YTPH
YSTD - The theoretical standard deviation of the h-step ahead
forecast (assumed homoskedasticity)
COMMENTS:
Values not relevant for the forecasting exercise have NaN returned.
Computes the sample autocorrelations and standard errors. Standard errors can be computed under
assumptions of homoskedasticity or heteroskedasticity. The sth sample autocorrelation is computed using
the regression
$$y_t = \rho_s y_{t-s} + \varepsilon_t$$
where the mean has been subtracted from the data, and the standard errors use the usual OLS covariance
estimators, either the homoskedastic form or White’s.
Examples
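Typical calls, assuming x is a T by 1 data series; the first uses heteroskedasticity robust standard errors (the default) and the second the non-robust form, producing output of the form shown below:
[ac, acstd] = sacf(x,5)
[ac, acstd] = sacf(x,5,0)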
ac =
-0.0250
-0.0608
-0.0080
0.0123
-0.0067
acstd =
0.0331
0.0332
0.0312
0.0310
0.0323
ac =
-0.0250
-0.0608
-0.0080
0.0123
-0.0067
acstd =
0.0316
0.0317
0.0317
0.0317
0.0317
Comments
USAGE:
[AC,ACSTD] = sacf(DATA,LAGS)
[AC,ACSTD] = sacf(DATA,LAGS,ROBUST)
INPUTS:
DATA - A T by 1 vector of data
LAGS - The number of autocorrelations to compute
ROBUST - [OPTIONAL] Logical variable (0 (non-robust) or 1 (robust)) to
indicate whether heteroskedasticity robust standard errors
should be used. Default is to use robust standard errors
(ROBUST=1).
OUTPUTS:
AC - A LAGS by 1 vector of autocorrelations
ACSTD - A LAGS by 1 vector of standard deviations
COMMENTS:
Sample autocorrelations are computed using the maximum number of
observations for each lag. For example, if DATA has 100 observations,
the first autocorrelation is computed using 99 data points, the second
with 98 data points and so on.
Computes the sample partial autocorrelations and standard errors. Standard errors can be computed under
assumptions of homoskedasticity or heteroskedasticity. The sth sample partial autocorrelation is computed using
the regression
$$y_t = \phi_1 y_{t-1} + \ldots + \phi_{s-1} y_{t-s+1} + \varphi_s y_{t-s} + \varepsilon_t$$
and the standard errors use the usual OLS covariance estimators, either the homoskedastic form or White’s.
Examples
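A typical call, assuming x is a T by 1 data series, computing the first 5 partial autocorrelations with robust (default) standard errors and producing output of the form shown below:
[pac, pacstd] = spacf(x,5)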
pac =
0.0098
0.0015
0.0432
0.0006
0.0768
pacstd =
0.0316
0.0313
0.0315
0.0311
0.0324
[pac, pacstd] = spacf(x,5,0) % Non-heteroskedasticity robust result
pac =
0.0098
0.0015
0.0432
0.0006
0.0768
pacstd =
0.0316
0.0316
0.0316
0.0316
0.0316
Comments
USAGE:
[PAC,PACSTD] = spacf(DATA,LAGS)
[PAC,PACSTD] = spacf(DATA,LAGS,ROBUST)
INPUTS:
DATA - A T by 1 vector of data
LAGS - The number of autocorrelations to compute
ROBUST - [OPTIONAL] Logical variable (0 (non-robust) or 1 (robust)) to
indicate whether heteroskedasticity robust standard errors
should be used. Default is to use robust standard errors
(ROBUST=1).
OUTPUTS:
PAC - A LAGS by 1 vector of partial autocorrelations
PACSTD - A LAGS by 1 vector of standard deviations
COMMENTS:
Sample partial autocorrelations computed from autocorrelations that are
computed using the maximum number of observations for each lag. For
example, if DATA has 100 observations, the first autocorrelation is
computed using 99 data points, the second with 98 data points and so on.
3.5 Theoretical autocorrelation and partial autocorrelation
Computes the theoretical autocorrelations from an ARMA(P,Q) by solving the Yule-Walker equations.
Examples
The two examples correspond to an AR(1) with φ1 = .9 and an ARMA(1,1) with φ1 = .9 and θ1 = .9.
ac = acf(.9,0,5)
ac =
1.0000
0.9000
0.8100
0.7290
0.6561
0.5905
ac = acf(.9,.9,5)
ac =
1.0000
0.9499
0.8549
0.7694
0.6924
0.6232
Comments
USAGE:
[AUTOCORR, SIGMA2_Y] = acf(PHI,THETA,N)
[AUTOCORR, SIGMA2_Y] = acf(PHI,THETA,N,SIGMA2_E)
INPUTS:
PHI - Autoregressive parameters, in the order t-1,t-2,...
THETA - Moving average parameters, in the order t-1,t-2,...
N - Number of autocorrelations to be computed
SIGMA2_E - [OPTIONAL] Variance of errors. If omitted, sigma2_e=1
OUTPUTS:
AUTOCORR - N+1 by 1 vector of autocorrelation. To recover the
autocovariance of an ARMA(P,Q), use AUTOCOV = AUTOCORR * SIGMA2_Y
SIGMA2_Y - Long-run variance, denoted gamma0, of the ARMA process with
innovation variance SIGMA2_E
COMMENTS:
Note: The ARMA model is parameterized as follows:
y(t)=phi(1)y(t-1)+phi(2)y(t-2)+...+phi(p)y(t-p)+e(t)+theta(1)e(t-1)
+theta(2)e(t-2)+...+theta(q)e(t-q)
To compute the autocorrelations for an ARMA that does not include all
lags 1 to P, insert 0 for any excluded lag. For example, if the model
was y(t) = phi(2)y(t-2) + e(t), PHI = [0 phi(2)]
Computes the theoretical partial autocorrelations from an ARMA(P,Q). The function uses acf to produce
the theoretical autocorrelations and then transforms them to partial autocorrelations by noting that the sth
partial autocorrelation is given by φs in the regression
$$y_t = \phi_1 y_{t-1} + \phi_2 y_{t-2} + \ldots + \phi_s y_{t-s} + \varepsilon_t$$
and is computed using the first s + 1 autocorrelations and the population regression coefficients.
Examples
The two examples correspond to an AR(1) with φ1 = .9 and an ARMA(1,1) with φ1 = .9 and θ1 = .9.
pac = pacf(.9,0,5)
pac =
1.0000
0.9000
0
0
0
0
pac = pacf(.9,.9,5)
pac =
1.0000
0.9499
-0.4843
0.3226
-0.2399
0.1892
Comments
USAGE:
[PAUTOCORR] = pacf(PHI,THETA,N)
INPUTS:
PHI - Autoregressive parameters, in the order t-1,t-2,...
THETA - Moving average parameters, in the order t-1,t-2,...
N - Number of autocorrelations to be computed
OUTPUTS:
PAUTOCORR - N+1 by 1 vector of partial autocorrelations.
COMMENTS:
Note: The ARMA model is parameterized as follows:
y(t)=phi(1)y(t-1)+phi(2)y(t-2)+...+phi(p)y(t-p)+e(t)+theta(1)e(t-1)
+theta(2)e(t-2)+...+theta(q)e(t-q)
To compute the autocorrelations for an ARMA that does not include all
lags 1 to P, insert 0 for any excluded lag. For example, if the model
was y(t) = phi(2)y(t-2) + e(t), PHI = [0 phi(2)]
3.6 Testing for serial correlation
The Ljung-Box statistic tests whether the first K autocorrelations are zero against an alternative that at least
one is non-zero. The Ljung-Box Q is computed as
$$Q = T(T+2)\sum_{i=1}^{K}\frac{\hat{\rho}_i^2}{T-i}$$
where $\hat{\rho}_i$ is the ith sample autocorrelation. This test statistic has an asymptotic $\chi^2_K$ distribution. Note: The
Ljung-Box statistic is not appropriate for heteroskedastic data.
Examples
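A typical call, assuming x is a T by 1 data series (e.g. x = randn(1000,1)), computing the statistics for lags 1 through 5 and producing output of the form shown below:
[Q, pval] = ljungbox(x, 5)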
Q =
0.2825
1.2403
2.0262
2.0316
3.8352
pval =
0.4049
0.4621
0.4330
0.2701
0.4266
Comments
USAGE:
[Q,PVAL] = ljungbox(DATA,LAGS)
INPUTS:
DATA - A T by 1 vector of data
LAGS - The maximum number of lags to compute the LB. The statistic and
pval will be returned for all sets of lags up to and
including LAGS
OUTPUTS:
Q - A LAGS by 1 vector of Q statistics
PVAL - A LAGS by 1 set of appropriate pvals
COMMENTS:
This test statistic is common but often inappropriate since it assumes
homoskedasticity. For a heteroskedasticity consistent serial
correlation test, see lmtest1
SEE ALSO:
lmtest1, lmtest2
Conducts an LM test that there is no evidence of serial correlation up to and including Q lags of the dependent
variable. The test is an LM test of the null that all of the regression coefficients are zero in
$$y_t = \phi_0 + \phi_1 y_{t-1} + \phi_2 y_{t-2} + \ldots + \phi_Q y_{t-Q} + \varepsilon_t.$$
The null tested is $H_0: \phi_1 = \phi_2 = \ldots = \phi_Q = 0$ and the test is computed as an LM test of the form
$$LM = T\,\hat{s}'\hat{S}^{-1}\hat{s}$$
where $\hat{s} = T^{-1}X'\tilde{\varepsilon}$ and $\hat{S} = T^{-1}\sum_{t=1}^{T}\tilde{\varepsilon}_t^2\mathbf{x}_t\mathbf{x}_t'$, with $\mathbf{x}_t = [y_{t-1}\;y_{t-2}\;\ldots\;y_{t-Q}]$ and $\tilde{\varepsilon}_t = y_t - \bar{y}$. The function
is called by passing the data and the number of lags to test into the function,
LM = lmtest1(data, Q)
and returns a Q by 1 vector of LM tests where the first value tests 1 lag, the second value
tests 2 lags, and so on up to the Qth, which returns the Q-lag LM test for serial
correlation. lmtest1 can take an optional third argument which determines the
covariance estimator ($\hat{S}$): 0 uses a non-heteroskedasticity robust estimator while
1 (the default) uses a heteroskedasticity robust estimator. To use the alternative form, use the
three-argument form
LM = lmtest1(data, Q, robust)
where robust is either 0 or 1. lmtest1 also returns an optional second output, the p-values of each test
statistic computed using a $\chi^2_j$ where j is the number of lags used in that test, so 1 for the first value of LM, 2
for the second and so on up to a $\chi^2_Q$ for the final value.
Examples
x = randn(1000,1); % Define x to be a 1000 by 1 vector of random data
[LM, pval] = lmtest1(x,5) % Results will vary based on the random numbers used
LM =
0.0223
0.1279
0.5606
0.7200
0.5851
pval =
0.8813
0.9381
0.9054
0.9488
0.9887
LM =
0.0229
0.1256
0.5827
0.7308
0.5879
pval =
0.8798
0.9391
0.9004
0.9475
0.9886
Comments
USAGE:
[LM,PVAL] = lmtest1(DATA,Q)
[LM,PVAL] = lmtest1(DATA,Q,ROBUST)
INPUTS:
DATA - A set of deviates from a process with or without mean
Q - The maximum number of lags to regress on. The statistic and
pval will be returned for all sets of lags up to and including q
ROBUST - [OPTIONAL] Logical variable (0 (non-robust) or 1 (robust)) to
indicate whether heteroskedasticity robust standard errors
should be used. Default is to use robust standard errors
(ROBUST=1).
OUTPUTS:
LM - A Qx1 vector of statistics
PVAL - A Qx1 set of appropriate pvals
COMMENTS:
To increase power of this test, the variance estimator is computed under
the alternative. As a result, this test is an LR-class test but, aside
from the variance estimator, is identical to the usual LM test for serial
correlation
3.7 Filtering
Baxter & King (1999) filter for extracting the trend and cyclic component from macroeconomic time series.
Examples
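A minimal sketch, following the EXAMPLES in the Comments below and assuming a GDP series is available in GDP.mat:
load GDP
% Standard BK filter with periods of 6 and 32 (quarterly data)
[trend, cyclic] = bkfilter(log(GDP),6,32);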
Required Inputs
[outputs] = bkfilter(Y,P,Q)
Optional Inputs
[outputs] = bkfilter(Y,P,Q,K)
Outputs
[TREND,CYCLIC,NOISE] = bkfilter(Y,P,Q,K)
• TREND: The filtered trend, which is the signal with a period larger than Q. The first and last K points of
TREND will be equal to Y.
• CYCLIC: The cyclic component, which is the signal with a period between P and Q. The first and last K
points of CYCLIC will be 0.
• NOISE: The high frequency noise component, which is the signal with a period shorter than P. The
first and last K points of NOISE will be 0.
Comments
USAGE:
[TREND,CYCLIC,NOISE] = bkfilter(Y,P,Q,K)
INPUTS:
Y - A T by K matrix of data to be filtered.
P - Number of periods to use in the higher frequency filter (e.g. 6 for quarterly data).
Must be at least 2.
Q - Number of periods to use in the lower frequency filter (e.g. 32 for quarterly data). Q
can be inf, in which case the low pass filter is a 2K+1 moving average.
K - [OPTIONAL] Number of points to use in the finite approximation bandpass filter. The
default value is 12. The filter throws away the first and last K points.
OUTPUTS:
TREND - A T by K matrix containing the filtered trend. The first and last K points equal Y.
CYCLIC - A T by K matrix containing the filtered cyclic component. The first and last K points are 0.
NOISE - A T by K matrix containing the filtered noise component. The first and last K points are 0.
COMMENTS:
The noise component is simply the original data minus the trend and cyclic component, NOISE = Y -
TREND - CYCLIC where the trend is produces by the low pass filter and the cyclic component is
produced by the difference of the high pass filter and the low pass filter. The recommended
values of P and Q are 6 and 32 or 40 for quarterly data, or 18 and 96 or 120 for monthly data.
Setting Q=P produces a single bandpass filter and the cyclic component will be 0.
EXAMPLES:
Load US GDP data
load GDP
Standard BK Filter with periods of 6 and 32
[trend, cyclic] = bkfilter(log(GDP),6,32)
BK Filter for low pass filtering only at 40 period, CYCLIC will be 0
[trend, cyclic] = bkfilter(log(GDP),40,40)
BK Filter using a 2-sided 20 point approximation
trend = bkfilter(log(GDP),6,32,20)
Hodrick & Prescott (1997) filter for extracting the trend and cyclic component from macroeconomic time
series. The HP filter identifies the trend as the solution to
$$\min_{\{\mu_t\}} \sum_{t=1}^{T}\left[\left(y_t - \mu_t\right)^2 + \lambda\left(\left(\mu_{t-1} - \mu_t\right) - \left(\mu_t - \mu_{t+1}\right)\right)^2\right]$$
where $\lambda$ is a parameter which determines the cutoff frequency of the filter and any trend points outside of
$1,\ldots,T$ are dropped. If $\lambda = 0$ then $\mu_t = y_t$ and as $\lambda \to \infty$, $\mu_t$ limits to a least squares linear trend fit.
Examples
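A minimal sketch, following the EXAMPLES in the Comments below and assuming a GDP series is available in GDP.mat:
load GDP
% Standard HP filter with lambda = 1600 (quarterly data)
[trend, cyclic] = hp_filter(log(GDP),1600);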
Required Inputs
[outputs] = hp_filter(Y,LAMBDA)
• LAMBDA: Smoothing parameter for the HP filter. Values above $10^{10}$ produce unstable matrix inverses and
so a linear trend is forced at this point.
Outputs
[TREND,CYCLIC] = hp_filter(inputs)
Comments
USAGE:
[TREND,CYCLIC] = hp_filter(Y,LAMBDA)
INPUTS:
Y - A T by K matrix of data to be filtered.
LAMBDA - Positive, scalar integer containing the smoothing parameter of the HP filter.
OUTPUTS:
TREND - A T by K matrix containing the filtered trend
CYCLIC - A T by K matrix containing the filtered cyclic component
COMMENTS:
The cyclic component is simply the original data minus the trend, CYCLIC = Y - TREND. 1600 is
the recommended value of LAMBDA for Quarterly Data while 14400 is the recommended value of LAMBDA
for monthly data.
EXAMPLES:
Load US GDP data
load GDP
Standard HP Filter with lambda = 1600
[trend, cyclic] = hp_filter(log(GDP),1600)
Regression with Newey-West variance-covariance estimation. Aside from the different variance-covariance
estimator, it is virtually identical to ols.
Examples
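Typical calls, assuming y is a T by 1 vector and x a T by k matrix of regressors (the same calls appear in the Comments below):
% Regression with automatic lag (bandwidth) selection
b = olsnw(y,x)
% Regression with a constant and a pre-specified lag length of 10
b = olsnw(y,x,1,10)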
Required Inputs
[outputs] = olsnw(Y,X)
• X: A T by k matrix containing the regressors. X should be full rank and should not contain a constant
column.
Optional Inputs
[outputs] = olsnw(Y,X,C,NWLAGS)
• NWLAGS: Number of lags to use when computing the variance-covariance matrix of the estimated
parameters. The default value is $\lfloor T^{1/3}\rfloor$.
Outputs
olsnw provides many outputs other than the estimated parameters. The full olsnw command can return
[B,TSTAT,S2,VCVNW,R2,RBAR,YHAT] = olsnw(inputs)
• S2: Estimated variance of the regression error. Computed using a degree of freedom adjustment (n −
k ).
Comments
USAGE:
[B,TSTAT,S2,VCVNW,R2,RBAR,YHAT] = olsnw(Y,X,C,NWLAGS)
INPUTS:
Y - T by 1 vector of dependent data
X - T by K vector of independent data
C - 1 or 0 to indicate whether a constant should be included (1: include constant)
NWLAGS - Number of lags to included in the covariance matrix estimator. If omitted or empty,
NWLAGS = floor(T^(1/3)). If set to 0 estimates White’s Heteroskedasticity Consistent
variance-covariance.
OUTPUTS:
B - A K(+1 is C=1) vector of parameters. If a constant is included, it is the first parameter
TSTAT - A K(+1) vector of t-statistics computed using Newey-West HAC standard errors
S2 - Estimated error variance of the regression, estimated using Newey-West with NWLAGS
VCVNW - Variance-covariance matrix of the estimated parameters computed using Newey-West
R2 - R-squared of the regression. Centered if C=1
RBAR - Adjusted R-squared. Centered if C=1
YHAT - Fit values of the dependent variable
COMMENTS:
The model estimated is Y = X*B + epsilon where Var(epsilon)=S2.
EXAMPLES:
Regression with automatic BW selection
b = olsnw(y,x)
Regression without a constant
b = olsnw(y,x,0)
Regression with a pre-specified lag-length of 10
b = olsnw(y,x,1,10)
Regression with White standard errors
b = olsnw(y,x,1,0)
Long-run covariance estimation using Newey-West (Bartlett) weights. The estimator is
$$\hat{\sigma}^2_{NW} = \hat{\Gamma}_0 + \sum_{i=1}^{L} w_i\left(\hat{\Gamma}_i + \hat{\Gamma}_i'\right)$$
where $w_i = (L-i+1)/(L+1)$ for $i = 1, 2, \ldots, L$ and $\hat{\Gamma}_i = T^{-1}\sum_{t=i+1}^{T}\tilde{\mathbf{x}}_t\tilde{\mathbf{x}}_{t-i}'$ where $\tilde{\mathbf{x}}_t = \mathbf{x}_t - \bar{\mathbf{x}}$ are the
(optionally) demeaned data.
Examples
y = armaxfilter_simulate(1000,0,1,.9);
% Newey-West covariance with automatic BW selection
lrcov = covnw(y)
% Newey-West covariance with 10 lags
lrcov = covnw(y, 10)
% Newey-West covariance with 10 lags and no demeaning
lrcov = covnw(y, 10, 0)
Required Inputs
[outputs] = covnw(DATA)
Optional Inputs
• DEMEAN: Logical value indicating whether the demean the data (1) or to compute the long-run covari-
ance of the data directly. Default is to demean.
Outputs
[V] = covnw(inputs)
• V: k by k covariance matrix.
Comments
USAGE:
V = covnw(DATA)
V = covnw(DATA,NLAG,DEMEAN)
INPUTS:
DATA - T by K vector of dependent data
NLAG - Non-negative integer containing the lag length to use. If empty or not included,
NLAG=min(floor(1.2*T^(1/3)),T) is used
DEMEAN - Logical true or false (0 or 1) indicating whether the mean should be subtracted when
computing the covariance
OUTPUTS:
V - A K by K covariance matrix estimated using Newey-West (Bartlett) weights
COMMENTS:
EXAMPLES:
y = armaxfilter_simulate(1000,0,1,.9);
% Newey-West covariance with automatic BW selection
lrcov = covnw(y)
% Newey-West covariance with 10 lags
lrcov = covnw(y, 10)
% Newey-West covariance with 10 lags and no demeaning
lrcov = covnw(y, 10, 0)
Long-run covariance estimation using the VAR-based estimator of Den Haan and Levin. The basic idea of their estimator is
to compute the long-run variance of a process from a Vector Autoregression. Suppose a vector of data $\mathbf{y}_t$
follows a stationary VAR,
$$\mathbf{y}_t - \boldsymbol{\mu} = \Phi_1\left(\mathbf{y}_{t-1} - \boldsymbol{\mu}\right) + \ldots + \Phi_k\left(\mathbf{y}_{t-k} - \boldsymbol{\mu}\right) + \boldsymbol{\varepsilon}_t$$
or, rearranging,
$$\mathbf{y}_t - \boldsymbol{\mu} - \Phi_1\left(\mathbf{y}_{t-1} - \boldsymbol{\mu}\right) - \ldots - \Phi_k\left(\mathbf{y}_{t-k} - \boldsymbol{\mu}\right) = \boldsymbol{\varepsilon}_t.$$
The long-run covariance is then
$$V = \left(I - \Phi_1 - \ldots - \Phi_k\right)^{-1}\Sigma\left(I - \Phi_1 - \ldots - \Phi_k\right)^{-1\prime}$$
where $\Sigma = E\left[\boldsymbol{\varepsilon}_t\boldsymbol{\varepsilon}_t'\right]$ is the unconditional covariance of the residuals (assumed to be a vector White Noise
process).
Note: This function differs slightly from the procedure of Den Haan and Levin in that it only conducts a
global lag length search, and so the resultant VAR will not have any zero elements. Den Haan and Levin
recommend using a series-by-series search with the possibility of having different lag lengths for own lags
and other lags. Changing to their procedure is something that may happen in future releases. Despite this
difference, the estimator in the code is still consistent as long as the maximum lag length grows with the sample size.
Examples
y = armaxfilter_simulate(1000,0,1,.9);
% VAR HAC covariance with automatic BW selection
lrcov = covvar(y)
% VAR HAC with at most 10 lags
lrcov = covvar(y, 10)
% VAR HAC with at most 10 lags selected using AIC
lrcov = covvar(y, 10, 3)
Required Inputs
[outputs] = covvar(DATA)
Optional Inputs
• METHOD: A scalar numeric value indicating the method to use when searching for the VAR lag length.
Outputs
[V,LAGSUSED] = covvar(inputs)
• V: k by k covariance matrix.
Comments
Estimates an Augmented Dickey-Fuller regression and returns the appropriate p-value for the assumption
made on the model and data generating process. The estimated model is
$$y_t = \alpha + \rho y_{t-1} + \gamma t + \delta_1\Delta y_{t-1} + \ldots + \delta_P\Delta y_{t-P} + \varepsilon_t$$
The deterministic terms, $\alpha$ and $\gamma$, may be included or excluded depending on which case is used, and the
number of lags used in the estimation can be specified. augdf supports 4 cases:
• Case 1: DGP contains no deterministic time trend but estimated model includes a constant and a
time-trend
• Case 2: DGP contains a constant or a time trend. Estimated model includes both a constant and a
time trend.
Other versions including lags in the ADF regression and deterministic trends can be estimated using
[ADFstat, ADFpval] = augdf(y, P, lags)
where P selects the deterministic terms and lags sets the number of lagged differences.
P-values were computed from 2 million simulations using Gaussian errors. The function augdfcv returns
the appropriate critical values and p-values for the choice of case and size of the data sample (T).
Examples
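A typical call, assuming y is a T by 1 series; this runs a Dickey-Fuller test with a constant in the regression (P = 1) and no lagged differences, returning the statistic, p-value and critical values of the form shown below:
[ADFstat, ADFpval, critval] = augdf(y, 1, 0)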
ADFstat =
-0.3941
ADFpval =
0.5472
ADFstat =
-2.3527
ADFpval =
0.1584
ADFpval =
0.7267
ADFstat =
-3.3738
ADFpval =
-0.0139
critval =
-3.4494
-2.8739
-2.5769
-0.4366
-0.0758
0.6123
Comments
USAGE:
[ADFSTAT,PVAL,CRITVAL] = augdf(Y,P,LAGS)
[ADFSTAT,PVAL,CRITVAL,RESID] = augdf(Y,P,LAGS)
INPUTS:
Y - A T by 1 vector of data
P - Order of the polynomial to include in the ADF regression:
0 : No deterministic terms
1 : Constant
2 : Time Trend
3 : Constant, DGP assumed to have a time trend
LAGS - The number of lags to include in the ADF test (0 for DF test)
OUTPUTS:
ADFSTAT - Dickey-Fuller statistic
PVAL - Probability the series is a unit root
CRITVALS - A 6 by 1 vector with the [.01 .05 .1 .9 .95 .99] values from the DF distribution
RESID - Residual (adjusted for lags) from the ADF regression
COMMENTS:
Conducts an ADF test using up to a maximum number of lags where the lag length is automatically selected
according to the AIC or BIC. All of the actual testing is done by augdf.
Examples
% Simulate an MA(3)
x = armaxfilter_simulate(1000,0, 0, [], 3, [.8 .3 .9]);
x = cumsum(x); % Integrate x
maxlag = 24;
% Default is to use AIC
[ADFstat, ADFpval, critval,resid, lags] = augdfautolag(x,1,maxlag);
lags
lags =
15
lags =
9
Comments
USAGE:
[ADFSTAT,PVAL,CRITVAL] = augdfautolag(Y,P,LAGS,IC)
[ADFSTAT,PVAL,CRITVAL,RESID,LAGS] = augdfautolag(Y,P,LAGS,IC)
INPUTS:
Y - A T by 1 vector of data
P - Order of the polynomial to include in the ADF regression:
0 : No deterministic terms
1 : Constant
2 : Time Trend
3 : Constant, DGP assumed to have a time trend
MAXLAGS - The maximum number of lags to include in the ADF test
IC - [OPTIONAL] String, either ’AIC’ (default) or ’BIC’ to choose the criteria to select
the model
OUTPUTS:
ADFSTAT - Dickey-Fuller statistic
PVAL - Probability the series is a unit root
CRITVALS - A 6 by 1 vector with the [.01 .05 .1 .9 .95 .99] values from the DF distribution
LAGS - The selected number of lags
COMMENTS:
Chapter 5
Vector Autoregressions
Estimates Pth order (regular and irregular) vector autoregressions. The options for vectorar include the
ability to include or exclude a constant, choose the lag order, and to specify which assumptions should be
made for computing the covariance matrix of the estimated parameters. The parameter covariance matrix
can be estimated under 4 sets of assumptions on the errors, determined by the het and uncorr options described below.
To examine the outputs and choices of the covariance estimator consider a regular bivariate VAR(2),
$$\mathbf{y}_t = \Phi_0 + \Phi_1\mathbf{y}_{t-1} + \Phi_2\mathbf{y}_{t-2} + \boldsymbol{\varepsilon}_t$$
The first four outputs of vectorar all share a common structure: cell arrays. Cell arrays are containers
for other MATLAB elements. In this function, each of these outputs is a cell array with P elements, where each element
is a k by k matrix of parameters (2 by 2 in the bivariate case). To estimate a bivariate VAR with a constant
in MATLAB, call
[parameters,stderr,tstat,pval] = vectorar(y,1,[1 2]);
where the first input is the T by k matrix of y data, the second is either 1 (include a constant) or 0, and the
third is a vector of lags to include in the model. The outputs are cell arrays with P elements where each
element is composed of a k by k matrix. Suppose y was T by 2, then
parameters =
    [2x2 double]    [2x2 double]
parameters{1}
ans =
    0.6885    0.1621
    0.1038    0.7500
parameters{2}
ans =
    0.0267    0.0473
    0.0503   -0.0031
The elements of parameters are identical to the elements of Φj above. Thus, the (i,j) element of Φ1 will
be contained in the (i,j) element of parameters{1} and the (i,j) element of Φ2 will be in the (i,j) element of
parameters{2}. The other three outputs in the function call above return similar cell structures of standard
errors, t-statistics and the corresponding p-values, all with the same ordering.
The full call to vectorar returns some additional information including the complete parameter co-
variance matrix.
[parameters,stderr,tstat,pval,const,conststd,r2,errors,s2,paramvec,vcv] ...
= vectorar(y,1,[1 2]);
• const: k by 1 vector containing Φ0. If no constant is included in the model, this value will be empty
([]).
• conststd: k by 1 vector containing the standard errors of the estimated intercept parameters. If no
constant is included in the model, this value will be empty ([]).
• s2: k by k matrix containing the estimated covariance matrix of the residuals, Σ̂.
• paramvec: vector of estimated parameters. In the bivariate VAR(2) with a constant the ordering is
$$[\hat\phi_{1,0}\ \hat\phi_{11,1}\ \hat\phi_{12,1}\ \hat\phi_{11,2}\ \hat\phi_{12,2}\ \hat\phi_{2,0}\ \hat\phi_{21,1}\ \hat\phi_{22,1}\ \hat\phi_{21,2}\ \hat\phi_{22,2}]'$$
• vcv: A square matrix where each dimension is as large as the length of paramvec. The covariance
matrix has the same order as the elements of paramvec. In the bivariate VAR, the (1,1) element of vcv
would contain the estimated variance of φ̂1,0 , the (1,2) is the covariance between φ̂1,0 and φ̂11,1 and
so on. The estimation strategy for vcv depends on the values on het and uncorr (see below).
[parameters] = vectorar(y,constant,lags,het,uncorr);
where
• y: T by k vector of data.
• lags: Vector of lags to include. A standard Pth order VAR can be called by setting lags to [1:P]. An ir-
regular Pth order VAR can be called by leaving out some of the lags. For example [1 2 4] would produce
an irregular 4th order VAR excluding lag 3.
• het: Scalar value of either 1 (assume heteroskedasticity) or 0 (assume homoskedasticity). The default
value for this optional parameter is 1.
• uncorr: Scalar value of either 0 (assume the errors are correlated) or 1 (assume no error correlation).
The default value for this optional parameter is 0.
The primary options are choosing het and uncorr. Since each can take one of two values, there are four
combinations.
The simplest estimator assumes the errors are conditionally homoskedastic and uncorrelated, so that Σ is
diagonal. The estimated covariance matrix is given by
$$\hat{\Omega} = \hat{\Sigma} \otimes (X'X)^{-1}$$
where Σ̂ is a diagonal matrix with the variance of ε̂_{i,t} on the ith diagonal and X is a T by kP (or kP + 1
if a constant is included) matrix of regressors in the regular VAR case. To understand the structure of X,
decompose it as
$$X = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_T \end{bmatrix}$$
where $x_t$ is the set of regressors common to each of the k regression equations in the VAR. In the bivariate
example above, $x_t = [1\ y_{1,t-1}\ y_{2,t-1}\ y_{1,t-2}\ y_{2,t-2}]$.
The choice of X is motivated by noticing that a Pth order VAR can be consistently estimated using OLS
by regressing each of the k series $y_i$ on X, $\hat{\theta}_i = (X'X)^{-1}X'y_i$, where $\hat{\theta}_i$ is the estimated "row" of parameters in the VAR.
In the bivariate VAR(2) above,
" #
σ̂11 (X0 X)−1 0M M
Ω̂ =
0M M σ̂22 (X0 X)−1
The correlated homoskedastic case is similar to the previous case with the change that Σ̂ is no longer assumed
to be diagonal. Once this change has been made, the variance covariance estimator is identical,
$$\hat{\Omega} = \hat{\Sigma} \otimes (X'X)^{-1}$$
where the (i,j) element of Σ̂, $\hat{\sigma}_{ij}$, is the estimated covariance between $\varepsilon_{i,t}$ and $\varepsilon_{j,t}$, and $\hat{\sigma}_{12} = \hat{\sigma}_{21}$.
When the residuals are heteroskedastic, a White (or sandwich) style covariance estimator is required. The two
parts of the sandwich are denoted $\hat{A}$ and $\hat{B}$. $\hat{A}$ is given by
$$\hat{A} = \left(\frac{X'X}{T}\right) \otimes I_K$$
and
$$\hat{B} = T^{-1}\sum_{t=1}^{T} \varepsilon_t\varepsilon_t' \otimes x_t x_t'$$
so that the estimated parameter covariance is
$$\hat{\Omega} = T^{-1}\hat{A}^{-1}\hat{B}\hat{A}^{-1}.$$
The assumption that the errors are uncorrelated imposes that $T^{-1}\sum_{t=1}^{T}\varepsilon_{i,t}\varepsilon_{j,t}x_tx_t' \stackrel{p}{\rightarrow} 0$ for $i \neq j$,
and so $\hat{B}$ is a "block diagonal" matrix where all of the elements in the off-diagonal blocks are 0. In the
bivariate VAR(2) above,
$$\hat{A} = \begin{bmatrix} \dfrac{X'X}{T} & 0 \\ 0 & \dfrac{X'X}{T} \end{bmatrix}$$
and
$$\hat{B} = \begin{bmatrix} T^{-1}\sum_{t=1}^{T}\varepsilon_{1,t}^2 \otimes x_tx_t' & 0 \\ 0 & T^{-1}\sum_{t=1}^{T}\varepsilon_{2,t}^2 \otimes x_tx_t' \end{bmatrix}.$$
Finally, the $T^{-1}$ is present in the formula for $\hat{\Omega}$ since $\hat{A}$ and $\hat{B}$ both converge to constants, while the variance
of the estimated coefficients should be decreasing with T.
The correlated heteroskedastic case is essentially identical to the uncorrelated heteroskedastic case except
that the assumption $T^{-1}\sum_{t=1}^{T}\varepsilon_{i,t}\varepsilon_{j,t}x_tx_t' \stackrel{p}{\rightarrow} 0$ is not made. In the VAR(2) from above, $\hat{A}$ is unchanged
and $\hat{B}$ is now
$$\hat{B} = \begin{bmatrix} T^{-1}\sum_{t=1}^{T}\varepsilon_{1,t}^2 \otimes x_tx_t' & T^{-1}\sum_{t=1}^{T}\varepsilon_{1,t}\varepsilon_{2,t} \otimes x_tx_t' \\ T^{-1}\sum_{t=1}^{T}\varepsilon_{1,t}\varepsilon_{2,t} \otimes x_tx_t' & T^{-1}\sum_{t=1}^{T}\varepsilon_{2,t}^2 \otimes x_tx_t' \end{bmatrix}$$
and, as before,
$$\hat{\Omega} = T^{-1}\hat{A}^{-1}\hat{B}\hat{A}^{-1}.$$
Examples
% To estimate a VAR(1)
parameters = vectorar(y,1,1);
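The covariance options can be set through the optional inputs; a brief sketch based on the signature documented below:
% Bivariate VAR(2) with a constant, heteroskedasticity-robust covariance and uncorrelated errors
[parameters,stderr,tstat,pval] = vectorar(y,1,[1 2],1,1);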
Comments
Estimate a Vector Autoregression and produce the parameter variance-covariance matrix under a
variety of assumptions on the covariance of the errors:
* Conditionally Homoskedastic and Uncorrelated
* Conditionally Homoskedastic but Correlated
* Heteroskedastic but Conditionally Uncorrelated
* Heteroskedastic and Correlated
USAGE:
[PARAMETERS]=vectorar(Y,CONSTANT,LAGS)
[PARAMETERS,STDERR,TSTAT,PVAL,CONST,CONSTSTD,R2,ERRORS,S2,PARAMVEC,VCV]
= vectorar(Y,CONSTANT,LAGS,HET,UNCORR)
INPUTS:
Y - A T by K matrix of data
CONSTANT - Scalar variable: 1 to include a constant, 0 to exclude
LAGS - Non-negative integer vector representing the VAR orders to include in the model.
HET - [OPTIONAL] A scalar integer indicating the type of covariance estimator
0 - Homoskedastic
1 - Heteroskedastic [DEFAULT]
UNCORR - [OPTIONAL] A scalar integer indicating the assumed structure of the error covariance
matrix
0 - Correlated errors [DEFAULT]
1 - Uncorrelated errors
OUTPUTS:
PARAMETERS - Cell structure containing K by K matrices in the positions indicated in
LAGS. For example if LAGS = [1 3], PARAMETERS{1} would be the K by K
parameter matrix for the 1st lag and PARAMETERS{3} would be the K by K matrix
of parameters for the 3rd lag
STDERR - Cell structure with the same form as PARAMETERS containing parameter standard
errors estimated according to UNCORR and HET
TSTAT - Cell structure with the same form as PARAMETERS containing parameter t-stats
computed using STDERR
PVAL - P-values of the parameters
CONST - K by 1 vector of constants
CONSTSTD - K by 1 vector standard errors corresponding to constant
R2 - K by 1 vector of R-squares
ERRORS - K by T matrix of errors
S2 - K by K matrix containing the estimated error variance
PARAMVEC - K*((# lags) + CONSTANT) by 1 vector of estimated parameters. The first (# lags
+ CONSTANT) correspond to the first row in the usual var form:
[CONST(1) P1(1,1) P1(1,2) ... P1(1,K) P2(1,1) ... P2(1,K) ...]
The next (# lags + CONSTANT) are the 2nd row
[CONST(1) P1(2,1) P1(2,2) ... P1(2,K) P2(2,1) ... P2(2,K) ...]
and so on through the Kth row
[CONST(K) P1(K,1) P1(K,2) ... P1(K,K) P2(K,1) ... P2(K,K) ...]
VCV - A K*((# lags) + CONSTANT) by K*((# lags) + CONSTANT) matrix of estimated
parameter covariances computed using HET and UNCORR
COMMENTS:
Estimates a VAR including any lags.
y(:,t)' = CONST + P(1)*y(:,t-1) + P(2)*y(:,t-2) + ... + P(K)*y(:,t-K)'
where P(j) are K by K parameter matrices and CONST is a K by 1 parameter matrix (if CONSTANT==1)
EXAMPLE:
To fit a VAR(1) with a constant
parameters = vectorar(y,1,1)
To fit a VAR(3) with no constant
parameters = vectorar(y,0,[1:3])
To fit a VAR that includes lags 1 and 3 with a constant
parameters = vectorar(y,1,[1 3])
Granger Causality testing in a VAR. Most of the choices in grangercause are identical to those in vectorar,
and knowledge of the features of vectorar is recommended. The only new option is the ability to choose
one of three test statistics:
• Likelihood Ratio: If the data are assumed to be homoskedastic, the classic likelihood ratio presented
in the notes is used. If the data are heteroskedastic, an LM-type test based on the scores under the
null but using a covariance estimator computed under the alternative is computed.
• Lagrange Multiplier: Computes the LM test using the scores and errors estimated under the null. The
assumption about the heteroskedasticity of the residuals and whether the residuals are correlated are
imposed when estimating the score covariance.
• Wald: Computes the GC test statistics using a Wald test where the parameter covariance matrix is
estimated under the assumptions about heteroskedasticity and correlation of the residuals. For more
on covariance matrix estimation, see vectorar.
Aside from these changes, the inputs are identical to those in vectorar:
[stat,pval]=grangercause(y,constant,lags,het,uncorr,inference)
The function has two outputs. The first contains the computed statistics, one for each $y_i$ being caused (in rows)
and one for each $y_j$ whose lags do the causing (in columns). For example, in a bivariate VAR(2),
$$y_t = \Phi_0 + \Phi_1 y_{t-1} + \Phi_2 y_{t-2} + \varepsilon_t,$$
the (1,1) value of stat contains the GC test statistic for the exclusion restriction $\phi_{11,1} = \phi_{11,2} = 0$, the
(1,2) value contains the test statistic for the exclusion restriction $\phi_{12,1} = \phi_{12,2} = 0$, and so on. pval
contains a matching matrix of p-values for the null of no Granger Causality.
Examples
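A minimal sketch, assuming y is a T by 2 matrix of data:
% GC tests in a bivariate VAR(2) with a constant using a heteroskedasticity-robust Wald test
[stat, pval] = grangercause(y, 1, [1 2], 1, 0, 3);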
Comments
USAGE:
[STAT] = grangercause(Y,CONSTANT,LAGS)
[STAT,PVAL] = grangercause(Y,CONSTANT,LAGS,HET,UNCORR,INFERENCE)
INPUTS:
Y - A T by K matrix of data
CONSTANT - Scalar variable: 1 to include a constant, 0 to exclude
LAGS - Non-negative integer vector representing the VAR orders to include in the model.
HET - [OPTIONAL] A scalar integer indicating the type of covariance estimator
0 - Homoskedastic
1 - Heteroskedastic [DEFAULT]
UNCORR - [OPTIONAL] A scalar integer indicating the assumed structure of the error
covariance matrix
0 - Correlated errors [DEFAULT]
1 - Uncorrelated errors
INFERENCE - [OPTIONAL] Inference method
1 - Likelihood ratio
2 - LM test
3 - Wald test
OUTPUTS:
STAT - K by K matrix of Granger causality statistics computed using the specified
covariance estimator and inference method. STAT(i,j) corresponds to a test that
y(i) is caused by y(j)
PVAL - K by K matrix of p-values corresponding to STAT
COMMENTS:
Granger causality tests based on a VAR including any lags,
y(:,t)' = CONST + P(1)*y(:,t-1) + P(2)*y(:,t-2) + ... + P(K)*y(:,t-K)'
where P(j) are K by K parameter matrices and CONST is a K by 1 parameter matrix (if CONSTANT==1)
EXAMPLE:
Conduct GC testing in a VAR(1) with a constant
parameters = grangercause(y,1,1)
Conduct GC testing in a VAR(3) with no constant
parameters = grangercause(y,0,[1:3])
Conduct GC testing in a VAR that includes lags 1 and 3 with a constant
parameters = grangercause(y,1,[1 3])
Impulse response function, standard errors and plotting. impulseresponse derives heavily from vectorar
and uses much of the same syntax. The important new options to impulseresponse are the number of
impulses to compute, leads, and the assumption used for decomposing the error covariance, sqrttype.
impulseresponse always returns leads+1 impulses and standard errors since the 0th is included. leads
is a positive integer. sqrttype can be any one of:
• 1: Use scaled shocks but assume the correlations are zero. The scaling is based on the estimated standard
deviations from the VAR specification used. This is the default.
• k by k positive definite user provided square root matrix. This option was provided to allow the user
to impose a block spectral structure on the square root should they choose.
The full call is
[impulses,impulsesstd,hfig] = impulseresponse(y,constant,lags,leads,sqrttype,graph,het,uncorr);
where y, constant, lags, het and uncorr are the same as in vectorar, leads and sqrttype are as described
above, and graph is a 1 (produce plot) or 0 variable indicating whether a plot with 95% confidence
bands should be produced. The outputs are:
• impulses: A k by k by leads 3-D matrix of impulse responses. The element in position (i,j,l) is the
impulse response of y i to a shock to ε j , l -periods in the future.
• impulsesstd: A k by k by leads 3-D matrix of impulse response standard errors. These correspond
directly to the impulse response in the same position.
Examples
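A minimal sketch, assuming y is a T by k matrix of data:
% Impulse responses for 12 leads from a VAR(2) with a constant,
% using a Choleski decomposition of the error covariance and no plot
[impulses, impulsesstd] = impulseresponse(y, 1, [1 2], 12, 2, 0);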
Comments
USAGE:
[IMPULSES]=impulseresponse(Y,CONSTANT,LAGS,LEADS)
[IMPULSES,IMPULSESTD,HFIG]=impulseresponse(Y,CONSTANT,LAGS,LEADS,SQRTTYPE,GRAPH,HET,UNCORR)
INPUTS:
Y - A T by K matrix of data
CONSTANT - Scalar variable: 1 to include a constant, 0 to exclude
LAGS - Non-negative integer vector representing the VAR orders to include in the model.
LEADS - Number of leads to compute the impulse response function
SQRTTYPE - [OPTIONAL] Either a scalar or a K by K positive definite matrix. This input
determines the type of covariance decomposition used. If it is a scalar it must
be one of:
0 - Unit (unscaled) shocks, covariance assumed to be an identity matrix
1 - [DEFAULT] Scaled but uncorrelated shocks. Scale is based on estimated
error standard deviations.
2 - Scaled and correlated shocks, Choleski decomposition. Scale is based on
estimated error standard deviations.
3 - Scaled and correlated shocks, spectral decomposition. Scale is based on
estimated error standard deviations.
If the input is a K by K positive definite matrix, it is used as the
covariance square root for computing the impulse response function.
GRAPH - [OPTIONAL] Logical variable (0 (no graph) or 1 (graph)) indicating whether the
function should produce a plot of the impulse responses and confidence
intervals. Default is to produce a graphic (GRAPH=1).
HET - [OPTIONAL] A scalar integer indicating the type of
covariance estimator
0 - Homoskedastic
1 - Heteroskedastic [DEFAULT]
UNCORR - [OPTIONAL] A scalar integer indicating the assumed structure of the error
covariance matrix
0 - Correlated errors [DEFAULT]
1 - Uncorrelated errors
OUTPUTS:
IMPULSES - K by K by LEADS+1 array of impulse responses. IMPULSES(i,j,l) contains the
impulse response of y(i) to a shock to the error of y(j), l periods in the
future
IMPULSESSTD - K by K by LEADS+1 array of standard errors corresponding directly to the
impulse responses in IMPULSES
HFIG - Figure handle to the plot of the impulse responses
COMMENTS:
Estimates a VAR including any lags.
y(:,t)' = CONST + P(1)*y(:,t-1) + P(2)*y(:,t-2) + ... + P(K)*y(:,t-K)'
where P(j) are K by K parameter matrices and CONST is a K by 1 parameter matrix (if CONSTANT==1)
EXAMPLE:
To produce the IR for 12 leads from a VAR(1) with a constant
impulses = impulseresponse(y,1,1,12)
To produce the IR for 12 leads from a VAR(3) without a constant
impulses = impulseresponse(y,0,[1:3],12)
To produce the IR for 12 leads from an irregular VAR(3) with only lags 1 and 3 with a constant
impulses = impulseresponse(y,1,[1 3],12)
Volatility Modeling
Examples
% GARCH(1,1) simulation
simulatedData = tarch_simulate(1000, [1 .1 .8], 1, 0, 1)
% GJR-GARCH(1,1,1) simulation
simulatedData = tarch_simulate(1000, [1 .1 .1 .8], 1, 1, 1)
% GJR-GARCH(1,1,1) simulation with standardized Student’s T innovations
simulatedData = tarch_simulate(1000, [1 .1 .1 .8 6], 1, 1, 1, ’STUDENTST’)
% TARCH(1,1,1) simulation
simulatedData = tarch_simulate(1000, [1 .1 .1 .8], 1, 1, 1, [], 1)
Required Inputs
• T: Either a scalar integer or a vector of random numbers. If scalar, T represents the length of the time
series to simulate. If a T by 1 vector of random numbers, these will be used to construct the simulated
time series.
• PARAMETERS: Vector of parameters ordered
$$[\omega\ \alpha_1 \ldots \alpha_P\ \gamma_1 \ldots \gamma_O\ \beta_1 \ldots \beta_Q]'$$
Optional Inputs
• ERROR_TYPE: String specifying the error distribution, one of
– ’NORMAL’: Normal
– ’STUDENTST’: Standardized Student’s t . Parameters should contain 1 additional parameter con-
taining the shape of the distribution.
– ’GED’: Generalized Error Distribution. Parameters should contain 1 additional parameter con-
taining the shape of the distribution.
– ’SKEWT’: Skewed t . Parameters should contain 2 additional parameters containing the skewness
and tail parameters, with skewness first.
Outputs
Comments
USAGE:
[SIMULATEDATA, HT] = tarch_simulate(T, PARAMETERS, P, O, Q, ERROR_TYPE, TARCH_TYPE)
INPUTS:
T - Length of the time series to be simulated OR
T by 1 vector of user supplied random numbers (i.e. randn(1000,1))
PARAMETERS - a 1+P+O+Q (+1 or 2, depending on error distribution) x 1 parameter vector
[omega alpha(1) ... alpha(p) gamma(1) ... gamma(o) beta(1) ... beta(q) [nu lambda]]’.
P - Positive, scalar integer representing the number of symmetric innovations
O - Non-negative scalar integer representing the number of asymmetric innovations (0
for symmetric processes)
Q - Non-negative, scalar integer representing the number of lags of conditional
variance (0 for ARCH)
ERROR_TYPE - [OPTIONAL] The error distribution used, valid types are:
’NORMAL’ - Gaussian Innovations [DEFAULT]
’STUDENTST’ - T distributed errors
’GED’ - Generalized Error Distribution
’SKEWT’ - Skewed T distribution
TARCH_TYPE - [OPTIONAL] The type of variance process, either
1 - Model evolves in absolute values
2 - Model evolves in squares [DEFAULT]
OUTPUTS:
COMMENTS:
The conditional variance, h(t), of a TARCH(P,O,Q) process is modeled as follows:
g(h(t)) = omega
+ alpha(1)*f(r_{t-1}) + ... + alpha(p)*f(r_{t-p})
+ gamma(1)*I(t-1)*f(r_{t-1}) +...+ gamma(o)*I(t-o)*f(r_{t-o})
+ beta(1)*g(h(t-1)) +...+ beta(q)*g(h(t-q))
NOTE: This program generates 2000 more than required to minimize any starting bias
EGARCH simulation with normal, Student’s t , Generalized Error Distribution, Skew t or user supplied in-
novations.
Examples
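A minimal sketch; the parameter values are purely illustrative:
% EGARCH(1,1,1) simulation
simulatedData = egarch_simulate(1000, [0 .1 -.1 .95], 1, 1, 1)
% EGARCH(1,1,1) simulation with Student's t innovations
simulatedData = egarch_simulate(1000, [0 .1 -.1 .95 6], 1, 1, 1, 'STUDENTST')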
Required Inputs
• T: Either a scalar integer or a vector of random numbers. If scalar, T represents the length of the time
series to simulate. If a T by 1 vector of random numbers, these will be used to construct the simulated
time series.
• PARAMETERS: Vector of parameters ordered
$$[\omega\ \alpha_1 \ldots \alpha_P\ \gamma_1 \ldots \gamma_O\ \beta_1 \ldots \beta_Q]'$$
Optional Inputs
• ERROR_TYPE: String specifying the error distribution, one of
– ’NORMAL’: Normal
– ’STUDENTST’: Standardized Student’s t . Parameters should contain 1 additional parameter con-
taining the shape of the distribution.
– ’GED’: Generalized Error Distribution. Parameters should contain 1 additional parameter con-
taining the shape of the distribution.
– ’SKEWT’: Skewed t . Parameters should contain 2 additional parameters containing the skewness
and tail parameters, with skewness first.
Outputs
Comments
USAGE:
[SIMULATEDATA, HT] = egarch_simulate(T, PARAMETERS, P, O, Q, ERROR_TYPE)
INPUTS:
T - Length of the time series to be simulated OR
T by 1 vector of user supplied random numbers (i.e. randn(1000,1))
PARAMETERS - a 1+P+O+Q (+1 or 2, depending on error distribution) x 1 parameter vector
[omega alpha(1) ... alpha(p) gamma(1) ... gamma(o) beta(1) ... beta(q) [nu lambda]]’.
P - Positive, scalar integer representing the number of symmetric innovations
O - Non-negative scalar integer representing the number of asymmetric innovations (0
for symmetric processes)
Q - Non-negative, scalar integer representing the number of lags of conditional
variance (0 for ARCH)
ERROR_TYPE - [OPTIONAL] The error distribution used, valid types are:
’NORMAL’ - Gaussian Innovations [DEFAULT]
’STUDENTST’ - T distributed errors
’GED’ - Generalized Error Distribution
’SKEWT’ - Skewed T distribution
OUTPUTS:
SIMULATEDATA - A time series with EGARCH variances
HT - A vector of conditional variances used in making the time series
COMMENTS:
The conditional variance, h(t), of a EGARCH(P,O,Q) process is modeled as follows:
ln(h(t)) = omega
+ alpha(1)*(abs(e_{t-1})-C) + ... + alpha(p)*(abs(e_{t-p})-C)+...
+ gamma(1)*e_{t-1} +...+ gamma(o)*e_{t-o} +...
beta(1)*ln(h(t-1)) +...+ beta(q)*ln(h(t-q))
NOTE: This program generates 2000 more than required to minimize any starting bias
EXAMPLES:
APARCH simulation with normal, Student's t , Generalized Error Distribution, Skew t or user supplied in-
novations.
Examples
% Simulate a GARCH(1,1)
simulatedData = aparch_simulate(1000, [.1 .1 .85 2], 1, 0, 1)
% Simulate an AVARCH(1,1)
simulatedData = aparch_simulate(1000, [.1 .1 .85 1], 1, 0, 1)
% Simulate a GJR-GARCH(1,1,1)
simulatedData = aparch_simulate(1000, [.1 .1 .1 .8 2], 1, 1, 1)
% Simulate a TARCH(1,1,1)
simulatedData = aparch_simulate(1000, [.1 .1 .1 .8 1], 1, 1, 1)
% Simulate an APARCH(1,1,1)
simulatedData = aparch_simulate(1000, [.1 .1 .1 .8 .8], 1, 1, 1)
% Simulate an APARCH(1,1,1) with Student’s T innovations
simulatedData = aparch_simulate(1000, [.1 .1 .85 2 6], 1, 0, 1, ’STUDENTST’)
Required Inputs
• T: Either a scalar integer or a vector of random numbers. If scalar, T represents the length of the time
series to simulate. If a T by 1 vector of random numbers, these will be used to construct the simulated
time series.
• PARAMETERS: Vector of parameters ordered
$$[\omega\ \alpha_1 \ldots \alpha_P\ \gamma_1 \ldots \gamma_O\ \beta_1 \ldots \beta_Q\ \delta]'$$
Optional Inputs
• ERROR_TYPE: String specifying the error distribution, one of
– ’NORMAL’: Normal
– ’STUDENTST’: Standardized Student’s t . Parameters should contain 1 additional parameter con-
taining the shape of the distribution.
– ’GED’: Generalized Error Distribution. Parameters should contain 1 additional parameter con-
taining the shape of the distribution.
– ’SKEWT’: Skewed t . Parameters should contain 2 additional parameters containing the skewness
and tail parameters, with skewness first.
Outputs
Comments
USAGE:
[SIMULATEDATA, HT] = aparch_simulate(T, PARAMETERS, P, O, Q, ERROR_TYPE)
INPUTS:
T - Length of the time series to be simulated OR
T by 1 vector of user supplied random numbers (i.e. randn(1000,1))
PARAMETERS - a 1+P+O+Q (+1 or 2, depending on error distribution) x 1 parameter vector
[omega alpha(1) ... alpha(p) gamma(1) ... gamma(o) beta(1) ... beta(q) delta
[nu lambda]]’
P - Positive, scalar integer representing the number of symmetric innovations
O - Non-negative scalar integer representing the number of asymmetric innovations (0
for symmetric processes). Must be less than or equal to P
Q - Non-negative, scalar integer representing the number of lags of conditional
variance (0 for ARCH)
ERROR_TYPE - [OPTIONAL] The error distribution used, valid types are:
’NORMAL’ - Gaussian Innovations [DEFAULT]
’STUDENTST’ - T distributed errors
’GED’ - Generalized Error Distribution
’SKEWT’ - Skewed T distribution
OUTPUTS:
SIMULATEDATA - A time series with APARCH variances
HT - A vector of conditional variances used in making the time series
COMMENTS:
The conditional variance, h(t), of a APARCH(P,O,Q) process is modeled as follows:
h(t)^(delta/2) = omega
+ alpha(1)*(abs(r(t-1))+gamma(1)*r(t-1))^delta + ...
alpha(p)*(abs(r(t-p))+gamma(p)*r(t-p))^delta +
beta(1)*h(t-1)^(delta/2) +...+ beta(q)*h(t-q)^(delta/2)
alpha(i) > 0
NOTE: This program generates 2000 more than required to minimize any starting bias
EXAMPLES:
Simulate a GARCH(1,1)
[SIMULATEDATA, HT] = aparch_simulate(1000, [.1 .1 .85 2], 1, 0, 1)
Simulate an AVARCH(1,1)
[SIMULATEDATA, HT] = aparch_simulate(1000, [.1 .1 .85 1], 1, 0, 1)
Simulate a GJR-GARCH(1,1,1)
[SIMULATEDATA, HT] = aparch_simulate(1000, [.1 .1 -.1 .8 2], 1, 1, 1)
Simulate a TARCH(1,1,1)
[SIMULATEDATA, HT] = aparch_simulate(1000, [.1 .1 -.1 .8 1], 1, 1, 1)
Simulate an APARCH(1,1,1)
[SIMULATEDATA, HT] = aparch_simulate(1000, [.1 .1 -.1 .8 .8], 1, 1, 1)
Simulate an APARCH(1,1,1) with Student’s T innovations
[SIMULATEDATA, HT] = aparch_simulate(1000, [.1 .1 -.1 .85 2 6], 1, 1, 1, ’STUDENTST’)
FIGARCH(p, d , q ) simulation with normal, Student’s t , Generalized Error Distribution, Skew t or user sup-
plied innovations for p ∈ {0, 1} and q ∈ {0, 1} where d is the fractional integration order.
Examples
% FIGARCH(0,d,0) simulation
simulatedData = figarch_simulate(2500, [.1 .42],0,0)
% FIGARCH(1,d,1) simulation
simulatedData = figarch_simulate(2500, [.1 .1 .42 .4],1,1)
% FIGARCH(0,d,0) simulation with Student’s T errors
simulatedData = figarch_simulate(2500, [.1 .42],0,0,’STUDENTST’)
% FIGARCH(0,d,0) simulation with a truncation lag of 5000
simulatedData = figarch_simulate(2500, [.1 .42],0,0,[],5000)
Required Inputs
• T: Either a scalar integer or a vector of random numbers. If scalar, T represents the length of the time
series to simulate. If a T by 1 vector of random numbers, these will be used to construct the simulated
time series.
• PARAMETERS: Vector of parameters ordered
$$[\omega\ \phi\ d\ \beta]'$$
Optional Inputs
• ERRORTYPE: String specifying the error distribution, one of
– ’NORMAL’: Normal
– ’STUDENTST’: Standardized Student’s t . Parameters should contain 1 additional parameter con-
taining the shape of the distribution.
– ’GED’: Generalized Error Distribution. Parameters should contain 1 additional parameter con-
taining the shape of the distribution.
– ’SKEWT’: Skewed t . Parameters should contain 2 additional parameters containing the skewness
and tail parameters, with skewness first.
Outputs
• LAMBDA: TRUNCLAG by 1 vector containing the ARCH(∞) weights on lagged squared returns
Comments
FIGARCH(Q,D,P) time series simulation with multiple error distributions for P={0,1} and Q={0,1}
USAGE:
[SIMULATEDATA, HT, LAMBDA] = figarch_simulate(T, PARAMETERS, P, Q, ERRORTYPE, TRUNCLAG, BCLENGTH)
INPUTS:
T - Length of the time series to be simulated OR
T by 1 vector of user supplied random numbers (i.e. randn(1000,1))
PARAMETERS - a 2+P+Q (+1 or 2, depending on error distribution) x 1 parameter vector
[omega phi d beta [nu lambda]]’. Parameters should satisfy conditions in
FIGARCH_ITRANSFORM
P - 0 or 1 indicating whether the autoregressive term is present in the model (phi)
Q - 0 or 1 indicating whether the moving average term is present in the model (beta)
ERRORTYPE - [OPTIONAL] The error distribution used, valid types are:
’NORMAL’ - Gaussian Innovations [DEFAULT]
’STUDENTST’ - T distributed errors
’GED’ - Generalized Error Distribution
’SKEWT’ - Skewed T distribution
TRUNCLAG - [OPTIONAL] Truncation lag for use in the construction of lambda. Default value is
2500.
BCLENGTH - [OPTIONAL] Number of extra observations to produce to reduce start up bias.
Default value is 2500.
OUTPUTS:
SIMULATEDATA - A time series with ARCH/GARCH/GJR/TARCH variances
HT - A vector of conditional variances used in making the time series
LAMBDA - TRUNCLAG by 1 vector of weights used when computing the conditional variances
COMMENTS:
The conditional variance, h(t), of a FIGARCH(1,d,1) process is modeled as follows:
h(t) = omega + lambda(1)*e(t-1)^2 + lambda(2)*e(t-2)^2 + ... + lambda(TRUNCLAG)*e(t-TRUNCLAG)^2
where lambda(i) is a function of the fractional differencing parameter, phi and beta
EXAMPLES:
FIGARCH(0,d,0) simulation
simulatedData = figarch_simulate(2500, [.1 .42],0,0)
FIGARCH(1,d,1) simulation
simulatedData = figarch_simulate(2500, [.1 .1 .42 .4],1,1)
FIGARCH(0,d,0) simulation with Student’s T errors
simulatedData = figarch_simulate(2500, [.1 .42],0,0,’STUDENTST’)
FIGARCH(0,d,0) simulation with a truncation lag of 5000
simulatedData = figarch_simulate(2500, [.1 .42],0,0,[],5000)
Many ARCH-family models can be estimated using the function tarch. This function allows estimation of
ARCH, GARCH, TARCH, ZARCH and AVGARCH models all by restricting the lags included in the model.
The evolution of the conditional variance in the generic process is given by
P O Q
δ δ
βq σtδ−q
X X X
σδ2 =ω+ αp |εt −p | + γo |εt −o | I [εt −o <0] +
p =1 o=1 q =1
where δ is either 1 (TARCH, AVGARCH or ZARCH) or 2 (ARCH, GARCH or GJR-GARCH). The basic form of
tarch is
parameters = tarch(resid,p,o,q)
where resid is a T by 1 vector of mean 0 residuals from some conditional mean model and p, o and q are the
(scalar integer) orders for the symmetric, asymmetric and lagged variance terms respectively. This function
only estimates regular models, so the first lag must be included in order to include the second lag of any term.
The output parameters are ordered
$$[\omega\ \alpha_1 \ldots \alpha_p\ \gamma_1 \ldots \gamma_o\ \beta_1 \ldots \beta_q]'$$
If the distribution is specified as something other than a normal, the hyper-parameters, ν and λ,
are appended to parameters:
$$[\omega\ \alpha_1 \ldots \alpha_p\ \gamma_1 \ldots \gamma_o\ \beta_1 \ldots \beta_q\ \nu]'$$
or
$$[\omega\ \alpha_1 \ldots \alpha_p\ \gamma_1 \ldots \gamma_o\ \beta_1 \ldots \beta_q\ \nu\ \lambda]'$$
[outputs] = tarch(EPSILON,P,O,Q,ERROR_TYPE,TARCH_TYPE,STARTINGVALS,OPTIONS)
where
• ERROR_TYPE: The variable specifies the error distribution as a string and can take the values ’NORMAL’,
’STUDENTST’, ’GED’ or ’SKEWT’. If omitted or blank, the default is ’NORMAL’. Specifying ’STUDENTST’ or ’GED’
will result in one extra output (ν ). Specifying ’SKEWT’ will result in 2, ν (first additional output) and λ (second
additional output).
• OPTIONS: A valid fminunc options structure. The defaults are listed in the comments. This option is
useful for preventing output from being displayed if calling the routine many times.
The full set of outputs is
[PARAMETERS,LL,HT,VCVROBUST,VCV,SCORES,DIAGNOSTICS] = tarch(inputs)
where
• VCV: The maximum likelihood covariance matrix (inverse Hessian) of the estimated parameters.
• SCORES: A T by number of parameters matrix of scores of the parameters. Used in some diagnostic
tests.
• DIAGNOSTICS: A structure that contains information about the status of the optimizer. Useful for
checking if there are convergence problems.
This function has a number of behind the scenes choices that have been made based on my experience.
These include:
• Parameter restrictions: The estimation routine used, fminunc, is unconstrained but this is deceptive.
The parameters are constrained to satisfy:
– αp > 0, p = 1, 2, . . . P
– αp + γo > 0, p = 1, 2, . . . P, o = 1, 2, . . . O
– βq > 0, q = 1, 2, . . . Q
– $\sum_{p=1}^{P}\alpha_p + 0.5\sum_{o=1}^{O}\gamma_o + \sum_{q=1}^{Q}\beta_q < 1$
– ν > 2.1 for a Student’s T or Skew T
– ν > 1.05 for a GED
Some of these are necessary but the βq > 0 is not when Q > 1. This may lead to issues in estimating
models with Q > 1 and the function will return constrained QML estimates.
• Starting Values: The starting values are computed using a grid of reasonable values (experience
driven). The log-likelihood is evaluated on this grid and the best fit is used to start. If the optimizer
fails to converge, other starting values will be tried to see if a convergent LL can be found. This said,
tarch will never return parameter estimates from anything but the largest LL.
• Back Casts: Back casts are computed using a local algorithm using $T^{1/2}$ data points,
$backcast = \sum_{i=1}^{\lfloor T^{1/2}\rfloor} w_i |r_i|^\delta$, where δ is 1 or 2 depending on the model specification.
• Covariance Estimates: The covariance estimates are produced using 2-sided numerical scores and the
Hessian.
Examples
% ARCH(5) estimation
parameters = tarch(y,5,0,0);
% GARCH(1,1) estimation
parameters = tarch(y,1,0,1);
% GJR-GARCH(1,1,1) estimation
parameters = tarch(y,1,1,1);
% ZARCH(1,1,1) estimation
parameters = tarch(y,1,1,1,[],1);
Comments
USAGE:
[PARAMETERS] = tarch(EPSILON,P,O,Q)
[PARAMETERS,LL,HT,VCVROBUST,VCV,SCORES,DIAGNOSTICS] =
tarch(EPSILON,P,O,Q,ERROR_TYPE,TARCH_TYPE,STARTINGVALS,OPTIONS)
INPUTS:
EPSILON - A column of mean zero data
P - Positive, scalar integer representing the number of symmetric innovations
O - Non-negative scalar integer representing the number of asymmetric innovations (0
for symmetric processes)
Q - Non-negative, scalar integer representing the number of lags of conditional
variance (0 for ARCH)
ERROR_TYPE - [OPTIONAL] The error distribution used, valid types are:
’NORMAL’ - Gaussian Innovations [DEFAULT]
’STUDENTST’ - T distributed errors
’GED’ - Generalized Error Distribution
’SKEWT’ - Skewed T distribution
TARCH_TYPE - [OPTIONAL] The type of variance process, either
1 - Model evolves in absolute values
2 - Model evolves in squares [DEFAULT]
STARTINGVALS - [OPTIONAL] A (1+p+o+q), plus 1 for STUDENTST OR GED (nu), plus 2 for SKEWT
(nu,lambda), vector of starting values.
[omega alpha(1) ... alpha(p) gamma(1) ... gamma(o) beta(1) ... beta(q) [nu lambda]]’.
OPTIONS - [OPTIONAL] A user provided options structure. Default options are below.
OUTPUTS:
PARAMETERS - A 1+p+o+q column vector of parameters with
[omega alpha(1) ... alpha(p) gamma(1) ... gamma(o) beta(1) ... beta(q) [nu lambda]]’.
LL - The log likelihood at the optimum
HT - The estimated conditional variances
VCVROBUST - Robust parameter covariance matrix
VCV - Non-robust standard errors (inverse Hessian)
SCORES - Matrix of scores (# of params by t)
DIAGNOSTICS - Structure of optimization output information. Useful to check for convergence problems
COMMENTS:
The following (generally wrong) constraints are used:
(1) omega > 0
(2) alpha(i) >= 0 for i = 1,2,...,p
(3) gamma(i) + alpha(i) > 0 for i=1,...,o
(4) beta(i) >= 0 for i = 1,2,...,q
(5) sum(alpha(i) + 0.5*gamma(j) + beta(k)) < 1 for i = 1,2,...p and
j = 1,2,...o, k=1,2,...,q
(6) nu>2 for Student's T and nu>1 for GED
(7) -.99<lambda<.99 for Skewed T
g(h(t)) = omega
+ alpha(1)*f(r_{t-1}) + ... + alpha(p)*f(r_{t-p})+...
+ gamma(1)*I(t-1)*f(r_{t-1}) +...+ gamma(o)*I(t-o)*f(r_{t-o})+...
beta(1)*g(h(t-1)) +...+ beta(q)*g(h(t-q))
where f(x) = abs(x) and g(x) = sqrt(x) if tarch_type=1, and f(x) = x^2 and g(x) = x if tarch_type=2
Default Options
options = optimset(’fminunc’);
options = optimset(options , ’TolFun’ , 1e-005);
options = optimset(options , ’TolX’ , 1e-005);
options = optimset(options , ’Display’ , ’iter’);
options = optimset(options , ’Diagnostics’ , ’on’);
options = optimset(options , ’LargeScale’ , ’off’);
options = optimset(options , ’MaxFunEvals’ , ’400*numberOfVariables’);
You should use the MEX files (or compile if not using Win64 Matlab) as they provide speed ups of
approx 100 times relative to the m file.
EGARCH estimation is identical to the estimation of GJR-GARCH models except that it uses the function egarch
and no parameter constraints are imposed. The EGARCH model estimated is
$$\ln\sigma_t^2 = \omega + \sum_{p=1}^{P}\alpha_p\left(|e_{t-p}| - \mathrm{E}|e_{t-p}|\right) + \sum_{o=1}^{O}\gamma_o e_{t-o} + \sum_{q=1}^{Q}\beta_q\ln\sigma_{t-q}^2$$
where $e_t = \varepsilon_t/\sigma_t$ are standardized residuals. The basic form of egarch is
parameters = egarch(resid,p,o,q)
where the inputs and outputs are identical to tarch. The extended inputs
parameters = egarch(resid,p,o,q,error_type,startingvals,options)
are also identical with the exclusion of tarch_type which is not available.
Examples
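A minimal sketch:
% EGARCH(1,1,1) estimation
parameters = egarch(y,1,1,1);
% EGARCH(1,1,1) estimation with Skew t errors
parameters = egarch(y,1,1,1,'SKEWT');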
Comments
USAGE:
[PARAMETERS] = egarch(DATA,P,O,Q)
[PARAMETERS,LL,HT,VCVROBUST,VCV,SCORES,DIAGNOSTICS]
= egarch(DATA,P,O,Q,ERROR_TYPE,STARTINGVALS,OPTIONS)
INPUTS:
DATA - A column of mean zero data
P - Positive, scalar integer representing the number of symmetric innovations
OUTPUTS:
PARAMETERS - A 1+p+o+q column vector of parameters with
[omega alpha(1)...alpha(p) gamma(1)...gamma(o) beta(1)...beta(q) [nu lambda]]’.
LL - The log likelihood at the optimum
HT - The estimated conditional variances
VCVROBUST - Robust parameter covariance matrix
VCV - Non-robust standard errors (inverse Hessian)
SCORES - Matrix of scores (# of params by t)
DIAGNOSTICS - Structure of optimization output information. Useful to check for convergence
problems
COMMENTS:
(1) Roots of the characteristic polynomial of beta are restricted to be less than 1
ln(h(t)) = omega
+ alpha(1)*(abs(e_{t-1})-C) + ... + alpha(p)*(abs(e_{t-p})-C)+...
+ gamma(1)*e_{t-1} +...+ gamma(o)*e_{t-o} +...
beta(1)*ln(h(t-1)) +...+ beta(q)*ln(h(t-q))
Default Options
options = optimset(’fmincon’);
options = optimset(options , ’TolFun’ , 1e-005);
options = optimset(options , ’TolX’ , 1e-005);
options = optimset(options , ’Display’ , ’iter’);
options = optimset(options , ’LargeScale’ , ’off’);
options = optimset(options , ’MaxFunEvals’ , 200*(2+p+q));
options = optimset(options , ’MaxSQPIter’ , 500);
options = optimset(options , ’Algorithm’ ,’active-set’);
You should use the MEX files (or compile if not using Win64 Matlab) as they provide speed ups of
approx 100 times relative to the m file
APARCH estimation, like EGARCH estimation, is identical to the estimation of GJR-GARCH models except
that it uses the function aparch, one extra parameter is returned and there is a user option to provide a fixed
value of δ (in which case the number of parameters returned is the same as tarch). The APARCH model
estimated is
$$\sigma_t^\delta = \omega + \sum_{j=1}^{\max(P,O)}\alpha_j\left(|\varepsilon_{t-j}| + \gamma_j\varepsilon_{t-j}\right)^\delta + \sum_{q=1}^{Q}\beta_q\sigma_{t-q}^\delta$$
where the inputs are nearly identical to tarch and the output parameters are ordered
$$[\omega\ \alpha_1 \ldots \alpha_p\ \gamma_1 \ldots \gamma_o\ \beta_1 \ldots \beta_q\ \delta]'$$
If the distribution is specified as something other than a normal, the hyper-parameters, ν and λ,
are appended to parameters:
$$[\omega\ \alpha_1 \ldots \alpha_p\ \gamma_1 \ldots \gamma_o\ \beta_1 \ldots \beta_q\ \delta\ \nu]'$$
or
$$[\omega\ \alpha_1 \ldots \alpha_p\ \gamma_1 \ldots \gamma_o\ \beta_1 \ldots \beta_q\ \delta\ \nu\ \lambda]'$$
The extended inputs are
[outputs] = aparch(DATA,P,O,Q,ERRORTYPE,USERDELTA,STARTINGVALS,OPTIONS)
where USERDELTA is an input that lets the model be estimated for a fixed value of δ. This may be useful
for testing against TARCH and GJR-GARCH. TARCH_TYPE is not applicable and hence not available. The
extended outputs,
[parameters,ll,ht,vcvrobust,vcv,scores,diagnostics] = aparch(resid,p,o,q)
are identical.
Examples
% APARCH(1,1,1) estimation with delta fixed at 1.5
parameters = aparch(y,1,1,1,[],1.5);
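A sketch of the basic call in which δ is estimated along with the other parameters:
% APARCH(1,1,1) estimation
parameters = aparch(y,1,1,1);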
Comments
USAGE:
[PARAMETERS] = aparch(DATA,P,O,Q)
[PARAMETERS,LL,HT,VCVROBUST,VCV,SCORES,DIAGNOSTICS]
= aparch(DATA,P,O,Q,ERRORTYPE,USERDELTA,STARTINGVALS,OPTIONS)
INPUTS:
DATA - A column of mean zero data
P - Positive, scalar integer representing the number of symmetric innovations
O - Non-negative scalar integer representing the number of asymmetric innovations (0
for symmetric processes)
Q - Non-negative, scalar integer representing the number of lags of conditional
variance (0 for ARCH)
ERRORTYPE - [OPTIONAL] The error distribution used, valid types are:
’NORMAL’ - Gaussian Innovations [DEFAULT]
’STUDENTST’ - T distributed errors
’GED’ - Generalized Error Distribution
’SKEWT’ - Skewed T distribution
USERDELTA - [OPTIONAL] A scalar value between 0.3 and 4 to use for delta in the estimation.
When the user provides a fixed value for delta, the vector of PARAMETERS has
one less element. This is useful for testing an unrestricted APARCH against
TARCH or GJR-GARCH alternatives
STARTINGVALS - [OPTIONAL] A (1+p+o+q+1), plus 1 for STUDENTST OR GED (nu), plus 2 for SKEWT
(nu,lambda), vector of starting values.
[omega alpha(1)...alpha(p) gamma(1)...gamma(o) beta(1)...beta(q) delta [nu lambda]]’.
OPTIONS - [OPTIONAL] A user provided options structure. Default options are below.
OUTPUTS:
PARAMETERS - A 1+p+o+q+1 (+1 or 2) column vector of parameters with
[omega alpha(1)...alpha(p) gamma(1)...gamma(o) beta(1)...beta(q) delta [nu lambda]]’.
LL - The log likelihood at the optimum
HT - The estimated conditional variances
VCVROBUST - Robust parameter covariance matrix
VCV - Non-robust standard errors (inverse Hessian)
SCORES - Matrix of scores (# of params by t)
DIAGNOSTICS - Structure of optimization output information. Useful to check for convergence
problems
COMMENTS:
The following (generally wrong) constraints are used:
h(t)^(delta/2) = omega
+ alpha(1)*(abs(r(t-1))+gamma(1)*r(t-1))^delta + ...
alpha(p)*(abs(r(t-p))+gamma(p)*r(t-p))^delta +
beta(1)*h(t-1)^(delta/2) +...+ beta(q)*h(t-q)^(delta/2)
Default Options
options = optimset(’fmincon’);
options = optimset(options , ’TolFun’ , 1e-005);
options = optimset(options , ’TolX’ , 1e-005);
options = optimset(options , ’Display’ , ’iter’);
options = optimset(options , ’Diagnostics’ , ’on’);
options = optimset(options , ’LargeScale’ , ’off’);
options = optimset(options , ’MaxFunEvals’ , ’400*numberOfVariables’);
You should use the MEX files (or compile if not using Win64 Matlab) as they provide speed ups of
approx 100 times relative to the m file
agarch estimates both the AGARCH model of Engle (1990), in which the conditional variance evolves according to
$$h_t = \omega + \sum_{p=1}^{P}\alpha_p\left(r_{t-p} - \gamma\right)^2 + \sum_{q=1}^{Q}\beta_q h_{t-q},$$
and the NAGARCH (Nonlinear Asymmetric GARCH) model of Engle and Ng (1993), in which
$$h_t = \omega + \sum_{p=1}^{P}\alpha_p\left(r_{t-p} - \gamma\sqrt{h_{t-p}}\right)^2 + \sum_{q=1}^{Q}\beta_q h_{t-q}.$$
Examples
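A minimal sketch, assuming y is a vector of mean zero residuals:
% AGARCH(1,1) estimation
parameters = agarch(y,1,1);
% NAGARCH(1,1) estimation
parameters = agarch(y,1,1,'NAGARCH');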
Required Inputs
[outputs] = agarch(EPSILON,P,Q)
Optional Inputs
[outputs] = agarch(EPSILON,P,Q,MODEL_TYPE,ERROR_TYPE,STARTINGVALS,OPTIONS)
• ERROR_TYPE: The variable specifies the error distribution as a string and can take the values ’NORMAL’,
’STUDENTST’, ’GED’ or ’SKEWT’. If omitted or blank, the default is ’NORMAL’. Specifying ’STUDENTST’ or ’GED’
will result in one extra output (ν ). Specifying ’SKEWT’ will result in 2, ν (first additional output) and λ (second
additional output).
• STARTINGVALS: 2 + P + Q by 1 vector of starting values. If not provided, a grid search is performed using
common values.
Outputs
[PARAMETERS,LL,HT,VCVROBUST,VCV,SCORES,DIAGNOSTICS] = agarch(inputs)
• VCV: The maximum likelihood covariance matrix (inverse Hessian) of the estimated parameters.
• SCORES: A T by number of parameters matrix of scores of the parameters. Used in some diagnostic
tests.
• DIAGNOSTICS: A structure that contains information about the status of the optimizer. Useful for
checking if there are convergence problems.
Comments
USAGE:
[PARAMETERS] = agarch(EPSILON,P,Q)
[PARAMETERS,LL,HT,VCVROBUST,VCV,SCORES,DIAGNOSTICS]
= agarch(EPSILON,P,Q,MODEL_TYPE,ERROR_TYPE,STARTINGVALS,OPTIONS)
INPUTS:
EPSILON - A column of mean zero data
P - Positive, scalar integer representing the number of symmetric innovations
Q - Non-negative, scalar integer representing the number of lags of conditional
variance (0 for ARCH-type model)
MODEL_TYPE - [OPTIONAL] The type of variance process, either
’AGARCH’ - Asymmetric GARCH, Engle (1990) [DEFAULT]
’NAGARCH’ - Nonlinear Asymmetric GARCH, Engle & Ng (1993)
ERROR_TYPE - [OPTIONAL] The error distribution used, valid types are:
’NORMAL’ - Gaussian Innovations [DEFAULT]
’STUDENTST’ - T distributed errors
’GED’ - Generalized Error Distribution
’SKEWT’ - Skewed T distribution
STARTINGVALS - [OPTIONAL] A (2+p+q), plus 1 for STUDENTST OR GED (nu), plus 2 for SKEWT
(nu,lambda), vector of starting values.
[omega alpha(1) ... alpha(p) gamma beta(1) ... beta(q) [nu lambda]]’.
OPTIONS - [OPTIONAL] A user provided options structure. Default options are below.
OUTPUTS:
PARAMETERS - A 2+p+q column vector of parameters with
[omega alpha(1) ... alpha(p) gamma beta(1) ... beta(q) [nu lambda]]’.
h(t) = omega
+ alpha(1)*(r_{t-1}-gamma)^2 + ... + alpha(p)*(r_{t-p}-gamma)^2
+ beta(1)*h(t-1) +...+ beta(q)*h(t-q)
h(t) = omega
+ alpha(1)*(r_{t-1}-gamma*sqrt(h(t-1)))^2 + ... + alpha(p)*(r_{t-p}-gamma*sqrt(h(t-p)))^2
+ beta(1)*h(t-1) +...+ beta(q)*h(t-q)
Default Options
options = optimset(’fminunc’);
options = optimset(options , ’TolFun’ , 1e-005);
options = optimset(options , ’TolX’ , 1e-005);
options = optimset(options , ’Display’ , ’iter’);
options = optimset(options , ’Diagnostics’ , ’on’);
options = optimset(options , ’LargeScale’ , ’off’);
options = optimset(options , ’MaxFunEvals’ , ’200*numberOfVariables’);
You should use the MEX files (or compile if not using Win64 Matlab) as they provide speed ups of
approx 10 times relative to the m file
IGARCH and IAVARCH estimation, both with and without a constant. IGARCH is the integrated version of a
GARCH model, where the coefficients on the dynamic parameters are forced to sum to 1. IAVARCH is the
equivalent for AVARCH.
Examples
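A minimal sketch (the empty matrices skip the intermediate optional inputs, as in the other functions):
% IGARCH(1,1) estimation with a constant (the default)
parameters = igarch(y,1,1);
% IGARCH(1,1) estimation without a constant
parameters = igarch(y,1,1,[],[],0);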
Required Inputs
[outputs] = igarch(EPSILON,P,Q)
Optional Inputs
[outputs] = igarch(EPSILON,P,Q,ERRORTYPE,IGARCHTYPE,CONSTANT,STARTINGVALS,OPTIONS)
• ERROR_TYPE: The variable specifies the error distribution as a string and can take the values ’NORMAL’,
’STUDENTST’, ’GED’ or ’SKEWT’. If omitted or blank, the default is ’NORMAL’. Specifying ’STUDENTST’ or ’GED’
will result in one extra output (ν ). Specifying ’SKEWT’ will result in 2, ν (first additional output) and λ (second
additional output).
• CONSTANT: Logical value indicating whether a constant should be included in the model. The default
is 1.
• OPTIONS: A valid fminunc options structure. The defaults are listed in the comments. This option is
useful for preventing output from being displayed if calling the routine many times.
Outputs
[PARAMETERS,LL,HT,VCVROBUST,VCV,SCORES,DIAGNOSTICS] = igarch(inputs)
• VCV: The maximum likelihood covariance matrix (inverse Hessian) of the estimated parameters.
• SCORES: A T by number of parameters matrix of scores of the parameters. Used in some diagnostic
tests.
• DIAGNOSTICS: A structure that contains information about the status of the optimizer. Useful for
checking if there are convergence problems.
Comments
USAGE:
[PARAMETERS] = igarch(EPSILON,P,Q)
[PARAMETERS,LL,HT,VCVROBUST,VCV,SCORES,DIAGNOSTICS]
= igarch(EPSILON,P,Q,ERRORTYPE,IGARCHTYPE,CONSTANT,STARTINGVALS,OPTIONS)
INPUTS:
EPSILON - A column of mean zero data
P - Positive, scalar integer representing the number of innovations
Q - Positive, scalar integer representing the number of lags of conditional variance
ERRORTYPE - [OPTIONAL] The error distribution used, valid types are:
’NORMAL’ - Gaussian Innovations [DEFAULT]
’STUDENTST’ - T distributed errors
’GED’ - Generalized Error Distribution
’SKEWT’ - Skewed T distribution
IGARCHTYPE - [OPTIONAL] The type of variance process, either
1 - Model evolves in absolute values
2 - Model evolves in squares [DEFAULT]
OUTPUTS:
PARAMETERS - A CONSTANT+p+q column vector of parameters with
[omega alpha(1) ... alpha(p) beta(1) ... beta(q-1) [nu lambda]]’.
Note that the final beta is redundant and so excluded
LL - The log likelihood at the optimum
HT - The estimated conditional variances
VCVROBUST - Robust parameter covariance matrix
VCV - Non-robust standard errors (inverse Hessian)
SCORES - Matrix of scores (# of params by t)
DIAGNOSTICS - Structure of optimization output information. Useful to check for convergence problems
COMMENTS:
The following (generally wrong) constraints are used:
(1) omega > 0 if CONSTANT
(2) alpha(i) >= 0 for i = 1,2,...,p
(3) beta(i) >= 0 for i = 1,2,...,q
(4) sum(alpha(i) + beta(j)) = 1 for i = 1,2,...p and j = 1,2,...q
(5) nu>2 for Student's T and nu>1 for GED
(6) -.99<lambda<.99 for Skewed T
g(h(t)) = omega
+ alpha(1)*f(r_{t-1}) + ... + alpha(p)*f(r_{t-p})+...
beta(1)*g(h(t-1)) +...+ beta(q)*g(h(t-q))
Default Options
options = optimset(’fminunc’);
options = optimset(options , ’TolFun’ , 1e-005);
options = optimset(options , ’TolX’ , 1e-005);
options = optimset(options , ’Display’ , ’iter’);
options = optimset(options , ’Diagnostics’ , ’on’);
options = optimset(options , ’LargeScale’ , ’off’);
options = optimset(options , ’MaxFunEvals’ , ’400*numberOfVariables’);
You should use the MEX file for igarch_core (or compile if not using Win64 Matlab)
as they provide speed ups of approx 100 times relative to the m file
FIGARCH(p, d , q ) estimation for p ∈ {0, 1} and q ∈ {0, 1}. FIGARCH is a fractionally integrated version of
GARCH, which is usually represented using its ARCH(∞) representation
$$h_t = \bar\omega + \sum_{i=1}^{\infty}\lambda_i\varepsilon_{t-i}^2$$
where
$$\delta_1 = d, \qquad \lambda_1 = \phi - \beta + d$$
$$\delta_i = \frac{i-1-d}{i}\,\delta_{i-1}, \qquad i = 2,\ldots$$
$$\lambda_i = \beta\lambda_{i-1} + \delta_i - \phi\delta_{i-1}, \qquad i = 2,\ldots$$
Examples
% FIGARCH(0,d,0)
parameters = figarch(y,0,0)
% FIGARCH(1,d,0)
parameters = figarch(y,1,0)
% FIGARCH(0,d,1)
parameters = figarch(y,0,1)
% FIGARCH(1,d,1)
parameters = figarch(y,1,1)
% FIGARCH(1,d,1) with Student’s t Errors
parameters = figarch(y,1,1,’STUDENTST’)
Required Inputs
[outputs] = figarch(EPSILON,P,Q)
Optional Inputs
[outputs] = figarch(EPSILON,P,Q,ERRORTYPE,TRUNCLAG,STARTINGVALS,OPTIONS)
• ERROR_TYPE: The variable specifies the error distribution as a string and can take the values ’NORMAL’,
’STUDENTST’, ’GED’ or ’SKEWT’. If omitted or blank, the default is ’NORMAL’. Specifying ’STUDENTST’ or ’GED’
will result in one extra output (ν ). Specifying ’SKEWT’ will result in 2, ν (first additional output) and λ (second
additional output).
• OPTIONS: A valid fminunc options structure. The defaults are listed in the comments. This option is
useful for preventing output from being displayed if calling the routine many times.
Outputs
[PARAMETERS,LL,HT,VCVROBUST,VCV,SCORES,DIAGNOSTICS] = figarch(inputs)
• VCV: The maximum likelihood covariance matrix (inverse Hessian) of the estimated parameters.
• SCORES: A T by number of parameters matrix of scores of the parameters. Used in some diagnostic
tests.
• DIAGNOSTICS: A structure that contains information about the status of the optimizer. Useful for
checking if there are convergence problems.
Comments
FIGARCH(Q,D,P) parameter estimation for P={0,1} and Q={0,1} with different error distributions:
Normal, Students-T, Generalized Error Distribution, Skewed T
USAGE:
[PARAMETERS] = figarch(EPSILON,P,Q)
[PARAMETERS,LL,HT,VCVROBUST,VCV,SCORES,DIAGNOSTICS]
= figarch(EPSILON,P,Q,ERRORTYPE,TRUNCLAG,STARTINGVALS,OPTIONS)
INPUTS:
EPSILON - T by 1 Column vector of mean zero residuals
P - 0 or 1 indicating whether the autoregressive term is present in the model (phi)
Q - 0 or 1 indicating whether the moving average term is present in the model (beta)
OUTPUTS:
PARAMETERS - A 2+p+q column vector of parameters with [omega phi d beta [nu lambda]]’.
LL - The log likelihood at the optimum
HT - The estimated conditional variances
VCVROBUST - Robust parameter covariance matrix
VCV - Non-robust standard errors (inverse Hessian)
SCORES - Matrix of scores (# of params by t)
DIAGNOSTICS - Structure of optimization output information. Useful to check for convergence
problems .
COMMENTS:
The following (generally wrong) constraints are used:
(1) omega > 0
(2) 0 <= d <= 1
(3) 0 <= phi <= (1-d)/2
(4) 0 <= beta <= d + phi
(5) nu>2 for Student's T and nu>1 for GED
(6) -.99<lambda<.99 for Skewed T
where lambda(i) is a function of the fractional differencing parameter, phi and beta.
Default Options
options = optimset(’fminunc’);
options = optimset(options , ’TolFun’ , 1e-005);
options = optimset(options , ’TolX’ , 1e-005);
options = optimset(options , ’Display’ , ’iter’);
options = optimset(options , ’Diagnostics’ , ’on’);
options = optimset(options , ’LargeScale’ , ’off’);
options = optimset(options , ’MaxFunEvals’ , ’400*numberOfVariables’);
Density Estimation
Kernel density estimation is a useful tool to visualize the distribution of returns without having to
make strong parametric assumptions. Let $\{y_t\}_{t=1}^{T}$ be a set of i.i.d. data. The kernel density around a point x
is defined
$$\hat{f}(x) = \frac{1}{Th}\sum_{t=1}^{T} K\left(\frac{y_t - x}{h}\right)$$
where h is the bandwidth, a parameter that controls the width of the window. pltdens supports a number
of kernels:
• Gaussian
$$K(z) = \frac{1}{\sqrt{2\pi}}\exp(-z^2/2)$$
• Epanechnikov
$$K(z) = \begin{cases} \frac{3}{4}(1-z^2) & -1 \le z \le 1 \\ 0 & \text{otherwise} \end{cases}$$
• Quartic (Biweight)
$$K(z) = \begin{cases} \frac{15}{16}(1-z^2)^2 & -1 \le z \le 1 \\ 0 & \text{otherwise} \end{cases}$$
• Triweight
$$K(z) = \begin{cases} \frac{35}{32}(1-z^2)^3 & -1 \le z \le 1 \\ 0 & \text{otherwise} \end{cases}$$
For i.i.d. data Silverman's bandwidth, $1.06\hat{\sigma}T^{-\frac{1}{5}}$, has good properties and is used by default. The func-
tion can be used two ways. The first is to produce the kernel density plot and is simply
pltdens(y)
The second computes the weights but does not produce a plot
[h,f,y] = pltdens(y);
Data on the S&P 500 were used to produce 3 kernel densities, one with Silverman’s BW, on over-smoothed
(h large) and one under-smoothed (h small). The results of this code is contained in figure 7.1.
[h,f,y] = pltdens(SP500);
disp(h)
[hover,fover,yover] = pltdens(SP500,.01);
[hunder,funder,yunder] = pltdens(SP500,.0001);
fig = figure(1);
clf
set(fig,’PaperOrientation’,’landscape’,’PaperSize’,[11 8.5],...
’InvertHardCopy’,’off’,’PaperPositionMode’,’auto’,...
’Position’,[117 158 957 764],’Color’,[1 1 1]);
hfig = plot(y,f,yover,fover,yunder,funder);
axis tight
for i=1:3;set(hfig(i),’LineWidth’,2);end
legend(’Silvermann’,’Over smoothed’,’Under smoothed’)
set(gca,’FontSize’,14)
h =
.0027
Examples
Comments
Figure 7.1: A plot with kernel densities using Silverman’s BW and over- and under- smoothed.
Examples
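A minimal sketch using simulated data:
% Jarque-Bera test on simulated normal data
x = randn(1000,1);
[statistic, pval] = jarquebera(x);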
Required Inputs
[outputs] = jarquebera(DATA)
Optional Inputs
[outputs] = jarquebera(DATA,K,ALPHA)
Outputs
[STATISTIC,PVAL,H] = jarquebera(inputs)
Comments
Computes the Jarque-Bera test for normality using the skewness and kurtosis to determine if a
distribution is normal.
USAGE:
[STATISTIC] = jarquebera(DATA)
[STATISTIC,PVAL,H] = jarquebera(DATA,K,ALPHA)
INPUTS:
DATA - A set of data to be tested for deviations from normality
K - [OPTIONAL] The number of dependent variables if any used in constructing the errors
(if omitted K=2)
ALPHA - [OPTIONAL] The level of the test used for the null of normality. Default is .05
OUTPUTS:
STATISTIC - A scalar representing the statistic
PVAL - A scalar pval of the null
H - A hypothesis dummy (0 for fail to reject the null of normality, 1 otherwise)
COMMENTS:
The data entered can be mean 0 or not. In either case the sample mean is subtracted and the
data are standardized by the sample standard deviation before computing the statistic .
EXAMPLES:
J-B test on normal data
x = randn(100,1);
[statistic, pval] = jarquebera(x);
J-B test on regression errors where there were 4 regressors (4 mean parameters + 1 variance)
x = randn(100,1);
[statistic, pval] = jarquebera(x, 5)
Examples
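A minimal sketch, assuming x holds the data to be tested:
% Test that x is U(0,1) (i.e. already a probability integral transform)
stat = kolmogorov(x);
% Test that x is standard normal
[stat, pval] = kolmogorov(x, [], 'normcdf');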
Required Inputs
[outputs] = kolmogorov(X)
• X: Data to be tested. X should have been transformed such that it is uniform (under the hypothesized
distribution).
Optional Inputs
[outputs] = kolmogorov(X,ALPHA,DIST,VARARGIN)
• DIST: A string or function handle containing the name of a CDF to use to transform X to be uniform
(under the hypothesized distribution).
Outputs
[STAT,PVAL,H] = kolmogorov(inputs)
Comments
Performs a Kolmogorov-Smirnov test that the data are from a specified distribution
USAGE:
[STAT,PVAL,H] = kolmogorov(X,ALPHA,DIST,VARARGIN)
INPUTS:
X - A set of random variable to be tested for distributional correctness
ALPHA - [OPTIONAL] The size for the test or use for computing H. 0.05 if not entered or
empty.
DIST - [OPTIONAL] A char string of the name of the CDF, i.e. ’normcdf’ for the normal,
’stdtcdf’ for standardized Student’s T, etc. If not provided or empty, data are
assumed to have a uniform distribution (i.e. that data have already been fed
through a probability integral transform)
VARARGIN - [OPTIONAL] Arguments passed to the CDF, such as the mean and variance for a normal
or a d.f. for T. The VARARGIN should be such that DIST(X,VARARGIN) is a valid
function with the correct inputs.
OUTPUTS:
STAT - The KS statistic
PVAL - The asymptotic probability of significance
H - 1 for reject the null that the distribution is correct, using the size provided (or
.05 if not), 0 otherwise
EXAMPLES:
Test data for uniformity
stat = kolmogorov(x);
Test standard normal data
[stat,pval] = kolmogorov(x,[],’normcdf’);
Test normal mean 1, standard deviation 2 data
[stat,pval] = kolmogorov(x,[],’normcdf’,1,2);
COMMENTS:
Examples
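A minimal sketch, assuming u contains probability integral transforms from a time-series model:
% Berkowitz test including the AR(1) coefficient (the 'TS' default)
[stat, pval] = berkowitz(u);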
Required Inputs
[outputs] = berkowitz(X)
• X: Data to be tested. X should have been transformed such that it is uniform (under the hypothesized
distribution).
Optional Inputs
[outputs] = berkowitz(X,TYPE,ALPHA,DIST,VARARGIN)
• TYPE: String either ’TS’ or ’CS’. Determines whether the test statistics looks at the AR(1) coefficient
(’TS’ does, ’CS’ does not). Default is ’TS’.
• DIST: A string or function handle containing the name of a CDF to use to transform X to be uniform
(under the hypothesized distribution).
Outputs
[STAT,PVAL,H] = berkowitz(inputs)
• PVAL: P-value evaluated using the asymptotic $\chi^2_q$ distribution where q = 2 or q = 3, depending on
TYPE.
Comments
USAGE:
[STAT,PVAL,H] = berkowitz(X,TYPE,ALPHA,DIST,VARARGIN)
INPUTS:
X - A set of random variable to be tested for distributional correctness
TYPE - [OPTIONAL] A char string, either ’CS’ if the data are cross-sectional or ’TS’ for
time series. The TS checks for autocorrelation in the prob integral transforms
while the CS does not. ’TS’ is the default value.
ALPHA - [OPTIONAL] The size for the test or use for computing H. 0.05 if not entered or
empty.
DIST - [OPTIONAL] A char string of the name of the CDF of X, i.e. ’normcdf’ for the normal,
’stdtcdf’ for standardized Student’s T, etc. If not provided or empty, data are
assumed to have a uniform distribution (i.e. that data have already been fed
through a probability integral transform)
VARARGIN - [OPTIONAL] Arguments passed to the CDF, such as the mean and variance for a normal
or a d.f. for T. The VARARGIN should be such that DIST(X,VARARGIN) is a valid
function with the correct inputs.
OUTPUTS:
STAT - The Berkowitz statistic computed as a likelihood ratio of normals
PVAL - The asymptotic probability of significance
H - 1 for reject the null that the distribution is correct using the size provided (or
.05 if not), 0 otherwise
EXAMPLES:
Test uniform data from a TS model
stat = berkowitz(x);
Test standard normal data from a TS model
[stat,pval] = berkowitz(x,’TS’,[],’normcdf’);
Test normal mean 1, standard deviation 2 data from a TS model
[stat,pval] = berkowitz(x,’TS’,[],’normcdf’,1,2);
COMMENTS:
8.1 Bootstraps
Examples
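A minimal sketch using simulated data:
% 1000 block bootstraps of a series using a block length of 20
x = randn(250,1);
[bsdata, indices] = block_bootstrap(x, 1000, 20);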
Required Inputs
[BSDATA, INDICES]=block_bootstrap(DATA,B,W)
Outputs
[BSDATA, INDICES]=block_bootstrap(inputs)
Comments
USAGE:
[BSDATA, INDICES]=block_bootstrap(DATA,B,W)
INPUTS:
DATA - T by 1 vector of data to be bootstrapped
B - Number of bootstraps
W - Block length
OUTPUTS:
BSDATA - T by B matrix of bootstrapped data
INDICES - T by B matrix of locations of the original BSDATA=DATA(indexes);
COMMENTS:
To generate bootstrap sequences for other uses, such as bootstrapping vector processes,
simply set DATA to (1:N)'
Examples
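A minimal sketch using simulated data:
% 1000 stationary bootstraps with an average block length of 20
x = randn(250,1);
[bsdata, indices] = stationary_bootstrap(x, 1000, 20);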
Required Inputs
[BSDATA, INDICES]=stationary_bootstrap(DATA,B,W)
• W: Positive integer containing the average window size. The probability of ending the block is $p = w^{-1}$.
Outputs
[BSDATA, INDICES]=stationary_bootstrap(inputs)
Comments
USAGE:
[BSDATA, INDICES] = stationary_bootstrap(DATA,B,W)
INPUTS:
DATA - T by 1 vector of data to be bootstrapped
B - Number of bootstraps
W - Average block length. P, the probability of starting a new block is defined P=1/W
OUTPUTS:
BSDATA - T by B matrix of bootstrapped data
INDICES - T by B matrix of indices into the original data so that BSDATA=DATA(INDICES)
COMMENTS:
To generate bootstrap sequences for other uses, such as bootstrapping vector processes, simply
set DATA to (1:N)'
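The index trick described in the COMMENTS can be used to resample a vector process. A sketch under
the assumption of a simulated T by K data matrix; the same approach works with block_bootstrap.
% Resample a T by K matrix by bootstrapping the time indices 1,...,T
T = 1000; K = 5;
X = randn(T,K); % hypothetical vector time series
[bsindex, indices] = stationary_bootstrap((1:T)', 500, 20);
Xboot = X(indices(:,1),:); % first of the 500 bootstrap samples of the vector process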
8.2.1 Reality Check and Test for Superior Predictive Accuracy: bsds
Implementation of White's (2000) Reality Check and Hansen's (2005) Test for Superior Predictive
Accuracy (SPA). BSDS refers to "bootstrap data snooper".
Examples
% Standard Reality Check with 1000 bootstrap replications and a window size of 12
bench = randn(1000,1).^2;
models = randn(1000,100).^2;
[c,realityCheckPval] = bsds(bench, models, 1000, 12)
% Standard Reality Check with 1000 bootstrap replications, a window size of 12
% and a circular block bootstrap
[c,realityCheckPval] = bsds(bench, models, 1000, 12, 'BLOCK')
% Hansen’s P-values
SPAPval = bsds(bench, models, 1000, 12)
% Both Pvals on "goods"
bench = .01 + randn(1000,1);
models = randn(1000,100);
[SPAPval,realityCheckPval] = bsds(-bench, -models, 1000, 12)
Required Inputs
[outputs] = bsds_studentized(BENCH,MODELS,B,W)
• W: Scalar integer containing the average window length (stationary bootstrap) or window length (block
bootstrap).
Optional Inputs
[outputs] = bsds_studentized(BENCH,MODELS,B,W,TYPE,BOOT)
• TYPE: String value, either 'STUDENTIZED' (default) or 'STANDARD'. 'STUDENTIZED' conducts the test
using studentized data and is generally more powerful.
• BOOT: String value, either 'STATIONARY' (default) or 'BLOCK'. Determines the type of bootstrap used.
Outputs
[C,U,L] = bsds_studentized(inputs)
• C: Hansen's consistent p-value, which adjusts the Reality Check p-value in the case of models with
high variance but low mean performance.
Comments
Calculate White's and Hansen's p-vals for out-performance using unmodified data or studentized
residuals, the latter often providing better power, particularly when the loss functions are
heteroskedastic
USAGE:
[C] = bsds_studentized(BENCH,MODELS,B,W)
[C,U,L] = bsds_studentized(BENCH,MODELS,B,W,TYPE,BOOT)
INPUTS:
BENCH - Losses from the benchmark model
MODELS - Losses from each of the models used for comparison
B - Number of Bootstrap replications
W - Desired block length
TYPE - String, either ’STANDARD’ or ’STUDENTIZED’. ’STUDENTIZED’ is the default, and
generally leads to better power.
BOOT - [OPTIONAL] ’STATIONARY’ or ’BLOCK’. Stationary is used as the default.
OUTPUTS:
C - Consistent P-val(Hansen)
U - Upper P-val(White) (Original RC P-vals)
L - Lower P-val(Hansen)
COMMENTS:
This version of the BSDS operates on quantities that should be 'bads', such as losses. The null
hypothesis is that the average performance of the benchmark is as small as the minimum average
performance across the models. The alternative is that the minimum average loss across the
models is smaller than the average performance of the benchmark.
If the quantities of interest are 'goods', such as returns, simply call bsds_studentized with
-1*BENCH and -1*MODELS
EXAMPLES:
Standard Reality Check with 1000 bootstrap replications and a window size of 12
bench = randn(1000,1).^2;
models = randn(1000,100).^2;
[c,realityCheckPval] = bsds(bench, models, 1000, 12)
Standard Reality Check with 1000 bootstrap replications, a window size of 12 and a circular
block bootstrap
[c,realityCheckPval] = bsds(bench, models, 1000, 12, 'BLOCK')
Hansen’s P-values
SPAPval = bsds(bench, models, 1000, 12)
Both Pvals on "goods"
bench = .01 + randn(1000,1);
models = randn(1000,100);
[SPAPval,realityCheckPval] = bsds(-bench, -models, 1000, 12)
Implementation of Hansen, Lunde & Nason’s (2005) Model Confidence Set (MCS).
Examples
% MCS with 5% size, 1000 bootstrap replications and an average block length of 12
losses = bsxfun(@plus,chi2rnd(5,[1000 10]),linspace(.1,1,10));
[includedR, pvalsR] = mcs(losses, .05, 1000, 12)
% MCS on "goods"
gains = bsxfun(@plus,chi2rnd(5,[1000 10]),linspace(.1,1,10));
[includedR, pvalsR] = mcs(-gains, .05, 1000, 12)
% MCS with circular block bootstrap
[includedR, pvalsR] = mcs(losses, .05, 1000, 12, 'BLOCK')
Required Inputs
[outputs] = mcs(LOSSES,ALPHA,B,W)
• W: Scalar integer containing the average window length (stationary bootstrap) or window length (block
bootstrap).
Optional Inputs
[outputs] = mcs(LOSSES,ALPHA,B,W,BOOT)
• BOOT: String value, either ’STATIONARY’ (default) or ’BLOCK’. Determines the type of bootstrap used.
Outputs
[INCLUDEDR,PVALSR,EXCLUDEDR,INCLUDEDSQ,PVALSSQ,EXCLUDEDSQ] = mcs(inputs)
• PVALSR: P-values of models using the R-type comparison. The p-values correspond to the indices in
the order [EXCLUDEDR;INCLUDEDR].
• PVALSSQ: P-values of models using the SQ-type comparison. The p-values correspond to the indices in
the order [EXCLUDEDSQ;INCLUDEDSQ].
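Since the p-values follow the orderings above, they can be lined up with model indices as in this
illustrative sketch (losses simulated as in the Examples below; the concatenation assumes the index
outputs are column vectors):
losses = bsxfun(@plus,chi2rnd(5,[1000 10]),linspace(.1,1,10));
[includedR, pvalsR, excludedR] = mcs(losses, .05, 1000, 12);
modelOrder = [excludedR; includedR]; % model indices in the order matching PVALSR
[modelOrder pvalsR] % column 1: model index, column 2: MCS p-value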
Comments
USAGE:
[INCLUDEDR] = mcs(LOSSES,ALPHA,B,W)
[INCLUDEDR,PVALSR,EXCLUDEDR,INCLUDEDSQ,PVALSSQ,EXCLUDEDSQ] = mcs(LOSSES,ALPHA,B,W,BOOT)
INPUTS:
LOSSES - T by K matrix of losses
ALPHA - The final pval to use in the MCS
B - Number of bootstrap replications
W - Desired block length
BOOT - [OPTIONAL] ’STATIONARY’ or ’BLOCK’. Stationary will be used as default.
OUTPUTS:
INCLUDEDR - Included models using R method
PVALSR - Pvals using R method
EXCLUDEDR - Excluded models using R method
INCLUDEDSQ - Included models using SQ method
PVALSSQ - Pvals using SQ method
EXCLUDEDSQ - Excluded models using SQ method
COMMENTS:
This version of the MCS operates on quantities that should be 'bad', such as losses. If the
quantities of interest are 'goods', such as returns, simply call MCS with -1*LOSSES
EXAMPLES
MCS with 5% size, 1000 bootstrap replications and an average block length of 12
losses = bsxfun(@plus,chi2rnd(5,[1000 10]),linspace(.1,1,10));
[includedR, pvalsR] = mcs(losses, .05, 1000, 12)
MCS on "goods"
gains = bsxfun(@plus,chi2rnd(5,[1000 10]),linspace(.1,1,10));
[includedR, pvalsR] = mcs(-gains, .05, 1000, 12)
MCS with circular block bootstrap
[includedR, pvalsR] = mcs(losses, .05, 1000, 12, 'BLOCK')
Helper Functions
The function x2mdate converts Excel dates to MATLAB dates, and is a work-a-like of the MathWorks-provided
function of the same name for users who do not have the Financial Toolbox.
Examples
% Convert Excel serial dates to MATLAB serial dates (input values as in the EXAMPLE below)
mldate = x2mdate([35000 40000 41000])
mldate =
728960 733960 734960
stringDate = datestr(mldate)
stringDate =
28-Oct-1995
06-Jul-2009
01-Apr-2012
Required Inputs
[outputs] = x2mdate(XLSDATE)
Optional Inputs
[outputs] = x2mdate(XLSDATE,TYPE)
Outputs
[MLDATE] = x2mdate(inputs)
• MLDATE: Vector with same size as XLSDATE containing MATLAB serial date values.
Comments
X2MDATE provides a simple method to convert between Excel dates and MATLAB dates.
USAGE:
[MLDATE] = x2mdate(XLSDATE)
[MLDATE] = x2mdate(XLSDATE, TYPE)
INPUTS:
XLSDATE - A scalar or vector of Excel dates.
TYPE - [OPTIONAL] A scalar or vector of the same size as XLSDATE that describes the Excel
base date. Can be either 0 or 1. If 0 (default), the base date of Dec-31-1899 is
used. If 1, the base date is Jan 1, 1904.
OUTPUTS:
MLDATE - A vector with the same size as XLSDATE consisting of MATLAB dates.
EXAMPLE:
XLSDATE = [35000 40000 41000];
MLDATE = x2mdate(XLSDATE);
datestr(MLDATE)
28-Oct-1995
06-Jul-2009
01-Apr-2012
COMMENTS:
This is a reverse-engineered clone of the MATLAB function x2mdate and should behave the same.
You only need it if you do not have the Financial Toolbox installed.
The function c2mdate converts CRSP dates to MATLAB dates. CRSP dates are of the form YYYYMMDD and
are numeric.
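The conversion amounts to splitting the YYYYMMDD integer into its year, month and day components; a
minimal sketch of the idea (not necessarily the toolbox implementation) is
crspdate = [19951028 20090706 20120401]';
yyyy = floor(crspdate/10000); % e.g. 1995
mm = floor(mod(crspdate,10000)/100); % e.g. 10
dd = mod(crspdate,100); % e.g. 28
mldate = datenum(yyyy,mm,dd); % MATLAB serial dates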
Examples
% Convert CRSP YYYYMMDD dates to MATLAB serial dates (input values as in the EXAMPLE below)
mldate = c2mdate([19951028 20090706 20120401])
mldate =
728960 733960 734960
stringDate = datestr(mldate)
stringDate =
28-Oct-1995
06-Jul-2009
01-Apr-2012
Required Inputs
[outputs] = c2mdate(CRSPDATE)
Outputs
[MLDATE] = c2mdate(inputs)
• MLDATE: Vector with same size as CRSPDATE containing MATLAB serial date values.
Comments
C2MDATE provides a simple method to convert between CRSP dates provided by WRDS and MATLAB dates.
USAGE:
[MLDATE] = c2mdate(CRSPDATE)
INPUTS:
CRSPDATE - A scalar or vector of CRSP dates.
OUTPUTS:
MLDATE - A vector with the same size as CRSPDATE consisting of MATLAB dates.
EXAMPLE:
CRSPDATE = [19951028 20090706 20120401]';
MLDATE = c2mdate(CRSPDATE);
datestr(MLDATE)
28-Oct-1995
06-Jul-2009
01-Apr-2012
COMMENTS:
This is provided to make it easy to move between CRSP and MATLAB dates.
Bibliography
Baxter, M. & King, R. G. (1999), ‘Measuring Business Cycles: Approximate Band-Pass Filters For Economic
Time Series’, The Review of Economics and Statistics 81(4), 575–593. 45
Berkowitz, J. (2001), ‘Testing density forecasts, with applications to risk management’, Journal of Business
and Economic Statistics 19, 465–474. 110
Hansen, P. R. (2005), ‘A Test for Superior Predictive Ability’, Journal of Business and Economic Statistics
23(4), 365–380. 116
Hansen, P. R., Lunde, A. & Nason, J. M. (2005), Model confidence sets for forecasting models. Federal Reserve
Bank of Atlanta Working Paper 2005-7. 118
Hodrick, R. J. & Prescott, E. C. (1997), ‘Postwar U.S. Business Cycles: An Empirical Investigation’, Journal of
Money, Credit and Banking 29(1), 1–16. 47
White, H. (2000), ‘A Reality Check for Data Snooping’, Econometrica 68(5), 1097–1126. 116
Index
ARMA, 13
    acf, 37
    aicsbic, 29
    arma_forecaster, 31
    armaroots, 26
    armaxfilter_simulate, 9
    armaxfilter, 13
    heterogeneousar, 19
    ljungbox, 41
    lmtest1, 43
    pacf, 39
    sacf, 33
    spacf, 35
    tsresidualplot, 23
    LM test for serial correlation, 43
    Autocorrelation, 37
    Characteristic Roots, 26
    Estimation, 13
    Heterogeneous, 19
    Ljung-Box Q statistic, 41
    Partial Autocorrelation, 39
    Residual Plotting, 23
    Simulation, 9
Autocorrelation
    ARMA, 37
    Sample, 33
    Ljung-Box Q statistic, 41
Autoregressive Moving Average, see ARMA
augdf, 55
Augmented Dickey-Fuller test, 55
Automatic lag selections, 58
Distributional Testing
    berkowitz, 110
    jarquebera, 106
    kolmogorov, 108
GARCH, 73, 84
    agarch, 94
    aparch_simulate, 78
    egarch_simulate, 76
    egarch, 89
    figarch_simulate, 81
    figarch, 100
    igarch, 97
    pltdens, 103
    tarch_simulate, 73
    tarch, 84, 91
Generalized Autoregressive Conditional Heteroskedasticity, see GARCH
Information Criteria
    Akaike, 29, 58
    Schwartz/Bayes, 29, 58
Regression, 5
    ols, 5
VAR, 61
    grangercause, 67
    impulseresponse, 70
    vectorar, 61
    Estimation, 61
    Granger Causality, 67
    Impulse Response, 70
Vector Autoregression, see VAR
Volatility Modeling, 84
    AGARCH, 94
    EGARCH, 89
    FIGARCH, 100
    GARCH, 84, 91
    IGARCH, 97
Volatility Simulation, 73
    APARCH, 78
    EGARCH, 76
    FIGARCH, 81
    GARCH, 73