The garchx package provides a user-friendly, fast, flexible, and robust framework for the estimation and inference of GARCH models, possibly with covariates ('X') included in the conditional variance equation.
In the Autoregressive Conditional Heteroscedasticity (ARCH) class of models proposed by Engle (1982), the variable of interest is decomposed as ε_t = σ_t η_t, where σ_t² is the conditional variance and η_t an innovation with zero mean and unit variance. Although a large number of specifications of the conditional variance have been proposed, the Generalized ARCH (GARCH) remains the workhorse in empirical applications.
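As a concrete point of reference, the plain GARCH(1,1) recursion σ_t² = ω + α₁ε_{t-1}² + β₁σ_{t-1}² can be simulated in a few lines of base R. The following is a minimal, self-contained sketch, independent of the packages discussed in this article; the helper name simulate_garch11 is hypothetical:

```r
# Minimal GARCH(1,1) simulator: sigma2[t] = omega + alpha*eps[t-1]^2 + beta*sigma2[t-1].
# Illustrative only; not taken from any of the packages discussed here.
simulate_garch11 <- function(n, omega = 0.2, alpha = 0.1, beta = 0.8,
                             innovations = rnorm(n)) {
  sigma2 <- numeric(n)
  eps    <- numeric(n)
  sigma2[1] <- omega / (1 - alpha - beta)  # unconditional variance as start value
  eps[1]    <- sqrt(sigma2[1]) * innovations[1]
  for (t in 2:n) {
    sigma2[t] <- omega + alpha * eps[t - 1]^2 + beta * sigma2[t - 1]
    eps[t]    <- sqrt(sigma2[t]) * innovations[t]
  }
  list(eps = eps, sigma2 = sigma2)
}

set.seed(123)
sim <- simulate_garch11(1000)
```

The packages compared below implement this recursion in compiled C/C++ or Fortran code, which is one reason they are fast.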
The most prominent packages on CRAN that are commonly used to estimate
variants of (2) are
tseries
(Trapletti and Hornik 2019),
fGarch
(Wuertz et al. 2020), and
rugarch (Ghalanos 2020).
In tseries, the function garch() enables estimation of the plain GARCH in (3). The attractions of garch() include simplicity and speed. With respect to simplicity, it is appealing that a plain GARCH(1,1) can be estimated by the straightforward and simple command garch(eps), where eps is the vector or series in question, since GARCH(1,1) is the default order. Speed can be important, particularly if the number of observations is large or if many models have to be estimated (as in simulations). A notable limitation of (3) is that it does not allow for asymmetry terms, e.g., of the leverage type.
In fGarch, asymmetry effects are possible. Specifically, the function garchFit() enables estimation of the Asymmetric Power GARCH (APARCH) specification in (4). Another attraction of garchFit() is that densities other than the normal can be used in the ML estimation, for example, the skewed normal or the skewed Student's t. A limitation of fGarch is that it does not allow for additional covariates ('X') in (4). This can be a serious limitation, since additional conditioning variables such as realized volatility may contain valuable information about the variability of the variable of interest.
The garchx package seeks to remedy these limitations, and it offers several additional features:
Robustness to dependence. Normal ML estimation is usually consistent also when the standardized innovations η_t are dependent over time. To accommodate this, garchx offers a dependence robust coefficient-covariance.
Inference under nullity. In applied work, it is frequently of interest to test whether a coefficient differs from zero. The permissible parameter space of GARCH models, however, is bounded from below by zero. Accordingly, non-standard inference is required when the value under the null hypothesis lies on the zero-boundary; see Francq and Thieu (2018). The garchx package offers two functions to facilitate such tests, ttest0() and waldtest0(), based on the results of Francq and Thieu (2018).
Zero-constrained coefficients by omission. If one or more coefficients are indeed zero, then it may be desirable to obtain estimates under zero-constraints on these coefficients. In rugarch, such constraints are imposed by fixing coefficients to zero, so using coef to extract the coefficients in the GARCH(4,4) example above returns a vector of length 9 rather than of length 3, and the coefficient-covariance returned by rugarch is of dimension 9 × 9. In garchx, by contrast, the zero-constrained coefficients are simply omitted, so coef is of length 3 and the coefficient-covariance is of dimension 3 × 3.
Computation of the asymptotic coefficient-covariance. Knowing the value of the theoretical, asymptotic coefficient-covariance matrix is needed for a formal evaluation of an estimator. For GARCH models, these expressions are not available in explicit form. The garchx package offers a function, garchxAvar(), that computes them by combining simulation and numerical differentiation. To illustrate the usage of garchxAvar(), a small Monte Carlo study is undertaken. While the results of the study suggest all four packages return approximately unbiased estimates in large samples, they also suggest tseries and rugarch are less robust numerically than fGarch and garchx under default options. In addition, the simulations reveal the standard errors of tseries can be substantially biased downwards when the innovations are non-normal.
Table 1 provides a summary of the features offered by the four packages.
The rest of the article is organised as follows. The next section provides an overview of the garchx package and its usage. Then the garchxAvar() function is illustrated by means of a Monte Carlo study of the large sample properties of the packages. Next, a speed comparison of the packages is undertaken. While tseries is the fastest for the specifications it can estimate, garchx is notably faster than fGarch and rugarch in all the experiments that are conducted. Finally, the last section concludes.
| | tseries | fGarch | rugarch | garchx |
|---|---|---|---|---|
| Plain GARCH | Yes | Yes | Yes | Yes |
| Asymmetry | | Yes | Yes | Yes |
| Power GARCH | | Yes | Yes | |
| Covariates (X) | | | Yes | Yes |
| Additional GARCH models | | | Yes | |
| Non-normality robust vcov | | Yes | Yes | Yes |
| Dependence robust vcov | | | | Yes |
| Computation of asymptotic vcov | | | | Yes |
| Constrained estimation | | | Yes | |
| Zero-constraints by omission | | | | Yes |
| Inference under null-restrictions | | | | Yes |
| Normal (Q)ML | Yes | Yes | Yes | Yes |
| Non-normal ML | | Yes | Yes | |
| ARMA in the mean | | Yes | Yes | |
Let ε_t = σ_t η_t, where σ_t² denotes the conditional variance and η_t the standardized innovation. Subject to suitable regularity conditions, the normal ML estimator provides consistent and asymptotically normal estimates of semi-strong GARCH models; see Francq and Thieu (2018). Specifically, they show that the estimator is asymptotically normal with a coefficient-covariance that, when the η_t's are independent over time, reduces to the "ordinary" coefficient-covariance. Of course, the expression returned by the software is the estimate of the finite sample counterpart. When the η_t's are dependent over time, the asymptotic coefficient-covariance instead takes a sandwich form, the "robust" coefficient-covariance. Again, the expression returned by the software is the estimate of the finite sample counterpart.
For illustration, the spyreal
dataset in the rugarch package is
used, which contains two daily financial time series: the SPDR S&P 500 index open-to-close return and the realized kernel volatility. The data are from Hansen et al. (2012) and go from 2002-01-02 to 2008-08-29.
The following code loads the data and stores the daily return – in
percent – in an object named eps
:
library(xts)
data(spyreal, package = "rugarch")
eps <- spyreal[,"SPY_OC"]*100
Note that the data spyreal
is an object of class
xts (Ryan and Ulrich 2014).
Accordingly, the object eps
is also of class xts
.
The basic interface of garchx is similar to that of garch()
in
tseries. For example, the code
garchx(eps)
estimates a plain GARCH(1,1), and returns a print of the result
(implicitly, print.garchx()
is invoked):
Date: Wed Apr 15 09:19:41 2020
Method: normal ML
Coefficient covariance: ordinary
Message (nlminb): relative convergence (4)
No. of observations: 1661
Sample: 2002-01-02 to 2008-08-29
intercept arch1 garch1
Estimate: 0.005945772 0.05470749 0.93785529
Std. Error: 0.002797459 0.01180603 0.01349976
Log-likelihood: -2014.6588
Alternatively, the estimation result can be stored to facilitate the subsequent extraction of information:
mymod <- garchx(eps)
coef(mymod) #coefficient estimates
fitted(mymod) #fitted conditional variance
logLik(mymod) #log-likelihood (i.e., not the average log-likelihood)
nobs(mymod) #no. of observations
predict(mymod) #generate predictions of the conditional variance
print(mymod) #print of estimation result
quantile(mymod) #fitted quantile(s), the default corresponds to 97.5% value-at-risk
residuals(mymod) #standardized residuals
summary(mymod) #summarise with summary.default
toLatex(mymod) #LaTeX print of result (equation form)
vcov(mymod) #coefficient-covariance
The series returned by fitted
, quantile
, and residuals
are of
class zoo
(Zeileis and Grothendieck 2005).
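Under normal innovations, a fitted quantile is simply the conditional standard deviation scaled by a normal quantile. The following base-R sketch illustrates the idea behind the default of quantile() above (the actual quantile.garchx() method may differ in details; the sigma2 values are illustrative stand-ins for fitted conditional variances):

```r
# Fitted 2.5% quantile from fitted conditional variances, assuming normal
# innovations; this corresponds to 97.5% value-at-risk.
sigma2 <- c(0.9, 1.1, 1.4)           # illustrative stand-in for fitted(mymod)
q025   <- sqrt(sigma2) * qnorm(0.025) # fitted quantiles
```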
To control the ARCH, GARCH, and asymmetry orders, the argument order, which takes a vector of length 1, 2, or 3, can be used in a similar way as in the garch() function of tseries:

- order[1] controls the GARCH order
- order[2] controls the ARCH order
- order[3] controls the asymmetry order

For example, the following code estimates, respectively, a GARCH(1,1) with asymmetry and a GARCH(2,1) without asymmetry:
garchx(eps, order = c(1,1,1)) #garch(1,1) w/asymmetry
garchx(eps, order = c(1,2)) #garch(2,1)
To illustrate how covariates can be included via the xreg
argument,
the lagged realized volatility from the spyreal
dataset can be used:
x <- spyreal[,"SPY_RK"]*100
xlagged <- lag(x) #this lags, since x is an xts object
xlagged[1] <- 0 #replace NA-value with 0
The code
garchx(eps, xreg = xlagged)
estimates a GARCH(1,1) with the lagged realized volatility as covariate, i.e., a GARCH(1,1)-X model. The print of the result is:
Date: Wed Apr 15 09:26:46 2020
Method: normal ML
Coefficient covariance: ordinary
Message (nlminb): relative convergence (4)
No. of observations: 1661
Sample: 2002-01-02 to 2008-08-29
intercept arch1 garch1 SPY_RK
Estimate: 0.01763853 0.00000000 0.71873142 0.28152520
Std. Error: 0.01161863 0.03427413 0.09246282 0.08558003
Log-likelihood: -1970.247
The estimates suggest the ARCH parameter is zero. Whether it is significantly different from zero can be tested with the ttest0() function, see below. Note that if the parameter is indeed zero, then its value lies on the boundary of the permissible parameter space, so standard inference does not apply.
The "ordinary"
coefficient-covariance is the default. To instead use
the dependence robust coefficient-covariance, set the vcov.type
argument to "robust"
:
garchx(eps, xreg = xlagged, vcov.type = "robust")
The associated print
Date: Wed Apr 15 09:31:12 2020
Method: normal ML
Coefficient covariance: robust
Message (nlminb): relative convergence (4)
No. of observations: 1661
Sample: 2002-01-02 to 2008-08-29
intercept arch1 garch1 SPY_RK
Estimate: 0.01763853 0.00000000 0.7187314 0.2815252
Std. Error: 0.01864470 0.04569981 0.1507067 0.1136347
Log-likelihood: -1970.247
reveals the standard errors change, but not dramatically. If the
estimation result had been stored in an object with, say, the command
mymod <- garchx(eps, xreg = xlagged)
, then the robust
coefficient-covariance could instead have been extracted by the code
vcov(mymod, vcov.type = "robust")
.
If the value of a parameter is zero under the null hypothesis, then it lies on the boundary of the permissible parameter space. In such cases, non-standard inference is required; see Francq and Thieu (2018). The garchx package offers two functions to facilitate such non-standard tests, ttest0() and waldtest0().
The function ttest0() undertakes a t-test of the null hypothesis that a coefficient is zero against the one-sided alternative that it is greater than zero. To illustrate ttest0(), let us revisit the GARCH(1,1)-X model in (12):
mymod <- garchx(eps, xreg = xlagged)
In this model, the non-exponential Realized GARCH of Hansen et al. (2012) is obtained when the ARCH(1) parameter is zero. To test whether the parameter is indeed zero, the command is ttest0(mymod, k = 2), where k = 2 indicates that it is the second coefficient that is tested. This yields
coef std.error t-stat p-value
arch1 0 0.03427413 0 0.5
In other words, at the most common significance levels, the result supports the claim that the ARCH(1) parameter is zero.
The function waldtest0() can be used to test whether one or more coefficients are jointly zero. The null hypothesis is Rθ = r, where the restriction matrix R and vector r select the coefficients in question and restrict them to zero. In waldtest0(), the default is to return critical values for the 10%, 5%, and 1% significance levels. To illustrate waldtest0(), let us reconsider the GARCH(1,1)-X model in (12). Specifically, let us test whether both the ARCH and GARCH coefficients are zero:
r <- cbind(c(0,0))
R <- rbind(c(0,1,0,0),c(0,0,1,0))
Next, the command waldtest0(mymod, r = r, R = R)
performs the test and
returns a list with the statistic and critical values:
$statistic
[1] 72.95893
$critical.values
10% 5% 1%
41.79952 57.97182 97.15217
In other words, the Wald statistic exceeds the 10% and 5% critical values, but not the 1% critical value, so the zero-restrictions are rejected at the 10% and 5% levels. The levels of the critical values can be changed via the level argument.
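The Wald statistic itself takes the familiar quadratic form W = (Rθ̂ − r)'(R V̂ R')⁻¹(Rθ̂ − r), where V̂ is the estimated coefficient-covariance; what is non-standard under a boundary null is the distribution of W, whose critical values waldtest0() obtains by simulation. A base-R sketch of the statistic with hypothetical numbers (wald_stat is an illustrative helper, not the garchx implementation):

```r
# Wald statistic for H0: R theta = r, given estimates and their covariance.
wald_stat <- function(theta, V, R, r) {
  d <- R %*% theta - r
  as.numeric(t(d) %*% solve(R %*% V %*% t(R)) %*% d)
}

theta <- c(0.5, 0.2)          # illustrative coefficient estimates
V     <- diag(c(0.01, 0.04))  # illustrative coefficient-covariance
R     <- rbind(c(1, 0), c(0, 1))
r     <- c(0, 0)
wald_stat(theta, V, R, r)     # 0.5^2/0.01 + 0.2^2/0.04 = 26
```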
The ARCH, GARCH, and asymmetry orders can be specified in two ways.
Either via the order
argument as illustrated above or via the arch
,
garch
, and asym
arguments whose defaults are all NULL
. If any of
their values is not NULL
, then it takes precedence over the
corresponding component in order
. For example, the code
garchx(eps, order = c(0,0), arch = 1, garch = 1)
estimates a GARCH(1,1) since the values of arch
and garch
override
those of order[2]
and order[1]
, respectively. Similarly,
garchx(eps, asym = 1)
estimates a GARCH(1,1) with asymmetry, and
garchx(eps, garch = 0)
estimates a GARCH(1,0) model.
To estimate higher-order models with the arch
, garch
, and asym
arguments, the lags must be provided explicitly. For example, to
estimate the GARCH(2,2) model
garchx(eps, arch = c(1,2), garch = c(1,2))
Zero-coefficient constraints, therefore, can be imposed by simply omitting the lags in question. For example, to estimate the GARCH(2,2) model with the first ARCH and GARCH lags restricted to zero, use
garchx(eps, arch = 2, garch = 2)
This returns the print
Date: Wed Apr 15 09:34:04 2020
Method: normal ML
Coefficient covariance: ordinary
Message (nlminb): relative convergence (4)
No. of observations: 1661
Sample: 2002-01-02 to 2008-08-29
intercept arch2 garch2
Estimate: 0.009667606 0.07533534 0.91392791
Std. Error: 0.004494075 0.01636917 0.01899654
Log-likelihood: -2033.7251
To estimate the non-exponential Realized GARCH of Hansen et al. (2012), use
garchx(eps, arch = 0, xreg = xlagged)
The returned print shows that the ARCH(1) term has not been included during the estimation.
Finally, a caveat is in order. The flexibility provided by the arch, garch, and asym arguments is not always warranted by the underlying estimation theory. For example, if the ARCH parameter of a GARCH(1,1) is zero, then the GARCH parameter is not identified. The garchx() function, nevertheless, tries to estimate such a model if the user provides the code garchx(eps, arch = 0).
Currently, the function garchx()
does not undertake any checks of
whether the zero-coefficient restrictions are theoretically valid.
The two optimization algorithms in base
R that work best for GARCH
estimation are, in my experience, the "Nelder-Mead"
method in the
optim()
function and the nlminb()
function. The latter enables
bounded optimization, so it is the preferred algorithm here since the
parameters of the GARCH model must be non-negative. The "L-BFGS-B"
method in optim()
also enables bounded optimization, but it does not
work as well in my experience. When using the garchx()
function, the
call to nlminb()
can be controlled and tuned via the arguments
initial.values
, lower
, upper
, and control
. In nlminb()
, the
first argument is named start, whereas the other three have the same names.
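The mapping of these arguments can be illustrated on a toy problem: nlminb() minimizes an objective subject to bounds, with the initial values passed via start (named initial.values in garchx()). A minimal, self-contained example with an illustrative objective:

```r
# Toy bounded optimization with nlminb(): minimize (x - 2)^2 + (y - 3)^2
# subject to non-negativity, starting from c(1, 1).
objective <- function(par) (par[1] - 2)^2 + (par[2] - 3)^2
fit <- nlminb(start = c(1, 1), objective = objective,
              lower = c(0, 0), upper = c(Inf, Inf))
fit$par  # approximately c(2, 3)
```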
Suitable initial parameter values are important for numerical
robustness. In the garchx()
function, the user can set these via the
initial.values
argument. If not, then they are automatically
determined internally. In the case of a GARCH(1,1), the default initial values are c(0.1, 0.1, 0.7), a choice that works well across a range of problems. Indeed, the Monte Carlo simulations of the large sample properties of the packages (see Section 3) reveal that the numerical robustness of tseries improves when these initial values are used instead of its default initial values. In the list
returned by
garchx()
, the item named initial.values
contains the values used.
For example, the following code extracts the initial values used in the
estimation of a GARCH(1,1) with asymmetry:
mymod <- garchx(eps, asym = 1)
mymod$initial.values
In each iteration, nlminb()
calls the function garchxObjective()
to
evaluate the objective function. For additional numerical robustness,
checks of the parameters and fitted conditional variance are conducted
within garchxObjective()
at each iteration. The first check is for
whether any of the parameter values at the current iteration are equal
to NA
. The second check is for whether any of the fitted conditional
variances are Inf
, 0, or negative. If either of these checks fails,
then garchxObjective()
returns the value of the logl.penalty
argument in the garchx()
function, whose default value is that
produced by the initial values. To avoid that the term log(σ_t²) in the log-likelihood explodes as the fitted conditional variance approaches zero, a lower bound on the fitted conditional variance can be set via the sigma2.min argument in the garchx() function.
A drawback with nlminb()
is that it does not return an estimate of the
Hessian at the optimum, which is needed to compute the
coefficient-covariance. To obtain such an estimate, the optimHess()
function is used. In garchx()
, the call to optimHess()
can be
controlled and tuned via the optim.control
argument. Next, the inverse
of the estimated Hessian is computed with solve()
, whose tolerance for
detecting linear dependencies in the columns is determined by the
solve.tol
argument in the garchx()
function.
The function garchxAvar()
returns the asymptotic
coefficient-covariance of a GARCH-X model. Currently (version 1.1), only
non-normality robust versions are available. The aim of this section is
to illustrate how it can be used to check whether the large sample
properties of the packages correspond to those of the underlying
asymptotic estimation theory. Specifically, the aim is to explore
whether large sample estimates from Monte Carlo simulations are
unbiased, whether the empirical standard errors correspond to the
asymptotic ones, and whether the estimate of the non-normality robust
coefficient-covariance is unbiased.
The garchxAvar() function

To recall, the non-normality robust asymptotic coefficient-covariance is given by (E(η_t⁴) − 1)J⁻¹, where J denotes the expectation of the outer product of σ_t⁻² times the derivative of σ_t² with respect to the parameters. The garchxAvar() function combines simulation and numerical differentiation to compute this expression: the simulation is conducted with garchxSim() and the differentiation with optimHess(), which provides a numerical estimate of J.
To obtain an idea about the precision of garchxAvar(), a numerical comparison is made for the case where the DGP is an ARCH(1) with ω = 1, α₁ = 0.1, and standard normal innovations. In this case, the entries of J can be computed directly (i.e., without garchxAvar()) by simply taking the means of the sample counterparts. For standard normal innovations, E(η_t⁴) = 3, and the code
n <- 10000000
omega <- 1; alpha1 <- 0.1
set.seed(123)
eta <- rnorm(n)
eps <- garchxSim(n, intercept = omega, arch = alpha1, garch = NULL,
innovations = eta)
epslagged2 <- eps[-length(eps)]^2
epslagged4 <- epslagged2^2
J <- matrix(NA, 2, 2)
J[1,1] <- mean( 1/( (omega+alpha1*epslagged2)^2 ) )
J[2,1] <- mean( epslagged2/( (omega+alpha1*epslagged2)^2 ) )
J[1,2] <- J[2,1] # J is symmetric
J[2,2] <- mean( epslagged4/( (omega+alpha1*epslagged2)^2 ) )
Eeta4 <- 3
Avar1 <- (Eeta4-1)*solve(J)
computes the asymptotic coefficient-covariance, and stores it in an
object named Avar1
:
Avar1
[,1] [,2]
[1,] 3.475501 -1.368191
[2,] -1.368191 1.686703
With garchxAvar(), using the same simulated series for the innovations, the corresponding computation is
Avar2 <- garchxAvar(c(omega,alpha1), arch=1, Eeta4=3, n=n, innovations=eta)
Avar2
intercept arch1
intercept 3.474903 -1.367301
arch1 -1.367301 1.685338
These are quite similar in relative terms: the ratio Avar2/Avar1 shows that each entry of Avar2 is less than 0.1% away from the corresponding entry of Avar1.
To illustrate how garchxAvar()
can be used to study the large sample
properties of the packages, a Monte Carlo study is undertaken. The DGP in the study is a plain GARCH(1,1) with (ω, α₁, β₁) = (0.2, 0.1, 0.8), and with either standard normal innovations or standardized Student's t innovations with 5 degrees of freedom. The code
n <- 10000000
pars <- c(0.2, 0.1, 0.8)
set.seed(123)
AvarNormal <- garchxAvar(pars, arch=1, garch=1, Eeta4=3, n=n)
eta <- rt(n, df=5)/sqrt(5/3)
Avart5 <- garchxAvar(pars, arch=1, garch=1, Eeta4=9, n=n,
innovations=eta)
computes and stores the asymptotic coefficient-covariances in objects
named AvarNormal
and Avart5
, respectively. They are:
AvarNormal
intercept arch1 garch1
intercept 7.043653 1.1819890 -4.693843
arch1 1.181989 0.7784797 -1.278153
garch1 -4.693843 -1.2781529 3.616365
Avart5
intercept arch1 garch1
intercept 16.234885 3.216076 -11.313749
arch1 3.216076 2.483018 -3.647237
garch1 -11.313749 -3.647237 9.239820
Next, the asymptotic standard errors associated with sample size 10000 are computed as
sqrt( diag(AvarNormal/10000) )
sqrt( diag(Avart5/10000) )
These values are contained in the asymptotic standard error columns, labelled se(·), of Table 2.
Table 2 contains the estimation results of the Monte Carlo study (1000 replications). For each package, normal ML estimation is undertaken with default options on initial parameter values, initial recursion values, and numerical control. The columns labelled m(·), s(·), and se(·) contain, respectively, the Monte Carlo average of the estimates, their empirical standard deviation, and the asymptotic standard error. Two modifications to the default usage were necessary. For tseries, estimation frequently failed under the default initial parameter values, so the command garch(eps) was modified to garch(eps, control = garch.control(start = c(0.1, 0.1, 0.7))). For rugarch, the default solver produced failures and substantially biased results, so ugarchfit(data=eps, spec=spec) was changed to ugarchfit(data=eps, spec=spec, solver="nlminb"), i.e., the nlminb() algorithm, which is the default in fGarch and the only option available in garchx.
| | m(ω̂) | s(ω̂) | se(ω̂) | m(α̂₁) | s(α̂₁) | se(α̂₁) | m(β̂₁) | s(β̂₁) | se(β̂₁) | failed |
|---|---|---|---|---|---|---|---|---|---|---|
| η ~ N(0,1): | | | | | | | | | | |
| tseries | 0.218 | 0.160 | 0.027 | 0.100 | 0.010 | 0.009 | 0.791 | 0.082 | 0.019 | 0 |
| fGarch | 0.203 | 0.027 | 0.027 | 0.100 | 0.009 | 0.009 | 0.799 | 0.019 | 0.019 | 0 |
| rugarch | 0.204 | 0.027 | 0.027 | 0.100 | 0.009 | 0.009 | 0.797 | 0.019 | 0.019 | 0 |
| garchx | 0.203 | 0.027 | 0.027 | 0.100 | 0.009 | 0.009 | 0.798 | 0.019 | 0.019 | 0 |
| η ~ standardized t(5): | | | | | | | | | | |
| tseries | 0.218 | 0.158 | 0.040 | 0.101 | 0.015 | 0.016 | 0.791 | 0.077 | 0.030 | 0 |
| fGarch | 0.204 | 0.039 | 0.040 | 0.101 | 0.016 | 0.016 | 0.797 | 0.030 | 0.030 | 0 |
| rugarch | 0.201 | 0.037 | 0.040 | 0.100 | 0.014 | 0.016 | 0.799 | 0.027 | 0.030 | 2 |
| garchx | 0.201 | 0.037 | 0.040 | 0.100 | 0.015 | 0.016 | 0.799 | 0.028 | 0.030 | 0 |
In each of the 1000 replications of the Monte Carlo study, the estimate of the asymptotic coefficient-covariance is recorded. For fGarch, rugarch, and garchx, the estimate is of the non-normality robust type. For tseries, which does not offer the non-normality robust option, the estimate is under the assumption of normality. Note also that, for tseries, the results reported here are with the numerically more robust non-default initial parameter values alluded to above.
Let the entries below denote the Monte Carlo average, entry by entry, of the ratio between the estimated and the true asymptotic coefficient-covariance. When η_t is standard normal, the ratios are:
##tseries:
intercept arch1 garch1
intercept 1.0702 1.0489 1.0656
arch1 1.0489 1.0256 1.0366
garch1 1.0656 1.0366 1.0566
##fGarch:
intercept arch1 garch1
intercept 1.0596 1.0335 1.0548
arch1 1.0335 1.0126 1.0229
garch1 1.0548 1.0229 1.0455
##rugarch:
intercept arch1 garch1
intercept 1.0869 1.0723 1.0848
arch1 1.0723 1.0280 1.0501
garch1 1.0848 1.0501 1.0748
##garchx:
intercept arch1 garch1
intercept 1.0630 1.0350 1.0576
arch1 1.0350 1.0142 1.0244
garch1 1.0576 1.0244 1.0479
Three general characteristics are clear. First, the ratios are all greater than 1. In other words, all packages tend to return estimated coefficient-covariances that are too large in absolute terms. In particular, standard errors tend to be too high. Second, the size of the biases is similar across packages. Those of rugarch are slightly higher than those of the other packages, but the difference may disappear if a larger number of replications is used. Third, the magnitude of the relative bias is fairly low since they all lie between 1.26% and 8.69%.
When η_t is a standardized Student's t with 5 degrees of freedom, the ratios are:
##tseries:
intercept arch1 garch1
intercept 0.1082 0.1038 0.1088
arch1 0.1038 0.0952 0.1002
garch1 0.1088 0.1002 0.1070
##fGarch:
intercept arch1 garch1
intercept 0.9088 1.0198 0.9098
arch1 1.0198 1.0721 0.9858
garch1 0.9098 0.9858 0.9062
##rugarch:
intercept arch1 garch1
intercept 0.8423 0.8596 0.8356
arch1 0.8596 0.8361 0.8349
garch1 0.8356 0.8349 0.8263
##garchx:
intercept arch1 garch1
intercept 0.9343 0.9017 0.9200
arch1 0.9017 0.8973 0.8903
garch1 0.9200 0.8903 0.9043
The downwards relative bias of about 90% produced by tseries simply reflects that a non-normality robust option is not available in that package. However, the size of the bias is larger than expected. If it were simply due to the erroneous use of E(η_t⁴) = 3 (the value under normality) instead of 9, then the ratios should be about (3 − 1)/(9 − 1) = 0.25 rather than about 0.1.
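For the standardized Student's t with 5 degrees of freedom, E(η_t⁴) = 9 rather than the value 3 that holds under normality, so a normality-based covariance should be about (3 − 1)/(9 − 1) = 0.25 of the robust one. This can be verified numerically in base R (a small sketch; note that the convergence of the sample fourth moment is slow, since the 8th moment of the t(5) does not exist):

```r
# Fourth moment of a standardized Student's t with 5 degrees of freedom,
# and the implied covariance ratio under an erroneous normality assumption.
set.seed(123)
eta <- rt(1e6, df = 5) / sqrt(5/3)  # standardized to unit variance
mean(eta^4)                         # close to 9 in large samples
(3 - 1) / (9 - 1)                   # 0.25
```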
In nominal terms, all four packages are fairly fast. On an average
contemporary laptop, for example, estimation of a plain GARCH(1,1)
usually takes less than a second if the number of observations is 10000
or less. The reason is that all four packages use compiled C/C++ or
Fortran code in the recursion, i.e., the computationally most demanding
part. While the nominal speed difference is almost unnoticeable in
simple models with small samples, it can become noticeable in larger problems.
The comparison is undertaken with the microbenchmark package (Mersmann 2019), version 1.4-7, and the average estimation times of four GARCH models are compared.
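The timing methodology can be sketched with base R alone (the actual study uses microbenchmark for higher precision; time_average is a hypothetical helper, and the lm() call is a cheap stand-in for an estimator such as garchx(eps)):

```r
# Average wall-clock time of repeated calls to an estimation function.
time_average <- function(fit_fun, reps = 10) {
  elapsed <- system.time(for (i in seq_len(reps)) fit_fun())["elapsed"]
  unname(elapsed) / reps
}

# Illustration with a cheap stand-in estimator:
time_average(function() lm(dist ~ speed, data = cars))
```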
Table 3 contains the results of the
comparison in relative terms. A value of 1.0 means the package is the
fastest on average for the experiment in question. A value of 7.15 means
the average estimation time of the package is 7.15 times larger than the
average of the fastest, and so on. The entry is empty if the GARCH
specification cannot be estimated by the package. The overall pattern of
the results is clear: tseries is the fastest among the models it can
estimate, garchx is the second fastest, fGarch is the third fastest,
and rugarch is the slowest. Another salient feature is how much faster
tseries is relative to the other packages. This is particularly
striking for the GARCH(2,2), where the second-fastest package –
garchx – is about 5 to 6 times slower, and the slowest package –
rugarch – is about 28 to 30 times slower. A third notable
characteristic is that the relative differences tend to fall as the
sample size increases.
| DGP | n | tseries | fGarch | rugarch | garchx |
|---|---|---|---|---|---|
| 1 GARCH(1,1) | 1000 | 1.00 | 7.15 | 17.42 | 2.69 |
| | 2000 | 1.00 | 6.28 | 12.89 | 1.85 |
| 2 GARCH(2,2) | 1000 | 1.00 | 10.14 | 29.78 | 5.27 |
| | 2000 | 1.00 | 14.72 | 27.79 | 6.27 |
| 3 GARCH(1,1) w/asymmetry | 1000 | | 2.26 | 14.72 | 1.00 |
| | 2000 | | 2.97 | 9.91 | 1.00 |
| 4 GARCH(1,1)-X | 1000 | | | 5.90 | 1.00 |
| | 2000 | | | 6.36 | 1.00 |
This paper provides an overview of the package garchx and compares it with three prominent CRAN-packages that offer GARCH estimation routines: tseries, fGarch, and rugarch. While garchx does not offer all the GARCH-specifications available in rugarch, it is much more flexible than tseries, and it also offers the important possibility of including covariates. This feature is not available in fGarch. The package garchx also offers additional features that are not available in the other packages: i) A dependence-robust coefficient covariance, ii) functions that facilitate hypothesis testing under nullity, iii) zero-coefficient restrictions by omission, and iv) a function that computes the asymptotic coefficient-covariance of a GARCH model.
In a Monte Carlo study of the packages, the large sample properties of
the normal Quasi ML (QML) estimator were studied. There, it was revealed
that fGarch and garchx are numerically more robust (under default
options) than tseries and rugarch. However, in the case of
tseries, the study also revealed how its numerical robustness could be
improved straightforwardly by simply changing the initial parameter
values. In the case of rugarch, it is less clear how the numerical
robustness can be improved. The study also revealed that the standard
errors of tseries could be substantially biased downwards when the innovations are non-normal.
In a relative speed comparison of the packages, it emerged that the least flexible package – tseries – is notably faster than the other packages. Next, garchx is the second fastest (1.85 to 6.27 times slower in the experiments), fGarch is the third fastest, and rugarch is the slowest. The experiments also revealed that the difference could be larger in higher-order models. For example, in the estimation of a GARCH(2,2), rugarch was about 28 times slower than tseries. In estimating a plain GARCH(1,1), by contrast, it was only 13 to 17 times slower. Another finding was that the difference seems to fall in sample size: The larger the sample size, the smaller the difference in speed.
Francq and Thieu (2018) show that the dependence robust coefficient-covariance depends on the derivative of the conditional variance with respect to the parameters; this derivative is computed numerically with numericDeriv() in the vcov.garchx() function.
tseries, fGarch, rugarch, xts, zoo, microbenchmark
Econometrics, Environmetrics, Finance, MissingData, SpatioTemporal, TimeSeries
Notes:

1. For tseries, garch(eps) needs to be modified to garch(eps, control = garch.control(start = c(0.1, 0.1, 0.7))).

2. For rugarch, ugarchfit(data=eps, spec=spec) needs to be modified to ugarchfit(data=eps, spec=spec, solver="nlminb").

Text and figures are licensed under Creative Commons Attribution CC BY 4.0. The figures that have been reused from other sources don't fall under this license and can be recognized by a note in their caption: "Figure from ...".
For attribution, please cite this work as
Sucarrat, "garchx: Flexible and Robust GARCH-X Modeling", The R Journal, 2021
BibTeX citation
@article{RJ-2021-057,
  author  = {Sucarrat, Genaro},
  title   = {garchx: Flexible and Robust GARCH-X Modeling},
  journal = {The R Journal},
  year    = {2021},
  note    = {https://rjournal.github.io/},
  volume  = {13},
  issue   = {1},
  issn    = {2073-4859},
  pages   = {335-350}
}