Although linear autoregressive models are useful to practitioners in many fields, a nonlinear specification is often more appropriate in time series analysis. There are many alternative approaches to modelling nonlinearity; one consists in assuming multiple regimes. Among the specifications that account for regime changes in the multivariate framework, smooth transition models are the most general, since they nest both linear and threshold autoregressive models. This paper introduces the starvars package, which estimates and predicts the Vector Logistic Smooth Transition model in a very general setting that also includes predetermined variables. In comparison to existing R packages, starvars offers the estimation of the Vector Smooth Transition model both by maximum likelihood and by nonlinear least squares. The package also allows testing for nonlinearity in a multivariate setting and detecting the presence of common breaks. Furthermore, the package computes multi-step-ahead forecasts. Finally, an illustration with financial time series shows its usage.
Many economic and financial time series behave differently during periods of economic stress. For example, during the subprime mortgage financial crisis, the relationship between the financial sector and macroeconomic quantities changed, justifying the use of a nonlinear model. The same is true in the analysis of monetary policy, where positive and negative monetary policy shocks may have asymmetric effects, or in the investigation of the effectiveness of fiscal policy, where some fiscal policy measures may depend on the phase of the business cycle; see for example Caggiano et al. (2015). When asymmetric effects are observed, the time series may follow different regimes. In order to understand the dynamics of such processes, Quandt (1958, 1960) first proposed a model where the coefficients of a linear model change in relation to the value of an observable stochastic variable. These models were subsequently extended to time series analysis. Tong (1978) and Tong and Lim (1980) introduced the threshold autoregressive model, while Teräsvirta (1994) proposed that the transition between regimes could be smooth, leading to the smooth transition autoregressive (STAR) model for univariate time series.
Since researchers are often interested in understanding the dynamics of time series in a multivariate framework, regime-switching models have also been extended to include multiple dependent variables. A vector nonlinear model was introduced by Tsay (1998), who defined a Threshold Vector Autoregressive (TVAR) model with a single threshold variable controlling the switching mechanism in each equation. The first vector model with a smooth transition was the smooth transition vector error-correction model (STVECM) introduced by Rothman et al. (2001), in which the same transition function controls the transition in each equation. Camacho (2004) proposed a bivariate logistic smooth transition model with the possibility to include exogenous regressors and to specify a different transition variable for each equation. For a recent survey of vector TAR and STAR models, see Hubrich and Teräsvirta (2013). More recently, Teräsvirta and Yang (2014b) presented a modelling strategy for building a Vector Logistic Smooth Transition Regression (VLSTAR) model. This strategy includes linearity and misspecification tests for the conditional mean, as well as tests for the constancy of the error covariance matrix.
This article summarizes the procedure proposed in Teräsvirta and Yang (2014b) and illustrates the starvars package in R for estimating and testing the VLSTAR model with a single transition variable. Several packages for the estimation of the univariate logistic smooth transition autoregressive (LSTAR) model are already available in R. For example, Di Narzo et al. (2020), in their tsDyn package, provide functions to estimate and forecast both the STAR and the LSTAR models. Unfortunately, the tsDyn package, which focuses on nonlinear models in general, only allows for the estimation of a Threshold Vector Autoregressive (TVAR) model in the multivariate case and does not allow for the inclusion of exogenous regressors. The RSTAR package, implemented by Balcilar (2016), estimates, forecasts, and analyses the smooth transition autoregressive model in the univariate case. Another possible way to model regime switches in a multivariate framework is through the MSBVAR package by Brandt (2016), which estimates a Markov-switching vector autoregressive model. Still, this package does not permit evaluating the relationship between the dependent variables and possible explanatory variables.
The R package starvars (Bucci et al. 2022) presented here is conceived for the nonlinear modelling, through a VLSTAR specification, of multivariate time series exhibiting smooth nonlinear relationships with both their own lags and a set of explanatory variables. Even though this model has been mainly applied in financial setups, it could be used in all fields in which the dynamics of the dependent variables can be conceived as nonlinear and, specifically, as following a logistic smooth transition model. The functionalities of the starvars package include: (i) modelling strategy, such as joint linearity testing of multivariate time series and detection of the presence of co-breaks, (ii) estimation and (iii) prediction of the VLSTAR model, and (iv) construction of realized covariances from high- and low-frequency financial prices or returns. Two datasets (Realized and techprices) are included in the R package starvars. The former entails monthly observations for realized co-volatilities between the S&P 500, the Nikkei, the FTSE and the DAX indexes, the growth rate of the dividend yield and the earning price ratio, and the first difference of the inflation rate in the U.S., United Kingdom, Japan and Germany. The latter includes the data used in the example, with the daily closing stock prices of Google, Microsoft and Amazon.
The outline of the paper is as follows. The following sections review the specification of the VLSTAR model, referring to Teräsvirta and Yang (2014b), and illustrate how to estimate and make predictions through the starvars package. We then present an empirical application to stock price data, while the last section concludes.
Assuming an
In the VLSTAR model, each element of
The logistic function in Equation (2) is accordingly
modified as follows
The VLSTAR specification procedure follows several steps. Firstly, the
researcher should test whether the relationship between the dependent variables and the regressors is linear or is instead characterized by a smooth transition.
The LM type statistic can be computed, as further suggested by Teräsvirta and Yang (2014a), using a multi-step procedure:
1. estimation of the linear model, i.e. the restricted VLSTAR with a single regime;
2. save a collection of the residuals of the restricted model;
3. computation of the residual sum of squares matrix of the restricted model;
4. regression of the residuals on the original regressors and on the auxiliary regressors implied by the third-order Taylor expansion of the logistic function;
5. creation of the residual matrix of the auxiliary regression and of its residual sum of squares matrix;
6. computation of the test statistic.
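In generic terms, the resulting statistic has the familiar form of a system LM test; a hedged sketch (the notation may differ from that in Teräsvirta and Yang 2014a) is
LM = T \, \mathrm{tr}\big\{\hat{\Omega}_0^{-1}\big(\hat{\Omega}_0 - \hat{\Omega}_1\big)\big\},
where \hat{\Omega}_0 and \hat{\Omega}_1 denote the residual covariance matrices of the restricted model and of the auxiliary regression, respectively; under the null of linearity the statistic is asymptotically chi-squared with degrees of freedom equal to the number of auxiliary regressors across the equations.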
In the R package starvars, the joint linearity test can be performed
by using the function VLSTARjoint
, which takes the following
arguments.
y: a data.frame or matrix containing the dependent variables;
exo: an optional argument containing a data.frame or matrix of exogenous variables;
st: a vector with the observations of the single transition variable, or a matrix of candidate transition variables;
st.choice: when the choice of the transition variable among a set of candidates should be based on the linearity test, this argument should be set equal to TRUE. In such a case, the variable in the matrix st which results in the highest LM statistic is the one chosen as the transition variable;
alpha: a decimal value comprised between 0 and 1 denoting the significance level of the test. (In computing the test, the residuals of the linear model are obtained from the VAR function of the R package vars, with an automatically selected number of lags.)
VLSTARjoint(y, exo, st, st.choice = FALSE, alpha = 0.05)
The function VLSTARjoint returns a list object with a class attribute "VLSTARjoint", for which a print method exists, with three elements: the value(s) of the Lagrange Multiplier (LM) statistic, the corresponding p-value(s), and the critical value for the chosen significance level.
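As a quick, self-contained illustration (simulated white-noise data, so the test should typically not reject; all object names below are arbitrary), the test can be called as follows:
library(starvars)
set.seed(123)
y  <- matrix(rnorm(400), ncol = 2, dimnames = list(NULL, c("y1", "y2")))  # simulated series
st <- rnorm(200)                        # a single candidate transition variable
VLSTARjoint(y, st = st, alpha = 0.05)   # prints the LM statistic, its p-value and the critical value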
Furthermore, the specification of the VLSTAR model foresees the
definition of the number of regimes to be used in the model (see
Appendix A for further details). The function multiCUMSUM
allows
determining the number of common breaks and where they are located.
multiCUMSUM(data, conf.level = 0.95, max.breaks = 7)
The arguments necessary to detect the common breaks are: a matrix of
data
; the confidence
level in conf.level
, set by default at 0.95; the number of maximum
common breaks (between 1 and 7) to be identified, through max.breaks
.
The output is returned in a list with a class attribute "multiCUMSUM"
,
which can be passed through the print
function. The first element of
the returned list object is a matrix with the test statistics Lambda and Omega and the detected break dates.
As widely discussed in Teräsvirta and Yang (2014b), a VLSTAR model can be estimated through nonlinear least squares (NLS) or maximum likelihood (ML).
In both cases, the optimization algorithm may converge to a local minimum, which makes the definition of valid starting values for the estimated parameters particularly relevant. If there is no clear indication of the initial values of the logistic parameters, gamma and c, they can be selected via a searching grid.
The searching grid algorithm works as follows:
1. construction of the grid of candidate values for gamma and c;
2. estimation of the remaining parameters by least squares, conditional on each pair of gamma and c in the grid;
3. find the pair of gamma and c which minimizes the sum of squared residuals;
4. estimation of the parameters, conditional on the selected values of gamma and c;
5. estimation of gamma and c, conditional on the parameters obtained in the previous step;
6. repeat steps 4 and 5 until convergence.
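The following self-contained sketch illustrates the idea behind such a grid search for a single equation and a single transition variable; it is not the internal code of startingVLSTAR, and all names and values are purely illustrative.
set.seed(1)
nobs <- 200
st <- rnorm(nobs)                              # transition variable
x  <- rnorm(nobs)                              # a single regressor
y  <- 1 + 0.5 * x + (2 - 1.5 * x) / (1 + exp(-4 * (st - 0.2))) + rnorm(nobs, sd = 0.3)
grid <- expand.grid(gamma = seq(0.5, 10, length.out = 20),
                    c     = quantile(st, probs = seq(0.1, 0.9, length.out = 20)))
ssr <- apply(grid, 1, function(par) {
  G <- 1 / (1 + exp(-par["gamma"] * (st - par["c"])))  # logistic weights for this pair
  sum(resid(lm(y ~ x + G + I(G * x)))^2)               # linear LS conditional on (gamma, c)
})
grid[which.min(ssr), ]                                  # candidate starting pair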
The starvars package allows the user to implement a searching grid algorithm, through the startingVLSTAR function, to obtain the initial values of gamma and c among a set of potential values. For example, by providing n.combi, the user specifies the number of combinations of gamma and c to be evaluated in the grid.
The startingVLSTAR function requires several arguments. A data.frame or a matrix of dependent variables is passed through y. An optional argument, exo, contains possible explanatory variables and can be specified as a data.frame or a matrix with the same length of y. The number of lags and the number of regimes are set through p and m, while the transition variable is provided through st. The number of cores used for parallel computation is specified through the ncores argument, while the argument singlecgamma works as follows:
singlecgamma = TRUE: a common pair of initial values of gamma and c is assumed for the entire model;
singlecgamma = FALSE: a pair of gamma and c is initialized for each equation.
startingVLSTAR(y, exo = NULL, p = 1,
m = 2, st = NULL, constant = TRUE,
n.combi = NULL, ncores = 2,
singlecgamma = FALSE)
The NLS estimator is defined as the solution to the following
optimisation problem
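In generic terms (a hedged sketch rather than the exact expression in Teräsvirta and Yang 2014b), the NLS estimator minimises the sum of squared errors of the model,
\hat{\Psi}_{\mathrm{NLS}} = \arg\min_{\Psi} \sum_{t=1}^{T} \big(y_t - f(z_t; \Psi)\big)'\big(y_t - f(z_t; \Psi)\big),
where f(\cdot; \Psi) denotes the VLSTAR conditional mean and \Psi collects all the parameters, including gamma and c.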
In the aforementioned algorithm, the vectorization of the NLS estimates
of
To estimate a VLSTAR model via ML, it must be assumed that the errors are normally distributed.
In the first iteration of the algorithm presented in this section,
In the starvars package, the estimation of a VLSTAR model is handled with the function VLSTAR. By fitting such a model via this function, a list object with a class attribute "VLSTAR" is obtained. This function requires the same arguments as the startingVLSTAR function, except for the number of combinations. In addition, a list of data.frame or matrix objects containing the starting values of gamma and c can be provided through the argument starting. The user can choose the method used to estimate the coefficients, either 'ML' or 'NLS', through the specification of the argument method. The argument epsilon is used as a convergence check, while the argument ncores denotes the number of cores used in the parallel optimization of the objective function.
VLSTAR(y, exo = NULL, p = 1, m = 2, st = NULL, constant = TRUE,
starting = NULL,
method = c('ML', 'NLS'),
n.iter = 500, singlecgamma = TRUE,
epsilon = 10^(-3), ncores = NULL)
The summary method applied to an object derived from the VLSTAR function returns the sample size, along with the number of estimated parameters, the multivariate log-likelihood calculated as in Equation (9), and the estimated coefficients. We also provide other generic methods, such as plot, AIC, BIC and logLik. Similar to what is implemented in the R package vars, the plot function reports, for each equation in the VLSTAR model, the observed values of the time series, the fitted values and the residuals, as well as the autocorrelation and partial autocorrelation functions of the residuals. Since the logistic function plays a crucial role in VLSTAR models, the plot function also shows the plot of the logistic function for each dependent variable.
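For reference, the first-order logistic transition function underlying these plots takes the standard form (Teräsvirta 1994)
G(s_t; \gamma, c) = \big(1 + \exp\{-\gamma (s_t - c)\}\big)^{-1}, \qquad \gamma > 0,
so that G is bounded between 0 and 1: large values of gamma imply an almost abrupt switch at s_t = c, while small values imply a smooth transition between the two regimes.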
Time series prediction using nonlinear models has become widespread in
the last few decades, even if the debate on the usefulness of such
forecasts is still open (see Diebold and Nason 1990; Kock and Teräsvirta 2011). The forecasts of
the nonlinear model, for more than one step ahead, can be generalised
via numerical techniques. Given a nonlinear model
When
The nonlinearity in the VLSTAR model makes multi-period forecasting more
complicated. In fact, forecasting two steps ahead is not
straightforward, since we have
The R package starvars can handle both one-step and multi-step-ahead
forecasts of an object with a class attribute "VLSTAR"
. One-step-ahead
forecasts can be easily extended to the multivariate framework by
modifying Equation (3) as follows
The function predict in the starvars package allows the user to choose between these methods through the argument method. When the naive method is chosen, the new values of the transition variable should be provided through the argument st.new or, alternatively, through the index st.num, which denotes the column number of the dependent variable which should be used as a new transition variable. From Hubrich and Teräsvirta (2013), Kock and Teräsvirta (2011) and Teräsvirta et al. (2010), we know that these forecasts are biased. Thus, the practitioner may choose the Monte Carlo method, in which the errors are drawn from an assumed distribution, or the bootstrap method, which foresees that the multi-step-ahead forecasts are derived from resampled residuals. For both methods, the interval forecasts are derived from the forecast density.
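To fix ideas, a hedged sketch of the two-step-ahead point forecasts implied by the three methods, written for a generic nonlinear model y_{t+1} = g(y_t) + \varepsilon_{t+1} (the notation is simplified with respect to the references above):
\hat{y}_{t+2|t}^{\,\mathrm{naive}} = g(\hat{y}_{t+1|t}), \qquad
\hat{y}_{t+2|t}^{\,\mathrm{MC}} = \frac{1}{N}\sum_{i=1}^{N} g(\hat{y}_{t+1|t} + \varepsilon_i), \qquad
\hat{y}_{t+2|t}^{\,\mathrm{boot}} = \frac{1}{B}\sum_{b=1}^{B} g(\hat{y}_{t+1|t} + \hat{\varepsilon}_b),
where the \varepsilon_i are drawn from an assumed error distribution, the \hat{\varepsilon}_b are resampled residuals, and interval forecasts follow from the simulated forecast density.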
predict(object, ..., n.ahead = 1, conf.lev = 0.95, st.new = NULL,
st.num = NULL, newdata = NULL,
method = c('naive', 'Monte Carlo', 'bootstrap'))
The predict
method returns a list with a class attribute
"vlstarpred"
and two elements: a list denoted with the name
forecasts
containing the predicted values and the interval forecasts
for each of the steps ahead, and the matrix with the observed values of the dependent variables. A print method is applicable to objects of this class and returns the forecasts with upper and lower interval forecasts. The plot method
method
draws the time series plots with the interval forecasts in the
out-of-sample period.
The here applied VLSTAR model is one of the possible ways of modelling
nonlinear relationships. Alternatively, nonlinearity in a multivariate
framework can be modelled through a Threshold Vector Autoregression
(TVAR) or Markov-switching Vector Autoregressive (MSVAR) model. The
VLSTAR and the TVAR models are both based on the assumption that the
variable that defines the regime-switching is observable, while the
MSVAR is mainly based on the assumption that regime-switches are defined
by a latent Markov process. When the practitioner has enough information
on the factors that drive the dynamics of the dependent variables, using
VLSTAR or TVAR models may reduce the uncertainty related to the regimes
and may produce more accurate predictions than an MSVAR model (see Hubrich and Teräsvirta 2013). In other words, while the VLSTAR is a model with a continuum of states in which the change between regimes is smooth, the TVAR is mostly conceived to analyse the dynamics of variables that switch abruptly between regimes. The VLSTAR model can be seen as a general version of the TVAR that also allows the regimes to change smoothly. Indeed, when the slope parameter of the logistic function tends to infinity, the transition function approaches an indicator function and the VLSTAR model reduces to a TVAR.
The starvars package further differs from the tsDyn and MSBVAR (Brandt 2016) packages, which permit the estimation of the TVAR and MSVAR models, since it allows the use of exogenous variables in the estimation set. This is a useful tool, since practitioners may control for potential explanatory variables other than lags of the dependent variables to obtain parameter estimates and predictions of the dependent variables.
To illustrate how the R package starvars works in practical situations, we present an empirical application with multivariate time series of stock prices. Starting from the prices in techprices, we model the monthly realized covariances, assuming that their dynamics can be captured by a flexible specification like the VLSTAR model, which nests the linear VAR. First, we construct the realized covariances and their Cholesky factors.
The techprices
dataset used in this example includes the closing
prices from January 1st 2005 to June 16th 2020, for a total of 3,890
observations per series. The dataset can be loaded in the workspace
using
> data("techprices", package = "starvars")
where techprices is an xts object containing the daily prices. As a first step, we calculate the realized covariances of the stock returns and their Cholesky factors. Since we have daily rather than intraday prices, we can only build monthly, quarterly, or yearly realized covariances. To keep the sample of realized covariances reasonably large, we
calculate monthly realized covariances and their Cholesky factors
through the code (further discussed in Appendix B):
> RCOV <- rcov(techprices, freq = "monthly", make.ret = TRUE, cholesky = TRUE)
from which we obtain a list of two elements in the object RCOV
. We are
just interested in the Cholesky factors of the stock returns, thus we
save the desired data.frame
in the object techchol
with a class
"xts"
.
> techchol <- RCOV$'Cholesky Factors'
which has dimension
The modelling strategy for a VLSTAR model starts with a test for nonlinearity in the time series. As explained above, this can be done via the VLSTARjoint function. Since no information about which variable should be used as a transition variable is available, we let the linearity test choose among a set of potential variables, here equal to the first lag of the dependent variables. The LM statistic, the related p-value, the critical value for a given alpha (set equal to 0.05 by default), and the chosen transition variable can be obtained simply by running
> st <- lag(techchol,1)[-1]
> VLSTARjoint(techchol[-1,], st = st, st.choice = TRUE)
Joint linearity test (Third-order Taylor expansion)
Transition variable chosen: y5
LM = 158.7 ; p-value = 2.0595e-21
Critical value for alpha = 40.646
The linearity test indicates the presence of nonlinearity in the data,
and that the rejection of the null hypothesis is stronger when the lag
of the fifth Cholesky factor, y5
, is chosen as the transition
variable. At this point, the practitioner should assess the presence of
common breaks among the time series through the test presented in
Appendix A. The test, for a maximum number of breaks equal to 3, is
computed as follows.
> multiCUMSUM(techchol[-1], max.breaks = 3)
============================================================
Break detection in the covariance structure:
Lambda Omega Break Date 1 Break Date 2 Break Date 3
Break 1 11.10 3.93 2009-04-03
Break 2 21.53 9.64 2009-04-03 2007-12-03
Break 3 12.09 6.03 2009-04-03 2007-12-03 2015-07-03
============================================================
Critical values are 2.69 for Lambda and 1.74 for Omega.
2 Break(s) identified with Lambda
2 Break(s) identified with Omega
This function returns significant test statistics for all the breaks, for both Lambda and Omega.
Given that a nonlinear model appears necessary and that at least a single break is present in the multivariate time series, a VLSTAR model can be estimated. Before estimating the parameters, we implement the searching grid algorithm to find starting values of gamma and c. By setting singlecgamma = FALSE, we are supposing that each equation has its own pair of parameters. Once the code is executed, a progress bar is shown to inform the user about the completion of the searching grid algorithm.
> starting <- startingVLSTAR(techchol[-1,], p = 1, m = 2, st = st[,5],
+ n.combi = 20, singlecgamma = FALSE, ncores = 4)
We employ an NLS estimation, with the lag of the fifth Cholesky factor as the transition variable and the initial values of gamma and c taken from the starting object. We then show the code used to specify the VLSTAR model as well as the summary output, and the graphic for the equation of the first Cholesky factor, y1.
> fit.VLSTAR <- VLSTAR(techchol[-1,], p = 1, m = 2, st = st[,5],
+ method = 'NLS', starting = starting, n.iter = 30, ncores = 4)
> summary(fit.VLSTAR)
> plot(fit.VLSTAR, names = "y1")
Model VLSTAR with 2 regimes
Full sample size: 184
Number of estimated parameters: 108 Multivariate log-likelihood: 2272.663
==================================================
Equation y1
Coefficients regime 1
const y1 y2 y3 y4 y5 y6
8.108*** 0.038 0.135 0.123 0.142 -1.379*** 0.330
Coefficients regime 2
const y1 y2 y3 y4 y5 y6
10.613*** 0.411*** -0.067 0.593*** -1.884*** 0.669** 1.762***
Gamma: 3.0809 c: 3.1603
AIC: 769.78 BIC: 814.79 LL: -370.89
Equation y2
Coefficients regime 1
const y1 y2 y3 y4 y5 y6
0.511 -0.019 0.106 0.250** 0.126 -0.005 0.261
Coefficients regime 2
const y1 y2 y3 y4 y5 y6
6.919*** 0.760*** -0.136 0.177* -0.644*** -1.688*** 0.613***
Gamma: 866.3921 c: 3.5162
AIC: 545.65 BIC: 590.66 LL: -258.83
Equation y3
Coefficients regime 1
const y1 y2 y3 y4 y5 y6
1.015* -0.033 0.053 0.389*** 0.003 0.022 0.295
Coefficients regime 2
const y1 y2 y3 y4 y5 y6
-3.503*** 1.419*** -0.123 0.218* -0.580*** -0.895*** -0.425*
Gamma: 110.8034 c: 3.595
AIC: 571.67 BIC: 616.67 LL: -271.83
Equation y4
Coefficients regime 1
const y1 y2 y3 y4 y5 y6
4.270*** -0.034 -0.046 0.058 0.340** -1.114*** 0.096
Coefficients regime 2
const y1 y2 y3 y4 y5 y6
11.561*** 0.127** 0.166. 0.287*** -0.939*** -0.497*** 1.117***
Gamma: 1.1841 c: 3.4705
AIC: 496.2 BIC: 541.21 LL: -234.1
Equation y5
Coefficients regime 1
const y1 y2 y3 y4 y5 y6
0.367 -0.009 0.061 0.096. -0.012 0.200** 0.158
Coefficients regime 2
const y1 y2 y3 y4 y5 y6
7.756*** -0.695*** -0.337*** 0.290*** -0.418*** 0.639*** 1.269***
Gamma: 100 c: 4.1137
AIC: 351.31 BIC: 396.32 LL: -161.66
Equation y6
Coefficients regime 1
const y1 y2 y3 y4 y5 y6
2.693*** -0.005 0.005 0.048 0.120. -0.234*** 0.171.
Coefficients regime 2
const y1 y2 y3 y4 y5 y6
3.648*** 0.383*** -0.138* 0.199*** -0.992*** 0.178** 0.909***
Gamma: 69.405 c: 3.5824
AIC: 324.3 BIC: 369.31 LL: -148.15
==================================================
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
After the execution of the code, a counter with the iteration number of the estimation algorithm is shown until convergence or until the maximum number of iterations is reached. Using a laptop with an Intel Core i5-7200U 2.5GHz processor and 16 GB of RAM, the searching grid algorithm takes around 40 seconds to find the optimal starting values.
The results of the plot function applied to the equation of y1 in the VLSTAR object are shown in Figure 2. It may be noticed from the last panel of the figure, reporting the logistic function, that the assumption of smooth regime switching is realistic.
Figure 2: Output of the plot method for the equation of y1. The first panel shows the observed time series (in black) versus the fitted time series (in dashed blue). The second panel shows the residuals and highlights zero with a red horizontal line. The left side of the third panel reports the autocorrelation function of the residuals, while the right side reports the partial autocorrelation function of the residuals. The fourth panel displays the logistic function that regulates the regime switches.
The residual time series of
Time series models are usually implemented to make out-of-sample
predictions. In our package, this is possible through the predict
method that, applied to objects of class "VLSTAR"
, returns an object
with a class "vlstarpred"
. When using the predict
function, the
argument method = 'bootstrap' specifies that the aforementioned "bootstrap" method is used to make predictions, while the argument n.ahead = 2 denotes that two-step-ahead predictions are obtained. The outcome of the plot method for the out-of-sample forecasts of the first Cholesky factor is exhibited in Figure 3. The predictions of the Cholesky factors can be used to obtain a positive semidefinite predicted covariance matrix by simply reversing the Cholesky decomposition.
> pred.bootstrap <- predict(fit.VLSTAR, n.ahead = 2, st.num = 5, method = 'bootstrap')
> pred.bootstrap
$y1
fcst lower 95% upper 95%
Step 1 8.370493 7.283483 9.457503
Step 2 20.916559 12.878648 28.649321
$y2
fcst lower 95% upper 95%
Step 1 3.131276 2.540087 3.722465
Step 2 6.188201 4.761677 7.948755
$y3
fcst lower 95% upper 95%
Step 1 3.508982 2.874487 4.143478
Step 2 6.631187 4.822495 9.018994
$y4
fcst lower 95% upper 95%
Step 1 5.188099 4.671238 5.70496
Step 2 12.483377 8.961486 15.73787
$y5
fcst lower 95% upper 95%
Step 1 1.794161 1.445520 2.142802
Step 2 3.293723 2.469695 4.301613
$y6
fcst lower 95% upper 95%
Step 1 3.381696 3.057729 3.705664
Step 2 7.258409 6.307594 8.322091
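As an illustration of the last point above, the step-one point forecasts can be mapped back into a covariance matrix. The sketch below assumes that the six Cholesky factors fill the lower triangle of the 3 x 3 factor by column (y1, ..., y6); this ordering is an assumption and should be checked against the column names returned by rcov.
chol.step1 <- c(8.370, 3.131, 3.509, 5.188, 1.794, 3.382)  # step-1 point forecasts reported above
L <- matrix(0, 3, 3)
L[lower.tri(L, diag = TRUE)] <- chol.step1                 # assumed column-wise stacking of y1, ..., y6
Sigma.pred <- L %*% t(L)                                   # positive semidefinite by construction
Sigma.pred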
> plot(pred.bootstrap, type = 'single', names = 'y1')
Figure 3: Two-step-ahead predictions of y1. The plot shows the observed in-sample time series (in dashed black), the two-step-ahead out-of-sample predictions (in dashed blue), and their 95% prediction interval (in dashed red). A vertical grey line denotes the end of the in-sample observations.
The predictions of time series
This article introduces the R package starvars for modelling, estimating, and forecasting a Vector Logistic Smooth Transition Autoregressive (VLSTAR) model. We present the model specification in a general way and illustrate the usage of the package. In particular, we perform an empirical application using financial data.
The package allows practitioners in many scientific areas to perform their applied research using VLSTAR models in a user-friendly environment. The built-in framework permits the analysis of nonlinearity in time series and the computation of multi-step-ahead predictions via different methods. Further, the practitioner may use the starvars package to obtain realized covariances at several frequencies, as well as the Cholesky decomposition of the related realized covariance matrices.
It should be remembered that the estimation of the parameters in a VLSTAR model strongly depends on the initial values of the parameters of the logistic function. We have observed that the algorithm underlying the automatic grid search may sometimes lead to unrealistic estimates of the logistic parameters and, consequently, to inconsistent estimates of the coefficients. Moreover, when using more than two regimes, the computational time may suffer from the large number of coefficients, and the maximization of the log-likelihood may end up in a local optimum. Thus, the suggestion is to use a limited number of regimes to keep the model as parsimonious as possible.
The code of the package starvars may be improved by allowing a different transition variable for each equation or by allowing the estimation of a univariate model. However, in both cases, the estimation would reduce to a univariate model for each equation, and there are already packages able to do this.
The package presented here is written using S4 classes and provides methods such as coef, plot, AIC, BIC, logLik, summary and print to analyze the results. The R package starvars is available from the Comprehensive R Archive Network (CRAN) at https://cran.r-project.org/package=starvars and on GitHub at https://github.com/andbucci/starvars.
If the linearity hypothesis is rejected, the researcher should determine
the number of regimes of the dependent variable. To this end, the
procedure introduced by Bai and Perron (1998, 2003) may be implemented. In the presence of multivariate time series, it may happen that sudden shocks,
such as market crashes, financial crises, or interventions of
policymakers, result in a structural break in the mean of the observed
time series (see Bai et al. 1998). At the same time, the interest of the
researcher may be directed to changes in the structure of the
conditional correlations (see Aue et al. 2009; Barassi et al. 2020). To detect the
presence of structural breaks in the co-movements of the
Let
Provided that
Once the null hypothesis can be rejected, the researcher should find the
location of both the breakpoint and the breakpoint fraction
Along with the specification of a VLSTAR model, the R package starvars
allows the user to calculate a non-parametric measure of volatility in
the multivariate framework, such as the realized volatility (see Andersen et al. 2001; Barndorff-Nielsen and Shephard 2002 for the theoretical fundamentals; Andersen et al. 2003). Given a
vector of stock returns,
The function rcov
in the package starvars returns the lower
triangular of
rcov(data, freq = c('daily', 'monthly', 'quarterly', 'yearly'),
make.ret = TRUE, cholesky = FALSE)
The function consists of several arguments: an object of class "xts" with the values of the stock prices or returns on which the realized covariances should be calculated; the frequency of the realized covariances, specified through freq, which can be daily, monthly, quarterly or yearly; and the boolean argument make.ret, which denotes whether the data passed as input in the argument data should be converted to returns (if TRUE, the returns are calculated). Finally, since a wide strand of the literature relies on the Cholesky factors of the realized covariance matrices, these can be obtained by setting the argument cholesky equal to TRUE. If make.ret is set equal to TRUE, the output of the function rcov contains an element of class "xts" with the returns.
When cholesky = TRUE
, the output of the rcov
function is a list
containing the xts
object from the vectorization of
the realized covariance matrices, given by