nep-ets New Economics Papers
on Econometric Time Series
Issue of 2020‒03‒09
twelve papers chosen by
Jaqueson K. Galimberti
Auckland University of Technology

  1. Identifiability and Estimation of Possibly Non-Invertible SVARMA Models: A New Parametrisation By Bernd Funovits
  2. The Cointegrated VAR without Unit Roots: Representation Theory and Asymptotics By James A. Duffy; Jerome R. Simons
  3. An Exploration of Trend-Cycle Decomposition Methodologies in Simulated Data By Robert J. Hodrick
  4. Combining Shrinkage and Sparsity in Conjugate Vector Autoregressive Models By Niko Hauzenberger; Florian Huber; Luca Onorante
  5. Seasonal and Trend Forecasting of Tourist Arrivals: An Adaptive Multiscale Ensemble Learning Approach By Shaolong Sun; Dan Bi; Ju-e Guo; Shouyang Wang
  6. Forecasting Realized Volatility Matrix With Copula-Based Models By Wenjing Wang; Minjing Tao
  7. A Discriminative Approach to Bayesian Filtering with Applications to Human Neural Decoding By Burkhart, Michael C.
  8. Econometric issues with Laubach and Williams' estimates of the natural rate of interest By Daniel Buncic
  9. Generalized Poisson Difference Autoregressive Processes By Giulia Carallo; Roberto Casarin; Christian P. Robert
  10. Markov Switching By Yong Song; Tomasz Woźniak
  11. A study on the leverage effect on financial series using a TAR model: a Bayesian approach By Oscar Espinosa; Fabio Nieto
  12. Do zero and sign restricted SVARs identify unconventional monetary policy shocks in the euro area? By Adam Elbourne; Kan Ji

  1. By: Bernd Funovits
    Abstract: This paper deals with parametrisation, identifiability, and maximum likelihood (ML) estimation of possibly non-invertible structural vector autoregressive moving average (SVARMA) models driven by independent and non-Gaussian shocks. We introduce a new parametrisation of the MA polynomial matrix based on the Wiener-Hopf factorisation (WHF) and show that the model is identified in this parametrisation for a generic set in the parameter space (when certain just-identifying restrictions are imposed). When the SVARMA model is driven by Gaussian errors, neither the static shock transmission matrix nor the location of the determinantal zeros of the MA polynomial matrix can be identified without imposing further identifying restrictions on the parameters. We characterise the classes of observational equivalence with respect to second moment information at different stages of the modelling process. Subsequently, cross-sectional and temporal independence and non-Gaussianity of the shocks are used to solve these identifiability problems and to identify the true root location of the MA polynomial matrix as well as the static shock transmission matrix (up to permutation and scaling). Typically imposed identifying restrictions on the shock transmission matrix as well as on the determinantal root location are made testable. Furthermore, we provide low-level conditions for asymptotic normality of the ML estimator. The estimation procedure is illustrated with various examples from the economic literature and implemented as an R package.
    Date: 2020–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2002.04346&r=all
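    Since the analysis turns on whether the determinantal roots of the MA polynomial matrix lie inside or outside the unit circle, a small illustration may help. The sketch below (in Python rather than the authors' R implementation) classifies the roots of a bivariate MA(1) polynomial; the matrix Theta1 is purely hypothetical.

```python
# Hypothetical sketch: classify the determinantal roots of the MA polynomial
# Theta(z) = I + Theta1 * z of a VARMA(1,1). Roots outside the unit circle
# correspond to an invertible MA part; roots inside signal non-invertibility,
# the case the paper's WHF parametrisation is designed to handle.
import numpy as np

def ma_determinantal_roots(Theta1):
    """Roots z of det(I + Theta1 * z) = 0 for an MA(1) polynomial matrix."""
    eigvals = np.linalg.eigvals(Theta1)
    # det(I + Theta1 z) = prod_i (1 + lambda_i z), so the roots are z = -1/lambda_i
    return np.array([-1.0 / lam for lam in eigvals if abs(lam) > 1e-12])

# Example: one eigenvalue larger than one in modulus -> a root inside the unit circle
Theta1 = np.array([[1.4, 0.3],
                   [0.0, 0.5]])
for z in ma_determinantal_roots(Theta1):
    label = "invertible direction" if abs(z) > 1 else "non-invertible direction"
    print(f"root {z:.3f}, |z| = {abs(z):.3f} -> {label}")
```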
  2. By: James A. Duffy; Jerome R. Simons
    Abstract: It has been known since Elliott (1998) that efficient methods of inference on cointegrating relationships break down when autoregressive roots are near but not exactly equal to unity. This paper addresses this problem within the framework of a VAR with non-unit roots. We develop a characterisation of cointegration, based on the impulse response function implied by the VAR, that remains meaningful even when roots are not exactly unity. Under this characterisation, the long-run equilibrium relationships between the series are identified with a subspace associated to the largest characteristic roots of the VAR. We analyse the asymptotics of maximum likelihood estimators of this subspace, thereby generalising Johansen's (1995) treatment of the cointegrated VAR with exactly unit roots. Inference is complicated by nuisance parameter problems similar to those encountered in the context of predictive regressions, and can be dealt with by approaches familiar from that setting.
    Date: 2020–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2002.08092&r=all
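    As a rough illustration of the idea in the entry above (and not the paper's ML procedure), the sketch below estimates a VAR(1) by OLS on simulated near-unit-root data and separates a persistent component, whose characteristic root is close to but not exactly one, from a clearly stationary one via the eigen-decomposition of the estimated autoregressive matrix.

```python
# Illustrative sketch: OLS estimation of a VAR(1) with one root near unity,
# followed by an eigen-decomposition that isolates the persistent direction.
import numpy as np

rng = np.random.default_rng(0)
T = 500
A_true = np.array([[0.98, 0.00],   # root near (but not at) unity
                   [0.30, 0.50]])  # clearly stationary root
y = np.zeros((T, 2))
for t in range(1, T):
    y[t] = A_true @ y[t - 1] + rng.standard_normal(2)

Y, X = y[1:], y[:-1]
A_hat = np.linalg.lstsq(X, Y, rcond=None)[0].T   # y_t ~ A_hat y_{t-1}

eigvals, V = np.linalg.eig(A_hat)
order = np.argsort(-np.abs(eigvals))
print("estimated characteristic roots:", np.round(np.abs(eigvals[order]), 3))

# Rotating into eigen-coordinates, w_t = V^{-1} y_t, gives one component per
# root: the component attached to the largest root is the persistent,
# "long-run" part; the remaining combinations behave like stationary series.
w = y @ np.linalg.inv(V).T
print("sample std of persistent vs. stationary component:",
      np.round(w[:, order].std(axis=0).real, 2))
```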
  3. By: Robert J. Hodrick
    Abstract: This paper uses simulations to explore the properties of the HP filter of Hodrick and Prescott (1997), the BK filter of Baxter and King (1999), and the H filter of Hamilton (2018), which are designed to decompose a univariate time series into trend and cyclical components. Each simulated time series approximates the natural logarithm of U.S. real GDP; the data generating processes are a random walk, an ARIMA model, two unobserved components models, and models with slowly changing nonstationary stochastic trends and definitive cyclical components. For the basic time series, the H filter dominates the HP and BK filters in more closely characterizing the underlying framework, but for the more complex models, the reverse is true.
    JEL: E32
    Date: 2020–02
    URL: http://d.repec.org/n?u=RePEc:nbr:nberwo:26750&r=all
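    Two of the three decompositions compared above are straightforward to reproduce. The sketch below applies the HP filter (via statsmodels) and Hamilton's (2018) regression filter to a simulated drifting random walk standing in for log real GDP; the simulation settings are illustrative, not the paper's.

```python
# A minimal sketch of the HP filter and Hamilton's regression filter applied
# to a simulated random walk with drift.
import numpy as np
from statsmodels.tsa.filters.hp_filter import hpfilter

rng = np.random.default_rng(42)
T = 300
y = np.cumsum(0.005 + 0.01 * rng.standard_normal(T))   # drifting random walk

# HP filter with the conventional quarterly smoothing parameter
hp_cycle, hp_trend = hpfilter(y, lamb=1600)

# Hamilton filter: regress y_{t+h} on a constant and (y_t, ..., y_{t-3}),
# h = 8 quarters; the regression residual is the cyclical component.
h, p = 8, 4
X = np.column_stack([np.ones(T - p + 1 - h)] +
                    [y[p - 1 - j : T - h - j] for j in range(p)])
target = y[p - 1 + h:]
beta, *_ = np.linalg.lstsq(X, target, rcond=None)
hamilton_cycle = target - X @ beta

print("HP cycle std:      ", hp_cycle.std().round(4))
print("Hamilton cycle std:", hamilton_cycle.std().round(4))
```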
  4. By: Niko Hauzenberger; Florian Huber; Luca Onorante
    Abstract: Conjugate priors allow for fast inference in large dimensional vector autoregressive (VAR) models but, at the same time, introduce the restriction that each equation features the same set of explanatory variables. This paper proposes a straightforward means of post-processing posterior estimates of a conjugate Bayesian VAR to effectively perform equation-specific covariate selection. Compared to existing techniques using shrinkage alone, our approach combines shrinkage and sparsity in both the VAR coefficients and the error variance-covariance matrices, greatly reducing estimation uncertainty in large dimensions while maintaining computational tractability. We illustrate our approach by means of two applications. The first application uses synthetic data to investigate the properties of the model across different data-generating processes, the second application analyzes the predictive gains from sparsification in a forecasting exercise for US data.
    Date: 2020–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2002.08760&r=all
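    The key idea above is that sparsification happens after estimation, draw by draw. The sketch below illustrates one generic way of doing this, an adaptive soft-thresholding (SAVS-style) rule applied to posterior draws; it is not necessarily the paper's exact loss function, and the toy data and tuning constant are assumptions.

```python
# Generic illustration of post-processing posterior draws of regression/VAR
# coefficients: shrinkage comes from the (conjugate) prior, exact zeros from
# the thresholding step applied to each draw.
import numpy as np

def sparsify_draw(beta_draw, X, kappa=1.0):
    """Adaptive soft-thresholding of one posterior draw of a coefficient vector.

    beta_draw : (k,) posterior draw
    X         : (T, k) regressor matrix (column scales set the penalty)
    kappa     : overall penalty strength (illustrative tuning constant)
    """
    col_norm2 = (X ** 2).sum(axis=0)
    penalty = kappa / np.maximum(np.abs(beta_draw), 1e-8) ** 2  # adaptive weight
    shrunk = np.abs(beta_draw) * col_norm2 - penalty
    return np.sign(beta_draw) * np.maximum(shrunk, 0.0) / col_norm2

# Toy example: two strong signals, the rest noise around zero
rng = np.random.default_rng(1)
T, k, n_draws = 200, 10, 1000
X = rng.standard_normal((T, k))
true_beta = np.r_[1.5, -1.0, np.zeros(k - 2)]
draws = true_beta + 0.05 * rng.standard_normal((n_draws, k))

sparse_draws = np.array([sparsify_draw(b, X) for b in draws])
print("share of exact zeros per coefficient:",
      np.round((sparse_draws == 0).mean(axis=0), 2))
```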
  5. By: Shaolong Sun; Dan Bi; Ju-e Guo; Shouyang Wang
    Abstract: The accurate seasonal and trend forecasting of tourist arrivals is a very challenging task, and despite its importance, limited research has previously paid attention to it. In this study, a new adaptive multiscale ensemble (AME) learning approach incorporating variational mode decomposition (VMD) and least square support vector regression (LSSVR) is developed for short-, medium-, and long-term seasonal and trend forecasting of tourist arrivals. In the formulation of the AME learning approach, the original tourist arrival series is first decomposed into trend, seasonal, and remainder volatility components. Then, an ARIMA model is used to forecast the trend component, a SARIMA model is used to forecast the seasonal component with a 12-month cycle, and LSSVR is used to forecast the remainder volatility component. Finally, the forecasts of the three components are aggregated into an ensemble forecast of tourist arrivals using an LSSVR-based nonlinear ensemble approach. Furthermore, a direct strategy is used to implement multi-step-ahead forecasting. Based on two accuracy measures and the Diebold-Mariano test, the empirical results demonstrate that the proposed AME learning approach achieves higher level and directional forecasting accuracy than the other benchmarks considered in this study, indicating that it is a promising model for forecasting tourist arrivals with high seasonality and volatility.
    Date: 2020–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2002.08021&r=all
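    The decompose-forecast-aggregate pipeline described above can be sketched compactly with off-the-shelf stand-ins: STL in place of the paper's VMD step, sklearn's SVR in place of LSSVR, a recursive rather than direct multi-step strategy, and simulated monthly data in place of tourist arrivals. All of these substitutions are assumptions made for brevity.

```python
# Compact sketch of the decompose-forecast-aggregate idea with stand-in tools.
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import STL
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.tsa.statespace.sarimax import SARIMAX
from sklearn.svm import SVR

rng = np.random.default_rng(7)
n = 10 * 12
idx = pd.date_range("2010-01-01", periods=n, freq="MS")
y = pd.Series(100 + 0.5 * np.arange(n)
              + 10 * np.sin(2 * np.pi * np.arange(n) / 12)
              + rng.standard_normal(n), index=idx)

train, h = y.iloc[:-12], 12
parts = STL(train, period=12).fit()

# Trend: ARIMA; seasonal: SARIMA with a 12-month cycle; remainder: SVR on lags.
trend_fc = ARIMA(parts.trend, order=(1, 1, 1)).fit().forecast(h)
seas_fc = SARIMAX(parts.seasonal, order=(0, 0, 0),
                  seasonal_order=(1, 0, 0, 12)).fit(disp=False).forecast(h)

resid = parts.resid.values
lags = 3
Z = np.column_stack([resid[i:len(resid) - lags + i] for i in range(lags)])
svr = SVR().fit(Z, resid[lags:])
rem_fc = np.zeros(h)
last = resid[-lags:].copy()
for i in range(h):                       # recursive multi-step, for brevity
    rem_fc[i] = svr.predict(last.reshape(1, -1))[0]
    last = np.r_[last[1:], rem_fc[i]]

ensemble = trend_fc.values + seas_fc.values + rem_fc
print("MAE on the held-out year:", np.abs(ensemble - y.iloc[-12:].values).mean().round(2))
```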
  6. By: Wenjing Wang; Minjing Tao
    Abstract: Multivariate volatility modeling and forecasting are crucial in financial economics. This paper develops a copula-based approach to model and forecast realized volatility matrices. The proposed copula-based time series models can capture the hidden dependence structure of realized volatility matrices. In addition, the approach automatically guarantees the positive definiteness of the forecasts through either the Cholesky decomposition or the matrix logarithm transformation. In this paper we consider both multivariate and bivariate copulas; the types of copulas include the Student's t, Clayton, and Gumbel copulas. In an empirical application, we find that for one-day-ahead volatility matrix forecasting, these copula-based models achieve significant gains both in statistical precision and in constructing economically mean-variance efficient portfolios. Among the copulas considered, the multivariate-t copula performs better in terms of statistical precision, while the bivariate-t copula delivers better economic performance.
    Date: 2020–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2002.08849&r=all
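    The positive-definiteness device mentioned above is easy to demonstrate in isolation. The sketch below maps simulated realized covariance matrices into matrix-log space, forms a trivial "forecast" there (a sample mean standing in for the copula-based time series model), and maps back with the matrix exponential, which is positive definite by construction. The data generator and the mean forecast are assumptions for illustration only.

```python
# Minimal sketch of the matrix-logarithm transformation guaranteeing
# positive-definite volatility forecasts. The copula dynamics are omitted.
import numpy as np
from scipy.linalg import expm, logm

rng = np.random.default_rng(3)

def random_realized_cov(d=3, n=50):
    X = rng.standard_normal((n, d))
    return X.T @ X / n

rv_series = [random_realized_cov() for _ in range(100)]
log_series = np.array([np.real(logm(S)) for S in rv_series])  # symmetric matrices

# "Forecast" in log space: a sample mean stands in for the copula-based model
# of the transformed elements.
log_forecast = log_series.mean(axis=0)
cov_forecast = expm(log_forecast)                             # automatically PD

eigs = np.linalg.eigvalsh(cov_forecast)
print("smallest eigenvalue of the forecast:", eigs.min().round(4), "(> 0 by construction)")
```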
  7. By: Burkhart, Michael C.
    Abstract: Given a stationary state-space model that relates a sequence of hidden states and corresponding measurements or observations, Bayesian filtering provides a principled statistical framework for inferring the posterior distribution of the current state given all measurements up to the present time. For example, the Apollo lunar module implemented a Kalman filter to infer its location from a sequence of earth-based radar measurements and land safely on the moon. To perform Bayesian filtering, we require a measurement model that describes the conditional distribution of each observation given the state. The Kalman filter takes this measurement model to be linear and Gaussian. Here we show how a nonlinear, Gaussian approximation to the distribution of state given observation can be used in conjunction with Bayes’ rule to build a nonlinear, non-Gaussian measurement model. The resulting approach, called the Discriminative Kalman Filter (DKF), retains fast closed-form updates for the posterior. We argue there are many cases where the distribution of state given measurement is better approximated as Gaussian, especially when the dimensionality of measurements far exceeds that of states and the Bernstein-von Mises theorem applies. Online neural decoding for brain-computer interfaces provides a motivating example, where filtering incorporates increasingly detailed measurements of neural activity to provide users control over external devices. Within the BrainGate2 clinical trial, the DKF successfully enabled three volunteers with quadriplegia to control an on-screen cursor in real time using mental imagery alone. Participant “T9” used the DKF to type out messages on a tablet PC. Nonstationarities, or changes to the statistical relationship between states and measurements that occur after model training, pose a significant challenge to effective filtering. In brain-computer interfaces, one common type of nonstationarity results from erratic behavior or dropout of a single neuron. We show how a robust measurement model can be used within the DKF framework to effectively ignore large changes in the behavior of a single neuron. At BrainGate2, a successful online human neural decoding experiment validated this approach against the commonly used Kalman filter.
    Date: 2019–05–01
    URL: http://d.repec.org/n?u=RePEc:osf:thesis:4j3fu&r=all
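    To make the filtering idea concrete, here is a schematic sketch of a discriminative update step in the spirit of the abstract: linear-Gaussian state dynamics are combined with a learned Gaussian approximation to p(state | observation). The exact recursion and regularisation in the thesis may differ; the functions f and Q and all parameter values below are illustrative assumptions, not the author's fitted models.

```python
# Schematic discriminative filtering step (sketch; not the thesis's exact code).
import numpy as np

def dkf_step(mu_prev, Sigma_prev, z, A, Gamma, S, f, Q):
    """Predict with the linear state model, then fold in the learned Gaussian
    approximation N(f(z), Q(z)) to p(state | z), discounting the stationary
    prior N(0, S)."""
    # Predict
    m = A @ mu_prev
    P = A @ Sigma_prev @ A.T + Gamma
    # Update: combine precisions
    prec = np.linalg.inv(P) + np.linalg.inv(Q(z)) - np.linalg.inv(S)
    if np.any(np.linalg.eigvalsh(prec) <= 0):        # keep the posterior proper
        prec = np.linalg.inv(P) + np.linalg.inv(Q(z))
    Sigma = np.linalg.inv(prec)
    mu = Sigma @ (np.linalg.inv(P) @ m + np.linalg.inv(Q(z)) @ f(z))
    return mu, Sigma

# Toy example with a 1-D state and a 3-D "neural" observation
rng = np.random.default_rng(0)
A, Gamma, S = np.array([[0.95]]), np.array([[0.1]]), np.array([[1.0]])
f = lambda z: np.array([z.mean()])          # stand-in for a learned regression
Q = lambda z: np.array([[0.2]])             # stand-in for a learned variance
mu, Sigma = np.zeros(1), S.copy()
for _ in range(5):
    z = rng.standard_normal(3)
    mu, Sigma = dkf_step(mu, Sigma, z, A, Gamma, S, f, Q)
print("filtered mean and variance:", mu.round(3), Sigma.round(3))
```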
  8. By: Daniel Buncic
    Abstract: Holston, Laubach and Williams' (2017) estimates of the natural rate of interest are driven by the downward trending behaviour of `other factor' $z_{t}$. I show that their implementation of Stock and Watson's (1998) Median Unbiased Estimation (MUE) to determine the size of $\lambda_{z}$ is unsound. It cannot recover the ratio of interest $\lambda _{z}=a_{r}\sigma _{z}/\sigma _{\tilde{y}}$ from MUE required for the estimation of the full structural model. This failure is due to their Stage 2 model being incorrectly specified. More importantly, the MUE procedure that they implement spuriously amplifies the estimate of $\lambda _{z}$. Using a simulation experiment, I show that their MUE procedure generates excessively large estimates of $\lambda _{z}$ when applied to data simulated from a model where the true $\lambda _{z}$ is equal to zero. Correcting their Stage 2 MUE procedure leads to a substantially smaller estimate of $\lambda _{z}$, and a more subdued downward trending influence of `other factor' $z_{t}$ on the natural rate. This correction is quantitatively important. With everything else remaining the same in the model, the natural rate of interest is estimated to be 1.5% at the end of 2019:Q2; that is, three times the 0.5% estimate obtained from Holston et al.'s (2017) original Stage 2 MUE implementation. I also discuss various other issues that arise in their model of the natural rate that make it unsuitable for policy analysis.
    Date: 2020–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2002.11583&r=all
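    For readers unfamiliar with the Stage 2 machinery, the sketch below is a self-contained toy version of Stock and Watson's (1998) median unbiased estimation on a scalar local-level model, far simpler than the Laubach-Williams system and not the paper's corrected procedure: the median function of a stability statistic is simulated on a grid of lambda values and then inverted at the observed statistic. Grid, sample size, and statistic choice are illustrative assumptions.

```python
# Toy median unbiased estimation of the signal-to-noise parameter lambda.
import numpy as np

rng = np.random.default_rng(0)
T = 200

def simulate_y(lam):
    """y_t = beta_t + e_t with beta_t a random walk of step size lambda/T."""
    beta = np.cumsum((lam / T) * rng.standard_normal(T))
    return beta + rng.standard_normal(T)

def mean_stability_stat(y):
    """Nyblom-type statistic for instability of the mean of y."""
    e = y - y.mean()
    cumsums = np.cumsum(e)
    return (cumsums ** 2).sum() / (T ** 2 * e.var())

# Median function of the statistic on a lambda grid (Monte Carlo)
grid = np.arange(0, 31, 2)
medians = np.array([np.median([mean_stability_stat(simulate_y(lam)) for _ in range(500)])
                    for lam in grid])
medians = np.maximum.accumulate(medians)   # enforce monotonicity against MC noise

# "Observed" data generated with a true lambda of zero: the MUE should be small
stat_obs = mean_stability_stat(simulate_y(0.0))
lam_hat = np.interp(stat_obs, medians, grid)
print("observed statistic:", round(stat_obs, 4), " MUE of lambda:", round(lam_hat, 2))
```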
  9. By: Giulia Carallo; Roberto Casarin; Christian P. Robert
    Abstract: This paper introduces a new stochastic process with values in the set Z of integers with sign. The increments of the process are Poisson differences and the dynamics have an autoregressive structure. We study the properties of the process and exploit the thinning representation to derive stationarity conditions and the stationary distribution of the process. We provide a Bayesian inference method and an efficient posterior approximation procedure based on Monte Carlo. Numerical illustrations on both simulated and real data show the effectiveness of the proposed inference.
    Date: 2020–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2002.04470&r=all
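    For intuition about integer-valued series with signed increments, the sketch below simulates a generic INAR-style process with Poisson-difference (Skellam) innovations and a thinning operator. It is a stand-in for illustration only; the paper's generalized Poisson difference process is defined through a specific thinning representation with its own parameterisation.

```python
# Generic signed integer-valued AR sketch with Skellam innovations.
import numpy as np

rng = np.random.default_rng(2)

def binomial_thinning(x, alpha):
    """alpha o x: thin |x| with survival probability alpha, keeping the sign."""
    return int(np.sign(x)) * rng.binomial(abs(int(x)), alpha)

def simulate_signed_inar(T=300, alpha=0.6, lam1=2.0, lam2=1.5):
    y = np.zeros(T, dtype=int)
    for t in range(1, T):
        innovation = rng.poisson(lam1) - rng.poisson(lam2)   # Skellam draw
        y[t] = binomial_thinning(y[t - 1], alpha) + innovation
    return y

y = simulate_signed_inar()
print("range of the integer-valued series:", y.min(), "to", y.max())
print("lag-1 autocorrelation:", round(np.corrcoef(y[:-1], y[1:])[0, 1], 2))
```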
  10. By: Yong Song; Tomasz Woźniak
    Abstract: Markov switching models are a popular family of models that introduce time-variation in the parameters in the form of their state- or regime-specific values. Importantly, this time-variation is governed by a discrete-valued latent stochastic process with limited memory. More specifically, the current value of the state indicator is determined only by the value of the state indicator from the previous period, thus the Markov property, and by the transition matrix. The latter characterizes the properties of the Markov process by determining with what probability each of the states can be visited next period, given the state in the current period. This setup gives rise to the two main advantages of Markov switching models: the estimation of the probability of state occurrence in each of the sample periods using filtering and smoothing methods, and the estimation of the state-specific parameters. These two features open the possibility for improved interpretations of the parameters associated with specific regimes combined with the corresponding regime probabilities, as well as for improved forecasting performance based on persistent regimes and the parameters characterizing them.
    Date: 2020–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2002.03598&r=all
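    The filtering step described above is compact enough to sketch directly. The example below simulates a two-state Markov switching model in the mean with known parameters and runs the Hamilton filter to recover the probability of each regime in every period; all parameter values are arbitrary choices for illustration.

```python
# Two-state Markov switching mean model and the Hamilton filter.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(5)
T = 400
P = np.array([[0.95, 0.05],     # transition matrix: rows = current state,
              [0.10, 0.90]])    # columns = next state
mu = np.array([0.0, 2.0])       # state-specific means
states = np.zeros(T, dtype=int)
for t in range(1, T):
    states[t] = rng.choice(2, p=P[states[t - 1]])
y = mu[states] + rng.standard_normal(T)

# Hamilton filter: predict regime probabilities with P, update with the
# regime-specific likelihoods of the current observation.
filt = np.zeros((T, 2))
prob = np.array([0.5, 0.5])
for t in range(T):
    pred = prob @ P                       # one-step-ahead regime probabilities
    lik = norm.pdf(y[t], loc=mu, scale=1.0)
    prob = pred * lik
    prob /= prob.sum()
    filt[t] = prob

accuracy = ((filt[:, 1] > 0.5) == (states == 1)).mean()
print("share of periods where the filtered regime matches the truth:",
      round(accuracy, 2))
```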
  11. By: Oscar Espinosa; Fabio Nieto
    Abstract: This research shows that, under certain mathematical conditions, a threshold autoregressive (TAR) model can represent the leverage effect based on its conditional variance function. Furthermore, the analytical expressions for the third and fourth moments of the TAR model are obtained when the process is weakly stationary.
    Date: 2020–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:2002.05319&r=all
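    The leverage-effect mechanism can be illustrated with a simple simulation: if the regime, and hence the conditional variance, is determined by the sign of the previous observation, negative values are followed by higher volatility. The parameter values below are arbitrary; the paper derives the exact conditions on the TAR specification and a Bayesian estimation approach, neither of which is attempted here.

```python
# Illustrative simulation of a two-regime threshold autoregression with
# regime-dependent error variance (a leverage-type asymmetry).
import numpy as np

rng = np.random.default_rng(11)
T = 20000
y = np.zeros(T)
for t in range(1, T):
    if y[t - 1] < 0:                       # "bad news" regime
        y[t] = 0.3 * y[t - 1] + 1.5 * rng.standard_normal()
    else:                                  # "good news" regime
        y[t] = 0.3 * y[t - 1] + 0.8 * rng.standard_normal()

after_neg = y[1:][y[:-1] < 0]
after_pos = y[1:][y[:-1] >= 0]
print("conditional variance after negative vs. positive lags:",
      round(after_neg.var(), 2), "vs", round(after_pos.var(), 2))
```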
  12. By: Adam Elbourne (CPB Netherlands Bureau for Economic Policy Analysis); Kan Ji (CPB Netherlands Bureau for Economic Policy Analysis)
    Abstract: This research re-examines the findings of the existing literature on the effects of unconventional monetary policy. It concludes that the existing estimates based on vector autoregressions in combination with zero and sign restrictions do not successfully isolate unconventional monetary policy shocks from other shocks impacting the euro area economy. We show that altering existing published studies by imposing the incorrect assumption that expansionary monetary policy shocks shrink the ECB’s balance sheet, or even by ignoring all information about the stance of monetary policy, results in the same shocks and, therefore, the same estimated responses of output and prices. As a consequence, it is implausible that the shocks previously identified in the literature are true unconventional monetary policy shocks. Since correctly isolating unconventional monetary policy shocks is a prerequisite for estimating their effects, the conclusions from previous vector autoregression models are unwarranted. We show this lack of identification for different specifications of the vector autoregression models and different sample periods.
    JEL: C32 E52
    Date: 2019–02
    URL: http://d.repec.org/n?u=RePEc:cpb:discus:391.rdf&r=all
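    For context, the zero/sign restriction machinery examined above works by rotating the Cholesky factor of the reduced-form covariance with random orthogonal matrices and keeping only rotations whose impact responses carry the required signs. The sketch below shows that algorithm on a made-up three-variable system; the covariance matrix, variable ordering, and the restriction that the "balance sheet" variable rises on impact are illustrative assumptions, not the restrictions of any particular study.

```python
# Generic sign-restriction identification sketch for a three-variable system.
import numpy as np

rng = np.random.default_rng(8)
Sigma = np.array([[1.0, 0.3, 0.2],       # stand-in reduced-form covariance:
                  [0.3, 1.0, 0.4],       # output, prices, balance sheet
                  [0.2, 0.4, 1.0]])
C = np.linalg.cholesky(Sigma)

accepted = []
for _ in range(2000):
    Q, R = np.linalg.qr(rng.standard_normal((3, 3)))
    Q = Q @ np.diag(np.sign(np.diag(R)))  # standard normalisation of the draw
    B = C @ Q                             # candidate structural impact matrix
    shock = B[:, 0]                       # first column = candidate policy shock
    if shock[2] < 0:
        shock = -shock                    # sign normalisation
    if shock[2] > 0 and shock[0] > 0:     # balance sheet and output rise on impact
        accepted.append(shock)

accepted = np.array(accepted)
print("accepted draws:", len(accepted))
print("median impact responses (output, prices, balance sheet):",
      np.round(np.median(accepted, axis=0), 2))
```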

This nep-ets issue is ©2020 by Jaqueson K. Galimberti. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.