nep-ets New Economics Papers
on Econometric Time Series
Issue of 2018‒02‒26
seven papers chosen by
Yong Yin
SUNY at Buffalo

  1. Indexed Markov Chains for financial data: testing for the number of states of the index process By Guglielmo D'Amico; Ada Lika; Filippo Petroni
  2. Optimal forecast reconciliation for hierarchical and grouped time series through trace minimization By Shanika L. Wickramasuriya; George Athanasopoulos; Rob J. Hyndman
  3. Testing for common breaks in a multiple equations system By Tatsushi Oka; Pierre Perron
  4. Extreme canonical correlations and high-dimensional cointegration analysis By Onatski, A.; Wang, C.
  5. Mixed frequency models with MA components By Foroni, Claudia; Marcellino, Massimiliano; Stevanović, Dalibor
  6. On the cyclical properties of Hamilton's regression filter By Schüler, Yves S.
  7. An approach to increasing forecast-combination accuracy through VAR error modeling By Till Weigt; Bernd Wilfling

  1. By: Guglielmo D'Amico; Ada Lika; Filippo Petroni
    Abstract: A new branch of the financial time series literature is developing around Markov processes. In this paper, we use an Indexed Markov Chain to model high-frequency price returns of quoted firms. The distinguishing feature of this model is that, through the introduction of an Index process, market volatility is treated endogenously, so that two important stylized facts of financial time series can be captured: long memory and volatility clustering. We first propose a method, based on a change-point approach for Markov chains, for the optimal determination of the state space of the Index process. We then provide an explicit formula for the probability distribution function of the first change of state of the Index process. Results are illustrated with an application to intra-day prices of a quoted Italian firm from January 1st, 2007 to December 31st, 2010.
    Date: 2018–02
    URL: http://d.repec.org/n?u=RePEc:arx:papers:1802.01540&r=ets
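    A minimal Python sketch of the indexed-chain idea above, under illustrative assumptions: the index here is an EWMA of squared returns discretized into two volatility states (the paper instead determines the index state space optimally via a change-point method), and a separate return transition matrix is estimated per index state.

      # Sketch only: toy data, illustrative thresholds, not the authors' code.
      import numpy as np

      rng = np.random.default_rng(0)
      returns = rng.standard_t(df=5, size=5000) * 0.01   # toy return series

      # Discretize returns into 3 states by empirical terciles.
      r_edges = np.quantile(returns, [1/3, 2/3])
      r_state = np.digitize(returns, r_edges)            # values in {0, 1, 2}

      # Index process: EWMA of squared returns, cut into 2 volatility states.
      lam = 0.97
      vol = np.zeros_like(returns)
      for t in range(1, len(returns)):
          vol[t] = lam * vol[t - 1] + (1 - lam) * returns[t - 1] ** 2
      i_state = (vol > np.median(vol)).astype(int)       # 0 = low, 1 = high

      # One transition matrix of the return chain per index state.
      P = np.zeros((2, 3, 3))
      for t in range(1, len(returns)):
          P[i_state[t], r_state[t - 1], r_state[t]] += 1
      P /= np.maximum(P.sum(axis=2, keepdims=True), 1)   # row-normalize counts
      print(P)  # volatility clustering appears as state-dependent persistence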
  2. By: Shanika L. Wickramasuriya; George Athanasopoulos; Rob J. Hyndman
    Abstract: Large collections of time series often have aggregation constraints due to product or geographical groupings. The forecasts for the most disaggregated series are usually required to add up exactly to the forecasts of the aggregated series, a constraint we refer to as "coherence". Forecast reconciliation is the process of adjusting forecasts to make them coherent. The reconciliation algorithm proposed by Hyndman et al. (2011) is based on a generalized least squares estimator that requires an estimate of the covariance matrix of the coherency errors (i.e., the errors that arise due to incoherence). We show that this matrix is impossible to estimate in practice due to identifiability conditions. We propose a new forecast reconciliation approach that incorporates the information from a full covariance matrix of forecast errors in obtaining a set of coherent forecasts. Our approach minimizes the mean squared error of the coherent forecasts across the entire collection of time series under the assumption of unbiasedness. The minimization problem has a closed-form solution, which we make scalable by providing a computationally efficient representation. We evaluate the performance of the proposed method against alternative methods using a series of simulation designs that take into account various features of the collected time series. This is followed by an empirical application using Australian domestic tourism data. The results indicate that the proposed method works well with both artificial and real data.
    Keywords: aggregation, Australian tourism, coherent forecasts, contemporaneous error correlation, forecast combinations, spatial correlations.
    Date: 2017
    URL: http://d.repec.org/n?u=RePEc:msh:ebswps:2017-22&r=ets
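    The closed-form solution mentioned in the abstract can be written, in the notation commonly used for this method, as y-tilde = S (S' W^-1 S)^-1 S' W^-1 y-hat, where S is the summing matrix of the hierarchy and W the covariance matrix of the base-forecast errors. A minimal numpy sketch on a toy hierarchy Total = A + B, with made-up numbers:

      # Sketch of trace-minimization (MinT) reconciliation on a toy hierarchy.
      import numpy as np

      S = np.array([[1.0, 1.0],    # Total = A + B
                    [1.0, 0.0],    # A
                    [0.0, 1.0]])   # B
      y_hat = np.array([105.0, 60.0, 40.0])  # incoherent base forecasts
      W = np.diag([4.0, 1.0, 1.0])           # stand-in error covariance

      Winv = np.linalg.inv(W)
      G = np.linalg.solve(S.T @ Winv @ S, S.T @ Winv)  # (S'W^-1 S)^-1 S'W^-1
      y_tilde = S @ G @ y_hat                          # coherent forecasts
      print(y_tilde, y_tilde[0] - y_tilde[1] - y_tilde[2])  # second term ~ 0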
  3. By: Tatsushi Oka; Pierre Perron
    Abstract: This paper addresses the problem of testing for common breaks across or within equations of a multivariate system. Our framework is very general and allows integrated regressors and trends as well as stationary regressors. The null hypothesis is that breaks in different parameters occur at common locations and are separated by some positive fraction of the sample size unless they occur across different equations. Under the alternative hypothesis, the break dates across parameters are not the same and also need not be separated by a positive fraction of the sample size, whether within or across equations. The test considered is the quasi-likelihood ratio test assuming normal errors, though, as usual, the limit distribution of the test remains valid with non-normal errors. Of independent interest, we provide results on the rate of convergence of the estimates when searching over all possible partitions, subject only to the requirement that each regime contains at least as many observations as some positive fraction of the sample size, while allowing break dates not separated by a positive fraction of the sample size across equations. Simulations show that the test has good finite-sample properties. To illustrate its usefulness, we also provide an application to issues related to level shifts and persistence for various measures of inflation.
    Keywords: change-point, segmented regressions, break dates, hypothesis testing, multiple equations systems.
    JEL: C32
    Date: 2018
    URL: http://d.repec.org/n?u=RePEc:msh:ebswps:2018-3&r=ets
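    A stylized sketch of the common-break idea (a grid search on a two-equation mean-shift system, not the paper's quasi-likelihood ratio statistic itself): compare the system fit under a common break date with the fit under equation-specific dates, with trimming that mirrors the requirement that each regime contain a positive fraction of the sample.

      # Toy example: both series break at t = 120; all data are simulated.
      import numpy as np

      rng = np.random.default_rng(1)
      T, brk = 200, 120
      y1 = np.where(np.arange(T) < brk, 0.0, 1.0) + rng.normal(0, 1, T)
      y2 = np.where(np.arange(T) < brk, 0.0, 0.8) + rng.normal(0, 1, T)

      def ssr_mean_shift(y, k):
          # SSR of a mean-shift model with a break after observation k
          return ((y[:k] - y[:k].mean()) ** 2).sum() + \
                 ((y[k:] - y[k:].mean()) ** 2).sum()

      trim = int(0.15 * T)              # each regime keeps >= 15% of sample
      grid = range(trim, T - trim)
      common = min(grid, key=lambda k: ssr_mean_shift(y1, k) + ssr_mean_shift(y2, k))
      sep1 = min(grid, key=lambda k: ssr_mean_shift(y1, k))
      sep2 = min(grid, key=lambda k: ssr_mean_shift(y2, k))
      # An LR-type common-break test compares the restricted (common-date)
      # fit with the unrestricted (separate-dates) fit.
      print(common, (sep1, sep2))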
  4. By: Onatski, A.; Wang, C.
    Abstract: The simplest version of Johansen's (1988) trace test for cointegration is based on the squared sample canonical correlations between a random walk and its own innovations. Onatski and Wang (2017) show that the empirical distribution of such squared canonical correlations weakly converges to the Wachter distribution as the sample size and the dimensionality of the random walk go to infinity proportionally. In this paper we prove that, in addition, the extreme squared correlations almost surely converge to the upper and lower boundaries of the support of the Wachter distribution. This result yields strong laws of large numbers for the averages of functions of the squared canonical correlations that may be discontinuous or unbounded outside the support of the Wachter distribution. In particular, we establish the a.s. limit of the scaled Johansen's trace statistic, which has a logarithmic singularity at unity. We use this limit to derive a previously unknown analytic expression for the Bartlett-type correction coefficient for Johansen's test in a high-dimensional environment.
    Keywords: High-dimensional random walk, cointegration, extreme canonical correlations, Wachter distribution, trace statistic.
    Date: 2018–01–25
    URL: http://d.repec.org/n?u=RePEc:cam:camdae:1805&r=ets
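    A small simulation in the spirit of the setup above: for a p-dimensional random walk with p and T large and proportional, the squared sample canonical correlations between the lagged levels and the innovations can be computed directly, and their extremes approach the edges of the Wachter support. The dimensions below are illustrative.

      import numpy as np

      rng = np.random.default_rng(2)
      p, T = 100, 500                       # p/T fixed as both grow
      eps = rng.normal(size=(T, p))
      X = np.cumsum(eps, axis=0)            # p-dimensional random walk
      dX, Xlag = np.diff(X, axis=0), X[:-1]

      # Squared canonical correlations = eigenvalues of
      # (Xlag'Xlag)^-1 (Xlag'dX) (dX'dX)^-1 (dX'Xlag).
      Sxx, Sdd, Sxd = Xlag.T @ Xlag, dX.T @ dX, Xlag.T @ dX
      M = np.linalg.solve(Sxx, Sxd) @ np.linalg.solve(Sdd, Sxd.T)
      lam = np.sort(np.linalg.eigvals(M).real)
      print(lam.min(), lam.max())  # extremes near the Wachter support edges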
  5. By: Foroni, Claudia; Marcellino, Massimiliano; Stevanović, Dalibor
    Abstract: Temporal aggregation in general introduces a moving average (MA) component in the aggregated model. A similar feature emerges when not all but only a few variables are aggregated, which generates a mixed frequency model. The MA component is generally neglected, likely to preserve the possibility of OLS estimation, but the consequences have never been properly studied in the mixed frequency context. In this paper, we show analytically, in Monte Carlo simulations, and in a forecasting application on U.S. macroeconomic variables, the relevance of considering the MA component in mixed-frequency MIDAS and Unrestricted-MIDAS models (MIDAS-ARMA and UMIDAS-ARMA). Specifically, the simulation results indicate that the short-term forecasting performance of MIDAS-ARMA and UMIDAS-ARMA is better than that of, respectively, MIDAS and UMIDAS. The empirical applications on nowcasting U.S. GDP growth, investment growth and GDP deflator inflation confirm this ranking. Moreover, in both the simulation and the empirical results, MIDAS-ARMA is better than UMIDAS-ARMA.
    Keywords: temporal aggregation, MIDAS models, ARMA models
    JEL: E37 C53
    Date: 2018
    URL: http://d.repec.org/n?u=RePEc:zbw:bubdps:022018&r=ets
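    A minimal sketch of the U-MIDAS-ARMA idea: a quarterly variable regressed on the three monthly observations within the quarter, estimated as a regression with an MA(1) disturbance via statsmodels. Data and coefficients are simulated for illustration and are not the paper's specification.

      import numpy as np
      from statsmodels.tsa.arima.model import ARIMA

      rng = np.random.default_rng(3)
      n_q = 200
      X = rng.normal(size=(n_q, 3))     # 3 monthly observations per quarter
      e = rng.normal(size=n_q + 1)
      y = X @ np.array([0.5, 0.3, 0.2]) + e[1:] + 0.6 * e[:-1]  # MA(1) error

      # U-MIDAS-ARMA: unrestricted monthly coefficients + MA(1) disturbance;
      # plain U-MIDAS by OLS would ignore the MA component.
      res = ARIMA(y, exog=X, order=(0, 0, 1)).fit()
      print(res.params)                 # exog betas plus the MA(1) coefficient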
  6. By: Schüler, Yves S.
    Abstract: Hamilton (2017) criticises the Hodrick and Prescott (1981, 1997) filter (HP filter) for three drawbacks (i. spurious cycles, ii. end-of-sample bias, iii. ad hoc assumptions regarding the smoothing parameter) and proposes a regression filter as an alternative. I demonstrate that Hamilton's regression filter shares some of these drawbacks. For instance, Hamilton's ad hoc formulation of a 2-year regression filter implies a cancellation of two-year cycles and an amplification of cycles longer than typical business cycles. This is at odds with stylised business cycle facts, such as the one-year duration of a typical recession, and leads to inconsistencies, for example, with the NBER business cycle chronology. Nonetheless, I show that Hamilton's regression filter should be preferred to the HP filter for constructing a credit-to-GDP gap, because it extracts the various medium-term frequencies more equally. Due to this property, a regression-filtered credit-to-GDP ratio indicates that imbalances prior to the global financial crisis started earlier than shown by the Basel III credit-to-GDP gap.
    Keywords: detrending, spurious cycles, business cycles, financial cycles, Basel III
    JEL: C10 E32 E58 G01
    Date: 2018
    URL: http://d.repec.org/n?u=RePEc:zbw:bubdps:032018&r=ets
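    For reference, Hamilton's regression filter for quarterly data regresses y_{t+h} on a constant and the four most recent values y_t, ..., y_{t-3}, with h = 8 (the two-year horizon the abstract refers to); the residual is the cyclical component. A minimal numpy sketch on simulated data:

      import numpy as np

      def hamilton_filter(y, h=8, p=4):
          # Regress y_{t+h} on [1, y_t, y_{t-1}, ..., y_{t-p+1}];
          # the OLS residual is the cycle at t + h.
          y = np.asarray(y, dtype=float)
          T = len(y)
          X = np.column_stack(
              [np.ones(T - h - p + 1)] +
              [y[p - 1 - j : T - h - j] for j in range(p)])
          target = y[p - 1 + h :]
          beta, *_ = np.linalg.lstsq(X, target, rcond=None)
          return target - X @ beta      # trend is target minus cycle

      rng = np.random.default_rng(4)
      y = np.cumsum(rng.normal(size=300))   # toy random-walk "log series"
      print(hamilton_filter(y)[:5])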
  7. By: Till Weigt; Bernd Wilfling
    Abstract: We consider a situation in which the forecaster has available M individual forecasts of a univariate target variable. We propose a 3-step procedure designed to exploit the interrelationships among the M forecast-error series (estimated from a large time-varying parameter VAR model of the errors, using past observations) with the aim of obtaining more accurate predictions of future forecast errors. The refined forecast-error predictions are then used to obtain M new individual forecasts that are adapted to the information from the estimated VAR. The M adapted individual forecasts are ultimately combined, and any potential accuracy gains of the adapted combination forecasts are analyzed. We evaluate our approach in an out-of-sample forecasting analysis, using a well-established 7-country data set on output growth. Our 3-step procedure yields substantial accuracy gains for the simple average and three time-varying-parameter combination forecasts, with loss reductions ranging from 6.2% to 18%.
    Keywords: Forecast combinations, large time-varying parameter VARs, Bayesian VAR estimation, state-space model, forgetting factors, dynamic model averaging.
    JEL: C53 C32 C11
    Date: 2018–02
    URL: http://d.repec.org/n?u=RePEc:cqe:wpaper:6818&r=ets
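    A stylized sketch of the 3-step procedure, with a fixed-coefficient VAR(1) standing in for the paper's large time-varying-parameter VAR; all data are simulated, and the sign convention (error = actual minus forecast) is an assumption of this sketch.

      import numpy as np
      from statsmodels.tsa.api import VAR

      rng = np.random.default_rng(5)
      T, M = 150, 3
      A = np.array([[0.5, 0.2, 0.0],
                    [0.1, 0.4, 0.1],
                    [0.0, 0.2, 0.5]])
      errors = np.zeros((T, M))        # M interrelated forecast-error series
      for t in range(1, T):
          errors[t] = errors[t - 1] @ A.T + rng.normal(scale=0.5, size=M)

      fcasts = np.array([2.1, 1.8, 2.4])   # current individual forecasts (toy)

      res = VAR(errors).fit(maxlags=1)                      # step 1: VAR on errors
      e_hat = res.forecast(errors[-res.k_ar:], steps=1)[0]  # step 2: predicted errors
      adapted = fcasts + e_hat         # step 3: error-adjusted forecasts
      print(adapted.mean())            # simple-average combination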

This nep-ets issue is ©2018 by Yong Yin. It is provided as is without any express or implied warranty. It may be freely redistributed in whole or in part for any purpose. If distributed in part, please include this notice.
General information on the NEP project can be found at http://nep.repec.org. For comments please write to the director of NEP, Marco Novarese at <director@nep.repec.org>. Put “NEP” in the subject, otherwise your mail may be rejected.
NEP’s infrastructure is sponsored by the School of Economics and Finance of Massey University in New Zealand.