The recent exchange on Error Correction Models in Political Analysis and elsewhere dealt with several important issues in time series analysis. While there was much disagreement in the symposium, one common theme was the lack of power due to the small number of observations typical of this work. In this paper we highlight two well-known but rarely discussed problems this poses for inference with standard time series techniques. First, one consequence of low power is inflated standard errors. In a low-powered time series, the confidence interval on a lagged dependent variable may include values ≥ 1 even when the series is stationary. This is particularly problematic when calculating the confidence interval of the long run multiplier: if the confidence interval of the lagged dependent variable includes 1, the standard error of the long run multiplier will be explosive. Second, the long run multiplier is a ratio of coefficients, which makes the calculation of its uncertainty slightly more complicated. Unfortunately, the two standard approaches to calculating the uncertainty in the long run multiplier, the delta method and the Bewley transformation, are asymptotically accurate but may perform poorly in small samples. As a solution, we suggest a Bayesian approach. For autoregressive distributed lag models, the Bayesian approach formalizes the stationarity assumption through a beta prior, which restricts the coefficient on the lagged dependent variable to be strictly less than 1. With error correction models, the researcher can easily calculate the credible region of the long run multiplier from the posterior distribution of the ratio of the coefficients. As a result, we obtain theoretically informed estimates of the confidence regions for the long run multiplier.
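The long run multiplier and its delta-method uncertainty can be sketched as follows. This is a minimal illustration, not the paper's estimator: the ADL(1,0) coefficient values and covariance matrix are hypothetical, and the final sampling step uses multivariate normal draws as a crude stand-in for genuine posterior draws of the coefficient ratio.

```python
import numpy as np

# Hypothetical ADL(1,0) estimates: y_t = c + alpha*y_{t-1} + beta*x_t + e_t
# (illustrative numbers only, not taken from the paper)
alpha, beta = 0.80, 0.50
# Hypothetical estimated covariance matrix of (alpha, beta)
V = np.array([[0.010, 0.002],
              [0.002, 0.020]])

# Long run multiplier: theta = beta / (1 - alpha)
theta = beta / (1 - alpha)

# Delta-method variance: grad' V grad, where
# grad = [d theta/d alpha, d theta/d beta] = [beta/(1-alpha)^2, 1/(1-alpha)]
grad = np.array([beta / (1 - alpha) ** 2, 1 / (1 - alpha)])
se_delta = np.sqrt(grad @ V @ grad)

# Simulation-based alternative: draw (alpha, beta) jointly and take
# quantiles of the implied ratio. With real posterior draws this would
# give the credible region described in the abstract.
rng = np.random.default_rng(0)
draws = rng.multivariate_normal([alpha, beta], V, size=100_000)
ratios = draws[:, 1] / (1 - draws[:, 0])
lo, hi = np.percentile(ratios, [2.5, 97.5])
```

Note how the simulated interval for the ratio is asymmetric, whereas the delta-method interval `theta ± 1.96*se_delta` is symmetric by construction; when draws of the lagged-dependent-variable coefficient fall near 1, the ratio blows up, which is exactly the explosiveness problem the abstract describes.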
Nieman, Mark David and Peterson, David A. M., "Long Run Confidence: Estimating Confidence Intervals when using Long Run Multipliers" (2019). Political Science Publications. 60.