
New Tests of Forecast Optimality Across Multiple Horizons

Andrew J. Patton (Duke University) and Allan Timmermann (University of California, San Diego), March 26, 2010

Preliminary and incomplete.

Abstract

We propose new joint tests of forecast optimality that exploit information contained in multi-horizon forecasts. In addition to implying zero bias and zero autocorrelation in forecast errors, we show that forecast optimality under squared error loss also implies testable restrictions on second moments of the data ordered across forecast horizons. In particular, the variance of the forecast error should be increasing in the horizon; the variance of the forecast itself should be decreasing in the horizon; and the variance of forecast revisions should be bounded by twice the covariance of revisions with the target variable. These bounds on second moments can be restated as inequality constraints in a regression framework and tested using the approach of Wolak (1989). Moreover, some of the proposed tests can be conducted without the need for data on the target variable, which is particularly useful in the presence of large measurement errors. We also propose a new univariate optimal revision test that constrains the coefficients in a regression of the target variable on the long-horizon forecast and the sequence of interim forecast revisions. The size and power of the new tests are compared with those of extant tests through Monte Carlo simulations. An empirical application to the Federal Reserve's Greenbook forecasts is used to illustrate the tests.

Keywords: Forecast optimality, real-time data, variance bounds, survey forecasts, forecast horizon. J.E.L. Codes: C53, C22, C52.

We thank Tim Bollerslev and Ken West as well as seminar participants at Duke, UCSD, the EC² conference on real-time econometrics (December 2009), and the 6th forecasting symposium at the ECB.

1 Introduction

Forecasts recorded at multiple horizons, for example from one to several quarters into the future, are becoming increasingly common in empirical practice. For example, the surveys conducted by the Philadelphia Federal Reserve (Survey of Professional Forecasters), Consensus Economics or Blue Chip and the forecasts produced by the IMF (World Economic Outlook), the Congressional Budget Office, the Bank of England and the Board of the Federal Reserve all cover several horizons. Similarly, econometric models are commonly used to generate multi-horizon forecasts, see, e.g., Faust and Wright (2009), Marcellino, Stock and Watson (2006), and Clements (1997). With the availability of such multi-horizon forecasts, there is a growing need for tests of optimality that exploit the information in the complete "term structure" of forecasts recorded across all horizons. By simultaneously exploiting information across several horizons, rather than focusing separately on individual horizons, multi-horizon forecast tests offer the potential of drawing more powerful conclusions about the ability of forecasters to produce optimal forecasts.

This paper derives a number of novel and simple implications of forecast optimality and compares tests based on these implications with extant tests. A well-known implication of forecast optimality is that, under squared error loss, the mean squared forecast error should be a non-decreasing function of the forecast horizon, see, e.g., Diebold (2001) and Patton and Timmermann (2007a). A similar property holds for the forecasts themselves: internal consistency of a sequence of optimal forecasts implies that the variance of the forecasts should be a non-increasing function of the forecast horizon.
Intuitively, this property holds because, just as the variance of the realized value must be (weakly) greater than the variance of its conditional expectation, the variance of the expectation conditional on a large information set (corresponding to a short horizon) must exceed that of the expectation conditional on a smaller information set (corresponding to a long horizon). It is also possible to show that optimal updating of forecasts implies that the variance of the forecast revision should be bounded by twice the covariance between the forecast revision and the actual value. It is uncommon to test such variance bounds in empirical practice, in part due to the difficulty of setting up joint tests of these bounds. We suggest and illustrate testing these monotonicity properties via tests of inequality constraints, using the methods of Gourieroux et al. (1982) and Wolak (1987, 1989).

Tests of forecast optimality have conventionally been based on comparing predicted and "realized" values of the outcome variable. This severely constrains inference in some cases since, as shown by Croushore (2006), Croushore and Stark (2001) and Corradi, Fernandez and Swanson (2009), revisions to macroeconomic variables can be very considerable. This raises questions that can be difficult to address, such as "what are the forecasters trying to predict?", i.e., first-release data or final revisions. We show that variations on both the new and extant optimality tests can be applied without the need for observations on the target variable. These tests are particularly useful in situations where the target variable is not observed (such as for certain types of volatility forecasts) or is measured with considerable noise (as in the case of output forecasts).

Conventional tests of forecast optimality regress the realized value of the predicted variable on an intercept and the forecast for a single horizon and test the joint implication that the intercept and slope coefficient are zero and one, respectively (Mincer and Zarnowitz, 1969). In the presence of forecasts covering multiple horizons, we show that a complete test that imposes internal consistency restrictions on the forecast revisions gives rise to a generalized efficiency regression. Using a single equation, this test is undertaken by regressing the realized value on an intercept, the long-horizon forecast and the sequence of intermediate forecast revisions. A set of zero-one equality restrictions on the intercept and slope coefficients is then tested. A key difference from the conventional Mincer-Zarnowitz test is that this generalized regression tests the joint consistency of all forecasts at different horizons. Analysis of forecast optimality is usually predicated on covariance stationarity assumptions.
However, we show that the conventional assumption that the target variable and forecast are (jointly) covariance stationary is not needed for some of our tests and can be relaxed, provided that forecasts for different horizons are lined up in "event time", as studied by Nordhaus (1987) and Clements (1997). In particular, we show that the second moment bounds continue to hold in the presence of structural breaks in the variance of the innovation to the predicted variable. We present a general family of data generating processes for which the variance bounds continue to hold.

To shed light on the statistical properties of the variance bound and regression-based tests of forecast optimality, we undertake a set of Monte Carlo simulations. These simulations consider various scenarios with zero, low and high measurement error in the predicted variable and deviations from forecast optimality in a variety of different directions. We find that the covariance bound and the single-equation test of joint forecast consistency have good power and size properties. Specifically, they are generally better than conventional Mincer-Zarnowitz tests conducted for individual horizons, which either tend to be conservative, if a Bonferroni bound is used to summarize the evidence across multiple horizons, or suffer from substantial size distortions, if the multi-horizon regressions are estimated as a system. Our simulations suggest that the various bounds and regression tests have complementary properties in the sense that they have power in different directions and so can identify different types of suboptimal behavior among forecasters.

An empirical application to Greenbook forecasts of GDP growth, changes to the GDP deflator and consumer price inflation confirms the findings from the simulations. In particular, we find that conventional regression tests often fail to reject the null of forecast optimality. In contrast, the new variance-bounds tests and single-equation multi-horizon tests have better power and are able to identify deviations from forecast optimality.

The outline of the paper is as follows. Section 2 presents some novel implications of optimality of forecasts across multiple horizons, and describes hypothesis tests associated with these implications. Section 3 considers regression-based tests of forecast optimality and Section 4 discusses the role of stationarity for fixed-event forecasts. Section 5 presents the results from the Monte Carlo study, while Section 6 provides an empirical application to Federal Reserve Greenbook forecasts. Section 7 concludes.

2 Variance Bounds Tests

In this section we derive variance and covariance bounds that can be used to test the optimality of a sequence of forecasts recorded at different horizons. These are presented as corollaries to the well-known theorem that the optimal forecast under quadratic loss is the conditional mean. The proofs of these corollaries are straightforward and are collected in the Appendix.

2.1 Assumptions and background

Consider a univariate time series, $\{Y_t;\ t = 1, 2, \ldots\}$, and suppose that forecasts of this variable are recorded at different points in time, $t = 1, \ldots, T$, and at different horizons, $h = h_1, \ldots, h_H$. The forecast of $Y_t$ made $h$ periods previously is denoted $\hat Y_{t|t-h}$ and is thus conditioned on the information set available at time $t-h$, $\mathcal F_{t-h}$, which is taken to be the $\sigma$-field generated by $\{\tilde Z_{t-h-k};\ k \ge 0\}$, where $\tilde Z_{t-h}$ is a vector of predictor variables capturing the elements in the forecaster's information set at time $t-h$. Note that the target variable, $Y_t$, may or may not be an element of $\tilde Z_t$, depending on whether this variable is observable to the forecaster. Forecast errors are given by $e_{t|t-h} = Y_t - \hat Y_{t|t-h}$. We consider an $(H \times 1)$ vector of multi-horizon forecasts for horizons $h_1 < h_2 < \cdots < h_H$, with generic long and short horizons denoted by $h_L$ and $h_S$ ($h_L > h_S$). Note that the forecast horizons $h_i$ can be positive, zero or negative, corresponding to forecasting, nowcasting or backcasting, and further note that we do not require the forecast horizons to be equally spaced.

We consider tests of forecast optimality under the assumption that the forecaster has squared error loss, and under that assumption we have the following well-known theorem; see Granger (1969) for example.

Theorem 1 (Optimal forecast under MSE) Assume that the forecaster's loss function is $L(y, \hat y) = (y - \hat y)^2$ and that the conditional mean of the target variable, $E[Y_t \mid \mathcal F_{t-h}]$, is a.s. finite for all $t$. Then
$$\hat Y_{t|t-h} \equiv \arg\min_{\hat y \in \mathcal Y} E\big[(Y_t - \hat y)^2 \mid \mathcal F_{t-h}\big] = E[Y_t \mid \mathcal F_{t-h}], \tag{1}$$
where $\mathcal Y \subseteq \mathbb R$ is the set of possible values for the forecast.

From this result it is simple to show that the associated forecast errors, $e_{t|t-h} = Y_t - \hat Y_{t|t-h}$, are mean-zero and uncorrelated with any $Z_{t-h} \in \mathcal F_{t-h}$. We next describe a variety of forecast optimality tests based on corollaries to this theorem.

Our analysis does not restrict the predicted "event" to be a single-period outcome such as GDP growth in 2011Q4. Instead, the predicted outcome could be cumulated GDP growth over some period, say 2011Q1 through 2011Q4. Only the interpretation of the forecast horizon changes in the latter situation, e.g. if the point of the forecast is 2011Q3, in which case part of the predicted variable may already be observed.

The tests proposed and studied in this paper take the forecasts as primitive; if the forecasts are generated by particular econometric models, rather than by a combination of modeling and judgemental information, the estimation error embedded in those models is ignored. In the presence of estimation error the results established here need not hold. In practice forecasters face parameter uncertainty, model uncertainty and model instability and, as shown by West and McCracken (1998), parameter estimation error can lead to substantial skews in unadjusted t-statistics. While some of these effects can be addressed when comparing the relative precision of two forecasting models evaluated at the pseudo-true probability limit of the model estimates (West (1996)) or when comparing forecasting methods conditionally (Giacomini and White (2006)), it is not in general possible to establish results for the absolute forecasting performance of a forecasting model. For example, under recursive parameter estimation, forecast errors will generally be serially correlated (Timmermann (1993)) and the mean squared forecast error need not be increasing in the forecast horizon (Schmidt (1974), Clements and Hendry (1998)). Existing analytical results are very limited, however, as they assume a particular model (e.g., an AR(1) specification), whereas in practice forecasts from surveys and forecasts reported by central banks reflect considerable judgmental information. We leave the important extension to incorporate estimation error to future research.

Some of the results derived below make use of a standard covariance stationarity assumption:

Assumption S1: The target variable, $Y_t$, is generated by a covariance stationary process.

2.2 Monotonicity of mean squared errors

From forecast optimality under squared-error loss, (1), it follows that for any $\tilde Y_{t|t-h} \in \mathcal F_{t-h}$,
$$E_{t-h}\Big[\big(Y_t - \hat Y_{t|t-h}\big)^2\Big] \le E_{t-h}\Big[\big(Y_t - \tilde Y_{t|t-h}\big)^2\Big].$$
In particular, the optimal forecast made at time $t - h_S$ must be at least as good as the forecast associated with a longer horizon, $h_L$:
$$E_{t-h_S}\Big[\big(Y_t - \hat Y_{t|t-h_S}\big)^2\Big] \le E_{t-h_S}\Big[\big(Y_t - \hat Y_{t|t-h_L}\big)^2\Big].$$
This leads us to the first corollary to Theorem 1 (all proofs are contained in the Appendix):

Corollary 1 Under the assumptions of Theorem 1 and S1, it follows that for any $h_S < h_L$,
$$MSE(h_S) \equiv E\Big[\big(Y_t - \hat Y_{t|t-h_S}\big)^2\Big] \le E\Big[\big(Y_t - \hat Y_{t|t-h_L}\big)^2\Big] \equiv MSE(h_L).$$

Given a set of forecasts available at horizons $h_1 \le h_2 \le \cdots \le h_H$, it follows that the mean squared error (MSE) associated with an optimal forecast error, $e_{t|t-h} = Y_t - \hat Y_{t|t-h}$, is a non-decreasing function of the forecast horizon:
$$E\big[e^2_{t|t-h_1}\big] \le E\big[e^2_{t|t-h_2}\big] \le \cdots \le E\big[e^2_{t|t-h_H}\big] \to V[Y_t] \text{ as } h_H \to \infty. \tag{2}$$
The inequalities are strict if more forecast-relevant information becomes available as the forecast horizon shrinks to zero.¹

¹ For example, for a non-degenerate AR(1) process the MSEs will strictly increase with the forecast horizon, while for an MA(1) process the inequality will be strict only for $h = 1$ vs. $h = 2$; for longer horizons the MSEs will be equal.

This property is well-known and is discussed by, e.g., Diebold (2001) and

Patton and Timmermann (2007a).

Example 1: To illustrate a violation of this property, consider the case of a "lazy" forecaster who, in constructing a short-horizon forecast, $\tilde Y_{t|t-h_S}$, does not update his long-horizon forecast, $\tilde Y_{t|t-h_L}$, with relevant information, and hides this lack of updating by adding a small amount of zero-mean, independent noise to the long-horizon forecast. In that case:
$$\tilde Y_{t|t-h_S} = \tilde Y_{t|t-h_L} + u_{t-h_S}, \qquad u_{t-h_S} \perp \tilde Y_{t|t-h_L}, \quad u_{t-h_S} \perp Y_t. \tag{3}$$
We then have
$$V\big[\tilde e_{t|t-h_S}\big] = V\big[Y_t - \tilde Y_{t|t-h_L} - u_{t-h_S}\big] = V\big[\tilde e_{t|t-h_L} - u_{t-h_S}\big] = V\big[\tilde e_{t|t-h_L}\big] + V[u_{t-h_S}] > V\big[\tilde e_{t|t-h_L}\big].$$
Hence the short-horizon forecast generates a larger MSE than the long-horizon forecast, revealing the sub-optimality of the short-horizon forecast.
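The MSE ordering in (2) is easy to illustrate numerically. The sketch below (plain Python; the AR(1) parameter values and sample size are illustrative choices, not taken from the paper) simulates $Y_t = \phi Y_{t-1} + \varepsilon_t$, forms the optimal forecasts $\hat Y_{t|t-h} = \phi^h Y_{t-h}$, and checks that the sample MSEs rise with the horizon:

```python
import random

def simulate_ar1(T, phi, sigma, seed=0):
    """Simulate Y_t = phi * Y_{t-1} + eps_t with Gaussian innovations."""
    rng = random.Random(seed)
    y = [0.0]
    for _ in range(T - 1):
        y.append(phi * y[-1] + rng.gauss(0.0, sigma))
    return y

def mse_by_horizon(y, phi, horizons):
    """Sample MSE of the optimal forecast phi^h * Y_{t-h}, per horizon."""
    start = max(horizons)
    mses = []
    for h in horizons:
        errs = [y[t] - phi**h * y[t - h] for t in range(start, len(y))]
        mses.append(sum(e * e for e in errs) / len(errs))
    return mses

y = simulate_ar1(T=50_000, phi=0.8, sigma=1.0)
mses = mse_by_horizon(y, phi=0.8, horizons=[1, 2, 3, 4])
print(mses)  # sample MSEs should rise with the horizon toward V[Y] = 1/(1 - 0.8**2)
```

With $\phi = 0.8$ the population MSEs at horizons 1 through 4 are roughly 1.00, 1.64, 2.05 and 2.31, so the monotone pattern is visible even in moderate samples.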

2.3 Testing monotonicity in squared forecast errors

The results derived so far suggest testing forecast optimality via a test of weak monotonicity in the "term structure" of mean squared errors, (2), to use the terminology of Patton and Timmermann (2008). This feature of rational forecasts is relatively widely known, but has generally not been used to test forecast optimality. Capistrán (2007) is the only paper we are aware of that exploits this property to develop a test. His test is based on Bonferroni bounds, which are quite conservative in this application. Here we advocate an alternative procedure for testing non-decreasing MSEs at longer forecast horizons that is based on the inequalities in (2).

We consider ranking the MSE-values for a set of forecast horizons $h = h_1, h_2, \ldots, h_H$. Denoting the expected (population) values of the MSEs by $\mu^e = [\mu^e_1, \ldots, \mu^e_H]'$, with $\mu^e_j \equiv E[e^2_{t|t-h_j}]$, and defining the associated MSE differentials as
$$\Delta^e_j \equiv E\big[e^2_{t|t-h_j}\big] - E\big[e^2_{t|t-h_{j-1}}\big],$$
we can rewrite the inequalities in (2) as
$$\Delta^e_j \ge 0, \quad \text{for } j = 2, \ldots, H. \tag{4}$$
Following earlier work on multivariate inequality tests in regression models by Gourieroux et al. (1982), Wolak (1987, 1989) proposed testing (weak) monotonicity through the null hypothesis:
$$H_0: \Delta^e \ge 0 \quad \text{vs.} \quad H_1: \Delta^e \in \mathbb R^{H-1}, \tag{5}$$
where the $(H-1) \times 1$ vector of MSE-differentials is given by $\Delta^e \equiv [\Delta^e_2, \ldots, \Delta^e_H]'$; in contrast, $\Delta^e$ is unconstrained under the alternative. Tests can be based on the sample analogs $\hat\Delta^e_j = \hat\mu_j - \hat\mu_{j-1}$ for $\hat\mu_j \equiv T^{-1}\sum_{t=1}^T e^2_{t|t-h_j}$. Wolak (1987, 1989) derives a test statistic whose distribution under the null is a weighted sum of chi-squared variables, $\sum_{i=1}^{H-1} \omega(H-1, i)\,\chi^2(i)$, where the $\omega(H-1, i)$ are weights and $\chi^2(i)$ is a chi-squared variable with $i$ degrees of freedom. Approximate critical values for this test can be calculated through Monte Carlo simulation. For further description of this test and other approaches to testing multivariate inequalities, see Patton and Timmermann (2009).
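A simplified version of this distance-type statistic is easy to sketch in code. The block below assumes a diagonal covariance matrix for the estimated differentials, purely to keep the illustration short; under that assumption the projection of $\hat\Delta^e$ onto $\{\delta \ge 0\}$ is its elementwise positive part, so the statistic reduces to a sum of squared standardized negative parts. The full Wolak test instead uses the complete covariance matrix, solves a quadratic program, and simulates the weights $\omega(H-1, i)$.

```python
def wolak_statistic_diagonal(delta_hat, variances):
    """Distance-type statistic for H0: delta >= 0 (elementwise).

    Simplifying assumption: the covariance matrix of delta_hat is diagonal
    with entries `variances`. Then the closest point to delta_hat in the
    positive orthant is max(delta_hat, 0) elementwise, so the weighted
    distance is the sum of squared standardized *negative* parts.
    """
    stat = 0.0
    for d, v in zip(delta_hat, variances):
        if d < 0:
            stat += d * d / v
    return stat

# All differentials non-negative: H0 fits perfectly, statistic is zero.
print(wolak_statistic_diagonal([0.2, 0.05, 0.1], [0.01, 0.01, 0.01]))  # 0.0
# One violated inequality: only the negative entry contributes.
print(wolak_statistic_diagonal([0.2, -0.1, 0.1], [0.01, 0.04, 0.01]))
```

Large values of the statistic are evidence against weak monotonicity; in the full test the p-value comes from the simulated chi-squared mixture above.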

2.4 Monotonicity of mean squared forecasts

We now present a novel implication of forecast optimality that can be tested when data on the target variable are not available or not reliable. Recall that, under optimality, $E_{t-h}[e_{t|t-h}] = 0$, which implies that $Cov[\hat Y_{t|t-h}, e_{t|t-h}] = 0$. Thus we obtain the following corollary:

Corollary 2 Under the assumptions of Theorem 1 and S1, we have $V[Y_t] = V[\hat Y_{t|t-h}] + E[e^2_{t|t-h}]$. From Corollary 1 we have $E[e^2_{t|t-h_S}] \le E[e^2_{t|t-h_L}]$ for any $h_S < h_L$, which then yields
$$V\big[\hat Y_{t|t-h_S}\big] \ge V\big[\hat Y_{t|t-h_L}\big] \quad \text{for any } h_S < h_L.$$
Further, since $E[\hat Y_{t|t-h}] = E[Y_t]$, we also obtain an inequality on the mean-squared forecasts:
$$E\big[\hat Y^2_{t|t-h_S}\big] \ge E\big[\hat Y^2_{t|t-h_L}\big] \quad \text{for any } h_S < h_L. \tag{6}$$

This reveals that a weakly increasing pattern in MSE-values as the forecast horizon increases implies a weakly decreasing pattern in the variance of the forecasts themselves. This simple result provides the surprising implication that (one aspect of) forecast optimality may be tested without the need for a measure of the target variable. A test of this implication can again be based on Wolak's (1989) approach by defining the vector $\Delta^f \equiv [\Delta^f_2, \ldots, \Delta^f_H]'$, where $\Delta^f_j \equiv E[\hat Y^2_{t|t-h_j}] - E[\hat Y^2_{t|t-h_{j-1}}]$, and testing the null hypothesis that the differences in mean squared forecasts are weakly negative for all forecast horizons:
$$H_0: \Delta^f \le 0 \quad \text{vs.} \quad H_1: \Delta^f \in \mathbb R^{H-1}. \tag{7}$$

It is worth pointing out some limitations of this type of test. Tests that do not rely on observing the realized values of the target variable are tests of the internal consistency of the forecasts across two or more horizons. For example, forecasts of an artificially-generated AR(p) process, independent of the actual series but constructed in a theoretically optimal fashion, would not be identified as suboptimal by this test.²

² For tests of internal consistency across point forecasts and density forecasts, see Clements (2009).

Example 2: Consider a scenario where all forecasts are contaminated with noise (due, e.g., to estimation error) whose variance is increasing in the forecast horizon:
$$\tilde Y_{t|t-h_L} = \hat Y_{t|t-h_L} + u_{t-h_L}, \qquad \tilde Y_{t|t-h_S} = \hat Y_{t|t-h_S} + u_{t-h_S},$$
where $u_{t-h_L}$ and $u_{t-h_S}$ are independent of $\hat Y_{t|t-h_L}$ and $\hat Y_{t|t-h_S}$, and $V[u_{t-h_L}] > V[u_{t-h_S}]$. Define the forecast revision from time $t-h_L$ to $t-h_S$ as $\eta_{t|h_S,h_L} \equiv \hat Y_{t|t-h_S} - \hat Y_{t|t-h_L}$. By forecast optimality we have $Cov[\hat Y_{t|t-h_L}, \eta_{t|h_S,h_L}] = 0$, and so:
$$V\big[\tilde Y_{t|t-h_S}\big] - V\big[\tilde Y_{t|t-h_L}\big] = V\big[\hat Y_{t|t-h_S}\big] + V[u_{t-h_S}] - V\big[\hat Y_{t|t-h_L}\big] - V[u_{t-h_L}]$$
$$= V\big[\hat Y_{t|t-h_L} + \eta_{t|h_S,h_L}\big] + V[u_{t-h_S}] - V\big[\hat Y_{t|t-h_L}\big] - V[u_{t-h_L}]$$
$$= V\big[\eta_{t|h_S,h_L}\big] + V[u_{t-h_S}] - V[u_{t-h_L}] < 0 \quad \text{if } V[u_{t-h_L}] > V\big[\eta_{t|h_S,h_L}\big] + V[u_{t-h_S}].$$
Hence, if the contaminating noise in the long-horizon forecast is greater than the sum of the variance of the optimal forecast revision and the variance of the short-horizon noise, the long-horizon forecast will have greater variance than the short-horizon forecast, and a test based on (6) should detect this.

Note that the violation of forecast optimality discussed in Example 1, with the short-horizon forecast generated as the long-horizon forecast plus some independent noise, would not be detected as sub-optimal by a test of the monotonicity of the mean-squared forecast. In this case the short-horizon forecast would indeed be more volatile than the long-horizon forecast, consistent with optimality, and this test would not be able to detect that the source of this increased variation was simply uninformative noise.
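Since the test in (7) uses forecasts only, its ingredients can be computed from a table of forecasts alone. A minimal sketch follows; the dict-of-horizons data layout is a hypothetical choice made for this illustration, not a format prescribed by the paper.

```python
def msf_differentials(forecasts):
    """Sample mean-squared-forecast differentials, in the spirit of eq. (7).

    `forecasts` maps horizon -> list of forecasts aligned by target date
    (a hypothetical layout chosen for this sketch). Under optimality the
    differentials should all be weakly negative as the horizon grows.
    """
    horizons = sorted(forecasts)
    msf = {h: sum(f * f for f in forecasts[h]) / len(forecasts[h])
           for h in horizons}
    return [msf[hj] - msf[hi] for hi, hj in zip(horizons, horizons[1:])]

# Forecast variance shrinking with the horizon: consistent with optimality.
good = {1: [1.0, -1.2, 0.8], 2: [0.5, -0.6, 0.4], 3: [0.2, -0.3, 0.1]}
print(all(d <= 0 for d in msf_differentials(good)))  # True
```

A Wolak-type inequality test would then be applied to these differentials with the sign convention of (7), i.e., evidence against optimality is a significantly positive entry.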

2.5 Monotonicity of covariance between the forecast and target variable

An implication of the weakly decreasing forecast variance property established in Corollary 2 is that the covariance of the forecast with the target variable should be decreasing in the forecast horizon. To see this, note that
$$Cov\big[\hat Y_{t|t-h}, Y_t\big] = Cov\big[\hat Y_{t|t-h},\ \hat Y_{t|t-h} + e_{t|t-h}\big] = V\big[\hat Y_{t|t-h}\big].$$
Thus we obtain the following:

Corollary 3 Under the assumptions of Theorem 1 and S1, we obtain
$$Cov\big[\hat Y_{t|t-h_S}, Y_t\big] \ge Cov\big[\hat Y_{t|t-h_L}, Y_t\big] \quad \text{for any } h_S < h_L.$$
Further, since $E[\hat Y_{t|t-h}] = E[Y_t]$, we also obtain:
$$E\big[\hat Y_{t|t-h_S} Y_t\big] \ge E\big[\hat Y_{t|t-h_L} Y_t\big] \quad \text{for any } h_S < h_L.$$

As in the above cases, this implication can again be tested using Wolak's (1989) approach by defining the vector $\Delta^c \equiv [\Delta^c_2, \ldots, \Delta^c_H]'$, where $\Delta^c_j \equiv E[\hat Y_{t|t-h_j} Y_t] - E[\hat Y_{t|t-h_{j-1}} Y_t]$, and testing the null hypothesis:
$$H_0: \Delta^c \le 0 \quad \text{vs.} \quad H_1: \Delta^c \in \mathbb R^{H-1}. \tag{8}$$

2.6 Monotonicity of mean squared forecast revisions

Monotonicity of mean squared forecasts also implies a monotonicity result for mean squared forecast revisions. Consider the following decomposition of the short-horizon forecast into the long-horizon forecast plus the sum of intermediate forecast revisions:
$$\hat Y_{t|t-h_1} = \hat Y_{t|t-h_H} + \big(\hat Y_{t|t-h_{H-1}} - \hat Y_{t|t-h_H}\big) + \cdots + \big(\hat Y_{t|t-h_1} - \hat Y_{t|t-h_2}\big) = \hat Y_{t|t-h_H} + \sum_{j=1}^{H-1} \eta_{t|h_j,h_{j+1}}, \tag{9}$$
where $\eta_{t|h_S,h_L} \equiv \hat Y_{t|t-h_S} - \hat Y_{t|t-h_L}$ for $h_S < h_L$. Under optimality, $E_{t-h_L}\big[\eta_{t|h_S,h_L}\big] = 0$, so $Cov\big[\hat Y_{t|t-h_L}, \eta_{t|h_S,h_L}\big] = 0$ for all $h_S < h_L$ and the revisions are mutually uncorrelated. Hence
$$V\big[\hat Y_{t|t-h_1} - \hat Y_{t|t-h_H}\big] = V\Big[\sum_{j=1}^{H-1} \eta_{t|h_j,h_{j+1}}\Big] = \sum_{j=1}^{H-1} V\big[\eta_{t|h_j,h_{j+1}}\big] \ge \sum_{j=1}^{H-2} V\big[\eta_{t|h_j,h_{j+1}}\big] = V\big[\hat Y_{t|t-h_1} - \hat Y_{t|t-h_{H-1}}\big].$$
More generally, the following corollary to Theorem 1 holds:

Corollary 4 Denote the forecast revision between two dates as $\eta_{t|h_S,h_L} \equiv \hat Y_{t|t-h_S} - \hat Y_{t|t-h_L}$ for any $h_S < h_L$. Under the assumptions of Theorem 1 and S1, we have
$$V\big[\eta_{t|h_S,h_L}\big] \ge V\big[\eta_{t|h_S,h_M}\big] \quad \text{for any } h_S < h_M < h_L.$$
Further, since $E[\eta_{t|h_S,h_L}] = 0$, we also obtain:
$$E\big[\eta^2_{t|h_S,h_L}\big] \ge E\big[\eta^2_{t|h_S,h_M}\big] \quad \text{for any } h_S < h_M < h_L. \tag{10}$$

Considering the forecast revisions between each horizon and the shortest horizon, this implies that
$$V\big[\eta_{t|h_1,h_2}\big] \le V\big[\eta_{t|h_1,h_3}\big] \le \cdots \le V\big[\eta_{t|h_1,h_H}\big]. \tag{11}$$
Again Wolak's (1987, 1989) testing framework can be applied here: define the vector of mean squared forecast revision differentials $\Delta^r \equiv [\Delta^r_3, \ldots, \Delta^r_H]'$, where $\Delta^r_j \equiv E\big[\big(\hat Y_{t|t-h_1} - \hat Y_{t|t-h_j}\big)^2\big] - E\big[\big(\hat Y_{t|t-h_1} - \hat Y_{t|t-h_{j-1}}\big)^2\big]$. Then we can test the null hypothesis that the differences in mean squared forecast revisions are weakly positive for all forecast horizons:
$$H_0: \Delta^r \ge 0 \quad \text{vs.} \quad H_1: \Delta^r \in \mathbb R^{H-2}. \tag{12}$$

Example 3: Consider forecasts that exhibit either "sticky" updating or, conversely, "overshooting":
$$\tilde Y_{t|t-h} = \lambda \hat Y_{t|t-h} + (1-\lambda)\, \hat Y_{t|t-h-1}, \quad \text{for } h = 1, 2, \ldots, H.$$
"Sticky" forecasts correspond to $\lambda \in [0, 1)$, while "overshooting" occurs when $\lambda > 1$. Moreover, suppose the underlying data generating process is an AR(1), $Y_t = \phi Y_{t-1} + \varepsilon_t$, $|\phi| < 1$, so $\hat Y_{t|t-h} = \phi^h Y_{t-h}$. Then the one-period revision is
$$\tilde\eta_{t|h,h+1} = \tilde Y_{t|t-h} - \tilde Y_{t|t-h-1} = \lambda\big(\hat Y_{t|t-h} - \hat Y_{t|t-h-1}\big) + (1-\lambda)\big(\hat Y_{t|t-h-1} - \hat Y_{t|t-h-2}\big) = \lambda \phi^h \varepsilon_{t-h} + (1-\lambda)\, \phi^{h+1} \varepsilon_{t-h-1}.$$
It follows that the variances of the one- and two-period forecast revisions are
$$V\big(\tilde\eta_{t|1,2}\big) = \big[\lambda^2 \phi^2 + (1-\lambda)^2 \phi^4\big]\, \sigma^2_\varepsilon, \qquad V\big(\tilde\eta_{t|1,3}\big) = \big[\lambda^2 \phi^2 + \phi^4 + (1-\lambda)^2 \phi^6\big]\, \sigma^2_\varepsilon.$$
We then have a violation of the inequality in (11) if
$$(1-\lambda)^2 > 1 + (1-\lambda)^2 \phi^2.$$
There is clearly no violation if $\lambda = 1$ (full optimality) or if $\lambda$ is close to one. A violation requires $(1-\lambda)^2 (1-\phi^2) > 1$, i.e., $\lambda$ sufficiently far from one, representing either very sticky forecasts or pronounced overshooting.
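The two revision-variance formulas in this example are straightforward to evaluate numerically. The sketch below codes them directly and compares the no-violation case $\lambda = 1$ with a case where $\lambda$ is far from one (all parameter values are illustrative choices):

```python
def revision_variances(lam, phi, sigma2_eps=1.0):
    """Population revision variances from Example 3:

    V(eta_{1,2}) = [lam^2 phi^2 + (1-lam)^2 phi^4] sigma_eps^2
    V(eta_{1,3}) = [lam^2 phi^2 + phi^4 + (1-lam)^2 phi^6] sigma_eps^2
    """
    v12 = (lam**2 * phi**2 + (1 - lam)**2 * phi**4) * sigma2_eps
    v13 = (lam**2 * phi**2 + phi**4 + (1 - lam)**2 * phi**6) * sigma2_eps
    return v12, v13

# Optimal updating (lam = 1): the ordering in (11) holds.
v12, v13 = revision_variances(lam=1.0, phi=0.5)
print(v12 <= v13)  # True

# lam far from one with small phi: (1-lam)^2 = 9 > 1 + 9*phi^2 = 1.81,
# so the ordering in (11) is violated.
v12, v13 = revision_variances(lam=-2.0, phi=0.3)
print(v12 > v13)  # True
```

The second case makes the violation condition $(1-\lambda)^2(1-\phi^2) > 1$ concrete: $9 \times 0.91 = 8.19 > 1$.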

2.7 Bounds on covariances of forecast revisions

Combining the inequalities contained in the above corollaries, it turns out that we can place an upper bound on the variance of the forecast revision as a function of the covariance of the revision with the target variable. The intuition behind this bound is simple: if little relevant information arrives between the updating points, then the variance of the forecast revisions must be low.

Corollary 5 Denote the forecast revision between two dates as $\eta_{t|h_S,h_L} \equiv \hat Y_{t|t-h_S} - \hat Y_{t|t-h_L}$ for any $h_S < h_L$. Under the assumptions of Theorem 1 and S1, we have
$$V\big[\eta_{t|h_S,h_L}\big] \le 2\,Cov\big[Y_t,\ \eta_{t|h_S,h_L}\big] \quad \text{for any } h_S < h_L.$$
Further, since $E[\eta_{t|h_S,h_L}] = 0$, we also obtain:
$$E\big[\eta^2_{t|h_S,h_L}\big] \le 2\,E\big[Y_t\, \eta_{t|h_S,h_L}\big] \quad \text{for any } h_S < h_L. \tag{13}$$

Note also that this result implies (as one would expect) that the covariance between the target variable and the forecast revision must be positive; when forecasts are updated to reflect new information, the change in the forecast should be positively correlated with the target variable.

The above bound can be tested by rewriting it as
$$E\big[2 Y_t\, \eta_{t|h_S,h_L} - \eta^2_{t|h_S,h_L}\big] \ge 0, \tag{14}$$
forming the vector $\Delta^b \equiv [\Delta^b_2, \ldots, \Delta^b_H]'$, where $\Delta^b_j \equiv E\big[2 Y_t\, \eta_{t|h_{j-1},h_j} - \eta^2_{t|h_{j-1},h_j}\big]$ for $j = 2, \ldots, H$, and then testing the null hypothesis that these terms are weakly positive for all forecast horizons:
$$H_0: \Delta^b \ge 0 \quad \text{vs.} \quad H_1: \Delta^b \in \mathbb R^{H-1}.$$
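The sample analog of (14) is a simple average. A minimal sketch follows; the input layout (forecasts keyed by horizon and aligned with the realizations) is an assumption made for this illustration.

```python
def bound_differentials(y, forecasts_by_horizon):
    """Sample analogs of Delta^b_j = E[2*Y*eta - eta^2] for adjacent horizons.

    `y` is the realized series; `forecasts_by_horizon` maps horizon -> list
    of forecasts aligned with `y` (a hypothetical layout for this sketch).
    Under forecast optimality each entry should be weakly positive.
    """
    hs = sorted(forecasts_by_horizon)
    out = []
    for h_short, h_long in zip(hs, hs[1:]):
        eta = [s - l for s, l in zip(forecasts_by_horizon[h_short],
                                     forecasts_by_horizon[h_long])]
        terms = [2 * yt * e - e * e for yt, e in zip(y, eta)]
        out.append(sum(terms) / len(terms))
    return out

y = [1.0, -0.5, 0.8, -1.1]
fcs = {1: [0.9, -0.4, 0.7, -1.0], 2: [0.5, -0.2, 0.3, -0.6]}
print(bound_differentials(y, fcs))  # a single weakly positive differential here
```

A significantly negative entry, as in Example 1 below, is evidence against forecast optimality.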

Example 1, continued: Consider once again the case where the short-horizon forecast equals the long-horizon forecast plus noise:
$$\tilde Y_{t|t-h_S} = \tilde Y_{t|t-h_L} + u_{t-h_S}, \qquad u_{t-h_S} \perp \tilde Y_{t|t-h_L}, \quad u_{t-h_S} \perp Y_t.$$
The difference between the variance of the forecast revision and twice the covariance of the revision with the target variable (which is negative under forecast optimality) now equals the variance of the noise:
$$V\big[\tilde\eta_{t|h_S,h_L}\big] - 2\,Cov\big[Y_t, \tilde\eta_{t|h_S,h_L}\big] = V\big[\tilde Y_{t|t-h_L} + u_{t-h_S} - \tilde Y_{t|t-h_L}\big] - 2\,Cov\big[Y_t,\ \tilde Y_{t|t-h_L} + u_{t-h_S} - \tilde Y_{t|t-h_L}\big] = V[u_{t-h_S}] > 0.$$
Here the bound is violated: the extra noise in the short-horizon forecast contributes to the variance of the forecast revision without increasing the covariance of the revision with the target variable.

Here the bound is violated: The extra noise in the short-horizon forecast contributes to the variance of the forecast revision without increasing the covariance of the revision with the target variable. Example 2, continued: Consider again the case where all forecasts are contaminated with ~ noise, Ytjt

h

^ = Ytjt

hL

+ utjh ; whose variance is increasing in the forecast horizon, V utjhL >

V utjhS . In this case we …nd: h i V ~tjhS ;hL h i h i h i 2Cov Yt ; ~tjhS ;hL = V tjhS ;hL + utjhS utjhL 2Cov Yt ; tjhS ;hL + utjhS utjhL n h i h io = V tjhS ;hL 2Cov Yt ; tjhS ;hL + V utjhS + V utjhL :

Under forecast optimality we know that the term in braces is negative, but if the sum of the shorthorizon and long-horizon noise is greater in absolute value than the term in braces, we will observe a violation of the bound, and a test of this bound can be used to reject forecast optimality. 12

2.8 Variance bounds tests without data on the target variable

The "real time" macroeconomics literature has demonstrated the presence of large and prevalent measurement errors affecting a variety of macroeconomic variables; see Croushore (2006), Croushore and Stark (2001), Faust, Rogers and Wright (2005), and Corradi, Fernandez and Swanson (2009). In such situations it is useful to have tests that do not require data on the target variable. Corollaries 2 and 4 presented two testable implications of forecast optimality that do not require data on the target variable, and in this section we present further tests of multi-horizon forecast optimality that can be employed when data on the target variable are not available or not reliable.

The tests in this section exploit the fact that, under the null of forecast optimality, the short-horizon forecast can be taken as a proxy for the target variable from the standpoint of longer-horizon forecasts, in the sense that the inequality results presented above all hold when the short-horizon forecast is used in place of the target variable. Importantly, unlike standard cases, the proxy here is smoother, rather than noisier, than the actual variable. This turns out to have beneficial implications for the finite-sample performance of these tests when the measurement error is sizeable or the predictive R² of the forecasting model is low. The result that corresponds to Corollary 1 is presented in Corollary 4. The corresponding results for Corollaries 3 and 5 are presented below:

Corollary 6 Under the assumptions of Theorem 1 and S1 we obtain:
(a) $Cov\big[\hat Y_{t|t-h_M}, \hat Y_{t|t-h_S}\big] \ge Cov\big[\hat Y_{t|t-h_L}, \hat Y_{t|t-h_S}\big]$, and
$$E\big[\hat Y_{t|t-h_M}\, \hat Y_{t|t-h_S}\big] \ge E\big[\hat Y_{t|t-h_L}\, \hat Y_{t|t-h_S}\big] \quad \text{for any } h_S < h_M < h_L. \tag{15}$$
(b) Denote the forecast revision between two dates as $\eta_{t|h,k} \equiv \hat Y_{t|t-h} - \hat Y_{t|t-k}$ for any $h < k$. Then $V\big[\eta_{t|h_M,h_L}\big] \le 2\,Cov\big[\hat Y_{t|t-h_S},\ \eta_{t|h_M,h_L}\big]$, and
$$E\big[\eta^2_{t|h_M,h_L}\big] \le 2\,E\big[\hat Y_{t|t-h_S}\, \eta_{t|h_M,h_L}\big] \quad \text{for any } h_S < h_M < h_L. \tag{16}$$

As for Corollaries 3 and 5, an inevitable side-effect of testing forecast optimality without using data on the target variable is that such a test only examines the internal consistency of the forecasts across the different horizons; an internally consistent set of forecasts that are not optimal for a given target variable will not be detected using such tests.
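The proxy-based bound in (16) replaces the target variable with the short-horizon forecast. A minimal sketch of its sample analog follows; forecast lists aligned by target date are an assumed input layout for this illustration.

```python
def proxy_bound_differential(f_short, f_mid, f_long):
    """Sample analog of 2*E[F_S * eta_{M,L}] - E[eta_{M,L}^2], as in eq. (16).

    f_short, f_mid, f_long are forecast lists for horizons h_S < h_M < h_L,
    aligned by target date (hypothetical layout). No realizations of the
    target variable are needed; the value should be weakly positive under
    forecast optimality.
    """
    eta = [m - l for m, l in zip(f_mid, f_long)]
    terms = [2 * s * e - e * e for s, e in zip(f_short, eta)]
    return sum(terms) / len(terms)

f1 = [1.0, -0.8, 0.6]   # short horizon
f2 = [0.8, -0.6, 0.5]   # medium horizon
f3 = [0.4, -0.3, 0.2]   # long horizon
print(proxy_bound_differential(f1, f2, f3))
```

In applications this statistic would be computed for each adjacent horizon pair and fed into the same Wolak-type inequality test as before.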

2.9 Illustration for an AR(1) process

This section illustrates the above results for the special case of an AR(1) process. Let: Yt = Yt where "t W N (0;

2 ), " 1

+ "t ;

j j < 1;

(17)

so

2 y

=

2 =(1 "

2

). Rewriting this as

h

Yt = ^ we have Ytjt Corollary 1, =

h

Yt

h

+

i

h

Yt

h,

and so etjt i

2 "

h

=

2h 2

V etjt

Moreover, consistent with Corollary 2 the variance of the forecast is increasing in h : h i h i 2(h+1) 2 ^ ^ = V Ytjt h 1 : V Ytjt h = 2h 2 y y The covariance between the outcome and the h period forecast is " h 1 h i X i ^ Cov Yt ; Ytjt h = Cov h Yt h + "t i ; h Yt

i=0

h

h

=

1 1

!

Ph

h 1 X i=0

i

"t i ;

1 i=0

"t i . From this it follows that, consistent with 1 1

2(h+1) 2

2 "

!

h = V etjt

h 1

i

:

h

#

=

2h 2 y;

^ ^ which is decreasing in h, consistent with Corollary 3. Also, noting that Ytjt hS = Ytjt PhL 1 i PhL 1 i "t i , the forecast revision can be written as tjhS ;hL = i=hS "t i , and so i=hS ! i h 2(hL hS ) 1 2 2hS V tjhS ;hL = " ; 2 1 of the revision is bounded by twice the covariance of the actual value and the revision: 2 3 2 3 hL 1 hL 1 h i X X i i 2Cov Yt ; tjhS ;hL = 2V 4 "t i 5 > V 4 "t i 5 = tjhS ;hL :

i=hS i=hS

hL

+

which is increasing in hL hS , consistent with Corollary 4. Consistent with Corollary 5, the variance

The implications of forecast rationality presented in Corollary 6 for this AR(1) example are:

$$\mathrm{Cov}\big[\hat{Y}_{t|t-h_M}, \hat{Y}_{t|t-h_S}\big] = \mathrm{Cov}\big[\hat{Y}_{t|t-h_M},\; \hat{Y}_{t|t-h_M} + \eta_{t|h_S,h_M}\big] = \mathrm{Cov}\Big[\phi^{h_M} Y_{t-h_M},\; \phi^{h_M} Y_{t-h_M} + \sum_{i=h_S}^{h_M-1}\phi^i\,\varepsilon_{t-i}\Big]$$
$$= \phi^{2h_M}\, V[Y_{t-h_M}] = \phi^{2h_M}\,\frac{\sigma_\varepsilon^2}{1-\phi^2} \;\ge\; \phi^{2h_L}\,\frac{\sigma_\varepsilon^2}{1-\phi^2} = \mathrm{Cov}\big[\hat{Y}_{t|t-h_L}, \hat{Y}_{t|t-h_S}\big],$$

and

$$\mathrm{Cov}\big[\hat{Y}_{t|t-h_S}, \eta_{t|h_M,h_L}\big] = \mathrm{Cov}\Big[\phi^{h_S} Y_{t-h_S},\; \sum_{i=h_M}^{h_L-1}\phi^i\,\varepsilon_{t-i}\Big] = \mathrm{Cov}\Big[\hat{Y}_{t|t-h_L} + \sum_{i=h_S}^{h_L-1}\phi^i\,\varepsilon_{t-i},\; \sum_{i=h_M}^{h_L-1}\phi^i\,\varepsilon_{t-i}\Big]$$
$$= V\Big[\sum_{i=h_M}^{h_L-1}\phi^i\,\varepsilon_{t-i}\Big] = \sum_{i=h_M}^{h_L-1}\phi^{2i}\,\sigma_\varepsilon^2 = \phi^{2h_M}\left(\frac{1-\phi^{2(h_L-h_M)}}{1-\phi^2}\right)\sigma_\varepsilon^2,$$

while

$$V\big[\eta_{t|h_M,h_L}\big] = \phi^{2h_M}\left(\frac{1-\phi^{2(h_L-h_M)}}{1-\phi^2}\right)\sigma_\varepsilon^2 \le 2\,\mathrm{Cov}\big[\hat{Y}_{t|t-h_S}, \eta_{t|h_M,h_L}\big].$$
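These calculations are straightforward to verify by simulation. The sketch below is our own illustration, not part of the paper's formal results: it simulates an AR(1) process, constructs the optimal forecasts $\hat{Y}_{t|t-h} = \phi^h Y_{t-h}$, and checks that sample analogues of the bounds in Corollaries 1-5 hold. The parameter values, horizons, and variable names are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
phi, sigma_eps, T, burn = 0.7, 1.0, 200_000, 500

# Simulate Y_t = phi * Y_{t-1} + eps_t
eps = rng.normal(0.0, sigma_eps, T + burn)
Y = np.zeros(T + burn)
for t in range(1, T + burn):
    Y[t] = phi * Y[t - 1] + eps[t]
Y = Y[burn:]

def forecast(h):
    """Optimal h-step forecast: element i forecasts Y[i + h]."""
    return phi**h * Y[:-h]

hS, hM, hL = 1, 2, 4
y = Y[hL:]                                     # common target dates
f = {h: forecast(h)[hL - h:] for h in (hS, hM, hL)}

mse = {h: np.mean((y - f[h]) ** 2) for h in (hS, hM, hL)}
msf = {h: np.mean(f[h] ** 2) for h in (hS, hM, hL)}
cov = {h: np.mean(y * f[h]) for h in (hS, hM, hL)}
eta_SL = f[hS] - f[hL]                         # revision between hS and hL

assert mse[hS] <= mse[hM] <= mse[hL]           # Corollary 1: MSE increasing in h
assert msf[hS] >= msf[hM] >= msf[hL]           # Corollary 2: forecast variance decreasing
assert cov[hS] >= cov[hM] >= cov[hL]           # Corollary 3: covariance decreasing
assert np.mean(eta_SL**2) <= 2 * np.mean(y * eta_SL)   # Corollary 5: covariance bound
print("all bounds hold in sample")
```

With a sub-optimal forecast (e.g. noise added to the short-horizon forecast) the Corollary 5 check is the one that fails first, consistent with the simulation evidence reported later in the paper.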

3 Regression Tests of Forecast Rationality

Conventional Mincer-Zarnowitz (MZ) regression tests form a natural benchmark against which the performance of our new optimality tests can be compared, both because they are in widespread use and because they are easy to implement. Such regressions test directly whether forecast errors are orthogonal to variables contained in the forecaster's information set. For a single forecast horizon, $h$, the standard Mincer-Zarnowitz (MZ) regression takes the form:

$$Y_t = \alpha_h + \beta_h\, \hat{Y}_{t|t-h} + u_{t|t-h}, \qquad (18)$$

while forecast optimality can be tested through an implication of optimality that we summarize in the following corollary to Theorem 1:

Corollary 7 Under the assumptions of Theorem 1 and S1, the population values of the parameters in the Mincer-Zarnowitz regression in equation (18) satisfy

$$H_0: \alpha_h = 0 \,\cap\, \beta_h = 1, \quad \text{for each horizon } h.$$

The MZ regression in (18) is usually applied separately to each forecast horizon. A simultaneous test of optimality across all horizons requires a different approach. We next present two standard ways of combining these results.

3.1 Bonferroni bounds on MZ regressions

One approach, adopted in Capistrán (2007), is to run the MZ regression (18) for each horizon, $h = h_1, \dots, h_H$. For each forecast horizon, $h$, we obtain the p-value from a chi-squared test with two degrees of freedom. A Bonferroni bound is then used to obtain a joint test: we reject forecast optimality if the minimum p-value across all $H$ tests is less than the desired size, $\alpha$, divided by $H$. This approach is often quite conservative.
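A minimal sketch of this procedure follows; the helper names are our own, and plain OLS standard errors are used for brevity, whereas overlapping multi-step forecast errors call for HAC inference in practice.

```python
import numpy as np

def mz_pvalue(y, yhat):
    """p-value of the Wald test of (alpha, beta) = (0, 1) in the MZ
    regression y = alpha + beta * yhat + u.  Classical OLS standard
    errors only; HAC errors are needed at horizons h > 1."""
    X = np.column_stack([np.ones_like(yhat), yhat])
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    u = y - X @ b
    V = np.linalg.inv(X.T @ X) * (u @ u) / (len(y) - 2)  # cov of b-hat
    r = b - np.array([0.0, 1.0])                         # deviation from H0
    W = r @ np.linalg.inv(V) @ r                         # ~ chi2(2) under H0
    return np.exp(-W / 2.0)                              # chi2(2) survival function

def bonferroni_mz(y, forecasts, size=0.10):
    """Reject joint optimality if the smallest of the H per-horizon MZ
    p-values falls below size / H (the Bonferroni bound)."""
    pvals = [mz_pvalue(y, f) for f in forecasts]
    return min(pvals) < size / len(forecasts), pvals
```

Note that the chi-squared(2) survival function has the closed form $e^{-W/2}$, which avoids any dependence on a statistics library here.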

3.2 Vector MZ tests

An alternative to the Bonferroni-bounds approach is to stack the MZ equations for each horizon and estimate the model as a system:

$$\begin{bmatrix} Y_{t+h_1} \\ Y_{t+h_2} \\ Y_{t+h_3} \\ \vdots \\ Y_{t+h_H} \end{bmatrix} = \begin{bmatrix} \alpha_{h_1} \\ \alpha_{h_2} \\ \alpha_{h_3} \\ \vdots \\ \alpha_{h_H} \end{bmatrix} + \begin{bmatrix} \beta_{h_1} & 0 & 0 & \cdots & 0 \\ 0 & \beta_{h_2} & 0 & \cdots & 0 \\ 0 & 0 & \beta_{h_3} & & \vdots \\ \vdots & \vdots & & \ddots & 0 \\ 0 & 0 & \cdots & 0 & \beta_{h_H} \end{bmatrix} \begin{bmatrix} \hat{Y}_{t+h_1|t} \\ \hat{Y}_{t+h_2|t} \\ \hat{Y}_{t+h_3|t} \\ \vdots \\ \hat{Y}_{t+h_H|t} \end{bmatrix} + \begin{bmatrix} e^{1}_{t+h_1} \\ e^{2}_{t+h_2} \\ e^{3}_{t+h_3} \\ \vdots \\ e^{H}_{t+h_H} \end{bmatrix}. \qquad (19)$$

The relevant hypothesis is now

$$H_0: \alpha_{h_1} = \cdots = \alpha_{h_H} = 0 \,\cap\, \beta_{h_1} = \cdots = \beta_{h_H} = 1 \qquad (20)$$
$$H_1: \alpha_{h_1} \ne 0 \,\cup \cdots \cup\, \alpha_{h_H} \ne 0 \,\cup\, \beta_{h_1} \ne 1 \,\cup \cdots \cup\, \beta_{h_H} \ne 1.$$

For h > 1, the residuals in (19) will, even under the null of optimality, exhibit autocorrelation and will typically also exhibit cross-autocorrelation, so a HAC estimator of the standard errors is required.

3.3 Univariate Optimal Revision Regression

We next propose a new approach to test optimality that utilizes the complete set of forecasts in the context of univariate regressions. The approach is to estimate a univariate regression of the target variable on the longest-horizon forecast, $\hat{Y}_{t|t-h_H}$, and all the intermediate forecast revisions, $\eta_{t|h_1,h_2}, \dots, \eta_{t|h_{H-1},h_H}$. To derive this test, notice that we can represent a short-horizon forecast as a function of a long-horizon forecast and the intermediate forecast revisions:

$$\hat{Y}_{t|t-h_1} = \hat{Y}_{t|t-h_H} + \sum_{j=1}^{H-1} \eta_{t|h_j,h_{j+1}}.$$

Rather than regressing the outcome variable on the one-period forecast, we propose the following "optimal revision" regression:

$$Y_t = \alpha + \beta_H\, \hat{Y}_{t|t-h_H} + \sum_{j=1}^{H-1} \beta_j\, \eta_{t|h_j,h_{j+1}} + u_t. \qquad (21)$$

Corollary 8 Under the assumptions of Theorem 1 and S1, the population values of the parameters in the optimal revision regression in equation (21) satisfy $H_0: \alpha = 0 \,\cap\, \beta_1 = \cdots = \beta_H = 1$.

The regression in equation (21) can be rewritten as a regression of the target variable on all of the forecasts, from $h_1$ to $h_H$; the parameter restrictions given in Corollary 8 are then that the intercept is zero, the coefficient on the short-horizon forecast is one, and the coefficients on all longer-horizon forecasts are zero. This univariate regression tests both that agents optimally and consistently revise their forecasts at the interim points between the longest and shortest forecast horizons and that the long-run forecast is unbiased. Hence it generalizes the conventional Mincer-Zarnowitz regression (18), which only considers a single horizon.
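The optimal revision regression is simple to implement. The sketch below is our own minimal version: it builds the regressors from the longest-horizon forecast and the interim revisions and returns a Wald statistic for the restrictions in Corollary 8, using classical rather than HAC standard errors for brevity.

```python
import numpy as np

def optimal_revision_test(y, fcasts):
    """Wald statistic for the optimal revision regression (21).

    fcasts : list of forecasts of y ordered from shortest to longest
             horizon, [Yhat_{t|t-h1}, ..., Yhat_{t|t-hH}], all aligned
             on the same target dates.
    Regresses y on an intercept, the longest-horizon forecast, and the
    interim revisions; under optimality the intercept is 0 and every
    slope equals 1.  Classical standard errors only."""
    F = np.column_stack(fcasts)
    H = F.shape[1]
    revisions = F[:, :-1] - F[:, 1:]          # eta_{t|h_j,h_{j+1}}
    X = np.column_stack([np.ones(len(y)), F[:, -1], revisions])
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    u = y - X @ b
    V = np.linalg.inv(X.T @ X) * (u @ u) / (len(y) - X.shape[1])
    r = b - np.r_[0.0, np.ones(H)]            # H0: alpha = 0, all slopes = 1
    W = r @ np.linalg.inv(V) @ r              # ~ chi2(H + 1) under H0
    return W, b
```

Large values of W, relative to chi-squared(H+1) critical values, indicate a rejection of joint forecast optimality.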

3.4 Regression tests without the target variable

All three of the above regression-based tests can be applied with the short-horizon forecast used in place of the target variable. That is, we can undertake a Mincer-Zarnowitz regression of the short-horizon forecast on a long-horizon forecast:

$$\hat{Y}_{t|t-h_1} = \tilde{\alpha}_j + \tilde{\beta}_j\, \hat{Y}_{t|t-h_j} + v_{t|t-h_j} \quad \text{for all } h_j > h_1. \qquad (22)$$

Similarly, we get a vector MZ test that uses forecasts as target variables:

$$\begin{bmatrix} \hat{Y}_{t+h_2|t+h_2-1} \\ \vdots \\ \hat{Y}_{t+h_H|t+h_H-1} \end{bmatrix} = \begin{bmatrix} \tilde{\alpha}_2 \\ \vdots \\ \tilde{\alpha}_H \end{bmatrix} + \begin{bmatrix} \tilde{\beta}_2 & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & \tilde{\beta}_H \end{bmatrix} \begin{bmatrix} \hat{Y}_{t+h_2|t} \\ \vdots \\ \hat{Y}_{t+h_H|t} \end{bmatrix} + \begin{bmatrix} \tilde{v}_{t|t-h_2} \\ \vdots \\ \tilde{v}_{t|t-h_H} \end{bmatrix}. \qquad (23)$$

And finally we can estimate a version of the optimal revision regression:

$$\hat{Y}_{t|t-h_1} = \tilde{\alpha} + \tilde{\beta}_H\, \hat{Y}_{t|t-h_H} + \sum_{j=2}^{H-1} \tilde{\beta}_j\, \eta_{t|h_j,h_{j+1}} + \tilde{u}_t. \qquad (24)$$

The parameter restrictions implied by forecast optimality are the same as in the standard cases, and are presented in the following corollary:

Corollary 9 Under the assumptions of Theorem 1 and S1, (a) the population values of the parameters in the Mincer-Zarnowitz regression by proxy in equation (22) satisfy $H_0: \tilde{\alpha}_h = 0 \,\cap\, \tilde{\beta}_h = 1$, for each horizon $h > h_1$, and (b) the population values of the parameters in the optimal revision regression by proxy in equation (24) satisfy $H_0: \tilde{\alpha} = 0 \,\cap\, \tilde{\beta}_2 = \cdots = \tilde{\beta}_H = 1$.

This result exploits the fact that under optimality (and squared error loss) each forecast can be considered a conditionally unbiased proxy for the (unobservable) target variable, where the conditioning is on the information set available at the time the forecast is made. That is, if $\hat{Y}_{t|t-h_S} = E_{t-h_S}[Y_t]$ for all $h_S$, then $E_{t-h_L}\big[\hat{Y}_{t|t-h_S}\big] = E_{t-h_L}[Y_t]$ for any $h_L > h_S$, and so the short-horizon forecast is a conditionally unbiased proxy for the realization. If forecasts from multiple horizons are available, then we can treat the short-horizon forecast as a proxy for the actual variable, and use it to "test the optimality" of the long-horizon forecast. In fact, this regression tests the internal consistency of the two forecasts, and thus tests an implication of the null that both forecasts are rational.

4 Stationarity and Tests of Forecast Optimality (incomplete)

The literature on forecast evaluation conventionally assumes that the underlying data generating process is covariance stationary. Under this assumption, the Wold decomposition applies and so

$$Y_t = f(t; \theta) + Y_0 + \tilde{Y}_t,$$

where $f(t; \theta)$ captures deterministic parts (e.g. seasonality or trends), $Y_0$ represents the initial condition, and $\tilde{Y}_t$ is the covariance stationary component, which has the Wold representation

$$\tilde{Y}_t = \sum_{i=0}^{t} \psi_i\, \varepsilon_{t-i}, \qquad (25)$$

where $\varepsilon_{t-i} \sim WN(0,1)$ is serially uncorrelated white noise and $\lim_{t\to\infty}\sum_{i=0}^{t}\psi_i^2 < \infty$. Forecast analysis often focuses on the covariance stationary component, $\tilde{Y}_t$. If the underlying data is non-stationary, stationarity is typically recovered by appropriately first- or second-differencing the data.

To see the role played by the covariance stationarity assumption, let $\hat{Y}_{t+h|t-j} = \arg\min_{\hat{Y}} E_{t-j}\big[(Y_{t+h} - \hat{Y})^2\big]$. By optimality, we must have

$$E_t\big[(Y_{t+h} - \hat{Y}_{t+h|t-j})^2\big] \ge E_t\big[(Y_{t+h} - \hat{Y}_{t+h|t})^2\big] \quad \text{for } j \ge 1. \qquad (26)$$

Then, by the law of iterated expectations,

$$E\big[(Y_{t+h} - \hat{Y}_{t+h|t-j})^2\big] \ge E\big[(Y_{t+h} - \hat{Y}_{t+h|t})^2\big] \quad \text{for } j \ge 1. \qquad (27)$$

This result compares the variance of the error in predicting the outcome at time $t+h$ given information at time $t$ against the prediction error given information at an earlier date, $t-j$. Usually, however, forecast comparisons are based on forecasts made at the same date, $t$, and hence conditional on the same information set, $\mathcal{F}_t$, but for different forecast horizons, corresponding to predicting $Y_{t+h+j}$ and $Y_{t+h}$ given $\mathcal{F}_t$. Provided that $(Y_{t+h} - \hat{Y}_{t+h|t-j})$ is covariance stationary, it follows from (27) that

$$E\big[(Y_{t+h+j} - \hat{Y}_{t+h+j|t})^2\big] \ge E\big[(Y_{t+h} - \hat{Y}_{t+h|t})^2\big] \quad \text{for } j \ge 1. \qquad (28)$$

The covariance stationarity assumption is clearly important here: (28) does not follow from (27) if, say, there is a deterministic reduction in the variance of $Y$ between periods $t+h$ and $t+h+j$. Suppose, for example, that

$$Y_\tau = \begin{cases} \mu + \varepsilon_\tau & \text{for } \tau \le t+h \\ \mu + \varepsilon_\tau/2 & \text{for } \tau > t+h, \end{cases} \qquad (29)$$

where $\varepsilon_\tau$ is zero-mean white noise with variance $\sigma^2$. This could be a stylized example of the "Great Moderation". Clearly (28) is now violated, as $\hat{Y}_{t+h+j|t} = \hat{Y}_{t+h|t} = \mu$, and so³

$$E\big[(Y_{t+h+j} - \hat{Y}_{t+h+j|t})^2\big] = \frac{\sigma^2}{4} < \sigma^2 = E\big[(Y_{t+h} - \hat{Y}_{t+h|t})^2\big] \quad \text{for } j \ge 1. \qquad (30)$$

For example, in the case of the Great Moderation, which is believed to have occurred around 1984, a one-year-ahead forecast made in 1982 (i.e. for GDP growth in 1983, while volatility was still high) could well be associated with greater errors than, say, a three-year-ahead forecast (i.e. for GDP growth in 1985, after volatility has come down).

³ Notice here that the expectation, $E[\cdot]$, is taken under the assumption that we know the break in the variance, since this is assumed to be deterministic.

4.1 Fixed event forecasts

Under covariance stationarity, studying the precision of a sequence of forecasts $\hat{Y}_{t|t-h}$ is equivalent to comparing the precision of $\hat{Y}_{t+h|t}$ for different values of $h$. However, this equivalence need not hold when the predicted variable, $Y_t$, is not covariance stationary. One way to deal with non-stationarities such as the break in the variance in (29) is to hold the forecast 'event' fixed, while varying the time to the event, $h$. In this case the forecast optimality test is based on (27) rather than (28). Forecasts where the target date, $t$, is kept fixed while the forecast horizon varies are commonly called fixed-event forecasts; see Clements (1997) and Nordhaus (1987). To see how this works, notice that, by forecast optimality,

$$E_{t-h_S}\big[(Y_t - \hat{Y}_{t|t-h_L})^2\big] \ge E_{t-h_S}\big[(Y_t - \hat{Y}_{t|t-h_S})^2\big] \quad \text{for } h_L \ge h_S. \qquad (31)$$

Moreover, by the law of iterated expectations,

$$E\big[(Y_t - \hat{Y}_{t|t-h_L})^2\big] \ge E\big[(Y_t - \hat{Y}_{t|t-h_S})^2\big] \quad \text{for } h_L \ge h_S. \qquad (32)$$

This result is quite robust. For example, with a break in the variance, (29), we have $\hat{Y}_{t|t-h_L} = \hat{Y}_{t|t-h_S} = \mu$, and

$$E\big[(Y_\tau - \hat{Y}_{\tau|\tau-h_L})^2\big] = E\big[(Y_\tau - \hat{Y}_{\tau|\tau-h_S})^2\big] = \begin{cases} \sigma^2 & \text{for } \tau \le t+h \\ \sigma^2/4 & \text{for } \tau > t+h. \end{cases}$$

As a second example, suppose we let the mean of a time series be subject to a probabilistic break that only becomes known once it has happened. To this end, define an absorbing state process, $s_\tau \in \mathcal{F}_\tau$, such that $s_0 = 0$ and $\Pr(s_\tau = 0 \mid s_{\tau-1} = 0) = \pi$ for all $\tau$. Consider the following process:

$$y_\tau = \mu + s_\tau\, \delta + \varepsilon_\tau, \qquad \varepsilon_\tau \sim (0, \sigma^2).$$

Suppose we condition on $s_{t-h} = 0$, $s_{t-h+1} = 1$, so the permanent break happens at time $t-h+1$. Then

$$\hat{Y}_{t|t-j} = \begin{cases} \mu + \delta & \text{for } j \le h-1 \\ \mu + (1-\pi^j)\,\delta & \text{for } j \ge h, \end{cases}$$

and so we have the expected loss

$$E_{t-j}\big[(Y_t - \hat{Y}_{t|t-j})^2\big] = \begin{cases} \sigma^2 & \text{for } j \le h-1 \\ \sigma^2 + \pi^j(1-\pi^j)\,\delta^2 & \text{for } j \ge h. \end{cases}$$

Once again, monotonicity continues to hold for the fixed-event forecasts, i.e. for $h_L > h_S$ and for all $t$:

$$E\big[(Y_t - \hat{Y}_{t|t-h_L})^2\big] \ge E\big[(Y_t - \hat{Y}_{t|t-h_S})^2\big].$$

4.2 A general class of non-stationary processes

Provided that a fixed-event setup is used, we next show that the variance bound results pertain to a more general class of stochastic processes that do not require covariance stationarity.

Assumption S2: The target variable, $Y_t$, is generated by

$$Y_t = f(t; \theta) + \sum_{i=0}^{\infty} \psi_{it}\, \varepsilon_{t-i}, \qquad (33)$$

where $f(t; \theta)$ captures deterministic parts (e.g. seasonality or trends), $\varepsilon_t \sim WN(0, \sigma_\varepsilon^2)$ is serially uncorrelated mean-zero white noise, and, for all $t$, $\{\psi_{it}\}$ is a sequence of deterministic coefficients such that $\sum_{i=0}^{\infty} \psi_{it}^2 < \infty$.

Assumption S1 (covariance stationarity) is sufficient for S2, and further implies that the coefficients in the representation are not functions of $t$, i.e., $\psi_{it} = \psi_i$ for all $t$. However, Assumption S2 allows for deviations from covariance stationarity that can be modeled via deterministic changes in the usual Wold decomposition weights, $\psi_i$. For example, it may be that, due to a change in economic policy or the economic regime, the impulse response function changes after a certain date. For the example in (29), we get

$$\psi_{0,\tau} = \begin{cases} 1 & \text{for } \tau \le t+h \\ 1/2 & \text{for } \tau > t+h, \end{cases} \qquad (34)$$

while $\psi_{i,\tau} = 0$ for all $i \ge 1$.

It is possible to show that the natural extensions of the inequality results established in Corollaries 1, 2, 3, 4 and 5 also hold for this class of processes:


Proposition 1 Define the following variables:

$$\overline{MSE}_T(h) \equiv \frac{1}{T}\sum_{t=1}^{T} MSE_t(h), \quad \text{where } MSE_t(h) \equiv E\Big[\big(Y_t - \hat{Y}_{t|t-h}\big)^2\Big],$$
$$\overline{MSF}_T(h) \equiv \frac{1}{T}\sum_{t=1}^{T} MSF_t(h), \quad \text{where } MSF_t(h) \equiv E\Big[\hat{Y}_{t|t-h}^2\Big],$$
$$\overline{C}_T(h) \equiv \frac{1}{T}\sum_{t=1}^{T} C_t(h), \quad \text{where } C_t(h) \equiv E\Big[\hat{Y}_{t|t-h}\, Y_t\Big],$$
$$\overline{MSFR}_T(h_S, h_L) \equiv \frac{1}{T}\sum_{t=1}^{T} MSFR_t(h_S, h_L), \quad \text{where } MSFR_t(h_S, h_L) \equiv E\Big[\eta_{t|h_S,h_L}^2\Big],$$
$$\overline{B}_T(h_S, h_L) \equiv \frac{1}{T}\sum_{t=1}^{T} B_t(h_S, h_L), \quad \text{where } B_t(h_S, h_L) \equiv E\Big[Y_t\, \eta_{t|h_S,h_L}\Big].$$

Under the conditions of Theorem 1 and S2, for any $h_S < h_M < h_L$ we then obtain the following:

(a) $\overline{MSE}_T(h_S) \le \overline{MSE}_T(h_L)$
(b) $\overline{MSF}_T(h_S) \ge \overline{MSF}_T(h_L)$
(c) $\overline{C}_T(h_S) \ge \overline{C}_T(h_L)$
(d) $\overline{MSFR}_T(h_S, h_M) \le \overline{MSFR}_T(h_S, h_L)$
(e) $\overline{MSFR}_T(h_S, h_L) \le 2\,\overline{B}_T(h_S, h_L)$
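The quantities in Proposition 1 have obvious sample analogues. The sketch below is a minimal illustration with our own helper names, for a single pair of horizons $h_S < h_L$ under squared error loss; a formal test must additionally account for sampling error, as discussed next.

```python
import numpy as np

def bound_statistics(y, f_S, f_L):
    """Sample analogues of the Proposition 1 moments for horizons
    hS < hL (all series aligned on the same target dates)."""
    eta = f_S - f_L                       # forecast revision
    return {
        "MSE_S": np.mean((y - f_S) ** 2), "MSE_L": np.mean((y - f_L) ** 2),
        "MSF_S": np.mean(f_S ** 2),       "MSF_L": np.mean(f_L ** 2),
        "C_S":   np.mean(y * f_S),        "C_L":   np.mean(y * f_L),
        "MSFR":  np.mean(eta ** 2),       "B":     np.mean(y * eta),
    }

def inequality_violations(s):
    """Point checks of the Proposition 1 inequalities (a), (b), (c), (e).
    True marks an in-sample violation; a formal test would use the
    Wolak (1989) approach rather than raw comparisons."""
    return {
        "(a) MSE_S <= MSE_L": s["MSE_S"] > s["MSE_L"],
        "(b) MSF_S >= MSF_L": s["MSF_S"] < s["MSF_L"],
        "(c) C_S >= C_L":     s["C_S"] < s["C_L"],
        "(e) MSFR <= 2B":     s["MSFR"] > 2 * s["B"],
    }
```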

The inequalities for averages of unconditional moments presented in Proposition 1 can be tested by drawing on a central limit theorem for heterogeneous, serially dependent processes; see Wooldridge and White (1988) and White (2001), for example. The following proposition provides conditions under which these quantities can be estimated.

Proposition 2 Define

$$d_{ht} \equiv \big(Y_t - \hat{Y}_{t|t-h}\big)^2 - \big(Y_t - \hat{Y}_{t|t-(h-1)}\big)^2, \quad \text{for } h = h_2, \dots, h_H,$$
$$d_t \equiv [d_{2t}, \dots, d_{Ht}]', \qquad \hat{\mu}_T \equiv \frac{1}{T}\sum_{t=1}^{T} d_t, \qquad V_T \equiv V\Big[\frac{1}{\sqrt{T}}\sum_{t=1}^{T} \nu_t\Big].$$

Then assume: (i) $d_t = \mu_T + \nu_t$ for $t = 1, 2, \dots$, with $\nu_t \in \mathbb{R}^{H-1}$; (ii) $\nu_t$ is a mixing sequence with either $\phi$ of size $r/(2(r-1))$, $r \ge 2$, or $\alpha$ of size $r/(r-2)$, $r > 2$; (iii) $E[\nu_t] = 0$ for $t = 1, 2, \dots$; (iv) $E[|\nu_{it}|^r] < C < \infty$ for $i = 1, 2, \dots, H-1$; (v) $V_T$ is uniformly positive definite; (vi) there exists a $\hat{V}_T$ that is symmetric and positive definite such that $\hat{V}_T - V_T \to_p 0$. Then:

$$\hat{V}_T^{-1/2}\, \sqrt{T}\,\big(\hat{\mu}_T - \mu_T\big) \Rightarrow N(0, I) \quad \text{as } T \to \infty.$$

Thus we can estimate the average of unconditional moments with the usual sample average, with the estimator of the covariance matrix suitably adjusted, and then conduct the test of inequalities using Wolak's (1989) approach.
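The estimation step can be sketched as follows. The function names and the Bartlett-kernel (Newey-West) covariance with an ad hoc bandwidth are our own choices, and checking each inequality with a one-sided t-statistic is a crude, simplified alternative to the joint Wolak (1989) test used in the paper.

```python
import numpy as np

def newey_west_cov(v, lags):
    """Bartlett-kernel (Newey-West) estimate of the long-run covariance
    matrix for a T x k array of observations v."""
    v = v - v.mean(axis=0)
    T = len(v)
    S = v.T @ v / T
    for l in range(1, lags + 1):
        G = v[l:].T @ v[:-l] / T
        S += (1 - l / (lags + 1)) * (G + G.T)
    return S

def studentized_differences(y, fcasts, lags=4):
    """t-statistics for the mean MSE differences d_{ht} of Proposition 2.

    fcasts: forecasts of y ordered from shortest to longest horizon.
    Positive, significant values support the increasing-MSE property."""
    e2 = np.column_stack([(y - f) ** 2 for f in fcasts])   # squared errors
    d = e2[:, 1:] - e2[:, :-1]                             # adjacent-horizon diffs
    S = newey_west_cov(d, lags)
    se = np.sqrt(np.diag(S) / len(d))
    return d.mean(axis=0) / se
```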

4.3 Model misspecification

A natural question to ask is whether the forecast optimality tests presented so far are really tests of forecast optimality or, rather, tests that forecasters use consistent models in updating their forecasts as the horizon changes. As we indicated earlier, the optimality tests that rely exclusively on the forecasts, and thus exclude information on the outcome variable, test for consistency in forecasters' revisions. This raises the question whether our tests are valid if forecasters use different and possibly misspecified models at different horizons, a situation that might arise if forecasters use the 'direct' approach of fitting separate models to each forecast horizon as opposed to the 'iterated' approach where a single model is fitted to the shortest horizon and then iterated on to obtain multi-step forecasts.

The variance bounds remain valid in situations where forecasters use misspecified models. They require, however, that forecasters realize if they are using a suboptimal short-horizon model whose predictions are dominated by the forecasts from a long-horizon model. Consider, for example, the hypothetical situation where the (misspecified) one-step forecasting model delivers less precise forecasts than, say, a two-step forecasting model. The variance bound results then require that forecasters realize this and either switch to using the two-period forecasts outright (as in the example below) or improve upon their one-step forecasting model. To illustrate this, consider a simple MA(2) specification:⁴

$$Y_t = \varepsilon_t + \theta\, \varepsilon_{t-2}, \qquad (35)$$

where $\varepsilon_t \sim (0, \sigma_\varepsilon^2)$ is white noise, and suppose that forecasters generate $h$-step forecasts by regressing $Y_t$ on $Y_{t-h}$. In particular, one-period forecasts are based on the model $Y_t = \beta_1 Y_{t-1} + u_t$. It is easily verified for this process that $\operatorname{plim}(\hat{\beta}_1) = 0$, so $\hat{Y}_{t|t-1} = 0$, and $E[(Y_t - \hat{Y}_{t|t-1})^2] = \sigma_\varepsilon^2(1+\theta^2)$.

⁴ We are grateful to Ken West for proposing this example.


Turning next to the two-period forecast horizon, suppose the forecaster regresses $Y_t$ on $Y_{t-2}$,

$$Y_t = \beta_2\, Y_{t-2} + u_t,$$

where $\operatorname{plim}(\hat{\beta}_2) = \theta/(1+\theta^2)$. This means that $\hat{Y}_{t|t-2} = \theta(\varepsilon_{t-2} + \theta\varepsilon_{t-4})/(1+\theta^2)$, so

$$E\big[(Y_t - \hat{Y}_{t|t-2})^2\big] = E\left[\left(\varepsilon_t + \theta\varepsilon_{t-2} - \frac{\theta(\varepsilon_{t-2} + \theta\varepsilon_{t-4})}{1+\theta^2}\right)^2\right] = \sigma_\varepsilon^2\left(1 + \frac{\theta^6}{(1+\theta^2)^2} + \frac{\theta^4}{(1+\theta^2)^2}\right) = \sigma_\varepsilon^2\left(1 + \frac{\theta^4}{1+\theta^2}\right).$$

It is easily seen that, in this case,

$$E\big[(Y_t - \hat{Y}_{t|t-2})^2\big] \le E\big[(Y_t - \hat{Y}_{t|t-1})^2\big],$$

seemingly in contradiction of the MSE inequality (2). The reason the MSE inequality fails in this example is that optimizing forecasters should realize, at time $t-1$, that they are using a misspecified model and that in fact the forecast from the previous period, $\hat{Y}_{t|t-2}$, produces a lower MSE. Hence, in this example a better forecast at time $t-1$ is

$$\hat{Y}_{t|t-1} = \hat{Y}_{t|t-2} = \theta(\varepsilon_{t-2} + \theta\varepsilon_{t-4})/(1+\theta^2).$$

Notice also that the mean squared forecast variance bound (6) is violated here, since

$$\operatorname{var}\big(\hat{Y}_{t|t-1}\big) = 0 < \operatorname{var}\big(\hat{Y}_{t|t-2}\big) = \theta^2\sigma_\varepsilon^2/(1+\theta^2).$$

Thus, clearly, the forecaster in this example is not producing optimal forecasts. This example also illustrates that our variance bounds can be used to identify suboptimal forecasts and hence help to improve on misspecified models.

In some special situations, by virtue of being weak inequalities, the variance inequality tests may not have power to detect a sub-optimal forecast. This situation arises, for example, when all forecasts are constant, i.e. $\hat{Y}_{t|t-h} = c$ for all $h$. For this "broken clock" forecast, MSE values are constant across horizons, and forecast variances are zero, as are the forecast revisions and the covariance between forecasts and actual values. This is a very special case, however, that rules out any variation in the forecast and so can be deemed empirically irrelevant.
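The plims and the MSE ranking in the MA(2) example above are easy to confirm by simulation. The sketch below is our own check under the stated design; the value of $\theta$ and the sample size are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
theta, T = 0.9, 400_000

eps = rng.normal(0.0, 1.0, T + 4)
Y = eps[4:] + theta * eps[2:-2]            # Y_t = eps_t + theta * eps_{t-2}

def direct_forecast(h):
    """'Direct' h-step forecast: OLS of Y_t on Y_{t-h}, no intercept.
    Returns the slope estimate and the forecasts of Y[h:]."""
    y, x = Y[h:], Y[:-h]
    beta = (x @ y) / (x @ x)
    return beta, beta * x

b1, f1 = direct_forecast(1)
b2, f2 = direct_forecast(2)
mse1 = np.mean((Y[1:] - f1) ** 2)
mse2 = np.mean((Y[2:] - f2) ** 2)

assert abs(b1) < 0.05                          # plim beta_1 = 0
assert abs(b2 - theta / (1 + theta**2)) < 0.05 # plim beta_2 = theta/(1+theta^2)
# The misspecified one-step model is beaten by the two-step model,
# seemingly violating the increasing-MSE property ...
assert mse2 < mse1
# ... and the decreasing forecast-variance bound is violated as well:
assert np.var(f1) < np.var(f2)
```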

5 Monte Carlo Simulations

There is little existing evidence on the finite sample performance of forecast optimality tests, particularly when multiple forecast horizons are simultaneously involved. Moreover, we have proposed a set of new optimality tests which take the form of bounds on second moments of the data and require the Wolak (1989) test of inequality constraints, which also has not been widely used so far.⁵ For these reasons it is important to shed light on the finite sample performance of the various forecast optimality tests. Unfortunately, obtaining analytical results on the size and power of these tests for realistic sample sizes and types of alternatives is not possible. To overcome this, we use Monte Carlo simulations of a variety of different scenarios. We next describe the simulation design and then present the size and power results.

5.1 Simulation design

To capture persistence in the underlying data, we consider a simple AR(1) model for the data generating process:

$$Y_t = \mu_y + \phi\,(Y_{t-1} - \mu_y) + \varepsilon_t, \quad t = 1, 2, \dots, T = 100, \qquad \varepsilon_t \sim \text{iid } N(0, \sigma_\varepsilon^2). \qquad (36)$$

We calibrate the parameters to quarterly US CPI inflation data: $\phi = 0.5$, $\sigma_y^2 = 0.5$, $\mu_y = 0.75$. Optimal forecasts for this process are given by:

$$\hat{Y}_{t|t-h} = E_{t-h}[Y_t] = \mu_y + \phi^h\,(Y_{t-h} - \mu_y).$$

We consider all horizons between $h = 1$ and $H$, and we set $H \in \{4, 8\}$.

5.1.1 Measurement error

The performance of optimality tests that rely on the target variable versus tests that only use forecasts is likely to be heavily influenced by measurement errors in the underlying target variable. To study the effect of this, we assume that the target variable, $Y_t$, is observed with error,

$$\tilde{Y}_t = Y_t + \omega_t, \qquad \omega_t \sim \text{iid } N(0, \sigma_\omega^2).$$

⁵ One exception is Patton and Timmermann (2009), who provide some evidence on the performance of the Wolak test in the context of tests of financial return models.

We consider three values for the magnitude of the measurement error, $\sigma_\omega$, calibrated relative to the standard deviation of the underlying variable, $\sigma_y$: (i) zero, $\sigma_\omega = 0$ (as for CPI); (ii) medium, $\sigma_\omega = 0.65\,\sigma_y$ (as for GDP growth first-release data);⁶ and (iii) high, $\sigma_\omega = \sigma_y$.

y

namely (i) zero,

= 0:65 (as for GDP growth …rst release data);6 and (iii) high;

= 1.

Sub-optimal forecasts

To study the power of the optimality tests, we consider a variety of ways in which the forecasts can be suboptimal. First, we consider forecasts that are contaminated by the same level of noise at all horizons: ^ Ytjt where error. Forecasts may alternatively be a¤ected by noise whose standard deviation is increasing in the horizon, ranging from zero for the short-horizon forecast to 2 horizon (H = 8):

;h ;h h

^ = Ytjt

h

+

;h Zt;t h ,

Zt;t

h

s iid N (0; 1) ;

= 0:65

y

for all h and thus has the same magnitude as the medium level measurement

0:65

y

for the longest forecast

=

2 (h 1) 7

0:65

y,

for h = 1; 2; :::; H

8:

Forecasts a¤ected by noise whose magnitude is decreasing in the horizon (from zero for h = 8 to 2 0:65

y

for h = 1) take the form:

;h

=

2 (8 7

h)

0:65

y,

for h = 1; 2; :::; H

8:

Finally, consistent with example 3 we consider forecasts with either “sticky” updating or, conversely, “overshooting” : ^ Ytjt

h

^ = Ytjt

h

+ (1

^ )Ytjt

h 1,

for h = 1; 2; :::; H:

To capture “sticky” forecasts we set = 1:5.

= 1=2, whereas for “overshooting” forecasts we set

Tests based on forecast revisions may have better finite-sample properties than tests based on the forecasts themselves, particularly when the underlying process is highly persistent.

⁶ The "medium" value is calibrated to match US GDP growth data, as reported by Faust, Rogers and Wright (2005).

5.2 Results from the simulation study

Table 1 reports the size of the various tests for a nominal size of 10%. Results are based on 1,000 Monte Carlo simulations and a sample of 100 observations. The variance bounds tests are clearly under-sized, particularly for H = 4, where none of the tests have a size above 4%. In contrast, the MZ Bonferroni bound is over-sized.⁷ The vector MZ test is also hugely oversized, while the size of the univariate optimal revision regression is close to the nominal value of 10%. Because of the clear size distortions of the MZ Bonferroni bound and the vector MZ regression, we do not consider those tests further in the simulation study.

Turning to the power of the various forecast optimality tests, Table 2 reports the results of our simulations, using the three measurement noise scenarios (constant, increasing and decreasing noise) and the sticky updating and overshooting schemes, respectively. In the first scenario, with equal noise across different horizons (Panel A), neither the MSE, MSF, MSFR nor decreasing covariance bounds have much power to detect deviations from forecast optimality. This holds across all three levels of measurement error. In contrast, the covariance bound on forecast revisions has excellent power, close to 100%, to detect this type of deviation from optimality, particularly when the short-horizon forecast, $\hat{Y}_{t|t-1}$, which is not affected by noise, is used as the dependent variable.⁸ The power

is somewhat weaker when the covariance bound test is applied to the actual variable, although it improves by roughly 10% when the measurement error is reduced from the high value to zero. The univariate optimal revision regression (21) also has excellent power properties, particularly when the dependent variable is the short-horizon forecast. The scenario with additive measurement noise that increases in the horizon, h, is ideal for the decreasing MSF test, since now the variance of the long-horizon forecast is artificially inflated, in contradiction of (6). Thus, as expected, Panel B of Table 2 shows that this test has very good power under this scenario: 42% in the case with four forecast horizons, rising to 100% in the case with eight forecast horizons. The MSE and MSFR bounds have zero power for this type of deviation from forecast optimality. The covariance bound based on the predicted variable has power around

⁷ Conventionally, Bonferroni bounds tests are conservative and tend to be undersized. Here, the individual MZ regression tests are even more oversized than the Bonferroni combination, so the Bonferroni bound leads to a reduction in the size of the test, which nevertheless remains oversized.
⁸ The covariance bound (14) works so well because noise in the forecast increases $E\big[\eta_{t|h_S,h_L}^2\big]$ without affecting $E\big[Y_t\, \eta_{t|h_S,h_L}\big]$, thereby making it less likely that $2E\big[Y_t\, \eta_{t|h_S,h_L}\big] - E\big[\eta_{t|h_S,h_L}^2\big] \ge 0$ holds.

15% when H = 4, which increases to 90-95% when H = 8. The covariance bound with the actual value replaced by the short-run forecast, (16), has the highest power among all tests, with power of 70% when H = 4 and power of 100% when H = 8. This is substantially higher than the power of the univariate optimal revision regression test (21), which has power close to 10-15% when conducted on the actual values and power of 60% when the short-run forecast is used as the dependent variable.⁹

Panel C of Table 2 shows that the scenario with noise in the forecast that decreases as a function of the forecast horizon, h, gives rise to high power for the increasing MSE test and also for the MSFR test, with power again being much higher when H = 8 than when H = 4. However, once again the covariance bound test and the univariate optimal revision regression (21) have superior power. The univariate optimal revision regression (21) and the covariance bound test have stronger power in the scenarios with noise that is either constant or decreasing in the forecast horizon, h, because the precision of the forecasts is much better at short horizons. Hence, the greater the noise that is added to the short-horizon forecasts, the better these tests are able to detect inefficiency of the forecast.

The next scenario assumes sticky updating in the forecasts. In this case, shown in Panel D of Table 2, only the univariate optimal revision regression (21) seems to have much power to detect deviations from a fully rational forecast. The power lies between 27% and 60% for this test when the regression is based on the actual variable, and grows stronger as a result of reducing the measurement error from the high value to zero. Interestingly, power is close to 100% when the univariate optimal revision regression is based on the short-term forecasts.

In the final scenario, with overshooting, shown in Panel E of Table 2, none of the tests has particularly high power. However, the univariate optimal revision regression dominates, with power ranging between 23% and 58%, depending on the level of the measurement error and on how many horizons are included. In this case power is actually stronger the fewer constraints (H) are considered.

We also consider using a Bonferroni bound to combine various tests based on actual values, forecasts only, or all tests. Results for these tests are shown at the bottom of Tables 1 and 2.

⁹ For this case, $\hat{Y}_{t|t-h_H}$ is very poor, but this forecast is also very noisy, and so deviations from rationality can be relatively difficult to detect.

In all cases we find that the size of the tests falls well below the nominal size, as expected for a Bonferroni-based test, although the power seems to be quite high and comparable to the best among the individual tests.

In conclusion, viewed across all five scenarios, the covariance bound test performs best among all the second-moment bounds. Interestingly, it generally performs much better than the MSE bound, which is the most commonly known variance bound. Among the regression tests, excellent performance is found for the univariate optimal revision regression, particularly when the test uses the short-run forecast as the dependent variable. This test tends to have superior power properties and performs well across most deviations from forecast efficiency. Either the covariance test or the univariate optimal revision regression has the highest power in all the experiments considered here, with the covariance bound test being best in the realistic case where the noise in the forecasts increases with the horizon. Bonferroni bound tests conducted on the regression and second moment tests are also found to have good properties, pooling the information across various individual tests.

6 Empirical Application

As an empirical illustration of the forecast optimality tests, we next evaluate the Federal Reserve "Greenbook" forecasts of GDP growth, the GDP deflator, and CPI inflation. Data are from Faust and Wright (2009), who carefully extracted the Greenbook forecasts and actual values from real-time Fed publications.¹⁰ We use quarterly observations of the target variable over the period from 1982Q1 to 2000Q4. Forecasts begin with the current quarter and run up to eight quarters ahead. However, since the forecasts have many missing observations at the longest horizons and we are interested in aligning the data in "event time", we only study horizons up to five quarters, i.e., h = 0, 1, 2, 3, 4, 5. A few quarterly observations are missing, leaving a total of 69 observations.

Empirical results are reported in Table 3. The key findings are as follows. For GDP growth we observe a strong rejection of internal consistency via the univariate optimal revision regression that uses the short-run forecast as the target variable, (24), and a mild violation of the increasing mean-squared forecast revision test, (11).

Turning to the GDP deflator, we find that several tests reject forecast optimality. In particular, the tests for a decreasing covariance, the covariance bound on forecast revisions, a decreasing

¹⁰ We are grateful to Jonathan Wright for providing the data.

mean squared forecast, and the univariate optimal revision regression test all lead to rejections. Figure 1 illustrates the rejection of the variance bound based on the forecasts and shows that, in contradiction with (6), the MSF is not weakly decreasing in the horizon, h. In fact, the MSF is higher for h = 5 than for h = 0. Finally, for the CPI inflation rate we find a violation of the bound on the variance of the revisions, (11), and a rejection through the univariate optimal revision regression. For all three variables, the Bonferroni-based combination test rejects multi-horizon forecast optimality at the 5% level. The types of rejections give some clues as to possible sources of suboptimality.

The source of some of the rejections of forecast optimality is further illustrated in Figures 1-3. For each of the series, Figure 1 plots the mean squared errors and the variance of the forecasts on top of each other. Under the null of forecast optimality, the forecast and forecast error should be orthogonal and the sum of these two components should be constant across horizons. Clearly, this does not hold here, particularly for the GDP deflator and CPI inflation series. As shown in Figure 2, which plots the mean squared error and forecast variances separately, the variance of the forecast in fact increases in the horizon for the GDP deflator, and it follows an inverse U-shaped pattern for CPI inflation, both in apparent contradiction of the decreasing forecast variance property established earlier. Figure 3 plots mean squared errors and mean squared forecast revisions against the forecast horizon. Whereas the mean squared forecast revisions are mostly increasing as a function of the forecast horizon for the two inflation series, for GDP growth we observe the opposite pattern, namely a very high mean squared forecast revision at the one-quarter horizon, followed by lower values at longer horizons. This is the opposite of what we would expect, and so explains the (weak) rejection of forecast optimality for this case.

The Monte Carlo simulations are closely in line with our empirical findings. Rejections of forecast optimality come mostly from the covariance bounds, (14) and (16), and the univariate optimal revision regressions, (21) and (24). Moreover, for GDP growth, rejections tend to be stronger when only the forecasts are used. This makes sense since this variable is likely to be most affected by data revisions and measurement errors.


7 Conclusion

In this paper we propose several new tests of forecast optimality that exploit information from multi-horizon forecasts. Our new tests are based on (weak) monotonicity properties of second moment bounds that must hold across forecast horizons and so are joint tests of optimality across several horizons. We show that monotonicity tests, whether conducted on the squared forecast errors, squared forecasts, squared forecast revisions or the covariance between the target variable and the forecast revision, can be restated as inequality constraints on regression models, and that econometric methods proposed by Gourieroux et al. (1982) and Wolak (1987, 1989) can be adopted. Suitably modified versions of these tests, conducted on the sequence of forecasts or forecast revisions recorded at different horizons, can be used to test the internal consistency properties of an optimal forecast, thereby side-stepping the issues that arise for conventional tests when the target variable is either missing or observed with measurement error. Simulations suggest that the new tests are more powerful than extant ones and also have better finite sample size. In particular, a new covariance bound test that constrains the variance of forecast revisions by their covariance with the outcome variable, and a univariate joint regression test that includes the long-horizon forecast and all interim forecast revisions, generally have good power to detect deviations from forecast optimality. These results show the importance of testing the joint implications of forecast rationality across multiple horizons when such data are available. An empirical analysis of the Fed's Greenbook forecasts of inflation and output growth corroborates the ability of the new tests to detect evidence of deviations from forecast optimality. Our analysis in this paper assumed squared error loss. However, many of the results can be extended to allow for more general loss functions with known shape parameters.
For example, the MSE bound is readily generalized to a bound based on non-decreasing expected loss as the horizon grows, see Patton and Timmermann (2007a). Similarly, the orthogonality regressions can be extended to use the generalized forecast error, which is essentially the score associated with the forecaster's first-order condition, see Patton and Timmermann (2010). Allowing for the case with a parametric loss function but unknown (estimated) parameters is more involved and is a topic we leave for future research.
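As a concrete illustration of the univariate optimal revision regression, the sketch below regresses the outcome on the longest-horizon forecast and the interim forecast revisions, and checks that the estimated coefficients are close to their values under the null (intercept 0, all slopes 1). The AR(1) setup is an assumed stand-in for a real forecast panel:

```python
import numpy as np

rng = np.random.default_rng(1)
phi, T = 0.8, 100_000

# Simulate an AR(1) target variable.
y = np.zeros(T)
for t in range(1, T):
    y[t] = phi * y[t - 1] + rng.standard_normal()

H = 3                                  # horizons h = 1, 2, 3
f = {h: phi**h * y[H - h:T - h] for h in range(1, H + 1)}
actual = y[H:]

# Regressors: constant, longest-horizon forecast, interim revisions.
X = np.column_stack([
    np.ones_like(actual),
    f[H],                              # longest-horizon forecast
    f[1] - f[2],                       # revision between h = 2 and h = 1
    f[2] - f[3],                       # revision between h = 3 and h = 2
])
beta = np.linalg.lstsq(X, actual, rcond=None)[0]

# Under optimality: intercept 0, all slope coefficients 1.
assert np.allclose(beta, [0.0, 1.0, 1.0, 1.0], atol=0.05)
```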


8 Appendix: Proofs

Proof of Corollary 1. By the optimality of $\hat{Y}_{t|t-h_S} = E_{t-h_S}[Y_t]$ and the fact that $\hat{Y}_{t|t-h_L} \in \mathcal{F}_{t-h_S}$ for $h_S < h_L$, we have
$$E_{t-h_S}\left[\left(Y_t - \hat{Y}_{t|t-h_S}\right)^2\right] \le E_{t-h_S}\left[\left(Y_t - \hat{Y}_{t|t-h_L}\right)^2\right],$$
which implies $E[(Y_t - \hat{Y}_{t|t-h_S})^2] \le E[(Y_t - \hat{Y}_{t|t-h_L})^2]$ by the LIE, and so $MSE(h_S) \le MSE(h_L)$: the MSE is weakly increasing in $h$.

Proof of Corollary 2. Forecast optimality under MSE loss implies $\hat{Y}_{t|t-h} = E_{t-h}[Y_t]$. Thus $E_{t-h}[e_{t|t-h}] = E_{t-h}[Y_t - \hat{Y}_{t|t-h}] = 0$, which implies $E[e_{t|t-h}] = 0$ and $Cov[\hat{Y}_{t|t-h}, e_{t|t-h}] = 0$, and so $V[Y_t] = V[\hat{Y}_{t|t-h}] + E[e_{t|t-h}^2]$, or $V[\hat{Y}_{t|t-h}] = V[Y_t] - E[e_{t|t-h}^2]$. Corollary 1 showed that $E[e_{t|t-h}^2]$ is weakly increasing in $h$, which implies that $V[\hat{Y}_{t|t-h}]$ must be weakly decreasing in $h$. Finally, note that $V[\hat{Y}_{t|t-h}] = E[\hat{Y}_{t|t-h}^2] - E[\hat{Y}_{t|t-h}]^2 = E[\hat{Y}_{t|t-h}^2] - E[Y_t]^2$, since $E[\hat{Y}_{t|t-h}] = E[Y_t]$. Thus if $V[\hat{Y}_{t|t-h}]$ is weakly decreasing in $h$ we also have that $E[\hat{Y}_{t|t-h}^2]$ is weakly decreasing in $h$.

Proof of Corollary 3. As used in the above proofs, forecast optimality implies $Cov[\hat{Y}_{t|t-h}, e_{t|t-h}] = 0$, and thus $Cov[\hat{Y}_{t|t-h}, Y_t] = Cov[\hat{Y}_{t|t-h}, \hat{Y}_{t|t-h} + e_{t|t-h}] = V[\hat{Y}_{t|t-h}]$. Corollary 2 showed that $V[\hat{Y}_{t|t-h}]$ is weakly decreasing in $h$, and thus $Cov[\hat{Y}_{t|t-h}, Y_t]$ is also weakly decreasing in $h$. Further, since $Cov[\hat{Y}_{t|t-h_S}, Y_t] = E[\hat{Y}_{t|t-h_S} Y_t] - E[\hat{Y}_{t|t-h_S}] E[Y_t] = E[\hat{Y}_{t|t-h_S} Y_t] - E[Y_t]^2$, we also have that $E[\hat{Y}_{t|t-h_S} Y_t]$ is weakly decreasing in $h$.

Proof of Corollary 4. Write $\eta_{t|h_S,h_L} \equiv \hat{Y}_{t|t-h_S} - \hat{Y}_{t|t-h_L}$ for the forecast revision. Under the assumption that $h_S < h_M < h_L$, note that
$$\eta_{t|h_S,h_L} = \left(\hat{Y}_{t|t-h_S} - \hat{Y}_{t|t-h_M}\right) + \left(\hat{Y}_{t|t-h_M} - \hat{Y}_{t|t-h_L}\right) = \eta_{t|h_S,h_M} + \eta_{t|h_M,h_L},$$
and that $E_{t-h_M}[\eta_{t|h_S,h_M}] = E_{t-h_M}[\hat{Y}_{t|t-h_S} - \hat{Y}_{t|t-h_M}] = 0$ by the law of iterated expectations. Thus $Cov[\eta_{t|h_S,h_M}, \eta_{t|h_M,h_L}] = 0$ and so $V[\eta_{t|h_S,h_L}] = V[\eta_{t|h_S,h_M}] + V[\eta_{t|h_M,h_L}] \ge V[\eta_{t|h_S,h_M}]$. Further, since $E_{t-k}[\eta_{t|h,k}] = 0$ for any $h < k$, we then have $E[\eta_{t|h,k}] = 0$ and thus $E[\eta_{t|h_S,h_L}^2] \ge E[\eta_{t|h_S,h_M}^2]$.
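The decomposition in the proof of Corollary 4 can be verified numerically: for an optimal forecast the two sub-revisions are uncorrelated, so their variances add. A minimal check under an assumed AR(1) process:

```python
import numpy as np

rng = np.random.default_rng(5)
phi, T = 0.8, 200_000

y = np.zeros(T)
for t in range(1, T):
    y[t] = phi * y[t - 1] + rng.standard_normal()

hS, hM, hL = 1, 2, 4
# Optimal h-step forecasts of y_t, all aligned to the same targets y[hL:].
f = {h: phi**h * y[hL - h:T - h] for h in (hS, hM, hL)}

rev_SL = f[hS] - f[hL]   # total revision
rev_SM = f[hS] - f[hM]   # sub-revision between hM and hS
rev_ML = f[hM] - f[hL]   # sub-revision between hL and hM

# Corollary 4: sub-revisions are uncorrelated, so variances add, and the
# variance of the revision is increasing in the length of the interval.
assert abs(np.cov(rev_SM, rev_ML)[0, 1]) < 0.01
assert abs(rev_SL.var() - rev_SM.var() - rev_ML.var()) < 0.02
assert rev_SL.var() >= rev_SM.var()
```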


Proof of Corollary 5. For any $h_S < h_L$, Corollary 1 showed that $V[Y_t - \hat{Y}_{t|t-h_L}] \ge V[Y_t - \hat{Y}_{t|t-h_S}]$, so
$$V[Y_t] + V[\hat{Y}_{t|t-h_L}] - 2Cov[Y_t, \hat{Y}_{t|t-h_L}] \ge V[Y_t] + V[\hat{Y}_{t|t-h_S}] - 2Cov[Y_t, \hat{Y}_{t|t-h_S}],$$
and thus $V[\hat{Y}_{t|t-h_L}] - 2Cov[Y_t, \hat{Y}_{t|t-h_L}] \ge V[\hat{Y}_{t|t-h_S}] - 2Cov[Y_t, \hat{Y}_{t|t-h_S}]$. Writing $\hat{Y}_{t|t-h_S} = \hat{Y}_{t|t-h_L} + \eta_{t|h_S,h_L}$, where $\eta_{t|h_S,h_L}$ denotes the forecast revision, the right-hand side equals
$$V[\hat{Y}_{t|t-h_L} + \eta_{t|h_S,h_L}] - 2Cov[Y_t, \hat{Y}_{t|t-h_L} + \eta_{t|h_S,h_L}] = V[\hat{Y}_{t|t-h_L}] + V[\eta_{t|h_S,h_L}] - 2Cov[Y_t, \hat{Y}_{t|t-h_L}] - 2Cov[Y_t, \eta_{t|h_S,h_L}],$$
using $Cov[\hat{Y}_{t|t-h_L}, \eta_{t|h_S,h_L}] = 0$. Thus $V[\eta_{t|h_S,h_L}] \le 2Cov[Y_t, \eta_{t|h_S,h_L}]$.
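A numerical check of Corollary 5's bound, again under an assumed AR(1) process: for an optimal forecast the revision variance equals its covariance with the outcome, so the bound holds with slack:

```python
import numpy as np

rng = np.random.default_rng(2)
phi, T = 0.8, 200_000

y = np.zeros(T)
for t in range(1, T):
    y[t] = phi * y[t - 1] + rng.standard_normal()

h_S, h_L = 1, 4
f_S = phi**h_S * y[h_L - h_S:T - h_S]   # short-horizon forecast of y_t
f_L = phi**h_L * y[:T - h_L]            # long-horizon forecast of y_t
actual = y[h_L:]

rev = f_S - f_L                          # forecast revision
v_rev = rev.var()
cov_y_rev = np.cov(actual, rev)[0, 1]

# Under optimality V[rev] = Cov[Y, rev], so V[rev] <= 2 Cov[Y, rev] holds.
assert v_rev <= 2 * cov_y_rev
assert abs(v_rev - cov_y_rev) < 0.05 * v_rev
```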

Proof of Corollary 6. (a) Let $h < k$. Then $Cov[\hat{Y}_{t|t-k}, \hat{Y}_{t|t-h}] = Cov[\hat{Y}_{t|t-k}, \hat{Y}_{t|t-k} + \eta_{t|h,k}] = V[\hat{Y}_{t|t-k}]$, since $Cov[\hat{Y}_{t|t-k}, \eta_{t|h,k}] = 0$, where $\eta_{t|h,k} \equiv \hat{Y}_{t|t-h} - \hat{Y}_{t|t-k}$ is the forecast revision. From Corollary 2 we have that $V[\hat{Y}_{t|t-k}]$ is decreasing in $k$, and thus $Cov[\hat{Y}_{t|t-k}, \hat{Y}_{t|t-h}]$ is decreasing in $k$. Since $E[\hat{Y}_{t|t-k}] = E[Y_t]$ for all $k$, this also implies that $E[\hat{Y}_{t|t-k} \hat{Y}_{t|t-h}]$ is decreasing in $k$.

(b) From Corollary 5 we have, for $h_S < h_M < h_L$, $V[\eta_{t|h_M,h_L}] \le 2Cov[Y_t, \eta_{t|h_M,h_L}] = 2Cov[\hat{Y}_{t|t-h_S} + e_{t|t-h_S}, \eta_{t|h_M,h_L}] = 2Cov[\hat{Y}_{t|t-h_S}, \eta_{t|h_M,h_L}]$, since $Cov[e_{t|t-h_S}, \eta_{t|h_M,h_L}] = 0$. Further, since $E[\eta_{t|h_M,h_L}] = 0$, this also implies that $E[\eta_{t|h_M,h_L}^2] \le 2E[\hat{Y}_{t|t-h_S}\, \eta_{t|h_M,h_L}]$.

Proof of Corollary 7. The population value of $\beta_h$ is $Cov[\hat{Y}_{t|t-h}, Y_t]/V[\hat{Y}_{t|t-h}]$, which under optimality equals
$$\beta_h = \frac{Cov[\hat{Y}_{t|t-h}, \hat{Y}_{t|t-h} + e_{t|t-h}]}{V[\hat{Y}_{t|t-h}]} = \frac{V[\hat{Y}_{t|t-h}]}{V[\hat{Y}_{t|t-h}]} = 1.$$
The population value of $\alpha_h$ under optimality equals $\alpha_h = E[Y_t] - \beta_h E[\hat{Y}_{t|t-h}] = E[Y_t] - E[\hat{Y}_{t|t-h}] = 0$ by the LIE, since $\hat{Y}_{t|t-h} = E_{t-h}[Y_t]$.

Proof of Corollary 8. First, we re-write the regression in equation (21) as a function of the individual forecasts, using the fact that $\eta_{t|h_j,h_{j+1}} = \hat{Y}_{t|t-h_j} - \hat{Y}_{t|t-h_{j+1}}$:
$$Y_t = \alpha + \beta_H \hat{Y}_{t|t-h_H} + \gamma_1 \left(\hat{Y}_{t|t-h_1} - \hat{Y}_{t|t-h_2}\right) + \ldots + \gamma_{H-1} \left(\hat{Y}_{t|t-h_{H-1}} - \hat{Y}_{t|t-h_H}\right) + u_t$$
$$= \alpha + \gamma_1 \hat{Y}_{t|t-h_1} + (\gamma_2 - \gamma_1) \hat{Y}_{t|t-h_2} + \ldots + (\gamma_{H-1} - \gamma_{H-2}) \hat{Y}_{t|t-h_{H-1}} + (\beta_H - \gamma_{H-1}) \hat{Y}_{t|t-h_H} + u_t.$$
We next use the Frisch-Waugh-Lovell theorem (see Davidson and MacKinnon, 1993, for example) to show that the population coefficient on $\hat{Y}_{t|t-h_1}$ equals one and the population coefficients on $\hat{Y}_{t|t-h_2}, \ldots, \hat{Y}_{t|t-h_H}$ all equal zero. Under the null of forecast optimality we have $\hat{Y}_{t|t-h_1} = E_{t-h_1}[Y_t]$. Consider a first-stage regression of $Y_t$ on $\hat{Y}_{t|t-h_1}$ (without a constant). Following the steps in the proof of Corollary 7, this yields a coefficient of one, and the regression residuals are $e_{t|t-h_1} = Y_t - \hat{Y}_{t|t-h_1}$. Next consider a regression of $\hat{Y}_{t|t-h_j}$ on $\hat{Y}_{t|t-h_1}$ (again without a constant) for each $j = 2, 3, \ldots, H$, and let the slope coefficients from these regressions be denoted $\delta_j$. Finally, consider a regression of the residuals from the first regression on the matrix of residuals from the latter $H-1$ regressions, namely:
$$e_{t|t-h_1} = \lambda_2 \left(\hat{Y}_{t|t-h_2} - \delta_2 \hat{Y}_{t|t-h_1}\right) + \ldots + \lambda_H \left(\hat{Y}_{t|t-h_H} - \delta_H \hat{Y}_{t|t-h_1}\right) + \xi_t.$$
Since forecast optimality implies $E_{t-h_1}[e_{t|t-h_1}] = 0$, and each of these regressors lies in $\mathcal{F}_{t-h_1}$, we then have $\lambda_2 = \ldots = \lambda_H = 0$. By the Frisch-Waugh-Lovell theorem, the coefficients on $\hat{Y}_{t|t-h_2}, \ldots, \hat{Y}_{t|t-h_H}$ in the full regression coincide with $\lambda_2, \ldots, \lambda_H$. Thus the population values of the coefficients on $\hat{Y}_{t|t-h_2}, \ldots, \hat{Y}_{t|t-h_H}$ are zero, the population value of the coefficient on $\hat{Y}_{t|t-h_1}$ is one, and the population intercept is zero. This implies that $\gamma_1 = 1$, $\gamma_2 - \gamma_1 = 0, \ldots, \beta_H - \gamma_{H-1} = 0$, and so $\alpha = 0$, $\beta_H = 1$ and $\gamma_1 = \gamma_2 = \ldots = \gamma_{H-1} = 1$, as claimed.

Proof of Corollary 9. (a) Under optimality, with $\eta_{t|h_1,h_j} \equiv \hat{Y}_{t|t-h_1} - \hat{Y}_{t|t-h_j}$ the forecast revision,
$$\tilde{\beta}_h = \frac{Cov[\hat{Y}_{t|t-h_1}, \hat{Y}_{t|t-h_j}]}{V[\hat{Y}_{t|t-h_j}]} = \frac{Cov[\hat{Y}_{t|t-h_j} + \eta_{t|h_1,h_j}, \hat{Y}_{t|t-h_j}]}{V[\hat{Y}_{t|t-h_j}]} = \frac{V[\hat{Y}_{t|t-h_j}]}{V[\hat{Y}_{t|t-h_j}]} = 1,$$
and $\tilde{\alpha} = E[\hat{Y}_{t|t-h_1}] - \tilde{\beta} E[\hat{Y}_{t|t-h_j}] = E[Y_t] - E[Y_t] = 0$.

(b) Follows using the same steps as the proof of Corollary 8, noting that $\hat{Y}_{t|t-h_2} = E_{t-h_2}[Y_t] = E_{t-h_2}[\hat{Y}_{t|t-h_1}]$ by the LIE, and that $E_{t-h_2}[\eta_{t|h_1,h_2}] = E_{t-h_2}[\hat{Y}_{t|t-h_1} - \hat{Y}_{t|t-h_2}] = 0$.

Proof of Proposition 1. Throughout the proof we will use the fact that
$$\hat{Y}_{t|t-h} = E_{t-h}[Y_t] = E_{t-h}\left[ f(t,\theta) + \sum_{i=0}^{\infty} \theta_{it} \varepsilon_{t-i} \right] = f(t,\theta) + \sum_{i=h}^{\infty} \theta_{it} \varepsilon_{t-i}$$
and
$$e_{t|t-h} \equiv Y_t - \hat{Y}_{t|t-h} = \sum_{i=0}^{h-1} \theta_{it} \varepsilon_{t-i}.$$
(a) From the above we have
$$E\left[e_{t|t-h}^2\right] = V\left[e_{t|t-h}\right] = V\left[\sum_{i=0}^{h-1} \theta_{it} \varepsilon_{t-i}\right] = \sigma_{\varepsilon}^2 \sum_{i=0}^{h-1} \theta_{it}^2,$$
and thus
$$E\left[e_{t|t-h_L}^2\right] - E\left[e_{t|t-h_S}^2\right] = \sigma_{\varepsilon}^2 \sum_{i=0}^{h_L-1} \theta_{it}^2 - \sigma_{\varepsilon}^2 \sum_{i=0}^{h_S-1} \theta_{it}^2 = \sigma_{\varepsilon}^2 \sum_{i=h_S}^{h_L-1} \theta_{it}^2 \ge 0,$$
which implies that $MSE_t(h_S) \le MSE_t(h_L)$ for all $t$. This further implies that
$$\overline{MSE}_T(h_S) \equiv \frac{1}{T} \sum_{t=1}^{T} MSE_t(h_S) \le \frac{1}{T} \sum_{t=1}^{T} MSE_t(h_L) \equiv \overline{MSE}_T(h_L) \text{ for all } T.$$
The proofs of (b)-(e) all follow similar extensions of the arguments used in part (a) and are omitted in the interests of brevity.

Proof of Proposition 2. Follows from Exercise 5.21 of White (2001).
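Proposition 1(a)'s expression for the MSE as a partial sum of squared moving-average weights can be checked directly for a stationary AR(1), where the MA weights are $\theta_i = \phi^i$:

```python
import numpy as np

rng = np.random.default_rng(3)
phi, sigma2, T = 0.8, 1.0, 300_000

eps = np.sqrt(sigma2) * rng.standard_normal(T)
y = np.zeros(T)
for t in range(1, T):
    y[t] = phi * y[t - 1] + eps[t]

for h in (1, 2, 3):
    fcst = phi**h * y[:T - h]            # E_{t-h}[y_t] for an AR(1)
    mse_hat = np.mean((y[h:] - fcst) ** 2)
    # Proposition 1(a): MSE(h) = sigma^2 * sum_{i=0}^{h-1} theta_i^2.
    mse_theory = sigma2 * sum(phi ** (2 * i) for i in range(h))
    assert abs(mse_hat - mse_theory) < 0.02 * mse_theory
```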

References

[1] Capistran, Carlos, 2007, Optimality Tests for Multi-Horizon Forecasts. Manuscript No. 2007-14, Banco de Mexico.
[2] Clements, Michael P., 1997, Evaluating the Rationality of Fixed-Event Forecasts. Journal of Forecasting 16, 225-239.
[3] Clements, Michael P., 2009, Internal Consistency of Survey Respondents' Forecasts: Evidence Based on the Survey of Professional Forecasters. In Jennifer L. Castle and Neil Shephard (eds.), The Methodology and Practice of Econometrics: A Festschrift in Honour of David F. Hendry, Oxford University Press, Chapter 8, 206-226.
[4] Clements, Michael P., and David F. Hendry, 1998, Forecasting Economic Time Series. Cambridge: Cambridge University Press.
[5] Corradi, Valentina, Andres Fernandez, and Norman R. Swanson, 2009, Information in the Revision Process of Real-Time Datasets. Journal of Business and Economic Statistics 27, 455-467.
[6] Croushore, Dean, 2006, Forecasting with Real-Time Macroeconomic Data. Pages 961-982 in G. Elliott, C. Granger and A. Timmermann (eds.), Handbook of Economic Forecasting, North-Holland: Amsterdam.
[7] Croushore, Dean, and Tom Stark, 2001, A Real-Time Data Set for Macroeconomists. Journal of Econometrics 105, 111-130.
[8] Davies, A., and K. Lahiri, 1995, A New Framework for Analyzing Survey Forecasts Using Three-Dimensional Panel Data. Journal of Econometrics 68, 205-227.
[9] Diebold, Francis X., 2001, Elements of Forecasting. 2nd edition. Ohio: South-Western.
[10] Diebold, Francis X., and Glenn D. Rudebusch, 1991, Forecasting Output with the Composite Leading Index: A Real-Time Analysis. Journal of the American Statistical Association 86, 603-610.
[11] Faust, Jon, John Rogers and Jonathan Wright, 2005, News and Noise in G-7 GDP Announcements. Journal of Money, Credit and Banking 37, 403-419.
[12] Faust, Jon, and Jonathan Wright, 2009, Comparing Greenbook and Reduced Form Forecasts Using a Large Realtime Dataset. Journal of Business and Economic Statistics 27, 468-479.


[13] Giacomini, Raffaella, and Halbert White, 2006, Tests of Conditional Predictive Ability. Econometrica 74(6), 1545-1578.
[14] Gourieroux, C., A. Holly and A. Monfort, 1982, Likelihood Ratio Test, Wald Test, and Kuhn-Tucker Test in Linear Models with Inequality Constraints on the Regression Parameters. Econometrica 50, 63-80.
[15] Marcellino, Massimiliano, James H. Stock and Mark W. Watson, 2006, A Comparison of Direct and Iterated Multistep AR Methods for Forecasting Macroeconomic Time Series. Journal of Econometrics 135, 499-526.
[16] Mincer, Jacob, and Victor Zarnowitz, 1969, The Evaluation of Economic Forecasts. In J. Mincer (ed.), Economic Forecasts and Expectations, National Bureau of Economic Research, New York.
[17] Moon, Roger, Frank Schorfheide, Eleonora Granziera and Mihye Lee, 2009, Inference for VARs Identified with Sign Restrictions. Mimeo, University of Southern California and University of Pennsylvania.
[18] Newey, Whitney K., and Kenneth D. West, 1987, A Simple, Positive Semidefinite, Heteroskedasticity and Autocorrelation Consistent Covariance Matrix. Econometrica 55, 703-708.
[19] Nordhaus, William D., 1987, Forecasting Efficiency: Concepts and Applications. Review of Economics and Statistics 69, 667-674.
[20] Patton, Andrew J., and Allan Timmermann, 2007a, Properties of Optimal Forecasts under Asymmetric Loss and Nonlinearity. Journal of Econometrics 140(2), 884-918.
[21] Patton, Andrew J., and Allan Timmermann, 2007b, Testing Forecast Optimality under Unknown Loss. Journal of the American Statistical Association 102, 1172-1184.
[22] Patton, Andrew J., and Allan Timmermann, 2008, Predictability of Output Growth and Inflation: A Multi-Horizon Survey Approach. Unpublished manuscript, Duke and UCSD.
[23] Patton, Andrew J., and Allan Timmermann, 2009, Monotonicity in Asset Returns: New Tests with Applications to the Term Structure, the CAPM, and Portfolio Sorts. Forthcoming in Journal of Financial Economics.
[24] Patton, Andrew J., and Allan Timmermann, 2010, Generalized Forecast Errors, A Change of Measure and Forecast Optimality. Forthcoming in T. Bollerslev, J. Russell, and M. Watson (eds.), Volatility and Time Series Econometrics: Essays in Honour of Robert F. Engle, Oxford University Press.
[25] Schmidt, Peter, 1974, The Asymptotic Distribution of Forecasts in the Dynamic Simulation of an Econometric Model. Econometrica 42, 303-309.
[26] Timmermann, Allan, 1993, How Learning in Financial Markets Generates Excess Volatility and Predictability in Stock Prices. Quarterly Journal of Economics 108(4), 1135-1145.
[27] West, Kenneth D., 1996, Asymptotic Inference about Predictive Ability. Econometrica 64, 1067-1084.

[28] West, Kenneth D., and Michael W. McCracken, 1998, Regression-Based Tests of Predictive Ability. International Economic Review 39, 817-840.
[29] White, Halbert, 2001, Asymptotic Theory for Econometricians. Second Edition, Academic Press, San Diego.
[30] Wolak, Frank A., 1987, An Exact Test for Multiple Inequality and Equality Constraints in the Linear Regression Model. Journal of the American Statistical Association 82, 782-793.
[31] Wolak, Frank A., 1989, Testing Inequality Constraints in Linear Econometric Models. Journal of Econometrics 31, 205-235.
[32] Wooldridge, Jeffrey M., and Halbert White, 1988, Some Invariance Principles and Central Limit Theorems for Dependent Heterogeneous Processes. Econometric Theory 4, 210-230.


Table 1: Monte Carlo simulation of size of the inequality tests and regression-based tests of forecast optimality

                                          H=4                    H=8
Meas. error variance:             High    Med   Zero      High    Med   Zero
Inc MSE                            1.9    1.7    1.1       7.8    6.4    8.3
Dec COV                            1.1    1.1    0.8       8.4    7.3    7.2
COV bound                          2.2    1.2    0.4       2.3    1.4    0.8
Dec MSF                            2.1    2.1    2.1       5.3    5.3    5.3
Inc MSFR                           0.4    0.4    0.4       5.5    5.5    5.5
Dec COV, h=1                       0.9    0.9    0.9       6.4    6.4    6.4
COV bound, h=1                     3.6    3.6    3.6       4.6    4.6    4.6
Inc MSE & Dec MSF                  1.5    1.3    0.8       8.3    8.2    9.1
Inc MSE & Inc MSFR                 1.1    0.8    0.6       7.2    6.7    6.5
Univar MZ, Bonferroni             13.8   15.0   17.8      19.5   19.4   20.3
Univar MZ, Bonferroni, h=1        16.0   16.0   16.0      19.2   19.2   19.2
Vector MZ                         39.8   38.0   31.2      63.0   62.0   58.8
Vector MZ, h=1                    25.2   25.2   25.2      52.4   52.4   52.4
Univar opt. revision regr.        11.3   11.5   11.0      12.4   11.8   11.0
Univar opt. revision regr., h=1   12.0   12.0   12.0      11.3   11.3   11.3
Bonf, using actuals                3.9    4.2    3.6       7.4    7.6    8.0
Bonf, using forecasts only         3.0    3.0    3.0       6.6    6.6    6.6
Bonf, all tests                    3.6    3.5    2.2       7.6    7.5    6.2

Notes: This table presents the outcome of 1,000 Monte Carlo simulations of the size of various forecast optimality tests. Data is generated by a first-order autoregressive process with parameters calibrated to quarterly US CPI inflation data, i.e., μ_y = 0.5, σ²_y = 0.5 and φ_y = 0.75. We consider three levels of error in the measured value of the target variable (high, medium and zero). Optimal forecasts are generated under the assumption that this process (and its parameter values) are known to forecasters. The simulations assume a sample of 100 observations and a nominal size of 10%. The inequality tests are based on the Wolak (1989) test and use simulated critical values based on a mixture of chi-squared variables. Rows with ‘h = 1’ refer to cases where the one-period forecast is used in place of the predicted variable.
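A sketch of the simulation design described in the notes (the Wolak critical-value computation is omitted, and the mapping of the reported parameter values to mean, innovation variance and autoregressive coefficient is our assumption):

```python
import numpy as np

def simulate_panel(T=100, H=4, phi=0.75, sigma2=0.5, mu=0.5,
                   meas_var=0.0, rng=None):
    """Simulate actuals, optimal forecasts for h = 1..H, and a measured
    (possibly error-ridden) version of the target variable."""
    if rng is None:
        rng = np.random.default_rng()
    burn = 100
    eps = np.sqrt(sigma2) * rng.standard_normal(T + burn + H)
    y = np.zeros(T + burn + H)
    for t in range(1, len(y)):
        y[t] = mu + phi * (y[t - 1] - mu) + eps[t]
    y = y[burn:]
    actual = y[H:H + T]
    # Optimal h-step forecast: mu + phi^h * (y_{t-h} - mu).
    fcst = np.column_stack([mu + phi**h * (y[H - h:H - h + T] - mu)
                            for h in range(1, H + 1)])
    measured = actual + np.sqrt(meas_var) * rng.standard_normal(T)
    return measured, fcst

measured, fcst = simulate_panel(meas_var=0.5, rng=np.random.default_rng(4))
mse = ((measured[:, None] - fcst) ** 2).mean(axis=0)  # MSE by horizon
```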


Table 2: Monte Carlo simulation of power of the inequality tests and regression-based tests of forecast optimality

                                          H=4                    H=8
Meas. error variance:             High    Med   Zero      High    Med   Zero

PANEL A: Equal noise across all forecast horizons
Inc MSE                            7.5    6.8    6.8      13.4   12.4   12.6
Dec COV                            7.3    6.4    6.1      13.0   13.5   12.2
COV bound                         73.7   79.6   83.2      74.8   78.7   83.3
Dec MSF                            5.8    5.8    5.8      15.0   15.0   15.0
Inc MSFR                           9.9    9.9    9.9      14.8   14.8   14.8
Dec COV, h=1                       8.9    8.9    8.9      15.4   15.4   15.4
COV bound, h=1                    98.0   98.0   98.0      99.1   99.1   99.1
Inc MSE & Dec MSF                  7.8    7.6    6.9      27.3   26.7   26.0
Inc MSE & Inc MSFR                 8.2    7.3    7.0      23.4   23.3   23.0
Univar opt. revision regr.        91.9   98.1   99.6      85.3   95.9   99.0
Univar opt. revision regr., h=1  100.0  100.0  100.0     100.0  100.0  100.0
Bonf, using actuals               83.4   94.0   98.5      78.7   89.8   96.7
Bonf, using forecasts only       100.0  100.0  100.0      99.9   99.9   99.9
Bonf, all tests                  100.0  100.0  100.0     100.0  100.0  100.0

Notes: This table presents the outcome of 1,000 Monte Carlo simulations of the power of various forecast optimality tests. Data is generated by a first-order autoregressive process with parameters calibrated to quarterly US CPI inflation data, i.e., μ_y = 0.5, σ²_y = 0.5 and φ_y = 0.75. We consider three levels of error in the measured value of the target variable (high, medium and zero). Optimal forecasts are generated under the assumption that this process (and its parameter values) are known to forecasters. Power is then studied against sub-optimal forecasts obtained as follows: A: forecasts are contaminated by the same level of noise across all horizons; B: forecasts are contaminated by noise that increases in the horizon; C: forecasts are contaminated by noise that decreases in the horizon; D: forecasts are updated in a sticky manner; E: forecasts overshoot their optimal values. The simulations assume a sample of 100 observations and a nominal size of 10%. Rows with ‘h = 1’ refer to cases where the one-period forecast is used in place of the predicted variable.
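Panel A's "equal noise" alternative can be reproduced in miniature: adding noise of equal variance to the forecasts at every horizon inflates the revision variance without raising its covariance with the outcome, which is why the covariance bound test has high power against this alternative. A minimal sketch under an assumed AR(1) process:

```python
import numpy as np

rng = np.random.default_rng(6)
phi, T, s2 = 0.8, 200_000, 1.0   # s2: variance of the contaminating noise

y = np.zeros(T)
for t in range(1, T):
    y[t] = phi * y[t - 1] + rng.standard_normal()

hS, hL = 1, 2
# Contaminate the optimal forecasts with equal-variance noise at each horizon.
f_S = phi**hS * y[hL - hS:T - hS] + np.sqrt(s2) * rng.standard_normal(T - hL)
f_L = phi**hL * y[:T - hL] + np.sqrt(s2) * rng.standard_normal(T - hL)
actual = y[hL:]

rev = f_S - f_L
# Equal-variance noise inflates V[rev] but not Cov[Y, rev], so the
# bound V[rev] <= 2 Cov[Y, rev] of Corollary 5 is violated.
assert rev.var() > 2 * np.cov(actual, rev)[0, 1]
```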


Table 2 (continued): Monte Carlo simulation of power of the inequality tests and regression-based tests of forecast optimality

                                          H=4                    H=8
Meas. error variance:             High    Med   Zero      High    Med   Zero

PANEL B: Noise increases with the horizon
Inc MSE                            0.2    0.2    0.0       0.0    0.0    0.0
Dec COV                            3.3    3.0    2.8      13.5   12.8   12.2
COV bound                         12.9   14.5   14.9      90.7   93.3   95.2
Dec MSF                           42.3   42.3   42.3     100.0  100.0  100.0
Inc MSFR                           0.0    0.0    0.0       0.0    0.0    0.0
Dec COV, h=1                       4.9    4.9    4.9      12.9   12.9   12.9
COV bound, h=1                    69.2   69.2   69.2     100.0  100.0  100.0
Inc MSE & Dec MSF                 25.5   25.3   23.2      99.8   99.8   99.8
Inc MSE & Inc MSFR                 0.2    0.0    0.0       0.0    0.0    0.0
Univar opt. revision regr.        11.7   12.3   11.9      13.1   13.6   12.9
Univar opt. revision regr., h=1   63.6   63.6   63.6      54.6   54.6   54.6
Bonf, using actuals                7.9    9.2    9.2      80.1   84.1   86.6
Bonf, using forecasts only        63.0   63.0   63.0     100.0  100.0  100.0
Bonf, all tests                   54.7   54.7   54.4     100.0  100.0  100.0

PANEL C: Noise decreases with the horizon
Inc MSE                           71.3   79.8   87.1     100.0  100.0  100.0
Dec COV                            6.7    5.8    6.1      13.6   12.5   13.1
COV bound                         99.5   99.8   99.9      99.5   99.8   99.9
Dec MSF                            0.4    0.4    0.4       0.1    0.1    0.1
Inc MSFR                          55.9   55.9   55.9     100.0  100.0  100.0
Dec COV, h=1                      10.6   10.6   10.6      17.6   17.6   17.6
COV bound, h=1                    99.8   99.8   99.8      99.2   99.2   99.2
Inc MSE & Dec MSF                 50.8   59.6   69.1     100.0  100.0  100.0
Inc MSE & Inc MSFR                79.6   83.1   88.8     100.0  100.0  100.0
Univar opt. revision regr.       100.0  100.0  100.0     100.0  100.0  100.0
Univar opt. revision regr., h=1  100.0  100.0  100.0     100.0  100.0  100.0
Bonf, using actuals              100.0  100.0  100.0     100.0  100.0  100.0
Bonf, using forecasts only       100.0  100.0  100.0     100.0  100.0  100.0
Bonf, all tests                  100.0  100.0  100.0     100.0  100.0  100.0

Notes: See notes to Panel A of this table.


Table 2 (continued): Monte Carlo simulation of power of the inequality tests and regression-based tests of forecast optimality

                                          H=4                    H=8
Meas. error variance:             High    Med   Zero      High    Med   Zero

PANEL D: Sticky updating
Inc MSE                            2.0    1.4    1.1       6.8    6.0    7.6
Dec COV                            2.1    2.1    2.1       9.2    8.7    8.5
COV bound                          0.9    0.7    0.4       1.2    0.8    0.5
Dec MSF                            4.8    4.8    4.8       8.2    8.2    8.2
Inc MSFR                           0.0    0.0    0.0       4.8    4.8    4.8
Dec COV, h=1                       2.4    2.4    2.4       7.4    7.4    7.4
COV bound, h=1                     2.7    2.7    2.7       4.3    4.3    4.3
Inc MSE & Dec MSF                  2.6    2.4    2.1      11.0   11.0   10.7
Inc MSE & Inc MSFR                 0.8    0.9    0.9       7.9    7.0    8.3
Univar opt. revision regr.        33.2   44.8   59.1      27.5   35.2   49.9
Univar opt. revision regr., h=1   99.5   99.5   99.5      98.9   98.9   98.9
Bonf, using actuals               15.2   22.0   37.4      14.9   19.1   28.9
Bonf, using forecasts only        97.5   97.5   97.5      93.3   93.3   93.3
Bonf, all tests                   96.3   96.2   96.2      89.4   89.4   89.4

PANEL E: Over-shooting
Inc MSE                            2.9    2.4    2.2       8.8    8.0    7.3
Dec COV                            1.1    0.5    0.7       5.6    5.7    5.6
COV bound                          5.4    4.6    4.8       4.9    4.7    4.9
Dec MSF                            1.0    1.0    1.0       3.6    3.6    3.6
Inc MSFR                           2.5    2.5    2.5       8.1    8.1    8.1
Dec COV, h=1                       1.0    1.0    1.0       6.6    6.6    6.6
COV bound, h=1                     7.7    7.7    7.7       8.5    8.5    8.5
Inc MSE & Dec MSF                  1.2    0.9    0.7       6.6    7.3    6.5
Inc MSE & Inc MSFR                 2.0    2.1    1.9      10.1    9.5    7.1
Univar opt. revision regr.        29.8   41.8   57.9      23.8   32.4   48.0
Univar opt. revision regr., h=1   32.3   32.3   32.3      27.9   27.9   27.9
Bonf, using actuals               13.5   23.3   38.0       8.7   14.8   28.2
Bonf, using forecasts only         0.0    0.0    0.0      13.7   13.7   13.7
Bonf, all tests                   10.3   17.1   31.3      11.9   14.5   22.1

Notes: See notes to Panel A of this table.


Table 3: Forecast optimality tests for Greenbook forecasts

Series:                          Growth   Deflator   Inflation
Inc MSE                           0.599     0.964      0.644
Dec COV                           0.898     0.058      0.991
COV bound                         0.498     0.000      0.009
Dec MSF                           0.898     0.026      0.725
Inc MSFR                          0.084     0.936      0.624
Dec COV, h=1                      0.802     0.075      0.795
COV bound, h=1                    0.216     0.010      0.656
Inc MSE & Dec MSF                 0.934     0.126      0.616
Inc MSE & Inc MSFR                0.250     0.992      0.749
Univar opt. revision regr.        0.709     0.000      0.001
Univar opt. revision regr., h=1   0.000     0.009      0.022
Bonf, using actuals               1.000     0.000      0.004
Bonf, using forecasts only        0.000     0.047      0.108
Bonf, all tests                   0.000     0.001      0.012

Note: This table presents p-values from inequality- and regression-based tests of forecast optimality applied to quarterly Greenbook forecasts of GDP growth, the GDP deflator and CPI inflation. The sample covers the period 1982Q1-2000Q4. Six forecast horizons are considered, i.e., h = 0, 1, 2, 3, 4, 5, and the forecasts are aligned in event time. The inequality tests are based on the Wolak (1989) test and use simulated critical values based on a mixture of chi-squared variables. Rows with ‘h = 1’ refer to cases where the one-period forecast is used in place of the predicted variable.
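The Bonferroni combinations in the last three rows reject at level α when the smallest constituent p-value falls below α divided by the number of tests combined. A minimal sketch (which p-values enter each combination is our assumption for illustration):

```python
def bonferroni_reject(pvals, alpha=0.05):
    """Reject the joint null if the minimum p-value is below alpha / k."""
    return min(pvals) < alpha / len(pvals)

# Illustrative p-values for the GDP deflator, taken from the forecast-based
# rows of Table 3 (Dec MSF, Inc MSFR, Dec COV h=1, COV bound h=1,
# univar opt. revision regr. h=1); the grouping is our assumption.
deflator_pvals = [0.026, 0.936, 0.075, 0.010, 0.009]
print(bonferroni_reject(deflator_pvals))   # True: 0.009 < 0.05 / 5
```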


[Figure: MSE, V[forecast] and V[actual] plotted against the forecast horizon (h = -5 to 0); vertical axis: variance.]

Figure 1: Theoretical mean squared errors and forecast variances for an AR(1) process with unconditional variance of 1 and autoregressive coefficient of 0.8.


[Figure: three panels (GDP deflator, CPI inflation, GDP growth), each plotting MSE, V[forecast] and V[actual] against the forecast horizon (h = -5 to 0).]

Figure 2: Mean squared errors and forecast variances, for US GDP deflator, CPI inflation and GDP growth.


[Figure: two panels plotting mean squared forecast errors (left) and mean squared forecast revisions (right) against the forecast horizon (h = -5 to 0), for GDP growth, CPI inflation and the GDP deflator.]

Figure 3: Mean squared errors (left panel) and mean-squared forecast revisions (right panel), for US GDP deflator, CPI inflation and GDP growth.

