Boston College Economics Working Paper No. 667
The Stata Journal (yyyy), vv, Number ii, pp. 1–38

Enhanced routines for instrumental variables/GMM estimation and testing
Christopher F. Baum (Boston College)
Mark E. Schaffer (Heriot–Watt University)
Steven Stillman (Motu Economic and Public Policy Research)
Abstract. We extend our 2003 paper on instrumental variables (IV) and GMM estimation and testing and describe enhanced routines that address HAC standard errors, weak instruments, LIML and k-class estimation, tests for endogeneity and RESET, and autocorrelation tests for IV estimates.

Keywords: st0001, instrumental variables, weak instruments, generalized method of moments, endogeneity, heteroskedasticity, serial correlation, HAC standard errors, LIML, CUE, overidentifying restrictions, Frisch–Waugh–Lovell theorem, RESET, Cumby–Huizinga test

1 Introduction

In an earlier paper, Baum et al. (2003), we discussed instrumental variables (IV) estimators in the context of Generalized Method of Moments (GMM) estimation and presented Stata routines for estimation and testing comprising the ivreg2 suite. Since that time, those routines have been considerably enhanced and additional routines have been added to the suite. This paper presents the analytical underpinnings of both basic IV/GMM estimation and these enhancements and describes the enhanced routines. Some of these features are now also available in Stata 10's ivregress, while others are not. The additions include:

• Estimation and testing that is robust to, and efficient in the presence of, arbitrary serial correlation.

• A range of test statistics that allow the user to address the problems of underidentification or weak identification, including statistics that are robust in the presence of heteroskedasticity, autocorrelation or clustering.

• Three additional IV/GMM estimators: the GMM continuously updated estimator (CUE) of Hansen et al. (1996); limited-information maximum likelihood (LIML); and k-class estimators.

• A more intuitive syntax for GMM estimation: the gmm2s option requests the two-step feasible efficient GMM estimator, which reduces to standard IV/2SLS if no robust covariance matrix estimator is also requested. The cue option requests the continuously-updated GMM estimator, which reduces to standard LIML if no robust covariance matrix estimator is also requested.

• A more intuitive syntax for a “GMM distance” or C test of the endogeneity of regressors.

• An option that allows the user to “partial out” regressors: something which is particularly useful when the user has a rank-deficient estimate of the covariance matrix of orthogonality conditions (common with the cluster option and singleton dummy variables).

• Several advanced options, including options that will speed up estimation using ivreg2 by suppressing the calculation of various checks and statistics.

• A version of the RESET regression specification test, ivreset, that (unlike official Stata's ovtest) is appropriate for use in an instrumental variables context.

• A test for autocorrelation in time-series errors, ivactest, that (unlike official Stata's estat bgodfrey) is appropriate for use in an instrumental variables context.

We review the definitions of the method of instrumental variables and IV-GMM in the next section to set the stage. The following sections of the paper discuss each of these enhancements in turn. The last two sections provide a summary of ivreg2 estimation options and syntax diagrams for all programs in the extended ivreg2 suite.

2 IV and GMM estimation

The Generalized Method of Moments was introduced by Lars Hansen in his celebrated 1982 paper. It is now a mainstay of both econometric practice and econometrics textbooks. We limit our exposition here to the linear case, which is what ivreg2 handles. The exposition here draws on Hayashi (2000). Alternatively, for more detail and references see our earlier paper (Baum et al. (2003)) and Chapter 8 of Baum (2006).

2.1 Setup

The equation to be estimated is, in matrix notation,

    y = Xβ + u    (1)

with typical row

    y_i = X_i β + u_i    (2)

The matrix of regressors X is n × K, where n is the number of observations. Some of the regressors are endogenous, so that E(X_i u_i) ≠ 0. We partition the set of regressors into [X1 X2], with the K1 regressors X1 assumed under the null to be endogenous and the K2 ≡ (K − K1) remaining regressors X2 assumed exogenous, giving us

    y = [X1 X2] [β1' β2']' + u    (3)

The set of instrumental variables is Z and is n × L. This is the full set of variables that are assumed to be exogenous, i.e., E(Z_i u_i) = 0. We partition the instruments into [Z1 Z2], where the L1 instruments Z1 are excluded instruments and the remaining L2 ≡ (L − L1) instruments Z2 ≡ X2 are the included instruments/exogenous regressors:

    Regressors:   X = [X1 X2] = [X1 Z2] = [Endogenous  Exogenous]
    Instruments:  Z = [Z1 Z2] = [Excluded  Included]

The order condition for identification of the equation is L ≥ K, implying there must be at least as many excluded instruments (L1) as there are endogenous regressors (K1), as Z2 is common to both lists. If L = K, the equation is said to be exactly identified by the order condition; if L > K, the equation is overidentified. The order condition is necessary but not sufficient for identification; see Section 7 for a full discussion.

2.2 The Generalized Method of Moments

The assumption that the instruments Z are exogenous can be expressed as E(Z_i u_i) = 0. We are considering linear GMM only, and in this case the L instruments give us a set of L moments:

    g_i(β) = Z_i' u_i = Z_i' (y_i − X_i β)    (4)

where g_i is L × 1. The exogeneity of the instruments means that there are L moment conditions, or orthogonality conditions, that will be satisfied at the true value of β:

    E(g_i(β)) = 0    (5)

Each of the L moment equations corresponds to a sample moment. For some given estimator β̂, we can write these L sample moments as

    ḡ(β̂) = (1/n) Σ_{i=1}^{n} g_i(β̂) = (1/n) Σ_{i=1}^{n} Z_i' (y_i − X_i β̂) = (1/n) Z'û    (6)

The intuition behind GMM is to choose an estimator for β that brings ḡ(β̂) as close to zero as possible. If the equation to be estimated is exactly identified, so that L = K, then we have as many equations—the L moment conditions—as we do unknowns: the K coefficients in β̂. In this case it is possible to find a β̂ that solves ḡ(β̂) = 0, and this GMM estimator is in fact a special case of the IV estimator, as we discuss below.
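To make the exactly identified case concrete: when L = K, the matrix Z'X is square and, under the rank condition discussed in Section 7, invertible, so setting the sample moments to zero has a closed-form solution (a standard derivation, using only the definitions above):

    ḡ(β̂) = (1/n) Z'(y − Xβ̂) = 0   ⟹   Z'y = Z'X β̂   ⟹   β̂_IV = (Z'X)⁻¹ Z'y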


If the equation is overidentified, however, so that L > K, then we have more equations than we do unknowns. In general it will not be possible to find a β̂ that will set all L sample moment conditions exactly to zero. In this case, we take an L × L weighting matrix W and use it to construct a quadratic form in the moment conditions. This gives us the GMM objective function:

    J(β̂) = n ḡ(β̂)' W ḡ(β̂)    (7)

A GMM estimator for β is the β̂ that minimizes J(β̂):

    β̂_GMM ≡ arg min_{β̂} J(β̂) = n ḡ(β̂)' W ḡ(β̂)    (8)

In the linear case we are considering, deriving and solving the K first order conditions ∂J(β̂)/∂β̂ = 0 (treating W as a matrix of constants) yields the GMM estimator:1

    β̂_GMM = (X'Z W Z'X)⁻¹ X'Z W Z'y    (9)
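As a concrete illustration of equation (9), the estimator can be computed directly in Mata. The following is a minimal sketch, not ivreg2's actual implementation; it assumes the Wooldridge (2003) mroz data used later in this paper, and it chooses W = (Z'Z/n)⁻¹, the weighting matrix for which (9) reduces to IV/2SLS:

. use http://fmwww.bc.edu/ec-p/data/wooldridge/mroz.dta, clear
. keep if lwage < .
. mata:
:     y = st_data(., "lwage")
:     X = st_data(., "educ exper expersq"), J(st_nobs(), 1, 1)
:     Z = st_data(., "age kidslt6 kidsge6 exper expersq"), J(st_nobs(), 1, 1)
:     W = invsym(Z'Z / rows(Z))      // this choice of W yields the IV/2SLS estimator
:     b = invsym(X'Z * W * Z'X) * (X'Z * W * Z'y)
:     b'    // same coefficients as ivreg2 lwage exper expersq (educ=age kidslt6 kidsge6)
: end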

The GMM estimator is consistent for any symmetric positive definite weighting matrix W, and thus there are as many GMM estimators as there are choices of weighting matrix W. Efficiency is not guaranteed for an arbitrary W, so we refer to the estimator defined in Equation (9) as the possibly inefficient GMM estimator. We are particularly interested in efficient GMM estimators: GMM estimators with minimum asymptotic variance. Moreover, for any GMM estimator to be useful, we must be able to conduct inference, and for that we need estimates of the variance of the estimator. Both require estimates of the covariance matrix of orthogonality conditions, a key concept in GMM estimation.

2.3 Inference, efficiency, and the covariance matrix of orthogonality conditions

Denote by S the asymptotic covariance matrix of the moment conditions ḡ:

    S = AVar(ḡ(β)) = lim_{n→∞} (1/n) E(Z'uu'Z)    (10)

where S is an L × L matrix and ḡ(β) = (1/n) Z'u. That is, S is the variance of the limiting distribution of √n ḡ (Hayashi (2000), p. 203).

The asymptotic distribution of the possibly inefficient GMM estimator can be written as follows. Let Q_XZ ≡ E(X_i'Z_i). The asymptotic variance of the inefficient GMM estimator defined by an arbitrary weighting matrix W is given by:

    V(β̂_GMM) = (Q_XZ' W Q_XZ)⁻¹ (Q_XZ' W S W Q_XZ) (Q_XZ' W Q_XZ)⁻¹    (11)

1. The results of the minimization, and hence the GMM estimator, will be the same for weighting matrices that differ by a constant of proportionality.


Under standard assumptions (see Hayashi (2000), pp. 202–203, 209) the inefficient GMM estimator is “√n-consistent”. That is,

    √n (β̂_GMM − β) → N[0, V(β̂_GMM)]    (12)

where → denotes convergence in distribution. Strictly speaking, therefore, we should perform hypothesis tests on √n β̂_GMM, using equation (11) for the variance-covariance matrix. Standard practice, however, is to transform the variance-covariance matrix (11) rather than the coefficient vector (9). This is done by normalizing V(β̂_GMM) by 1/n, so that the variance-covariance matrix reported by statistical packages such as Stata is in fact

    V((1/√n) β̂_GMM) = (1/n) (Q_XZ' W Q_XZ)⁻¹ (Q_XZ' W S W Q_XZ) (Q_XZ' W Q_XZ)⁻¹    (13)

The efficient GMM estimator (EGMM) makes use of an optimal weighting matrix W which minimizes the asymptotic variance of the estimator. This is achieved by choosing W = S⁻¹. Substituting this into Equation (9) and Equation (13), we obtain the efficient GMM estimator

    β̂_EGMM = (X'Z S⁻¹ Z'X)⁻¹ X'Z S⁻¹ Z'y    (14)

with asymptotic variance

    V(β̂_EGMM) = (Q_XZ' S⁻¹ Q_XZ)⁻¹    (15)

Similarly,

    √n (β̂_EGMM − β) → N[0, V(β̂_EGMM)]    (16)

and we perform inference on √n β̂_EGMM by using

    V((1/√n) β̂_EGMM) = (1/n) (Q_XZ' S⁻¹ Q_XZ)⁻¹    (17)

as the variance-covariance matrix for β̂_EGMM. Obtaining an estimate of Q_XZ is straightforward: we simply use the sample analog

    (1/n) Σ_{i=1}^{n} X_i'Z_i = (1/n) X'Z    (18)

If we have an estimate of S, therefore, we can conduct asymptotically correct inference for any GMM estimator, efficient or inefficient. An estimate of S also makes the efficient GMM estimator a feasible estimator. In two-step feasible efficient GMM estimation an estimate of S is obtained in the first step, and in the second step we calculate the estimator and its asymptotic variance using Equations (14) and (17).


2.4 Estimating the covariance matrix of orthogonality conditions

The first-step estimation of the matrix S requires the residuals of a consistent GMM estimator β̃. Efficiency is not required in the first step of two-step GMM estimation, which simplifies the task considerably. But to obtain an estimate of S we must make some further assumptions. We illustrate this using the case of independent but possibly heteroskedastic disturbances. If the errors are independent, E(g_i g_j') = 0 for i ≠ j, and so

    S = AVar(ḡ) = E(g_i g_i') = E(u_i² Z_i'Z_i)    (19)

This matrix can be consistently estimated by an Eicker–Huber–White robust covariance estimator

    Ŝ = (1/n) Σ_{i=1}^{n} û_i² Z_i'Z_i = (1/n) Z'Ω̂Z    (20)

where Ω̂ is the diagonal matrix of squared residuals û_i² from β̃, the consistent but not necessarily efficient first-step GMM estimator. In the ivreg2 implementation of two-step efficient GMM, this first-step estimator is β̂_IV, the IV estimator. The resulting estimate Ŝ can be used to conduct consistent inference for the first-step estimator using Equation (11), or it can be used to obtain and conduct inference for the efficient GMM estimator using Equations (14) and (17). In the next section we discuss how the two-step GMM estimator can be applied when the errors are serially correlated.
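For concreteness, the following sketch computes Ŝ from equation (20) and then the two-step efficient GMM estimator of equation (14) by hand. This is an illustration only, not ivreg2's internal code; it assumes the mroz data and uses the IV residuals as the first-step residuals:

. use http://fmwww.bc.edu/ec-p/data/wooldridge/mroz.dta, clear
. quietly ivreg2 lwage exper expersq (educ=age kidslt6 kidsge6)
. predict double uhat if e(sample), residuals
. keep if e(sample)
. mata:
:     y = st_data(., "lwage")
:     X = st_data(., "educ exper expersq"), J(st_nobs(), 1, 1)
:     Z = st_data(., "age kidslt6 kidsge6 exper expersq"), J(st_nobs(), 1, 1)
:     u = st_data(., "uhat")
:     S = (Z :* (u:^2))' * Z / rows(Z)     // equation (20): (1/n) Z' Omega-hat Z
:     b = invsym(X'Z * invsym(S) * Z'X) * (X'Z * invsym(S) * Z'y)
:     b'                                   // matches ivreg2 with the gmm2s robust options
: end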

2.5 Using ivreg2 for GMM estimation

The ivreg2 command is included in the electronic supplement to this issue. The latest version of ivreg2 can always be downloaded from the SSC Archive with the command ssc describe ivreg2. We summarize the command’s options and syntax in Sections 11 and 12, respectively. The commands below illustrate how to use ivreg2 to obtain the coefficient and variance-covariance estimators discussed above. The example uses the dataset provided in Wooldridge (2003). The first command requests the standard IV/2SLS estimator and a variance-covariance matrix that assumes conditionally homoskedastic and independent errors. In this case, IV/2SLS is the efficient GMM estimator. The second requests the IV/2SLS estimator and a variance-covariance estimator that is robust to heteroskedasticity based on ˆ an estimate of S as in equation (20); here, IV/2SLS is an inefficient GMM estimator. The third command requests the two-step feasible efficient GMM estimator and correˆ sponding variance-covariance matrix. S is again based on equation (20). The fourth command is equivalent to the first, illustrating that the two-step efficient GMM estimator reduces to two-stage least squares when the disturbance is assumed to be i.i.d. and S can be consistently estimated by a classical non-robust covariance matrix estimator.

1. ivreg2 lwage exper expersq (educ=age kidslt6 kidsge6)
2. ivreg2 lwage exper expersq (educ=age kidslt6 kidsge6), robust
3. ivreg2 lwage exper expersq (educ=age kidslt6 kidsge6), gmm2s robust
4. ivreg2 lwage exper expersq (educ=age kidslt6 kidsge6), gmm2s
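A quick way to verify that the first and fourth commands are equivalent is to store and tabulate the two sets of estimates; a minimal sketch (the stored names iv and gmm_iid are arbitrary):

. quietly ivreg2 lwage exper expersq (educ=age kidslt6 kidsge6)
. estimates store iv
. quietly ivreg2 lwage exper expersq (educ=age kidslt6 kidsge6), gmm2s
. estimates store gmm_iid
. estimates table iv gmm_iid, b(%10.7f) se(%10.7f)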

3 GMM and HAC standard errors

In Equation (20), we illustrated how the asymptotic covariance matrix of the GMM estimator could be derived in the presence of conditional heteroskedasticity. We now further extend the estimator to handle the case of non-independent errors in a time series context. We correspondingly change our notation so that observations are indexed by t and s rather than i. In the presence of serial correlation, E(g_t g_s') ≠ 0 for t ≠ s. In order to derive consistent estimates of S, we define Γ_j = E(g_t g_{t−j}') as the autocovariance matrix for lag j. We may then write the long-run covariance matrix

    S = AVar(ḡ) = Γ_0 + Σ_{j=1}^{∞} (Γ_j + Γ_j')    (21)

which may be seen as a generalization of Equation (20), with

    Γ_0 = E(g_t g_t')  and  Γ_j = E(g_t g_{t−j}'), j = ±1, ±2, . . .    (22)

As g_t is defined as the product of Z_t' and u_t, the autocovariance matrices may be expressed as

    Γ_j = E(u_t u_{t−j} Z_t'Z_{t−j})    (23)

As usual, we replace the u_t, u_{t−j} by consistent residuals from first-stage estimation to compute the sample autocovariance matrices Γ̂_j, defined as

    Γ̂_j = (1/n) Σ_{t=1}^{n−j} ĝ_t ĝ_{t−j}' = (1/n) Σ_{t=1}^{n−j} Z_t'û_t û_{t−j} Z_{t−j}    (24)

We obviously do not have an infinite number of sample autocovariances to insert into the infinite sum in Equation (21). Less obviously, we also cannot simply insert all the autocovariances from 1 through n, because this would imply that the number of sample orthogonality conditions is going off to infinity with the sample size, which precludes obtaining a consistent estimate of S.2 The autocovariances must converge to zero asymptotically as n increases.
2. Although a consistent estimate cannot be obtained with bandwidth equal to sample size, Hall (2005), pp. 305–310 points out that it is possible to develop an asymptotic framework providing inference about the parameters.


The usual way this is handled in practice is for the summation to be truncated at a specified lag q. Thus the S matrix can be estimated by

    Ŝ = Γ̂_0 + Σ_{j=1}^{q} κ(j/q_n) (Γ̂_j + Γ̂_j')    (25)

where û_t, û_{t−j} are replaced by consistent estimates from first-stage estimation. The kernel function, κ(j/q_n), applies appropriate weights to the terms of the summation, with q_n defined as the bandwidth of the kernel (possibly as a function of n).3 In many kernels, consistency is obtained by having the weight fall to zero after a certain number of lags.

The best-known approach to this problem in econometrics is that of Newey and West (1987b), which generates Ŝ using the Bartlett kernel function and a user-specified value of q. For the Bartlett kernel, κ(·) = [1 − j/q_n] if j ≤ q_n − 1, 0 otherwise. These estimates are said to be HAC: heteroskedasticity- and autocorrelation-consistent, as they incorporate the standard sandwich formula (Equation (20)) in computing Γ̂_0. HAC estimates can be calculated by ivreg2 using the robust and bw() options with the kernel function's bandwidth (the bw() option) set to q.4 The bandwidth may also be chosen optimally by specifying bw(auto) using the automatic bandwidth selection criterion of Newey and West (1994).5,6 By default, ivreg2 uses the Bartlett kernel function.7 If the equation contains endogenous regressors, these options will cause the IV estimates to be HAC. If the equation is overidentified and the robust, gmm2s and bw() options are specified, the resulting GMM estimates will be both HAC and more efficient than those produced by IV.

The Newey–West (Bartlett kernel function) specification is only one of many feasible HAC estimators of the covariance matrix. Andrews (1991) shows that in the class of positive semidefinite kernels, the rate of convergence of Ŝ → S depends on the choice of kernel and bandwidth. The Bartlett kernel's performance is bettered by those in a subset of this class, including the Quadratic Spectral kernel. Accordingly, ivreg2 provides a menu of kernel choices, including (abbreviations in parentheses): Quadratic Spectral (qua or qs); Truncated (tru); Parzen (par); Tukey–Hanning (thann); Tukey–Hamming (thamm); Daniell (dan); and Tent (ten). In the cases of the Bartlett, Parzen, and Tukey–Hanning/Hamming kernels, the number of lags used to construct the kernel estimate equals the bandwidth (bw) minus one.8 If the kernels above are used with bw(1), no lags are used and ivreg2 will report the usual Eicker–Huber–White “sandwich” heteroskedasticity-robust variance estimates. Most, but not all, of these kernels guarantee that the estimated Ŝ is positive definite and therefore always invertible; the truncated kernel, for example, was proposed in the early literature in this area but is now rarely used because it can generate a noninvertible Ŝ. For a survey covering various kernel estimators and their properties, see Cushing and McGarvey (1999) and Hall (2005), pp. 75–86.

3. For more detail on this GMM estimator, see Hayashi (2000), pp. 406–417.
4. For the special case of OLS, Newey–West standard errors are available from [TS] newey with the maximum lag (q − 1) specified by newey's lag() option.
5. This implementation is identical to that provided by Stata's [R] ivregress.
6. Automatic bandwidth selection is only available for the Bartlett, Parzen and Quadratic Spectral kernels; see below.
7. A common choice of bandwidth for the Bartlett kernel function is T^(1/3).
8. A common choice of bandwidth for these kernels is (q − 1) ≈ T^(1/4) (Greene (2003), p. 200). A value related to the periodicity of the data (4 for quarterly, 12 for monthly, etc.) is often chosen.

Under conditional homoskedasticity the expression for the autocovariance matrix simplifies:

    Γ_j = E(u_t u_{t−j} Z_t'Z_{t−j}) = E(u_t u_{t−j}) E(Z_t'Z_{t−j})    (26)

and the calculations of the corresponding kernel estimators also simplify; see Hayashi (2000), pp. 413–14. These estimators may perform better than their heteroskedasticity-robust counterparts in finite samples. If the researcher is satisfied with the assumption of homoskedasticity but wants to deal with autocorrelation of unknown form, she should use the AC correction without the H correction for arbitrary heteroskedasticity by omitting the robust option. ivreg2 allows selection of H, AC, or HAC VCEs by combining the robust, bw() and kernel options. Thus both robust and bw() must be specified to calculate a HAC VCE of the Newey–West type, employing the default Bartlett kernel.9

To illustrate the use of HAC standard errors, we estimate a quarterly time-series model relating the change in the U.S. inflation rate (D.inf) to the unemployment rate (UR) for 1960q3–1999q4. As instruments, we use the second lag of quarterly GDP growth and the lagged values of the Treasury bill rate, the trade-weighted exchange rate and the Treasury medium-term bond rate.10 We first estimate the equation with standard IV under the assumption of i.i.d. errors.
. use http://fmwww.bc.edu/ec-p/data/stockwatson/macrodat
. generate inf = 100 * log( CPI / L4.CPI )
(4 missing values generated)
. generate ggdp = 100 * log( GDP / L4.GDP )
(10 missing values generated)
. ivreg2 D.inf (UR=L2.ggdp L.TBILL L.ER L.TBON)

IV (2SLS) estimation
--------------------

Estimates efficient for homoskedasticity only
Statistics consistent for homoskedasticity only

                                                      Number of obs =      158
                                                      F(  1,   156) =    10.16
                                                      Prob > F      =   0.0017
Total (centered) SS     =  60.04747699                Centered R2   =   0.1914
Total (uncentered) SS   =  60.05149156                Uncentered R2 =   0.1915
Residual SS             =  48.55290564                Root MSE      =    .5543

------------------------------------------------------------------------------
       D.inf |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
          UR |   -.155009   .0483252    -3.21   0.001    -.2497246   -.0602933
       _cons |   .9380705   .2942031     3.19   0.001      .361443    1.514698
------------------------------------------------------------------------------
Underidentification test (Anderson canon. corr. LM statistic):          58.656
                                                   Chi-sq(4) P-val =    0.0000
------------------------------------------------------------------------------
Weak identification test (Cragg-Donald Wald F statistic):               22.584
Stock-Yogo weak ID test critical values:  5% maximal IV relative bias    16.85
                                         10% maximal IV relative bias    10.27
                                         20% maximal IV relative bias     6.71
                                         30% maximal IV relative bias     5.34
                                         10% maximal IV size             24.58
                                         15% maximal IV size             13.96
                                         20% maximal IV size             10.26
                                         25% maximal IV size              8.31
Source: Stock-Yogo (2005).  Reproduced by permission.
------------------------------------------------------------------------------
Sargan statistic (overidentification test of all instruments):           5.851
                                                   Chi-sq(3) P-val =    0.1191
------------------------------------------------------------------------------
Instrumented:         UR
Excluded instruments: L2.ggdp L.TBILL L.ER L.TBON
------------------------------------------------------------------------------

9. It should also be noted that Stata's official [TS] newey does not allow gaps in time-series data. As there is no difficulty in computing HAC estimates with gaps in a regularly spaced time series, ivreg2 handles this case properly.
10. These data accompany Stock and Watson (2003).

In these estimates, the negative coefficient on the unemployment rate is consistent with macroeconomic theories of the natural rate. In that context, lowering unemployment below the natural rate will cause an acceleration of price inflation. The Sargan statistic indicates that the null hypothesis of the overidentifying restrictions test cannot be rejected. An absence of autocorrelation in the error process is unusual in time series analysis, so we test the equation using ivactest, as discussed below in Section 10. Using the default value of one lag, we consider whether the error process exhibits AR(1) behavior. The test statistic implies that the errors are not serially independent:
. ivactest

Cumby-Huizinga test with H0: errors nonautocorrelated at order 1

Test statistic:  25.909524
Under H0, Chi-sq(1) with p-value:  3.578e-07

Given this strong rejection of the null of independence, we reestimate the equation with HAC standard errors, choosing a bandwidth (bw) of 5 (roughly T^(1/3)) and the robust option. By default, the Bartlett kernel is used, so that these are Newey–West two-step efficient GMM estimates.
. ivreg2 D.inf (UR=L2.ggdp L.TBILL L.ER L.TBON), gmm2s robust bw(5)

2-Step GMM estimation
---------------------

Estimates efficient for arbitrary heteroskedasticity and autocorrelation
Statistics robust to heteroskedasticity and autocorrelation
  kernel=Bartlett; bandwidth=5
  time variable (t):  date

                                                      Number of obs =      158
                                                      F(  1,   156) =     2.46
                                                      Prob > F      =   0.1185
Total (centered) SS     =  60.04747699                Centered R2   =   0.1548
Total (uncentered) SS   =  60.05149156                Uncentered R2 =   0.1548
Residual SS             =  50.75430293                Root MSE      =    .5668

------------------------------------------------------------------------------
             |               Robust
       D.inf |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
          UR |  -.1002374   .0634562    -1.58   0.114    -.2246092    .0241344
       _cons |   .5850796    .372403     1.57   0.116     -.144817    1.314976
------------------------------------------------------------------------------
Underidentification test (Kleibergen-Paap rk LM statistic):              7.954
                                                   Chi-sq(4) P-val =    0.0933
------------------------------------------------------------------------------
Weak identification test (Kleibergen-Paap rk Wald F statistic):          7.362
Stock-Yogo weak ID test critical values:  5% maximal IV relative bias    16.85
                                         10% maximal IV relative bias    10.27
                                         20% maximal IV relative bias     6.71
                                         30% maximal IV relative bias     5.34
                                         10% maximal IV size             24.58
                                         15% maximal IV size             13.96
                                         20% maximal IV size             10.26
                                         25% maximal IV size              8.31
Source: Stock-Yogo (2005).  Reproduced by permission.
NB: Critical values are for Cragg-Donald F statistic and i.i.d. errors.
------------------------------------------------------------------------------
Hansen J statistic (overidentification test of all instruments):         3.569
                                                   Chi-sq(3) P-val =    0.3119
------------------------------------------------------------------------------
Instrumented:         UR
Excluded instruments: L2.ggdp L.TBILL L.ER L.TBON
------------------------------------------------------------------------------

It appears that by generating HAC estimates of the covariance matrix, the statistical significance of the unemployment rate in this equation is now questioned. One important statistic is also altered: the test of overidentifying restrictions, denoted as the Sargan test in the former estimates, was on the borderline of rejecting its null hypothesis at the 90% level. When we reestimate the equation with HAC standard errors, various summary statistics are “robustified” as well: in this case, the test of overidentifying restrictions is now denoted Hansen's J. That statistic is far from rejecting its null, giving us greater confidence that our instrument set is appropriate.
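To recap the menu of variance-covariance estimators discussed in this section, the commands below sketch the main combinations for this specification. Only the third corresponds to estimates reported above; the remaining lines are illustrative alternatives:

. ivreg2 D.inf (UR=L2.ggdp L.TBILL L.ER L.TBON), robust                   // H: heterosk.-robust
. ivreg2 D.inf (UR=L2.ggdp L.TBILL L.ER L.TBON), bw(5)                    // AC: autocorr. only
. ivreg2 D.inf (UR=L2.ggdp L.TBILL L.ER L.TBON), gmm2s robust bw(5)       // HAC: Newey-West
. ivreg2 D.inf (UR=L2.ggdp L.TBILL L.ER L.TBON), robust bw(auto)          // automatic bandwidth
. ivreg2 D.inf (UR=L2.ggdp L.TBILL L.ER L.TBON), robust bw(5) kernel(qs)  // Quadratic Spectral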

4 CUE, LIML and k-class estimation

4.1 CUE and LIML

Again consider the two-step feasible efficient GMM estimator. In the first step, a consistent but inefficient GMM estimator, β̃, is used to estimate S, the covariance matrix of orthogonality conditions. In the second step, the GMM objective function is minimized using S⁻¹ as the weighting matrix. If we write S as a function of the first-step estimator β̃, the minimization problem in the second step of two-step efficient GMM estimation that defines the estimator is

    β̂_2SEGMM ≡ arg min_{β̂} J(β̂) = n ḡ(β̂)' (S(β̃))⁻¹ ḡ(β̂)    (27)

As noted earlier, the second-step minimization treats the weighting matrix W = (S(β̃))⁻¹ as a constant matrix. Thus the residuals in the estimate of S are the first-stage residuals defined by β̃, whereas the residuals in the orthogonality conditions ḡ are the second-stage residuals defined by β̂. The minimization problem that defines the GMM “continuously updated estimator” (CUE) of Hansen et al. (1996) is, by contrast,

    β̂_CUE ≡ arg min_{β̂} J(β̂) = n ḡ(β̂)' (S(β̂))⁻¹ ḡ(β̂)    (28)

Here, the weighting matrix is a function of the β̂ being estimated. The residuals in S are the same residuals that are in ḡ, and estimation of S is done simultaneously with the estimation of β. In general, solving this minimization problem requires numerical methods.

Both the two-step efficient GMM and CUE GMM procedures reduce to familiar estimators under linearity and conditional homoskedasticity. In this case, S = E(g_i g_i') = E(u_i² Z_i'Z_i) = E(u_i²) E(Z_i'Z_i) = σ² Q_ZZ. As usual, Q_ZZ is estimated by its sample counterpart (1/n) Z'Z. In two-step efficient GMM under homoskedasticity, the minimization becomes

    β̂_IV ≡ arg min_{β̂} J(β̂) = û(β̂)' P_Z û(β̂) / σ̂²    (29)

where û(β̂) ≡ (y − Xβ̂) and P_Z ≡ Z(Z'Z)⁻¹Z' is the projection matrix. In the minimization, the error variance σ̂² is treated as a constant and hence doesn't require first-step estimation, and the β̂ that solves (29) is the IV estimator β̂_IV = (X'P_Z X)⁻¹ X'P_Z y.11

With CUE GMM under conditional homoskedasticity, the estimated error variance is a function of the residuals, σ̂² = û(β̂)'û(β̂)/n, and the minimization becomes

    β̂_LIML ≡ arg min_{β̂} J(β̂) = û(β̂)' P_Z û(β̂) / (û(β̂)'û(β̂)/n)    (30)

The β̂ that solves (30) is defined as the limited information maximum likelihood (LIML) estimator. Unlike CUE estimators in general, the LIML estimator can be derived analytically and does not require numerical methods. This derivation is the solution to an eigenvalue problem (see Davidson and MacKinnon (1993), pp. 644–49). The LIML estimator was first derived by Anderson and Rubin (1949), who also provided the first test of overidentifying restrictions for estimation of an equation with endogenous regressors. This Anderson–Rubin statistic (not to be confused with the test discussed below under
11. The error variance σ̂², required for inference, is calculated at the end using the IV residuals.


“weak identification”) follows naturally from the solution to the eigenvalue problem. If we denote the minimum eigenvalue by λ, then the Anderson–Rubin likelihood ratio test statistic for the validity of the overidentifying restrictions (orthogonality conditions) is n log(λ). Since LIML is also an efficient GMM estimator, the value J of the minimized GMM objective function also provides a test of overidentifying restrictions. The J test of the same overidentifying restrictions is closely related to the Anderson–Rubin test; the minimized value of the LIML GMM objective function is in fact J = n(1 − 1/λ). Of course, n log(λ) ≈ n(1 − 1/λ).

Although CUE and LIML provide no asymptotic efficiency gains over two-step GMM and IV, recent research suggests that their finite-sample performance may be superior. In particular, there is evidence suggesting that CUE and LIML perform better than IV-GMM in the presence of weak instruments (Hahn et al. (2004)). This is reflected, for example, in the critical values for the Stock–Yogo weak instruments test discussed below in Section 7.3.12 The disadvantage of CUE in general is that it requires numerical optimization; LIML does not, but does require the often rather strong assumption of i.i.d. disturbances. In ivreg2, the cue option combined with the robust, cluster, and/or bw options generates coefficient estimates that are efficient in the presence of the corresponding deviations from i.i.d. disturbances. Specifying cue with no other options is equivalent to the combination of the options liml and coviv (“covariance-IV”: see below).

The implementation of the CUE estimator in ivreg2 uses Stata's ml routine to minimize the objective function. The starting values are either IV or two-step efficient GMM coefficient estimates. These can be overridden with the cueinit option, which takes a matrix of starting values of the coefficient vector β as its argument. The cueoptions option passes its contents to Stata's ml command. Estimation with the cue option can be slow and problematic when the number of parameters to be estimated is substantial, and it should be used with caution.
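Since specifying cue with no robust-type options reduces to LIML, the following pair of commands (a sketch using the Wooldridge (2003) data from Section 2.5) should produce essentially identical coefficient estimates, up to the numerical optimization tolerance:

. ivreg2 lwage exper expersq (educ=age kidslt6 kidsge6), liml
. ivreg2 lwage exper expersq (educ=age kidslt6 kidsge6), cue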

4.2 k-class estimators

LIML, IV and OLS (but not CUE or two-step GMM) are examples of k-class estimators. A k-class estimator can be written as follows (Davidson and MacKinnon (1993), p. 649):

    β̂_k = (X'(I − kM_Z)X)⁻¹ X'(I − kM_Z)y    (31)

where M_Z denotes the annihilation matrix I − P_Z. LIML is a k-class estimator with k = λ, the LIML eigenvalue; IV is a k-class estimator with k = 1; and OLS is a k-class estimator with k = 0. Estimators based on other values of k have been proposed. Fuller's modified LIML (available with the fuller(#) option) sets k = λ − α/(N − L), where λ is the LIML eigenvalue, L = number of instruments (included and excluded), and the Fuller parameter α is a user-specified positive constant. The value of α = 1 has been suggested
12. With one endogenous regressor and four excluded instruments, the critical value for the Cragg–Donald statistic for 10% maximal size distortion is 24.58 in the case of IV but only 5.44 in the case of LIML.


as a good choice; see Fuller (1977) or Davidson and MacKinnon (1993), pp. 649–50. Nagar's bias-adjusted 2SLS estimator can be obtained with the kclass(#) option by setting k = 1 + (L − K)/N, where (L − K) is the number of overidentifying restrictions and N is the sample size; see Nagar (1959). Research suggests that both of these k-class estimators have a better finite-sample performance than IV in the presence of weak instruments, though like IV, none of these k-class estimators is robust to violations of the i.i.d. assumption. ivreg2 also provides Stock–Yogo critical values for the Fuller version of LIML.

The default covariance matrix reported by ivreg2 for the LIML and general k-class estimators is (Davidson and MacKinnon (1993), p. 650):

    σ̂² (X'(I − kM_Z)X)⁻¹    (32)

In fact, the usual IV-type covariance matrix

    σ̂² (X'(I − M_Z)X)⁻¹ = σ̂² (X'P_Z X)⁻¹    (33)

is also valid, and can be obtained with the coviv option. With coviv, the covariance matrix for LIML and the other general k-class estimators will differ from that for the IV estimator only because the estimate of the error variance σ̂² will differ.
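As an illustration, the commands below sketch Fuller's modified LIML with α = 1 and Nagar's bias-adjusted estimator for the wage equation used elsewhere in this paper. With L = 6 instruments, K = 4 regressors and N = 428 observations, Nagar's k = 1 + (L − K)/N = 1 + 2/428 ≈ 1.00467; the rounded value passed to kclass() is ours, for illustration only:

. ivreg2 lwage exper expersq (educ=age kidslt6 kidsge6), fuller(1)
. ivreg2 lwage exper expersq (educ=age kidslt6 kidsge6), kclass(1.00467)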

4.3 Example of CUE-LIML estimation

We illustrate the use of CUE-LIML estimation using the same equation we employed in our discussion of HAC standard errors.
. ivreg2 D.inf (UR=L2.ggdp L.TBILL L.ER L.TBON), cue robust bw(5)
initial:       neg GMM obj function -J = -3.285175
rescale:       neg GMM obj function -J = -2.8716146
Iteration 0:   neg GMM obj function -J = -2.8716146
Iteration 1:   neg GMM obj function -J = -2.793201
Iteration 2:   neg GMM obj function -J = -2.7931805
Iteration 3:   neg GMM obj function -J = -2.7931798
Iteration 4:   neg GMM obj function -J = -2.7931798

CUE estimation
--------------

Estimates efficient for arbitrary heteroskedasticity and autocorrelation
Statistics robust to heteroskedasticity and autocorrelation
  kernel=Bartlett; bandwidth=5
  time variable (t):  date

                                                      Number of obs =      158
                                                      F(  1,   156) =     0.55
                                                      Prob > F      =   0.4577
Total (centered) SS     =  60.04747699                Centered R2   =   0.0901
Total (uncentered) SS   =  60.05149156                Uncentered R2 =   0.0901
Residual SS             =  54.6384785                 Root MSE      =    .5881

------------------------------------------------------------------------------
             |               Robust
       D.inf |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
          UR |  -.0483119   .0644743    -0.75   0.454    -.1746792    .0780555
       _cons |   .2978451   .3804607     0.78   0.434    -.4478442    1.043534
------------------------------------------------------------------------------
Underidentification test (Kleibergen-Paap rk LM statistic):              7.954
                                                   Chi-sq(4) P-val =    0.0933
------------------------------------------------------------------------------
Weak identification test (Kleibergen-Paap rk Wald F statistic):          7.362
Stock-Yogo weak ID test critical values: 10% maximal LIML size            5.44
                                         15% maximal LIML size            3.87
                                         20% maximal LIML size            3.30
                                         25% maximal LIML size            2.98
Source: Stock-Yogo (2005).  Reproduced by permission.
NB: Critical values are for Cragg-Donald F statistic and i.i.d. errors.
------------------------------------------------------------------------------
Hansen J statistic (overidentification test of all instruments):         2.793
                                                   Chi-sq(3) P-val =    0.4246
------------------------------------------------------------------------------
Instrumented:         UR
Excluded instruments: L2.ggdp L.TBILL L.ER L.TBON
------------------------------------------------------------------------------

When this estimator is employed, the magnitude of the point estimate of the UR coefficient falls yet farther, and it is no longer significantly different from zero at any reasonable level of significance.

5 GMM distance tests of endogeneity and exogeneity

The value J of the GMM objective function evaluated at the efficient GMM estimator β̂_EGMM is distributed as χ² with (L − K) degrees of freedom under the null hypothesis that the full set of orthogonality conditions is valid. This is known variously as the Sargan statistic, Hansen J statistic, Sargan–Hansen J test or simply a test of overidentifying restrictions.13

A C or GMM distance test can be used to test the validity of a subset of orthogonality conditions. Say the investigator wishes to test the validity of L_B orthogonality conditions. Denote J as the value of the GMM objective function for the efficient GMM estimator that uses the full set of orthogonality conditions and J_A as the value for the efficient GMM estimator that uses only the L_A = L − L_B orthogonality conditions that the investigator is not questioning. Then under the null that the L_B suspect orthogonality conditions are actually satisfied, the test statistic (J − J_A) ∼ χ² with L_B degrees of freedom. If the Ŝ matrix from the estimation using the full set of orthogonality conditions is used to calculate both GMM estimators, the test statistic is guaranteed to be nonnegative in finite samples.

Our 2003 paper discusses how ivreg2's orthog option can be used to conduct a C test of the exogeneity of one or more regressors or instruments. To recapitulate, the
13. If the test statistic is required for an inefficient GMM estimator (e.g., an overidentifying restrictions test for the IV estimator that is robust to heteroskedasticity), ivreg2 reports the J statistic for the corresponding efficient GMM estimator; see our 2003 paper. This J statistic is identical to that produced by estat overid following official Stata’s ivregress gmm.


orthog option takes as its argument the list of exogenous variables Z_B whose exogeneity is called into question. If the exogenous variable being tested is an instrument, the efficient GMM estimator that does not use the corresponding orthogonality condition simply drops the instrument. This is illustrated in the following pair of estimations, where the second regression is the estimation implied by the orthog option in the first:

. ivreg2 y x1 x2 (x3 = z1 z2 z3 z4), orthog(z4)
. ivreg2 y x1 x2 (x3 = z1 z2 z3)

If the exogenous variable that is being tested is a regressor, the efficient GMM estimator that does not use the corresponding orthogonality condition treats the regressor as endogenous, as below; again, the second estimation is implied by the use of orthog in the former equation:

. ivreg2 y x1 x2 (x3 = z1 z2 z3 z4), orthog(x2)
. ivreg2 y x1 (x2 x3 = z1 z2 z3 z4)

Sometimes the researcher wishes to test whether an endogenous regressor can be treated as exogenous. This is commonly termed an “endogeneity test”, but as we discussed in our earlier paper (Baum et al. (2003), pp. 24–27), it is equivalent to estimating the same regression but treating the regressor as exogenous, and then testing the corresponding orthogonality condition using the orthog option. Although the procedure described there is appropriate, it is not very intuitive. To address this, we have added a new ivreg2 option, endog, to conduct endogeneity tests of one or more endogenous regressors. Under the null hypothesis that the specified endogenous regressors can actually be treated as exogenous, the test statistic is distributed as χ² with degrees of freedom equal to the number of regressors tested. Thus, in the following estimation,

. ivreg2 y x1 x2 (x3 = z1 z2 z3 z4), endog(x3)

the test statistic reported for the endogeneity of x3 is numerically equal to the test statistic reported for the orthog option in

. ivreg2 y x1 x2 x3 (= z1 z2 z3 z4), orthog(x3)

The endog option is both easier to understand and more convenient to use. Under conditional homoskedasticity, this endogeneity test statistic is numerically equal to a Hausman test statistic: see Hayashi (2000), pp. 233–34 and Baum et al. (2003), pp. 19–22. The endogeneity test statistic can also be calculated after ivreg or ivreg2 by the command ivendog. Unlike the Durbin–Wu–Hausman versions of the endogeneity test reported by ivendog, the endog option of ivreg2 can report test statistics that are robust to various violations of conditional homoskedasticity. The one statistic available from ivendog but unavailable from ivreg2's endog option is the Wu–Hausman F-test version of the endogeneity test.

To illustrate this option, we use a data set provided in Wooldridge (2003). We estimate the log of females' wages as a function of the worker's experience, its square, and years of education. If the education variable is considered endogenous, it is instrumented with the worker's age and counts of the number of pre-school children and older children in the household. We test whether the educ variable need be considered endogenous in this equation with the endog option:

. use http://fmwww.bc.edu/ec-p/data/wooldridge/mroz.dta
. ivreg2 lwage exper expersq (educ=age kidslt6 kidsge6), endog(educ)

IV (2SLS) estimation
--------------------

Estimates efficient for homoskedasticity only
Statistics consistent for homoskedasticity only

                                                      Number of obs =      428
                                                      F(  3,   424) =     7.49
                                                      Prob > F      =   0.0001
Total (centered) SS     =  223.3274513                Centered R2   =   0.1556
Total (uncentered) SS   =  829.594813                 Uncentered R2 =   0.7727
Residual SS             =  188.5780571                Root MSE      =    .6638

------------------------------------------------------------------------------
       lwage |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
        educ |   .0964002   .0814278     1.18   0.236    -.0631952    .2559957
       exper |    .042193   .0138831     3.04   0.002     .0149827    .0694033
     expersq |  -.0008323   .0004204    -1.98   0.048    -.0016563   -8.33e-06
       _cons |  -.3848718   1.011551    -0.38   0.704    -2.367476    1.597732
------------------------------------------------------------------------------
Underidentification test (Anderson canon. corr. LM statistic):          12.816
                                                   Chi-sq(3) P-val =    0.0051
------------------------------------------------------------------------------
Weak identification test (Cragg-Donald Wald F statistic):                4.342
Stock-Yogo weak ID test critical values:  5% maximal IV relative bias    13.91
                                         10% maximal IV relative bias     9.08
                                         20% maximal IV relative bias     6.46
                                         30% maximal IV relative bias     5.39
                                         10% maximal IV size             22.30
                                         15% maximal IV size             12.83
                                         20% maximal IV size              9.54
                                         25% maximal IV size              7.80
Source: Stock-Yogo (2005).  Reproduced by permission.
------------------------------------------------------------------------------
Sargan statistic (overidentification test of all instruments):           0.702
                                                   Chi-sq(2) P-val =    0.7042
-endog- option:
Endogeneity test of endogenous regressors:                               0.019
                                                   Chi-sq(1) P-val =    0.8899
Regressors tested:    educ
------------------------------------------------------------------------------
Instrumented:         educ
Included instruments: exper expersq
Excluded instruments: age kidslt6 kidsge6
------------------------------------------------------------------------------

In this context, we estimate the equation treating educ as endogenous, and merely name it in the endog varlist to perform the C (GMM distance) test. The test cannot reject its null that educ may be treated as exogenous. In contrast, we may calculate this same test statistic with the earlier orthog option:

. ivreg2 lwage exper expersq educ (=age kidslt6 kidsge6), orthog(educ)

Using orthog, we again list educ in the option's varlist, but we must estimate the equation with that variable treated as exogenous: an equivalent but perhaps less intuitive way to perform the test.


6 The FWL theorem and a rank-deficient S matrix

According to the Frisch–Waugh–Lovell (FWL) theorem (Frisch and Waugh (1933), Lovell (1963)), the coefficients estimated for a regression in which some exogenous regressors, say X_2A, are partialled out from the dependent variable y, the endogenous regressors X1, the other exogenous regressors X_2B, and the excluded instruments Z1 will be the same as the coefficients estimated for the original model for certain estimators. That is, if we denote a partialled-out variable with a tilde so that ỹ ≡ M_2A y, the coefficients estimated for the partialled-out version of the model

    ỹ = [X̃1 X̃_2B] [β1' β_2B']' + ũ    (34)

with instruments Z̃1 and X̃_2B will be the same as the shared coefficients estimated for the original model

    y = [X1 X2] [β1' β2']' + u    (35)

with instruments Z1 and X2. It is even possible to partial out the full set of included exogenous variables X2, so that the partialled-out version of the model becomes

    ỹ = X̃1 β1 + ũ    (36)

with no exogenous regressors and only excluded instruments Z̃1, and the estimated β̂1 will be the same as that obtained when estimating the full set of regressors.

The FWL theorem is implemented in ivreg2 by the new partial(varlist) option, which requests that the exogenous regressors in the varlist should be partialled out from all the other variables (other regressors and excluded instruments) in the estimation. If the equation includes a constant, it is automatically partialled out as well. The partial option is most useful when the covariance matrix of orthogonality conditions S is not of full rank. When this is the case, efficient GMM and overidentification tests are infeasible as the optimal GMM weighting matrix W = S⁻¹ cannot be calculated. In some important cases, partialling out enough exogenous regressors can make the covariance matrix of the remaining orthogonality conditions full rank, and efficient GMM becomes feasible.

The invariance of the estimation results to partialling-out applies to one- and two-step estimators such as OLS, IV, LIML and two-step GMM, but not to CUE or to GMM iterated more than two steps. The reason is that the latter estimators update the estimated S matrix. An updated S implies different estimates of the coefficients on the partialled-out variables, which imply different residuals, which in turn produce a different estimated S. Intuitively, partialling-out uses OLS estimates of the coefficients on the partialled-out variables to generate the S matrix, whereas CUE would use more efficient HOLS (“heteroskedastic OLS”) estimates.14 Partialling out exogenous regressors that are not of interest may still be desirable with CUE estimation, however, because reducing the number of parameters estimated makes the CUE numerical optimization faster and more reliable.
14. We are grateful to Manuel Arellano for helpful discussions on this point. On HOLS, see our 2003 paper.


One common case calling for partialling-out arises when using cluster and the number of clusters is less than L, the number of exogenous regressors plus excluded instruments. This causes the matrix S to be rank-deficient (Baum et al. (2003), pp. 9–10). The problem can be addressed by using partial to remove enough exogenous regressors for S to have full rank. A similar problem arises if a robust covariance matrix is requested when the regressors include a singleton dummy, i.e., a variable with one value of 1 and (N − 1) values of zero or vice versa. The singleton dummy causes the robust covariance matrix estimator to be less than full rank. In this case, partialling out the singleton dummy solves the problem, as sketched below. The partial option has two limitations: it cannot be used with time-series operators, and post-estimation [R] predict can be used only to generate residuals.
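The singleton-dummy case can be sketched with hypothetical variable names (y; endogenous regressor x1; exogenous regressor x2; singleton dummy d1; excluded instruments z1 and z2):

. * d1 equals 1 for exactly one observation; with robust, S-hat is rank-deficient
. ivreg2 y x2 d1 (x1 = z1 z2), robust
. * partialling d1 (and the constant) out restores a full-rank S-hat
. ivreg2 y x2 d1 (x1 = z1 z2), robust partial(d1)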

7 Underidentification, weak identification, and instrument relevance

7.1 Identification and the rank condition

For Equation (1) to be estimable, it must be identified. The order condition L ≥ K is necessary but not sufficient; the rank condition must also be satisfied. The rank condition states that the matrix Q_XZ ≡ E(X_i'Z_i) is of full column rank, i.e., Q_XZ must have rank K. Since X2 ≡ Z2, we can simplify by partialling them out from X1 and Z1, and the rank condition becomes ρ(Q_X̃1Z̃1) = K1. There are several ways of interpreting this condition.

One interpretation is in terms of correlations: the excluded instruments must be correlated with the endogenous regressors. In the simplest possible case of a single endogenous regressor, a single excluded instrument, and partialling-out any exogenous regressors including the constant, L1 = K1 = 1 and Q_X̃1Z̃1 is a scalar. As the constant has been partialled out, E(X̃_i) = E(Z̃_i) = 0 and Q_X̃1Z̃1 is a covariance. The rank condition in this simple case requires that the correlation or covariance between X̃1 and Z̃1 is nonzero.

This interpretation can be extended to the general case of L1, K1 ≥ 1 using canonical correlations (Anderson (1984), Chapter 12; Hall et al. (1996), p. 287; [MV] canon). The canonical correlations r_i between X̃1 and Z̃1, i = 1, . . . , K1, represent the correlations between linear combinations of the K1 columns of X̃1 and linear combinations of the L1 columns of Z̃1.15 In the special case of L1 = K1 = 1 (and partialling-out the constant), the canonical correlation between X̃1 and Z̃1 is the usual Pearson correlation coefficient. In the slightly more general case of L1 ≥ 1 and K1 = 1, the canonical correlation between X̃1 and Z̃1 is simply R: the square root of R² in a regression of X̃1 on Z̃1. In the general case of L1, K1 ≥ 1, the squared canonical correlations may be
15. As X2 ≡ Z2 , these variables are perfectly correlated with each other. The canonical correlations between X and Z before partialling out would also include the L2 ≡ K2 correlations that are equal to unity.


calculated as the eigenvalues of (X̃1'X̃1)⁻¹ (X̃1'Z̃1) (Z̃1'Z̃1)⁻¹ (Z̃1'X̃1). The rank condition can then be interpreted as the requirement that all K1 of the canonical correlations must be significantly different from zero. If one or more of the canonical correlations is zero, the model is underidentified or unidentified.

An alternative and useful interpretation of the rank condition is to use the reduced form. Write the set of reduced form (“first stage”) equations for the regressors X as

    X = ZΠ + v    (37)

Using our partitioning of X and Z, we can rewrite this as

    X1 = [Z1 Z2] [Π11' Π12']' + v1    (38)
    X2 = [Z1 Z2] [Π21' Π22']' + v2    (39)

The equation for X2 is not very interesting: because X2 ≡ Z2, it follows that Π21 = 0 and Π22 = I. The rank condition for identification comes from the equation for the endogenous regressors X1: the L1 × K1 matrix Π11 must be of full column rank (ρ(Π11) = K1). If ρ(Π11) < K1, the model is again unidentified.

The consequence of utilizing excluded instruments that are uncorrelated with the endogenous regressors is increased bias in the estimated IV coefficients (Hahn and Hausman (2002)) and worsening of the large-sample approximations to the finite-sample distributions. In this case, the bias of the IV estimator is the same as that of the OLS estimator and IV becomes inconsistent (ibid.). Instrumenting only aggravates the problem, as IV and OLS share the same bias but IV has a larger mean squared error (MSE) by virtue of its larger variance. Serious problems also arise if the correlations between the excluded instruments and endogenous regressors are nonzero but “weak”. Standard IV/GMM methods of estimating β1 then suffer from serious finite sample bias problems, and alternative methods should be considered. In the rest of this section we show how to use ivreg2 to conduct tests for underidentification and weak identification, and how ivreg2 provides a procedure for inference that is robust to weak identification.
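The canonical-correlation interpretation can be checked directly with [MV] canon. A minimal sketch for the single-endogenous-regressor wage equation used elsewhere in this paper (the partialled-out variables are constructed by hand; the _t suffix is our own naming convention):

. use http://fmwww.bc.edu/ec-p/data/wooldridge/mroz.dta, clear
. keep if lwage < .                        // restrict to the estimation sample
. foreach v in educ age kidslt6 kidsge6 {
.     quietly regress `v' exper expersq    // partial out X2, including the constant
.     predict double `v'_t, residuals
. }
. canon (educ_t) (age_t kidslt6_t kidsge6_t)

With K1 = 1 there is a single canonical correlation, and n times its square is the Anderson LM statistic discussed in the next subsection (12.816 in the Section 5 output above).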

7.2 Testing for underidentification and instrument redundancy

Of course, we do not observe the true Q_XZ or Π11 matrices; these matrices must be estimated. Testing whether or not the rank condition is satisfied therefore amounts to testing the rank of a matrix. Do the data enable the researcher to reject the null hypothesis that the equation is underidentified, i.e., that ρ(Π̂11) = (K1 − 1), or, equivalently, ρ(Q̂_X̃Z̃) = (K1 − 1)? Rejection of the null implies full rank and identification; failure to reject the null implies the matrix is rank-deficient and the equation is underidentified.

If the reduced-form errors v are i.i.d., two approaches are available for testing the rank of Q_X̃Z̃: Anderson's (1951) canonical correlations test and the related test of Cragg and Donald (1993). In Anderson's approach, H0: ρ(Q_X̃Z̃) = (K1 − 1) is equivalent to


the null hypothesis that the smallest canonical correlation r_K1 is zero. A large-sample test statistic for this is simply n r²_K1. Under the null, the test statistic is distributed χ² with (L − K + 1) degrees of freedom, so that it may be calculated even for an exactly-identified equation. A failure to reject the null hypothesis suggests the model is unidentified. Not surprisingly, given its “N × R²” form, this test can be interpreted as an LM test.16

The Cragg–Donald (1993) statistic is an alternative and closely related test for the rank of a matrix that can also be used to test for underidentification. Whereas the Anderson test is an LM test, the Cragg–Donald test is a Wald test, also derived from an eigenvalue problem. Poskitt and Skeels (2002) show that in fact the Cragg–Donald test statistic can be stated in terms of canonical correlations as n r²_K1 / (1 − r²_K1) (see Poskitt and Skeels (2002), p. 17). It is also distributed as χ²(L − K + 1). Both these tests require the assumption of i.i.d. errors, and hence are reported if ivreg2 is invoked without the robust, cluster or bw options. The Anderson LM χ² statistic is reported by ivreg2 in the main regression output, while both the Anderson LM and Cragg–Donald Wald χ² statistics are reported with the first option.

If the errors are heteroskedastic or serially correlated, the Anderson and Cragg–Donald statistics are not valid. This is an important shortcoming, because these violations of the i.i.d. assumption would typically be expected to cause the null of underidentification to be rejected too often. Researchers would face the danger of interpreting a rejection of the null as evidence of a well-specified model that is adequately identified, when in fact it was both underidentified and misspecified. Recently, several robust statistics for testing the rank of a matrix have been proposed. Kleibergen and Paap (2006) have proposed the rk statistic for this purpose. Their rk test statistic is reported by ivreg2 if the user requests any sort of robust covariance estimator. The LM version of the Kleibergen–Paap rk statistic can be considered as a generalization of the Anderson canonical correlation rank statistic to the non-i.i.d. case. Similarly, the Wald version of the rk statistic reduces to the Cragg–Donald statistic when the errors are i.i.d. The rk test is implemented in Stata by the ranktest command of Kleibergen and Schaffer (2007), which ivreg2 uses to calculate the rk statistic. If ivreg2 is invoked with the robust, bw or cluster options, the tests of underidentification reported by ivreg2 are based on the rk statistic and will be correspondingly robust to heteroskedasticity, autocorrelation or clustering. For a full discussion of the rk statistic, see Kleibergen and Paap (2006).

It is useful to note that in the special case of a single endogenous regressor, the Anderson, Cragg–Donald, and Kleibergen–Paap statistics reduce to familiar statistics available from OLS estimation of the single reduced form equation with an appropriate choice of VCE estimator. Thus the Cragg–Donald Wald statistic can be calculated by estimating (38) and testing the joint significance of the coefficients Π11 on the excluded instruments Z1 using a standard Wald test and a traditional non-robust covariance estimator.


The Anderson LM statistic can be obtained by calculating an LM test of the same joint hypothesis.17 The Kleibergen–Paap rk statistics can be obtained by performing the same tests with the desired robust covariance estimator. For example, estimating (38) using OLS and testing the joint significance of Z1 using a heteroskedasticity-robust covariance estimator yields the heteroskedasticity-robust Kleibergen–Paap rk Wald statistic.18

The same framework may also be used to test a set of instruments for redundancy, as shown by Breusch et al. (1999). In an overidentified context with L ≥ K, if some of the instruments are redundant then the large-sample efficiency of the estimation is not improved by including them. It is well known, moreover, that using a large number of instruments or moment conditions can cause the estimator to have poor finite sample performance. Dropping redundant instruments may therefore lead to more reliable estimation.

The intuition behind a test for instrument redundancy is straightforward. As above, assume we have partialled out any exogenous regressors X2. Partition the excluded instruments Z̃1 into [Z̃_1A Z̃_1B], where Z̃_1B is the set of possibly-redundant instruments after X2 has been partialled-out. Breusch et al. (1999), p. 106 show that the redundancy of Z̃_1B can be stated in several ways: (a) plim (1/n) Z̃_1B' M_Z̃1A X̃1 = 0; (b) the correlations between Z̃_1B and X̃1 (given Z̃_1A) are zero; (c) in a regression of X̃1 on the full set of excluded instruments Z̃1, the coefficients on Z̃_1B are zero. It is easy to see that the FWL theorem can be used to restate this last condition without the partialling-out of X2: (d) in a regression of X1 on the full set of included and excluded instruments Z, i.e., the reduced form Equation (38), the coefficients on Z_1B are zero. Note that, as Hall and Peixe (2003) point out, redundancy is a conditional concept: Z_1B either is or is not redundant conditional on Z_1A.

The above suggests a straightforward test of redundancy: simply estimate Equation (38) using OLS and test the significance of Z_1B using a large-sample LM, Wald or LR test, as sketched below. For example, the redundancy test proposed by Hall and Peixe (2003) is simply the LR version of this test. These test statistics are all distributed as χ² with degrees of freedom equal to the number of endogenous regressors times the number of instruments tested. As usual, implementing this test is easy for the case of a single endogenous variable, as only a single OLS estimation is necessary. The tests of the coefficients can be made robust to various violations of i.i.d. errors in the usual way. However, this procedure is more laborious (though still straightforward) if K1 > 1, as it is then necessary to jointly estimate multiple reduced-form equations.
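Condition (d) in action for the wage equation with one endogenous regressor, hypothetically treating kidsge6 as the possibly-redundant instrument (regress reports the finite-sample F version of the Wald test; large-sample χ² variants can be obtained as described in footnote 18):

. use http://fmwww.bc.edu/ec-p/data/wooldridge/mroz.dta, clear
. keep if lwage < .
. quietly regress educ age kidslt6 kidsge6 exper expersq    // reduced form (38)
. test kidsge6                 // H0: kidsge6 is redundant given age and kidslt6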
17. This can be done very simply in Stata using ivreg2 by estimating (38) with only Z2 as regressors, Z1 as excluded instruments and an empty list of endogenous regressors. The Sargan statistic reported by ivreg2 will be the Anderson LM statistic. See our 2003 article for further discussion.
18. See the on-line help for ranktest for examples. These test statistics are “large-sample” χ² tests and can be obtained from OLS regression using ivreg2. Stata’s regress command reports finite-sample t tests. Also note that the robust rk LM statistic can be obtained as described in the preceding footnote: invoke ivreg2 with X1 as the dependent variable, Z2 as regressors, Z1 as excluded instruments and no endogenous regressors. With the robust option the reported Hansen J statistic is the robust rk statistic.
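As a concrete illustration of the recipe in footnotes 17 and 18, the following sketch uses the Griliches data analyzed in Section 7.5 below, with iq as the single endogenous regressor X1, age and mrt as the excluded instruments Z1, and the remaining regressors as Z2 (commands only; output omitted):

. use http://www.stata-press.com/data/imeus/griliches, clear
. * Sargan statistic = Anderson canonical correlation LM statistic:
. ivreg2 iq s expr tenure rns smsa _I* (=age mrt)
. * with -robust-, the reported Hansen J = heteroskedastic-robust rk LM statistic:
. ivreg2 iq s expr tenure rns smsa _I* (=age mrt), robust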


Fortunately, a simpler procedure is available that will generate numerically equivalent test statistics for redundancy. Define a matrix X̆1 as X1 with both X2 and Z1A partialled out. Then condition (a) can be restated as (e) plim (1/n) Z̆′1B X̆1 = 0, or (f) that the correlations between Z̆1B and X̆1 (given Z1A and Z2) are zero. The redundancy of Z1B can be evaluated using the ranktest command to test the null hypothesis that the rank of Q_X̆Z̆ is zero. Rejection of the null indicates that the instruments are not redundant. The LM version of the Anderson canonical correlations test is reported if the user indicates that the errors are i.i.d. In this case the LM test statistic is n times the sum of the squared canonical correlations between Z̆1B and X̆1. If the user estimates the equation with robust, bw or cluster, an LM version of the Kleibergen–Paap rk statistic is reported that is correspondingly robust to heteroskedasticity, autocorrelation or clustering.
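For a single endogenous regressor, the OLS route is a one-line affair. A sketch, again anticipating the Griliches data of Section 7.5, with mrt the instrument tested for redundancy (as footnote 18 notes, regress reports a finite-sample F rather than the large-sample χ²):

. use http://www.stata-press.com/data/imeus/griliches, clear
. regress iq s expr tenure rns smsa _I* age mrt
. test mrt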

7.3 Testing for weak identification

The weak instruments problem arises when the correlations between the endogenous regressors and the excluded instruments are nonzero but small. In the past 10–15 years, much attention in the econometrics literature has been devoted to this topic. What is surprising is that, as Bound et al. (1995), Staiger and Stock (1997) and others have shown, the weak instruments problem can arise even when the correlations between X and Z are significant at conventional levels (5% or 1%) and the researcher is using a large sample. For more detailed discussion of the weak instruments problem, see Staiger and Stock (1997), Stock et al. (2002), or Dufour (2003). Thus rejecting the null of underidentification using the tests in the previous section and conventional significance levels is not enough; other methods are called for.

One approach that has been advanced by Stock and Yogo (2005) is to test for the presence of weak instruments. The difference between this approach and the aforementioned underidentification tests is not in the basic statistic used, but in the finite sample adjustments and critical values and in the null hypothesis being tested. Moreover, the critical values for a weak instruments test are different for different estimators because the estimators are not affected to the same degree by weak instruments. Specifically, the LIML and CUE estimators are more robust to the presence of weak instruments than are IV and two-step GMM. The test statistic proposed by Stock and Yogo (2005) is the F-statistic form of the Cragg and Donald (1993) statistic,

    ((N − L)/L1) · (r²_K1/(1 − r²_K1))

ivreg2 will report this statistic for an estimation that assumes i.i.d. disturbances. The null hypothesis being tested is that the estimator is weakly identified in the sense that it is subject to bias that the investigator finds unacceptably large.

The Stock–Yogo weak instruments tests come in two flavors: maximal relative bias and maximal size, where the null is that the instruments do not suffer from the specified bias. Rejection of their null hypothesis represents the absence of a weak instruments problem. The first flavor is based on the ratio of the bias of the estimator to the bias of OLS. The null is that instruments are weak, where weak instruments are defined as instruments that can lead to an asymptotic relative bias greater than some value b. Because this test uses the finite sample distribution of the IV estimator, it cannot be calculated in certain cases. This is because the mth moment of the IV estimator exists if and only if m < (L − K + 1).19

The second flavor of the Stock–Yogo tests is based on the performance of the Wald test statistic for β1. Under weak identification, the Wald test rejects too often. The test statistic is based on the rejection rate r (10%, 20%, etc.) that the researcher is willing to tolerate if the true rejection rate should be the standard 5%. Weak instruments are defined as instruments that will lead to a rejection rate of r when the true rejection rate is 5%.

Stock and Yogo (2005) have tabulated critical values for their two weak identification tests for the IV estimator, the LIML estimator, and Fuller’s modified LIML estimator. The weak instruments bias in the IV estimator is larger than that of the LIML estimators, and hence the critical values for the null that instruments are weak are also larger. The Stock–Yogo critical values are available for a range of possible circumstances (up to 3 endogenous regressors and 100 excluded instruments).

The weak identification test that uses the Cragg–Donald F statistic, like the corresponding underidentification test, requires an assumption of i.i.d. errors. This is a potentially serious problem, for the same reason as given earlier: if the test statistic is large simply because the disturbances are not i.i.d., the researcher will commit a Type I error and incorrectly conclude that the model is adequately identified. If the user specifies the robust, cluster or bw options in ivreg2, the reported weak instruments test statistic is a Wald F statistic based on the Kleibergen–Paap rk statistic. We are not aware of any studies on testing for weak instruments in the presence of non-i.i.d. errors. In our view, however, the use of the rk Wald statistic, as the robust analog of the Cragg–Donald statistic, is a sensible choice and clearly superior to the use of the latter in the presence of heteroskedasticity, autocorrelation or clustering. We suggest, however, that when using the rk statistic to test for weak identification, users either apply with caution the critical values compiled by Stock and Yogo (2005) for the i.i.d. case, or refer to the older “rule of thumb” of Staiger and Stock (1997) that the F-statistic should be at least 10 for weak identification not to be considered a problem.

ivreg2 will report in the main regression output the relevant Stock and Yogo (2005) critical values for IV, LIML and Fuller-LIML estimates if they are available. The reported test statistic will be the Cragg–Donald statistic if the traditional covariance estimator is used or the rk statistic if a robust covariance estimator is requested. If the user requests two-step GMM estimation, ivreg2 will report an rk statistic and the IV critical values. If the user requests the CUE estimator, ivreg2 will report an rk statistic and the LIML critical values. The justification for this is that IV and LIML are special cases of two-step GMM and CUE respectively, and the similarities carry over to weak instruments: the literature suggests that IV and two-step GMM are less robust to weak instruments than LIML and CUE. Again, however, users of ivreg2 may wish to exercise some caution in applying the Stock–Yogo critical values in these cases.

19. See Davidson and MacKinnon (1993), pp. 221–222.
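As a sketch of the i.i.d. case (contrast the robust example in Section 7.5 below): invoking ivreg2 without a robust option and with ffirst reports the Cragg–Donald F statistic together with the Stock–Yogo critical values, which can be compared directly, or checked against the Staiger–Stock rule of thumb of 10:

. use http://www.stata-press.com/data/imeus/griliches, clear
. ivreg2 lw s expr tenure rns smsa _I* (iq = age mrt), ffirst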

7.4 Weak-identification-robust inference: the Anderson-Rubin test

The first-stage ivreg2 output also includes the Anderson and Rubin (1949) test of the significance of the endogenous regressors in the structural equation being estimated (not to be confused with the Anderson and Rubin (1949) overidentification test discussed earlier). In the form reported by ivreg2, the null hypothesis tested is that the coefficients β1 of the endogenous regressors X1 in the structural equation are jointly equal to zero. It is easily extended to testing the equality of the coefficients of X1 to other values, but this is not supported explicitly by ivreg2; see the next section for further discussion.

The development of this Anderson and Rubin (1949) test is straightforward. Substitute the reduced-form expression (38) for the endogenous regressors X1 into the main equation of the model:

    y = Xβ + u = X1β1 + Z2β2 + u = (Z1Π11 + Z2Π12 + v1)β1 + Z2β2 + u        (40)

and rearrange to obtain

    y = Z1Π11β1 + Z2(Π12β1 + β2) + (v1β1 + u)        (41)

Now consider estimating a reduced form equation for y with the full set of instruments as regressors:

    y = Z1γ1 + Z2γ2 + η        (42)

If the null H0: β1 = 0 is correct, Π11β1 = 0, and therefore γ1 = 0. Thus the Anderson and Rubin (1949) test of the null H0: β1 = 0 is obtained by estimating the reduced form for y and testing that the coefficients γ1 of the excluded instruments Z1 are jointly equal to zero. If we fail to reject γ1 = 0, then we also fail to reject β1 = 0.

The Anderson–Rubin statistic is robust to the presence of weak instruments. As instruments become weak, the elements of Π11 become smaller, and hence so does Π11β1: the null H0: γ1 = 0 is less likely to be rejected. That is, as instruments become weak, the power of the test declines, an intuitively appealing feature: weak instruments come at a price. ivreg2 reports both the χ² version of the Anderson–Rubin statistic (distributed with L1 degrees of freedom) and the F-statistic version of the test.

ivreg2 also reports the closely-related Stock and Wright (2000) S statistic. The S statistic tests the same null hypothesis as the A-R statistic and has the same distribution under the null. It is given by the value of the CUE objective function (with the exogenous regressors partialled out). Whereas the A-R statistic provides a Wald test, the S statistic provides an LM or GMM distance test of the same hypothesis.

Importantly, if the model is estimated with a robust covariance matrix estimator, both the Anderson–Rubin statistic and the S statistic reported by ivreg2 are correspondingly robust. See Dufour (2003) and Chernozhukov and Hansen (2005) for further discussion of the Anderson–Rubin approach. For related alternative test statistics that are also robust to weak instruments (but not violations of the i.i.d. assumption), see the condivreg and condtest commands available from Moreira and Poi (2003) and Mikusheva and Poi (2006).
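The reduced-form recipe in (42) is easy to carry out directly. A sketch with the Griliches variables of Section 7.5 (regress yields the F flavor of the Anderson–Rubin statistic under i.i.d. errors; adding robust gives a heteroskedastic-robust version):

. use http://www.stata-press.com/data/imeus/griliches, clear
. regress lw s expr tenure rns smsa _I* age mrt
. test age mrt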

7.5 An example of estimation with weak instruments using ivreg2

We illustrate the weak instruments problem with a variation on a log wage equation illustrated in Hayashi (2000). The explanatory variables are s (completed years of schooling), expr (years of work experience), tenure in the current job (in years), rns (a dummy for residency in the Southern U.S.), smsa (a dummy for urban workers), the worker’s iq score, and a set of year dummies. The excluded instruments are the worker’s age and mrt (marital status: 1=married).
. use http://www.stata-press.com/data/imeus/griliches, clear
(Wages of Very Young Men, Zvi Griliches, J.Pol.Ec. 1976)
. ivreg2 lw s expr tenure rns smsa _I* (iq = age mrt), ffirst robust redundant(mrt)

Summary results for first-stage regressions

Variable | Shea Partial R2 | Partial R2 |  F(  2,   744)   P-value
iq       |          0.0073 |     0.0073 |           2.93    0.0539

NB: first-stage F-stat heteroskedasticity-robust

Underidentification tests
Ho: matrix of reduced form coefficients has rank=K1-1 (underidentified)
Ha: matrix has rank=K1 (identified)
Kleibergen-Paap rk LM statistic           Chi-sq(2)=5.90    P-val=0.0524
Kleibergen-Paap rk Wald statistic         Chi-sq(2)=5.98    P-val=0.0504

Weak identification test
Ho: equation is weakly identified
Kleibergen-Paap Wald rk F statistic            2.93
See main output for Cragg-Donald weak id test critical values

Weak-instrument-robust inference
Tests of joint significance of endogenous regressors B1 in main equation
Ho: B1=0 and overidentifying restrictions are valid
Anderson-Rubin Wald test            F(2,744)=   46.95   P-val=0.0000
Anderson-Rubin Wald test            Chi-sq(2)=  95.66   P-val=0.0000
Stock-Wright LM S statistic         Chi-sq(2)=  69.37   P-val=0.0000

NB: Underidentification, weak identification and weak-identification-robust
    test statistics heteroskedasticity-robust

Number of observations               N  =        758
Number of regressors                 K  =         13
Number of instruments                L  =         14
Number of excluded instruments       L1 =          2

IV (2SLS) estimation
Estimates efficient for homoskedasticity only
Statistics robust to heteroskedasticity

                                                      Number of obs =      758
                                                      F( 12,   745) =     4.42
                                                      Prob > F      =   0.0000
Total (centered) SS     =  139.2861498                Centered R2   =  -6.4195
Total (uncentered) SS   =  24652.24662                Uncentered R2 =   0.9581
Residual SS             =  1033.432656                Root MSE      =    1.168

------------------------------------------------------------------------------
             |               Robust
          lw |      Coef.   Std. Err.      z    P>|z|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
          iq |  -.0948902   .0418904    -2.27   0.024    -.1769939   -.0127865
           s |   .3397121   .1183267     2.87   0.004     .1077959    .5716282
        expr |   -.006604   .0292551    -0.23   0.821    -.0639429     .050735
      tenure |   .0848854   .0306682     2.77   0.006     .0247768     .144994
         rns |  -.3769393   .1559971    -2.42   0.016     -.682688   -.0711906
        smsa |   .2181191   .1031119     2.12   0.034     .0160236    .4202146
   _Iyear_67 |   .0077748   .1663252     0.05   0.963    -.3182166    .3337662
   _Iyear_68 |   .0377993   .1523585     0.25   0.804    -.2608179    .3364165
   _Iyear_69 |   .3347027   .1637992     2.04   0.041     .0136622    .6557432
   _Iyear_70 |   .6286425   .2468458     2.55   0.011     .1448336    1.112451
   _Iyear_71 |   .4446099   .1861877     2.39   0.017     .0796887     .809531
   _Iyear_73 |    .439027   .1668657     2.63   0.009     .1119763    .7660778
       _cons |   10.55096   2.781762     3.79   0.000     5.098812    16.00312
------------------------------------------------------------------------------
Underidentification test (Kleibergen-Paap rk LM statistic):              5.897
                                                   Chi-sq(2) P-val =    0.0524
------------------------------------------------------------------------------
-redundant- option:
IV redundancy test (LM test of redundancy of specified instruments):     0.002
                                                   Chi-sq(1) P-val =    0.9665
Instruments tested:   mrt
------------------------------------------------------------------------------
Weak identification test (Kleibergen-Paap rk Wald F statistic):          2.932
Stock-Yogo weak ID test critical values: 10% maximal IV size             19.93
                                         15% maximal IV size             11.59
                                         20% maximal IV size              8.75
                                         25% maximal IV size              7.25
Source: Stock-Yogo (2005).  Reproduced by permission.
NB: Critical values are for Cragg-Donald F statistic and i.i.d. errors.
------------------------------------------------------------------------------
Hansen J statistic (overidentification test of all instruments):         1.564
                                                   Chi-sq(1) P-val =    0.2111
------------------------------------------------------------------------------
Instrumented:         iq
Included instruments: s expr tenure rns smsa _Iyear_67 _Iyear_68 _Iyear_69
                      _Iyear_70 _Iyear_71 _Iyear_73
Excluded instruments: age mrt
------------------------------------------------------------------------------

In the first stage regression results, the Kleibergen–Paap underidentification LM and Wald tests fail to reject their null hypotheses at the 95% level, suggesting that even in the case of overidentification via the order condition the instruments may be inadequate to identify the equation. The Anderson–Rubin Wald test and Stock–Wright LM test readily reject their null hypothesis and indicate that the endogenous regressors are relevant. However, given that those null hypotheses are joint tests of irrelevant regressors and appropriate overidentifying restrictions, the evidence is not so promising. In the main equation output, the redundant(mrt) option indicates that mrt provides no useful information to identify the equation. This equation may be exactly identified at best.


7.6 The relationship between weak-identification-robust inference and overidentification tests

The Anderson–Rubin weak-identification-robust test (and its related alternatives) relies heavily on the orthogonality of the excluded instruments Z1. If the orthogonality conditions are violated, the Anderson–Rubin test will tend to reject the null H0: β1 = 0 even if the true β1 = 0. The reason is easy to see: if Z1 is correlated with the disturbance u, it will therefore also be correlated with the reduced form error η, and so the estimated γ̂1 will be biased away from zero even if in reality β1 = 0.

More generally, in a test of overidentification, the maintained hypothesis is that the model is identified, so that a rejection means rejecting the orthogonality conditions. In the weak-identification-robust test of β1, the maintained hypothesis is that the instruments are valid, so that a rejection means rejecting the null that β1 equals the hypothesized value.

This relationship between weak identification and overidentification tests can be stated precisely in the case of CUE or LIML estimation. We have been careful in the above to state that the two Anderson–Rubin tests should not be confused, but in fact they are, in a sense, based on the same statistic. Assume that the exogenous regressors X2, if any, have been partialled out so that β1 ≡ β. The value of the CUE GMM objective function at β̂_CUE provides a test of the orthogonality conditions; the LIML LR version of this test is the Anderson–Rubin overidentifying restrictions test. The value of the CUE GMM objective function at some other, hypothesized β̃ provides a test of H0: β = β̃. This is the Stock and Wright (2000) S statistic, which is a Lagrange Multiplier (LM) version of the Anderson–Rubin weak-instruments-robust test.

This can be illustrated using the Hayashi–Griliches example below. We assume conditional homoskedasticity and estimate using LIML. The Anderson–Rubin LR overidentification statistic (distributed with one degree of freedom) is small, as is the Sargan–Hansen J statistic, suggesting that the orthogonality conditions are valid:
. use http://www.stata-press.com/data/imeus/griliches, clear
(Wages of Very Young Men, Zvi Griliches, J.Pol.Ec. 1976)
. qui ivreg2 lw s expr tenure rns smsa _I* (iq = age mrt), ///
>         fwl(s expr tenure rns smsa _I*) liml
. di e(arubin)
1.1263807
. di e(j)
1.1255442

The Anderson–Rubin test of H0: β_iq = 0 is calculated automatically by ivreg2 with the ffirst option, and is equivalent to estimating the reduced form for lw and testing the joint significance of the excluded instruments age and mrt:
. qui ivreg2 lw s expr tenure rns smsa _I* (iq = age mrt), liml ffirst
. di e(archi2)
89.313862

. qui ivreg2 lw s expr tenure rns smsa _I* age mrt
. test age mrt
 ( 1)  age = 0
 ( 2)  mrt = 0
           chi2(  2) =   89.31
         Prob > chi2 =    0.0000


The Stock–Wright S statistic is an LM or GMM distance test of the same hypothesis. The Anderson–Rubin Wald test of age and mrt using the reduced form estimation above is asymptotically equivalent to an LM test of the same hypothesis, available using ivreg2 and specifying age and mrt as excluded instruments (see Baum et al. (2003) for further discussion). It is this LM version of the Anderson–Rubin weak-instruments-robust test that is numerically identical to the value of the GMM objective function at the hypothesized value β_iq = 0:
. qui ivreg2 lw s expr tenure rns smsa _I* (=age mrt)
. di e(j)
79.899445
. mat b = J(1,1,0)
. mat colnames b = iq
. qui ivreg2 lw s expr tenure rns smsa _I* (iq = age mrt), ///
>         fwl(s expr tenure rns smsa _I*) b0(b)
. di e(j)
79.899445

Note that for J(β0) to be the appropriate test statistic, it is necessary for the exogenous regressors to be partialled out with the fwl() option.

7.7 Additional first-stage options

To aid in the diagnosis of weak instruments, the savefirst option requests that the individual first-stage regressions be saved for later access using the [R] estimates command. If saved, they can also be displayed using first or ffirst and the ivreg2 replay syntax. The regressions are saved with the prefix “_ivreg2_” unless the user specifies an alternative prefix with the savefprefix(prefix) option. The saved estimation results may be made the active set with estimates restore, allowing commands such as [R] test, [R] lincom and [R] testparm to be used.

The rf option requests that the reduced form estimation of the equation be displayed. The saverf option requests that the reduced form estimation is saved for later access using the [R] estimates command. If saved, it can also be displayed using rf and the ivreg2 replay syntax. The regression is saved with the prefix “_ivreg2_” unless the user specifies an alternative prefix with the saverfprefix(prefix) option.
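A short sketch of these options in use with the Section 7.5 variables; the saved-results name _ivreg2_iq below follows the default prefix convention described above:

. qui ivreg2 lw s expr tenure rns smsa _I* (iq = age mrt), robust savefirst
. estimates dir
. estimates restore _ivreg2_iq
. test age mrt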


8 Advanced ivreg2 options

Two options are available for speeding ivreg2 execution. nocollin specifies that the collinearity checks not be performed. This option should be used with caution. noid suspends calculation and reporting of the underidentification and weak identification statistics in the main output.

The b0(matrix) option allows the user to specify that the GMM objective function, J, should be calculated for an arbitrary parameter vector. The parameter vector must be given as a matrix with appropriate row and column labels. The b0() option is most useful if the user wishes to conduct a weak-instruments-robust test of H0: β1 = b0, where b0 is specified by the user. For example, in the illustration given in Section 7.6, the null hypothesis that the coefficient on iq is 0.05 can be tested simply by replacing the line mat b = J(1,1,0) with mat b = J(1,1,0.05). A heteroskedastic-robust S statistic can be obtained by specifying robust along with b0(b). To construct a weak-instruments-robust confidence interval, the user can simply conduct a grid search over the relevant range for β1 (a sketch of such a grid search appears at the end of this section).20

Two options have been added to ivreg2 for special handling of the GMM estimation process. The wmatrix(matrix) option allows the user to specify a weighting matrix rather than computing the optimal weighting matrix. Estimation with the wmatrix option yields a possibly inefficient GMM estimator. ivreg2 will use this inefficient estimator as the first-step GMM estimator in two-step efficient GMM when combined with the gmm2s option; otherwise, ivreg2 reports this inefficient GMM estimator.

The smatrix(matrix) option allows the user to directly specify the matrix S, the covariance matrix of orthogonality conditions. ivreg2 will use this matrix in the calculation of the variance-covariance matrix of the estimator, the J statistic, and, if the gmm2s option is specified, the two-step efficient GMM coefficients. The smatrix option can be useful for guaranteeing a positive test statistic in user-specified GMM-distance tests as described in Section 5.

As Ahn (1997) shows, Hansen’s J test has an LM interpretation but can also be calculated as the result of a Wald test. This is an application of the Newey and West (1987a) results on the equivalence of LM, Wald and GMM distance tests. In the context of an overidentified model, the J statistic will be identical to a Wald χ² test statistic from an exactly identified model in which the additional instruments are included as regressors, as long as the same estimate of S is used in both estimated equations. As an example:
. use http://www.stata-press.com/data/imeus/griliches, clear
(Wages of Very Young Men, Zvi Griliches, J.Pol.Ec. 1976)
. qui ivreg2 lw (iq=med kww age), gmm2s
. di e(sargan)
102.10909
. mat S0 = e(S)
. qui ivreg2 lw med age (iq=kww), gmm2s smatrix(S0)
. test med age
 ( 1)  med = 0
 ( 2)  age = 0
           chi2(  2) =  102.11
         Prob > chi2 =    0.0000
. qui ivreg2 lw kww age (iq=med), gmm2s smatrix(S0)
. test kww age
 ( 1)  kww = 0
 ( 2)  age = 0
           chi2(  2) =  102.11
         Prob > chi2 =    0.0000
. qui ivreg2 lw med kww (iq=age), gmm2s smatrix(S0)
. test med kww
 ( 1)  med = 0
 ( 2)  kww = 0
           chi2(  2) =  102.11
         Prob > chi2 =    0.0000

20. It is important to note that an Anderson–Rubin confidence region need not be finite nor connected. The test provided in condivreg (Moreira and Poi (2003), Mikusheva and Poi (2006)) is uniformly most powerful in the situation where there is one endogenous regressor and i.i.d. errors. The Anderson–Rubin test provided by ivreg2 is a simple and preferable alternative when errors are not i.i.d. or there is more than one endogenous regressor.
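To illustrate the grid search mentioned above, the following do-file sketch traces out a weak-instruments-robust 95% confidence region for the coefficient on iq in the Section 7.6 example. The grid range and step size are arbitrary illustrative choices; under the null that the candidate value is the true coefficient, the S statistic returned in e(j) is distributed χ² with L1 = 2 degrees of freedom here.

use http://www.stata-press.com/data/imeus/griliches, clear
mat b = J(1,1,0)
mat colnames b = iq
forvalues i = 0/40 {
    local b0 = -0.30 + 0.01*`i'        // candidate value for the coefficient on iq
    mat b[1,1] = `b0'
    qui ivreg2 lw s expr tenure rns smsa _I* (iq = age mrt), ///
        fwl(s expr tenure rns smsa _I*) b0(b)
    // S = e(j) is chi-squared with L1 = 2 df under H0: beta_iq = b0
    if e(j) < invchi2tail(2, 0.05) {
        di %6.2f `b0' " is inside the 95% confidence region (S = " %6.2f e(j) ")"
    }
}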

9 The RESET specification test in the IV context

The ivreset command performs various flavors of Ramsey’s regression error specification test (RESET) as adapted by Pesaran and Taylor (1999) and Pagan and Hall (1983) for instrumental variables (IV) estimation. The RESET test is sometimes called an omitted variables test (as in official Stata’s ovtest) but probably is best interpreted as a test of neglected nonlinearities in the choice of functional form (Wooldridge (2002), pp. 124–5). Under the null hypothesis that there are no neglected nonlinearities, the residuals should be uncorrelated with low-order polynomials in ŷ, where the ŷs are predicted values of the dependent variable. In the ivreset implementation of the test, an equation of the form y = Xβ + Ŷγ + v is estimated by IV, where the Ŷs are powers of ŷ, the fitted value of the dependent variable y. Under the null hypothesis that there are no neglected nonlinearities and the equation is otherwise well-specified, γ should not be significantly different from zero.

As Pesaran and Taylor (1999) and Pagan and Hall (1983) point out, however, a RESET test for an IV regression cannot use the standard IV predicted values ŷ ≡ Xβ̂, because X includes endogenous regressors that are correlated with u. Instead, the RESET test must be implemented using “forecast values” of y that are functions of the instruments (exogenous variables) only. In the Pagan–Hall version of the test, the forecast values ŷ are the reduced form predicted values of y, i.e., the predicted values from a regression of y on the instruments Z. In the Pesaran–Taylor version of the test, the forecast values ŷ are the “optimal forecast” values. The optimal forecast (predictor) ŷ is defined as X̂β̂, where β̂ is the IV estimate of the coefficients and X̂ ≡ [ZΠ̂ Z2], i.e., the reduced form predicted values of the endogenous regressors plus the exogenous regressors. Note that if the equation is exactly identified, the optimal forecasts and reduced form forecasts coincide, and the Pesaran–Taylor and Pagan–Hall tests are identical.

The ivreset test flavors vary according to the polynomial terms (square, cube, fourth power of ŷ), the choice of forecast values (Pesaran–Taylor optimal forecasts or Pagan–Hall reduced form forecasts), test statistic (Wald or GMM-distance), and large vs. small sample statistic (χ² or F statistic). The test statistic is distributed with degrees of freedom equal to the number of polynomial terms. The default is the Pesaran–Taylor version using the square of the optimal forecast of y and a χ² Wald statistic with one degree of freedom. If the original ivreg2 estimation was heteroskedastic-robust, cluster-robust, AC or HAC, the reported RESET test will be as well.

The ivreset command can also be used after OLS regression with [R] regress or ivreg2 when there are no endogenous regressors. In this case, either a standard Ramsey RESET test using fitted values of y or a robust test corresponding to the specification of the original regression is reported. We illustrate use of ivreset using a model fitted to the Griliches data:
. use http://fmwww.bc.edu/ec-p/data/hayashi/griliches76.dta
(Wages of Very Young Men, Zvi Griliches, J.Pol.Ec. 1976)
. quietly ivreg2 lw s expr tenure rns smsa (iq=med kww), robust
. ivreset

Ramsey/Pesaran-Taylor RESET test
Test uses square of fitted value of y (X-hat*beta-hat)
Ho: E(y|X) is linear in X
Wald test statistic: Chi-sq(1) =     4.53   P-value = 0.0332
Test is heteroskedastic-robust

. ivreset, poly(4) rf small

Ramsey/Pagan-Hall RESET test
Test uses square, cube and 4th power of reduced form prediction of y
Ho: E(y|X) is linear in X
Wald test statistic: F(3,748)  =     1.72   P-value = 0.1616
Test is heteroskedastic-robust

The first ivreset takes all the defaults and corresponds to a second-order polynomial in ŷ with the Pesaran–Taylor optimal forecast and a Wald χ² test statistic, which rejects the null at better than 95%. The second employs a fourth-order polynomial and requests the Pagan–Hall reduced form forecast with a Wald F-statistic, falling short of the 90% level of significance.

10 A test for autocorrelated errors in the IV context

The ivactest command performs the Cumby and Huizinga (1992) generalization of a test proposed by Sargan (1988) for serial independence of the regression errors, which in turn generalizes the test proposed by Breusch and Godfrey (estat bgodfrey) applicable to OLS regressions. Sargan’s extension of the Breusch–Godfrey test to the IV context, the SC test, is described as a “general misspecification chi-squared statistic” by Pesaran and Taylor (1999), p. 260. The SC test statistic is based upon the residuals of the instrumental variables regression and its conventional VCE. Cumby and Huizinga extend Sargan’s test to cases in which the IV VCE was estimated as heteroskedasticity-robust, autocorrelation-robust or HAC.

In the words of Cumby and Huizinga (1992), the null hypothesis of the test is “that the regression error is a moving average of known order q ≥ 0 against the general alternative that autocorrelations of the regression error are nonzero at lags greater than q. The test . . . is thus general enough to test the hypothesis that the regression error has no serial correlation (q = 0) or the null hypothesis that serial correlation in the regression error exists, but dies out at a known finite lag (q > 0).” (p. 185).

The Cumby–Huizinga test is especially attractive because it can be used in three frequently encountered cases where alternatives such as the Box–Pierce test ([TS] wntestq), Durbin’s h test (estat durbinalt) and the Breusch–Godfrey test (estat bgodfrey) are not applicable. One of these cases is the presence of endogenous regressors, which renders each of these tests invalid. A second case involves the overlapping data commonly encountered in financial markets, where the observation interval is shorter than the holding period, which requires the estimation of the induced moving average (MA) process. The Cumby–Huizinga test avoids estimation of the MA process by utilizing only the sample autocorrelations of the residuals and a consistent estimate of their asymptotic covariance matrix. The third case involves conditional heteroskedasticity of the regression error term, which is also handled without difficulty by the Cumby–Huizinga test.

If the prior estimation command estimated a VCE under the assumption of i.i.d. errors, the Cumby–Huizinga statistic becomes the Breusch–Godfrey statistic for the same number of autocorrelations, and will return the same result as estat bgodfrey. That special case of the test was that proposed by Sargan in an unpublished working paper in 1976 (reprinted in Sargan (1988)).

Two parameters may be specified in ivactest: s, the number of lag orders to be tested, and q, the lowest lag order to be tested.21 By default, ivactest takes s=1 and q=0 and produces a test for AR(1). A test for AR(p) may be produced with s=p. Under the null hypothesis of serial independence for lags q to (q + s), the Cumby–Huizinga test statistic is distributed χ² with s degrees of freedom. We illustrated the use of ivactest in Section 3 above.

21. If the previous command estimated a VCE under the assumption of i.i.d. errors, q must be 0.
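A minimal usage sketch, assuming a tsset time-series dataset and a preceding ivreg2 estimation in memory (commands only):

. ivactest              // default: s(1) q(0), a test for AR(1)
. ivactest, s(4)        // test for AR(4): serial independence at lags 1 through 4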

11 A summary of ivreg2 estimation options

The version of ivreg2 accompanying this paper uses a different syntax for specifying the type of estimator to be employed. In earlier versions (including those circulated with Stata Journal software updates in issues 4:2 and 5:4), the gmm option implied a heteroskedasticity-robust estimator. When the gmm option was combined with the bw option, estimates were autocorrelation-robust but not heteroskedasticity-robust. This version of ivreg2 uses a new taxonomy of estimation options, summarized below. Note that the gmm2s option by itself produces the IV (2SLS) estimator, as described in Section 2.2. One of the options [robust, cluster, bw] must be added to generate two-step efficient GMM estimates. The following table summarizes the estimator and the properties of its point and interval estimates for each combination of estimation options.

                               Covariance matrix option(s)
Estimator option    (none)                            robust, cluster, bw, kernel
----------------------------------------------------------------------------------
(none)              IV/2SLS; SEs consistent           IV/2SLS with robust SEs
                    under homoskedasticity
liml                LIML; SEs consistent              LIML with robust SEs
                    under homoskedasticity
gmm2s               IV/2SLS; SEs consistent           Two-step GMM with robust SEs
                    under homoskedasticity
cue                 LIML; SEs consistent              CUE GMM with robust SEs
                    under homoskedasticity
kclass              k-class estimator; SEs            k-class estimator with
                    consistent under                  robust SEs
                    homoskedasticity
wmatrix             possibly inefficient GMM;         Inefficient GMM with
                    SEs consistent under              robust SEs
                    homoskedasticity
gmm2s + wmatrix     Two-step GMM with user-           Two-step GMM with robust SEs
                    specified first step; SEs
                    consistent under
                    homoskedasticity
----------------------------------------------------------------------------------

11.1 ivreg2 vs. ivregress

Stata’s official [R] ivregress command in Stata 10.0 now provides a LIML and GMM estimator in addition to two-stage least squares. The GMM estimator can produce HAC estimates, as discussed above in Section 3, but cannot produce AC estimates. The [R] ivregress command does not support the general k-class estimator nor GMM-CUE, but provides an “iterative GMM” estimator. Overidentification tests and first-stage statistics are available as estat subcommands. ivreg2’s ability to partial out regressors via the partial option is not available in [R] ivregress; a sketch of partial() appears below.
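A minimal sketch of partial(), using the Griliches variables of Section 7.5 (partialling out the year dummies here is an illustrative choice; by the FWL theorem the coefficient estimates on the remaining variables should be unaffected):

. ivreg2 lw s expr tenure rns smsa _I* (iq = age mrt), robust partial(_I*)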


A number of tests performed by ivreg2 are not available from [R] ivregress. These include the “GMM distance” tests of endogeneity/exogeneity discussed in Section 5, the general underidentification/weak identification test of Kleibergen and Paap (2006) discussed in Section 7 and tests for instrument relevance. In diagnosing potentially weak instruments, ivreg2’s ability to save the first-stage regressions is also unique.

12 Syntax diagrams

These diagrams describe all of the programs in the ivreg2 suite, including those which have not been substantially modified since their documentation in Baum et al. (2003).

ivreg2 depvar [varlist1] (varlist2 = varlist_iv) [weight] [if] [in]
    [, gmm2s bw(# | auto) kernel(string) liml fuller(#) kclass(#) coviv
    cue cueinit(matrix) cueoptions(string) b0(matrix) robust
    cluster(varname) orthog(varlist_ex) endog(varlist_en)
    redundant(varlist_ex) partial(varlist_ex) small noconstant
    smatrix(matrix) wmatrix(matrix) first ffirst savefirst
    savefprefix(string) rf saverf saverfprefix(string) nocollin noid
    level(#) noheader nofooter eform(string) depname(varname) plus ]

overid [, chi2 dfr f all ]

ivhettest [varlist] [, ivlev ivsq fitlev fitsq ph phnorm nr2 bpg all ]

ivendog [varlist]

ivreset [, polynomial(#) rform cstat small ]

ivactest [, s(#) q(#) ]

13 Acknowledgements

We are grateful to many members of the Stata user community who have assisted in the identification of useful features in the ivreg2 suite and helped identify problems with the programs. We thank (without implicating) Austin Nichols for his suggestions on this draft, Frank Kleibergen for discussions about testing for identification, and Manuel Arellano and Graham Elliott for helpful discussions of GMM-CUE. Some portions of the discussion of weak instruments in Section 7.3 are taken from Chapter 8 of Baum (2006).


14 References

Ahn, S. C. 1997. Orthogonality tests in linear models. Oxford Bulletin of Economics and Statistics 59(1): 183–186.
Anderson, T. W. 1984. An Introduction to Multivariate Statistical Analysis. 2nd ed. New York: John Wiley & Sons.
Anderson, T. W., and H. Rubin. 1949. Estimation of the parameters of a single equation in a complete system of stochastic equations. Annals of Mathematical Statistics 20: 46–63.
Andrews, D. W. K. 1991. Heteroskedasticity and autocorrelation consistent covariance matrix estimation. Econometrica 59: 817–858.
Baum, C. F. 2006. An Introduction to Modern Econometrics Using Stata. College Station, TX: Stata Press.
Baum, C. F., M. E. Schaffer, and S. Stillman. 2003. Instrumental variables and GMM: Estimation and testing. Stata Journal 3: 1–31.
Bound, J., D. A. Jaeger, and R. Baker. 1995. Problems with instrumental variables estimation when the correlation between the instruments and the endogenous explanatory variable is weak. Journal of the American Statistical Association 90: 443–450.
Breusch, T., H. Qian, P. Schmidt, and D. Wyhowski. 1999. Redundancy of moment conditions. Journal of Econometrics 91(1): 89–111.
Chernozhukov, V., and C. Hansen. 2005. The Reduced Form: A Simple Approach to Inference with Weak Instruments. Working paper, University of Chicago, Graduate School of Business.
Cragg, J. G., and S. G. Donald. 1993. Testing identifiability and specification in instrumental variables models. Econometric Theory 9: 222–240.
Cumby, R. E., and J. Huizinga. 1992. Testing the autocorrelation structure of disturbances in ordinary least squares and instrumental variables regressions. Econometrica 60(1): 185–195.
Cushing, M., and M. McGarvey. 1999. Covariance Matrix Estimation. In Generalized Method of Moments Estimation, ed. L. Matyas. Cambridge: Cambridge University Press.
Davidson, R., and J. G. MacKinnon. 1993. Estimation and Inference in Econometrics. 2nd ed. New York: Oxford University Press.
Dufour, J. 2003. Identification, Weak Instruments and Statistical Inference in Econometrics. Working Paper 2003s-49, CIRANO.
Frisch, R., and F. V. Waugh. 1933. Partial time regressions as compared with individual trends. Econometrica 1(4): 387–401.


Fuller, W. A. 1977. Some Properties of a Modification of the Limited Information Estimator. Econometrica 45(4): 939–53.
Greene, W. H. 2003. Econometric Analysis. 5th ed. Upper Saddle River, NJ: Prentice–Hall.
Hahn, J., and J. Hausman. 2002. Notes on bias in estimators for simultaneous equation models. Economics Letters 75(2): 237–41.
Hahn, J., J. Hausman, and G. Kuersteiner. 2004. Estimation with weak instruments: Accuracy of higher-order bias and MSE approximations. Econometrics Journal 7(1): 272–306.
Hall, A. R. 2005. Generalized Method of Moments. Oxford: Oxford University Press.
Hall, A. R., and F. P. M. Peixe. 2003. A Consistent Method for the Selection of Relevant Instruments. Econometric Reviews 22(5): 269–287.
Hall, A. R., G. D. Rudebusch, and D. W. Wilcox. 1996. Judging instrument relevance in instrumental variables estimation. International Economic Review 37(2): 283–298.
Hansen, L., J. Heaton, and A. Yaron. 1996. Finite sample properties of some alternative GMM estimators. Journal of Business and Economic Statistics 14(3): 262–280.
Hayashi, F. 2000. Econometrics. 1st ed. Princeton, NJ: Princeton University Press.
Kleibergen, F., and R. Paap. 2006. Generalized reduced rank tests using the singular value decomposition. Journal of Econometrics 133(1): 97–126.
Kleibergen, F., and M. Schaffer. 2007. RANKTEST: Stata module to test the rank of a matrix using the Kleibergen–Paap rk statistic. Available at: http://ideas.repec.org/c/boc/bocode/s456865.html. Accessed 22 August 2007.
Lovell, M. 1963. Seasonal adjustment of economic time series. Journal of the American Statistical Association 58: 993–1010.
Mikusheva, A., and B. P. Poi. 2006. Tests and confidence sets with correct size when instruments are potentially weak. Stata Journal 6(3): 335–347.
Moreira, M., and B. Poi. 2003. Implementing Tests with the Correct Size in the Simultaneous Equations Model. Stata Journal 3(1): 57–70.
Nagar, A. 1959. The bias and moment matrix of the general K-class estimators of the parameters in simultaneous equations. Econometrica 27(4): 575–595.
Newey, W. K., and K. D. West. 1987a. Hypothesis testing with efficient method of moments estimation. International Economic Review 28: 777–787.
———. 1987b. A simple, positive semi-definite, heteroskedasticity and autocorrelation consistent covariance matrix. Econometrica 55: 703–708.


———. 1994. Automatic lag selection in covariance matrix estimation. Review of Economic Studies 61: 631–653.
Pagan, A. R., and D. Hall. 1983. Diagnostic tests as residual analysis. Econometric Reviews 2(2): 159–218.
Pesaran, M. H., and L. W. Taylor. 1999. Diagnostics for IV regressions. Oxford Bulletin of Economics & Statistics 61(2): 255–281.
Poskitt, D., and C. Skeels. 2002. Assessing Instrumental Variable Relevance: An Alternative Measure and Some Exact Finite Sample Theory. Technical report, Monash University and University of Melbourne.
Sargan, J. 1988. Testing for misspecification after estimation using instrumental variables. In Contributions to econometrics: John Denis Sargan, ed. E. Maasoumi, vol. 1. Cambridge: Cambridge University Press.
Staiger, D., and J. H. Stock. 1997. Instrumental variables regression with weak instruments. Econometrica 65(3): 557–86.
Stock, J., and M. Watson. 2003. Introduction to Econometrics. Reading, MA: Addison–Wesley.
Stock, J. H., and J. H. Wright. 2000. GMM with Weak Identification. Econometrica 68(5): 1055–1096.
Stock, J. H., J. H. Wright, and M. Yogo. 2002. A Survey of Weak Instruments and Weak Identification in Generalized Method of Moments. Journal of Business & Economic Statistics 20(4): 518–29.
Stock, J. H., and M. Yogo. 2005. Testing for Weak Instruments in Linear IV Regression. In Identification and Inference for Econometric Models: Essays in Honor of Thomas Rothenberg, ed. D. W. Andrews and J. H. Stock, 80–108. Cambridge: Cambridge University Press.
Wooldridge, J. M. 2002. Econometric Analysis of Cross Section and Panel Data. 1st ed. Cambridge, MA: MIT Press.
———. 2003. Introductory Econometrics: A Modern Approach. 2nd ed. New York: Thomson Learning.

Similar Documents

Premium Essay

Price of Oil

...INDEPTH: OIL The price of oil - in context CBC News Online | April 18, 2006 Oil is sold in barrels - it's the same unit of measure used to sell whisky. A barrel of oil - or whisky - contains 159 litres. The price of a barrel of oil has been testing new highs since it pushed through $50 a barrel in September 2004 - and pushed gasoline prices well beyond $1 a litre in the summer of 2005. But how high are prices like that, historically speaking? Turns out these records may not be records, after all. Oil prices were stable for most of the 100 years before 1973 at well under $5 a barrel. Expressed in today's dollars (all figures in U.S. dollars), the price was closer to $10 a barrel, hitting highs of about $15 and lows close to $8. Even as the world economy boomed in the decades following the Second World War, prices remained fairly stable. That's mainly because the United States held most of the clout in the oil industry - and the U.S. government regulated the price of oil. From 1958 to 1970, prices were stable at about $3 per barrel, but in real terms the price of crude oil declined from above $15 to below $12 per barrel. The decline in the price of crude when adjusted for inflation was further exacerbated in 1971 and 1972 by the weakness of the U.S. dollar. But by the early 1970s, that changed. The Organization of Petroleum Exporting Countries had become a force and in 1973, the first major oil shock hit the world as Arab nations refused to sell to countries that...

Words: 7116 - Pages: 29

Premium Essay

Oil Prices

...Oil Prices and the U.S Trade Deficit Along with the financial industry, chemical industry and entertainment industry, the energy industry is one of the top markets in the United States with oil production as one of its core essentials. Since the beginning of 2002, oil prices have almost quadrupled overtime. The United States is estimated to be the number one country of oil consumption therefore making the soaring prices one of the major concerns within the country. Although the amount of U.S imports and exports have varied overtime, recently the U.S has been running trade deficits. With the price of oil increasing, an oil-importing country like the U.S will have a substantial increase in the cost of petroleum imports therefore suggesting the deterioration of their trading deficit will be even greater. In this study, Michele Cavallo examines the changes of oil prices and how they affect a number of different factors. These factors include the slow-paced growth in oil production creating has an increase in demand which has outpaced the increase in supply. Cavallo explores the relationship between the surge in oil prices and trade, how the U.S trade deficit evolves in response to higher oil prices and furthermore creates a model that helps explain how the import of oil, despite the increase in price, remained constant and what affect it has on the trade deficit. Using data from January 2002 to July 2006 for overall trade balance and the petroleum trade balance, Cavallo...

Words: 702 - Pages: 3

Free Essay

Oil Prices

...OUTLINE Introduction A. What effects can produce oil prices increase? a. Brief history and evolution in oil markets b. Causes of the increment in oil prices B. Colombia on the two sides of oil prices rise effects c. Brief description of effects d. Brief history of petroleum industry Body I. International context a. Global situation of oil prices b. Volatility and Dutch disease II. Colombia Case c. analysis of effects in the macroeconomic view: inflation and currency appreciation Conclusion A. Which are the solutions to control the harmful effects of oil prices increase B. What strategies are implementing in Colombia to deal with the effects of oil prices increase. Thesis statement Since the 1970s the world hadn`t experienced an oil increase like the one that is happening these days where many countries are concerned about the effects that this phenomenon can bring to their economies. As an oil exporting country, Colombia has to deal with a lot of challenge in order to transform all the revenues from petroleum into benefits to their society. However there are some effects that can bring some instability to this small economy, especially the one that international markets create a speculative bubble which can end in the Dutch disease. ‘The Dutch disease is a major market failure originating in the existence of cheap and abundant natural or human resources that keep the currency of a country overvalued...

Words: 2669 - Pages: 11

Premium Essay

Oil Prices in India

...The Oil Ministers of 12 member states of Organization of the Petroleum Exporting Countries (OPEC) concluded their meeting in Vienna on November 27 by deciding to continue with their three-year-old production quota of 30 million barrels per day (mbpd). Thus, they calculatingly ignored nearly one mbpd oversupply in the global oil market which has pushed the crude prices down by over 30 per cent since June 2014. The global oil glut, in turn, has been caused by a number of factors which include OPEC’s own overproduction, rising non-OPEC production (particularly by the U.S.-based “Shale Revolutionaries”) and lower demand from China and Europe. By declining to cut their output to shore up the prices, OPEC in general, and Saudi Arabia in particular, have refused to play the role of global “swing producer.” As most factors responsible for the current global demand-supply disequilibrium are systemic in nature, the world faces prospects for relatively bearish oil prices over the foreseeable future. Indeed, the prices have continued to fall with the Indian basket touching $72.51/barrel on November 27 — a decline of nearly $9 from the average during the first fortnight of the month. As the world’s fourth largest importer of crude, India can afford to exult at this precipitous crude price decline. Still, given the strategic importance of this development, a more comprehensive analysis is desirable. A virtuous cycle in the economy From the limited perspective of India’s consumer economy...

Words: 1017 - Pages: 5

Premium Essay

Oil and Gas Prices

...Oil and Gas 2 There are many issues that cause the cost of oil and gas to increase. The main contributing issue to the increasing cost of oil and gas is supply and demand, when demand is greater than supply, the price of oil and gas will increase. The factors that affect supply include increased demand, problems with refineries and pipelines, and disruption to supply or threat of disruption to supply. With the increased demand for oil in the United States and other countries such as India and China; the extra demand for oil has put enormous pressure on available oil reserves. The Energy Information Administration stated, “If refinery or pipeline and/or reductions in imports cause supplies to decline unexpectedly, gasoline inventories (stocks) may drop rapidly. This may cause wholesalers to bid higher for available supply over concern that future supplies may not be adequate” (Energy Information Administration, 2008, para. 9). With this in mind, the other underlying factors that affect supply are disruption to supply or threat of disruption to supply along with The Organization of Petroleum Exporting Countries (OPEC). The Organization of Petroleum Exporting Countries is an organization of oil producing countries which produces over 40% of the world’s crude oil and has two-thirds of the world’s oil reserves. This organization was formed in 1960 to regulate the supply of oil and to some extent, the price of oil. The organization includes Algeria, Indonesia...

Words: 969 - Pages: 4

Premium Essay

Falling Oil Prices

...Written Assignment: Falling Oil Prices On the graph I constructed above, I demonstrate how the effects of production and other countries, have impacted the prices of oil to drastically fall from an equilibrium price and quantity of $99.00 a barrel to the low cost of $49.00. My determinants include, the decrease in production costs, China’s recession, the U.S. becoming one of the top oil producers worldwide, and finally the increase of alternative energy efficient vehicles. Over the last couple of years, many new discoveries and technological advances have all led to the decrease in production costs of oil. With that being said the supply of oil has gone up and prices are decreasing rapidly. One source talks about new drilling techniques that are opening up fields of oil in the western United States that used to be out of our reach. All of the companies investing their money into these technological advances expect to achieve an increase in production by at least twenty percent (Fahey, Jonathan.) Another big contributor to the vast drop of prices is China’s recession. The crash of their market has created a chain of impact worldwide. Oil is what keeps the economy going and China being one of the biggest oil consumers was what kept the economy afloat (Carr, Michael.) Now with their recession going on, we are impacted because of the value of the dollar going down and decrease in demand. In similar news, the prices of oil drop are also due to the U.S. becoming one...

Words: 1029 - Pages: 5

Premium Essay

Oil and Gas Prices

...Oil and Gas Prices Darlene Dant COM 150 In August 2006 the American national average for a gallon of gas was $3.09. Gas prices hit an all time high in July 2008 with a national average of $4.12 per gallon. By December 2008 the national average for a gallon of gas was a mere $1.61 (GasBuddy, 2009). Due to the affect that supply and demand has in combination with state and federal taxes, America has seen significant fluctuations in gas prices. As people say, “What goes up must come down” and, in the oil and gas industry the opposite is also true, “What goes down must come up”. Fuel costs are affected by the world’s oil supply. The Organization of the Petroleum Exporting Countries (OPEC) consists of 12 members from various countries, who are the main suppliers of the world’s oil (OPEC, 2009). According to the Energy Information Administration (EIA [2009]), America gets the majority of its oil from five countries: Canada, Venezuela, Mexico, and Saudi Arabia. There are different grades, or qualities, of crude oil. Two of the most popular grades are: light-sweet crude oil (better grade) and heavy-sour crude oil (lesser grade). Depending on where the oil is coming from, it may be of a better, or lesser, grade compared to that of another country. The most desirable crude oil is light-sweet crude oil. While easily obtained in the past, light-sweet crude oil is becoming less available, causing an increase in price (Wagner, 2008). While light-sweet crude oil may have...

Words: 1816 - Pages: 8

Premium Essay

How Oil Prices Are Established

...How Oil Prices are Established  Did you realize that at our current consumption of crude oil and at our current status of known reserves, we have approximately 40 years of reserves remaining? This is a startling fact when we take into account all the products that are produced from refined crude oil or from its by-products. Many people are aware of the price increases they feel at the gas pump, but has anyone ever considered the cost or investment put forth in finding new reserves? Under the right conditions, oil would sometimes seep up to the surface, but in our times, the search for new reserves is more costly and dangerous.  When considering how the price of crude oil and its by-products are determined, one must first look at the quantities available and the amount required by its users. Crude oil by itself is not a valuable product. The products that are refined from crude oil are where the value lies. Crude oil is refined down into such products as gasoline, diesel fuel, jet fuel, propane and various flammable gasses, perfumes and insecticides. Some refined products from crude oil are also used as feed stocks in the production of other products like animal feed, plastics and other household items. With the expanding economies of various countries like China and Russia (Brown and Virmani 2007), the demand for these products has risen dramatically over the past four to five years. Many nations classified as third world countries are also increasing their need. Unfortunately...

Words: 1392 - Pages: 6

Premium Essay

Oil Price Fluctuation and the Economy

...Research Proposal Impact of Oil Price Fluctuation on the Macro Economy Von Lamey Eastern New Mexico University December 3, 2013 Table of Contents Introduction……………………………………………………………………3 Review of Literature…………………………………………………………..5 Theory…………………………………………………………………………15 Application…………………………………………………………………….18 Summary & Conclusion……………………………………………………….20 Tables…………………………………………………………………………..22 Bibliography……………………………………………………………………24 1. Introduction Oil price fluctuations have affected the people and economies of the U.S. for most of the twentieth century. The commodity has seen minor changes and major fluctuations during this period. Major price changes within a short timeframe are called shocks. The research I propose will attempt to answer the question: What is the impact of changing oil prices on the macro economy of a country? Research has demonstrated oil price fluctuations do impact economies as well as supply of and demand for the commodity. This influence on macroeconomic activity generated symmetric movement between price and many macroeconomic indices in the 1970's. However, after 1982, macroeconomic indices did not demonstrate the same proclivity to react to oil price movement. Information spreads almost instantly with the emergence of the internet. This expedient movement of news has led to an evolving trend of speculation which may or may not be beneficial to commodity pricing. One may infer that the recent prevalence...

Words: 7099 - Pages: 29

Premium Essay

Why Oil Prices Fall?

...Oil price: traders and CEO’s clash over recovery prospects OBJECTİVES *What is the OPEC and it’s main goals? *Why are oil prices falling? *Who are the winners and who are the loosers? *Consequences of falling prices. *Important comments about oil price. What is the OPEC and it’s main goals? Organization of the Petroleum Exporting Countries (OPEC), a permanent, international organization headquartered in Vienna, Austria, was established in Baghdad, Iraq on 10–14 September 1960.Its mandate is to "coordinate and unify the petroleum policies" of its members and to "ensure the stabilization of oil markets in order to secure an efficient, economic and regular supply of petroleum to consumers, a steady income to producers, and a fair return on capital for those investing in the petroleum industry. In 2014 OPEC comprised twelve members: Algeria, Angola, Ecuador, Iran, Iraq, Kuwait, Libya, Nigeria, Qatar, Saudi Arabia, the United Arab Emirates and Venezuela. Why are oil prices falling? Here are four main factors driving oil’s sharp decline: 1. US oil production is booming Exploding US oil production has transformed one of the world’s leading oil consumers into one of its leading producers as well – in fact, North Dakota alone produces a million barrels of oil per day. US production now rivals oil giants Saudi Arabia and Russia, largely thanks to innovative drilling that has unlocked oil and natural gas deposits trapped in shale rock. US production has...

Words: 1530 - Pages: 7

Premium Essay

Reasons for Upheavals in Oil Prices

...Upheavals In Oil Prices In The World. Robert Bourgot, Florida Institute of Technology. The Causes of Oil Price Upheavals in the World. Introduction: At this stage of civilization, oil has become an essential commodity for all countries of the world. A lack of it means that the pace of development of any country will be adversely affected (Carollo, 2011). The world has in recent times experienced severe fluctuations in oil prices. Since the start of the twenty-first century, oil prices have shown a significant increase, with the most significant rise starting around the year 2002. As mentioned earlier, these upheavals in oil prices have adverse effects on the economy of each nation. An increase in the oil price raises production costs, which in turn leads to cost-push inflation. This brings unemployment, a decrease in supply, and a fall in people's purchasing power, among many other effects (Brabec, 2012). In order to address these issues, it is imperative that the roots of the problem be identified. This is the central focus of this paper: it identifies and explains the possible causes of oil price upheavals in the world. According to Maher (2012), the causes of fluctuating oil prices can be grouped into demand-side, supply-side, economic, and environmental-protection factors. This section looks at the causes of oil price upheavals from each of these points of view. The demand for crude oil has been...

Words: 1550 - Pages: 7

Premium Essay

Oil Prices’ Impact on Economic Growth

...Oil Prices’ Impact on Economic Growth. Since 2008, the U.S. has seen one of the slowest recoveries from a recession since the Great Depression. Never before since World War II had inflation-adjusted GDP still been below its pre-recession level, or the unemployment rate still above its pre-recession level, four years after a recession began. Had our economy grown and created jobs in this recovery at the average rate of the 10 previous postwar recoveries, GDP per person would be $4,528 higher and 14 million more Americans would be working today (Gramm and Solon). Some economists claim that the weakness of the recovery was due to the depth of the recession and to underestimates of the severity of the economic disaster. Another speculation is that recovery from a financial crisis is, by its very nature, much slower and more difficult. A recession is generally defined as a decline in GDP over two consecutive quarters, that is, a six-month period. So what can keep GDP, the value of all final goods and services produced, from returning to a healthy state? Oil is a primary source of the energy we use every day, and the more oil we use, the faster the economy grows. Over the last forty years, a 1 percent increase in world oil consumption has been associated with roughly a 2 percent increase in GDP. That means that if GDP was increasing 4 percent a year, as it was before 2008, oil consumption was increasing by 2 percent a year (Anandan, Ramaswamy, and Sridhar); a back-of-the-envelope sketch of this relationship follows this entry. Statistically, in 2006 the average oil price was $67.65 per barrel. In 2008, when the recession hit, the average oil price was...

Words: 947 - Pages: 4
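
The growth rule of thumb quoted in this preview is easy to make concrete. Below is a minimal Python sketch of that back-of-the-envelope relationship, assuming the essay's 2-to-1 ratio of GDP growth to oil-consumption growth holds exactly; the constant and function names are invented for this illustration, and the only numeric input is the preview's own example.

```python
# Back-of-the-envelope version of the growth relationship described in the
# preview above: GDP growth is assumed to run at roughly twice the growth
# rate of world oil consumption.  The 2-to-1 ratio comes from the essay's
# cited figure; everything else here is illustrative.

GDP_TO_OIL_RATIO = 2.0  # % GDP growth per % oil-consumption growth (assumed)

def implied_oil_consumption_growth(gdp_growth_pct: float) -> float:
    """Oil-consumption growth rate consistent with a given GDP growth rate."""
    return gdp_growth_pct / GDP_TO_OIL_RATIO

if __name__ == "__main__":
    # The preview's own example: 4% annual GDP growth before 2008
    print(f"{implied_oil_consumption_growth(4.0):.1f}% oil-consumption growth per year")
```

On these assumptions the script reproduces the preview's figure: 4 percent annual GDP growth implies 2 percent annual growth in oil consumption.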

Premium Essay

How to Deal with High Oil Prices?

...Deal with High Oil Prices? Currently we are facing a very serious issue: oil prices are rising rapidly. Nowadays the oil price is soaring, which is a major drag on the growth of our economy, because high oil prices generally have a strongly negative impact on global economic growth and contribute to the continuous depreciation of the Indian rupee. So we will discuss how to deal with high oil prices and the methods to overcome them. India has a scarcity of natural resources, due to which we have to import petroleum from other countries. As the demand for petrol and oil products increases day by day and the rate of oil consumption remains high, prices consequently tend to rise. The government is always blamed for ever-increasing prices, but we the people are also partly responsible for high oil prices. It is our duty to see how we can contribute towards a reduction in oil prices. We must keep in mind that little oil is extracted in our country, so we should preserve our resources and use them in a sustainable manner. In my opinion, the only solution we are left with to deal with high oil prices is to reduce the consumption of oil: decrease in consumption = decrease in demand = decrease in price. Now the question arises: how do we reduce the consumption of oil? It can only be reduced by the combined efforts of both the government and the common people. Following are the measures to be taken to reduce the consumption of oil, which in turn...

Words: 583 - Pages: 3

Premium Essay

How Crude Oil Prices Affect Gas Prices

...How Crude Oil Prices Affect Gas Prices. Crude oil prices make up 71% of the price of gasoline; the rest of what you pay at the pump depends on refinery and distribution costs, corporate profits, and federal taxes (a sketch of this crude-to-pump arithmetic follows this entry). Usually these costs remain stable, so that day-to-day changes in the price of gasoline accurately reflect oil price fluctuations. (Source: EIA, FAQ, December 6, 2013.) It usually takes about six weeks for oil price changes to work their way through the distribution system to the gas pump. Oil prices are a little more volatile than gas prices, which means oil prices might rise higher, and fall farther, than gas prices. Historical Oil and Gas Prices: oil and gas prices have been especially volatile since the 2008 financial crash. Here's a look at their peaks and valleys, and what caused the price swings. 2014 - Prices remained around $100/barrel, because the U.S. has plenty of shale oil. 2013 - Oil rose swiftly to $118.90/barrel on February 8, sending gas prices to $3.85 by February 25; prices had started rising earlier than normal thanks to Iran's threatening war games near the Strait of Hormuz. What Causes High Oil Prices? Like most of the things you buy, oil prices are affected by supply and demand. More demand, as in the summer driving season, drives prices higher. There is usually less demand in the winter, since only the Northeast U.S. uses heating oil. However, oil prices are also affected by oil price futures, which are traded on the commodities...

Words: 2488 - Pages: 10
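
The 71% figure in this preview implies a quick conversion from a crude price to an approximate pump price. The Python sketch below works through that arithmetic; the 42-gallon barrel is the standard conversion, but the crude share and sample prices are illustrative assumptions taken from the previews, and the function name is invented for this example.

```python
# Rough gasoline-price decomposition following the preview above: crude oil
# is assumed to account for 71% of the retail pump price, with refining,
# distribution, profits, and taxes making up the remainder.  Inputs are
# illustrative, not EIA data.

GALLONS_PER_BARREL = 42      # standard volume of a barrel of crude
CRUDE_SHARE_OF_PUMP = 0.71   # crude's assumed share of the retail price

def implied_pump_price(crude_usd_per_barrel: float) -> float:
    """Retail gasoline price per gallon implied by a crude oil price."""
    crude_cost_per_gallon = crude_usd_per_barrel / GALLONS_PER_BARREL
    return crude_cost_per_gallon / CRUDE_SHARE_OF_PUMP

if __name__ == "__main__":
    for crude in (100.0, 118.90):  # barrel prices mentioned in the preview
        print(f"crude ${crude:.2f}/bbl -> pump ~${implied_pump_price(crude):.2f}/gal")
```

At $118.90 a barrel the sketch gives a pump price just under $4.00 a gallon, close to the $3.85 the preview reports for February 2013.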

Premium Essay

Effect of Oil Prices on Inflation in Kenya

...on GDP and on the masses. Why are these problems so visible in a country like Kenya? Because so many other problems compound, multiplying the negative impact on GDP and on other macroeconomic variables such as inflation. Oil and other petroleum products are scarce commodities in the world. Like the prices of other commodities, the price of crude oil experiences wide swings in times of shortage or oversupply. The crude oil price cycle may extend over several years, responding to changes in demand as well as to OPEC and non-OPEC supply. Throughout much of the twentieth century, the price of U.S. petroleum was heavily regulated through production or price controls. In the post-World War II era, U.S. oil prices at the wellhead averaged $28.52 per barrel, adjusted for inflation to 2010 dollars (the deflation arithmetic behind such figures is sketched after this entry). In the absence of price controls, the U.S. price would have tracked the world price, which averaged near $30.54. Over the same postwar period, the median of the domestic and inflation-adjusted world prices of crude oil was $20.53 in 2010 prices; that is, from 1947 to 2010, oil prices exceeded $20.53 per barrel only 50 percent of the time. Until March 28, 2000, when OPEC adopted the $22–$28 price band for the OPEC basket of crude, real oil prices...

Words: 11224 - Pages: 45
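
The constant-dollar figures in this preview rest on a standard deflation step: multiply a nominal price by the ratio of the base-year price index to the observation-year index. Below is a minimal Python sketch of that adjustment; the CPI values are rounded, illustrative numbers rather than official BLS figures, and the example wellhead price is invented.

```python
# Minimal sketch of the inflation adjustment used in the preview above:
# converting a nominal oil price into constant 2010 dollars via a price
# index.  Index values are rounded approximations for illustration only.

CPI = {1947: 22.3, 1980: 82.4, 2010: 218.1}  # assumed CPI levels (1982-84 = 100)

def to_2010_dollars(nominal_price: float, year: int) -> float:
    """Deflate a nominal price observed in `year` into 2010 dollars."""
    return nominal_price * CPI[2010] / CPI[year]

if __name__ == "__main__":
    # e.g. a hypothetical $3.00/bbl wellhead price observed in 1947
    print(f"${to_2010_dollars(3.00, 1947):.2f} per barrel in 2010 dollars")
```

The same one-line ratio underlies the preview's $28.52 and $30.54 averages "in 2010 dollars".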