Lecture Advanced Econometrics (Part II) - Chapter 7: Generalized linear regression model
Lecture "Advanced Econometrics (Part II) - Chapter 7: Generalized linear regression model" presents the following content: the model, properties of OLS estimators, White's heteroskedasticity-consistent estimator, and generalized least squares estimation.
- Advanced Econometrics Chapter 7: Generalized Linear Regression Model Chapter 7 GENERALIZED LINEAR REGRESSION MODEL I. MODEL: Our basic model: Y = X.β + ε with ε ~ N [0, σ 2 I ] We will now generalize the specification of the error term. E(ε) = 0, E(εε') = σ 2 Ω = Σ . n ×n This allows for one or both of: 1. Heteroskedasticity. 2. Autocorrelation. The model now is: (1) Y = X β + ε n ×k (2) X is non-stochastic and Rank ( X ) = k . (3) E(ε) = 0 n ×1 (4) E(εε') = Σ = σ ε2 Ω n× n n× n Heteroskedasticity case: σ 12 0 0 0 σ 22 0 Σ= 0 0 σ n2 Nam T. Hoang University of New England - Australia 1 University of Economics - HCMC - Vietnam
Autocorrelation case:

Σ = σ_ε² · [ρ_{|t−s|}]  (an n×n matrix with 1 on the diagonal, ρ₁ on the first off-diagonals, ..., ρ_{n−1} in the corners)

where ρ_i = Corr(ε_t, ε_{t−i}) is the correlation between errors that are i periods apart.

II. PROPERTIES OF OLS ESTIMATORS

1. Unbiasedness:

β̂ = (X'X)⁻¹X'Y = (X'X)⁻¹X'(Xβ + ε) = β + (X'X)⁻¹X'ε
E(β̂) = β + (X'X)⁻¹X'E(ε) = β

so β̂ is still an unbiased estimator.

2. Variance:

VarCov(β̂) = E[(β̂ − β)(β̂ − β)']
           = E[((X'X)⁻¹X'ε)((X'X)⁻¹X'ε)']
           = (X'X)⁻¹X'E(εε')X(X'X)⁻¹
           = (X'X)⁻¹X'(σ²Ω)X(X'X)⁻¹
           ≠ σ²(X'X)⁻¹

so the standard formula for VarCov(β̂) no longer holds, and the usual σ̂_β̂ is a biased estimator of the true standard error. Hence

β̂ ~ N[β, (X'X)⁻¹X'(σ²Ω)X(X'X)⁻¹]

and the usual OLS output will be misleading: the standard errors, t-statistics, etc. will be based on σ̂_ε²(X'X)⁻¹, not on the correct formula.

3. OLS estimators are no longer best (they are inefficient).
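A small numerical sketch of point 2 above (all numbers illustrative): with a known Σ, the correct sandwich covariance (X'X)⁻¹X'ΣX(X'X)⁻¹ differs from the naive σ²(X'X)⁻¹ that standard OLS output is based on.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
X = np.column_stack([np.ones(n), rng.normal(size=n)])   # intercept + one regressor
Sigma = np.diag(np.linspace(0.5, 5.0, n))               # heteroskedastic error covariance

XtX_inv = np.linalg.inv(X.T @ X)
V_true = XtX_inv @ X.T @ Sigma @ X @ XtX_inv            # correct sandwich formula
# Naive formula, using the average error variance as an illustrative stand-in for sigma^2:
V_naive = Sigma.diagonal().mean() * XtX_inv

# The two matrices generally differ, so the usual standard errors are wrong here.
```

Running this shows V_true ≠ V_naive, which is exactly why the next section builds a consistent estimator of the sandwich matrix.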
Note: for non-stochastic X, we care about the efficiency of β̂, because we already know that as n increases, Var(β̂_j) decreases, so plim β̂ = β and β̂ is consistent.

4. If X is stochastic:
- OLS estimators are still consistent (when E(ε|X) = 0).
- IV estimators are still consistent (when E(ε|X) ≠ 0).
- The usual covariance matrix estimator of VarCov(β̂), namely σ̂_ε²(X'X)⁻¹, will be inconsistent (as n → ∞) for the true VarCov(β̂).

We need to know how to deal with these issues. This will lead us to some generalized estimators.

III. WHITE'S HETEROSKEDASTICITY-CONSISTENT ESTIMATOR OF VarCov(β̂)

(Also called the robust estimator of VarCov(β̂).)

If we knew σ²Ω, then the estimator of VarCov(β̂) would be:

V = (X'X)⁻¹ X'(σ²Ω)X (X'X)⁻¹
  = (1/n) [(1/n)X'X]⁻¹ [(1/n)X'(σ²Ω)X] [(1/n)X'X]⁻¹
  = (1/n) [(1/n)X'X]⁻¹ [(1/n)X'ΣX] [(1/n)X'X]⁻¹

If Σ is unknown, we need a consistent estimator of (1/n)X'ΣX. (Note that the number of unknowns in Σ grows one-for-one with n, but X'ΣX is a k×k matrix: it does not grow with n.)

Let:

Σ* = (1/n) X'ΣX = (1/n) ∑_{i=1}^{n} ∑_{j=1}^{n} σ_ij X_i X_j'

where X_i (k×1) is the vector of regressors for observation i.
In the case of heteroskedasticity, Σ = diag(σ₁², σ₂², ..., σ_n²), so:

Σ* = (1/n) ∑_{i=1}^{n} σ_i² X_i X_i'

White (1980) showed that if

Σ₀ = (1/n) ∑_{i=1}^{n} e_i² X_i X_i'

where the e_i are OLS residuals, then plim(Σ₀) = plim(Σ*). So we can estimate by OLS and then a consistent estimator of V is:

V̂ = (1/n) [(1/n)X'X]⁻¹ [(1/n) ∑_{i=1}^{n} e_i² X_i X_i'] [(1/n)X'X]⁻¹
   = n (X'X)⁻¹ Σ₀ (X'X)⁻¹

V̂ is a consistent estimator of V, so White's estimator of VarCov(β̂) is:

VarCov(β̂) = (X'X)⁻¹ X'Σ̂X (X'X)⁻¹ = V̂

where Σ̂ = diag(e₁², e₂², ..., e_n²) (note that Σ₀ = (1/n)X'Σ̂X).

V̂ is consistent for V = n(X'X)⁻¹σ²Ω(X'X)⁻¹ regardless of the (unknown) form of the heteroskedasticity, but it covers only the case of heteroskedasticity. Newey-West produced a corresponding consistent estimator of V for the case of autocorrelation and/or heteroskedasticity.

White's estimator just modifies the covariance matrix estimator, not β̂. The t-statistics, F-statistics, etc. will be modified, but only in a manner that is appropriate asymptotically. So if we have heteroskedasticity or autocorrelation, whether we modify the covariance matrix estimator or not, the usual t-statistics will be unreliable in finite samples.
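White's estimator can be computed in a few lines: run OLS, then plug the squared residuals e_i² into the sandwich. This sketch simulates heteroskedastic data (the data-generating choices are illustrative, not from the lecture):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta = np.array([1.0, 2.0])
eps = rng.normal(size=n) * (0.5 + np.abs(X[:, 1]))   # error variance depends on X
y = X @ beta + eps

XtX_inv = np.linalg.inv(X.T @ X)
b_ols = XtX_inv @ X.T @ y
e = y - X @ b_ols                                    # OLS residuals

meat = (X * e[:, None] ** 2).T @ X                   # X' diag(e_i^2) X = n * Sigma_0
V_white = XtX_inv @ meat @ XtX_inv                   # White's robust VarCov(beta_hat)
robust_se = np.sqrt(np.diag(V_white))
```

Note that only the covariance matrix changes; `b_ols` itself is untouched, exactly as the text says.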
(White's estimator of VarCov(β̂) is only useful when n is very large: as n → ∞, V̂ → VarCov(β̂).)

→ β̂ is still inefficient.
→ To obtain efficient estimators, use generalized least squares (GLS).

A good practical solution is to use White's adjustment, and then use the Wald test rather than the F-test for exact linear restrictions.

Now let's turn to the estimation of β, taking account of the full process for the error term.

IV. GENERALIZED LEAST SQUARES ESTIMATION (GLS)

The OLS estimator is inefficient in finite samples.

1. Assume E(εε') = Σ (n×n) is known and positive definite.

Then there exist characteristic vectors C_j (n×1) and eigenvalues λ_j, j = 1, 2, ..., n, such that:

Σ C_j = λ_j C_j

Collect the characteristic vectors into the n×n matrix C = [C₁ C₂ ... C_n]. Then:

C'ΣC = Λ = diag(λ₁, λ₂, ..., λ_n),   Λ^{1/2} = diag(√λ₁, √λ₂, ..., √λ_n)

Since C'ΣC = Λ = (Λ^{1/2})'(Λ^{1/2}):

(Λ^{-1/2})' C'ΣC Λ^{-1/2} = (Λ^{-1/2})(Λ^{1/2})(Λ^{1/2})(Λ^{-1/2}) = I

Define H = Λ^{-1/2} C'. Then:

HΣH' = I  →  Σ = H⁻¹I(H')⁻¹ = H⁻¹(H')⁻¹  →  Σ⁻¹ = H'H
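The decomposition above can be checked numerically: eigendecompose a small made-up positive definite Σ, form H = Λ^{-1/2}C', and verify that HΣH' = I and Σ⁻¹ = H'H.

```python
import numpy as np

Sigma = np.array([[2.0, 0.5, 0.0],
                  [0.5, 1.0, 0.3],
                  [0.0, 0.3, 1.5]])     # symmetric positive definite (illustrative)

lam, C = np.linalg.eigh(Sigma)          # Sigma C_j = lambda_j C_j
H = np.diag(lam ** -0.5) @ C.T          # H = Lambda^{-1/2} C'

I_check = H @ Sigma @ H.T               # should equal the identity matrix
Sigma_inv = H.T @ H                     # should equal inv(Sigma)
```

This is exactly the H used in the next step to transform the model.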
Our model: Y = Xβ + ε. Pre-multiply by H:

HY = HXβ + Hε,  i.e.  Y* = X*β + ε*

ε* satisfies all the classical assumptions, because:

E(ε*ε*') = E[H(εε')H'] = HΣH' = I

Since the transformed model meets the classical assumptions, applying OLS to the (Y*, X*) data yields the BLUE:

β̂_GLS = (X*'X*)⁻¹X*'Y* = (X'H'HX)⁻¹X'H'HY = (X'Σ⁻¹X)⁻¹X'Σ⁻¹Y

Moreover:

VarCov(β̂_GLS) = (X*'X*)⁻¹X*'E(ε*ε*')X*(X*'X*)⁻¹ = (X*'X*)⁻¹ = (X'Σ⁻¹X)⁻¹

→ β̂_GLS ~ N[β, (X'Σ⁻¹X)⁻¹]

Note: the GLS estimator is just OLS applied to the transformed model, which satisfies all classical assumptions, so the Gauss-Markov theorem applies: β̂_GLS is the BLUE of β, with E(β̂_GLS) = β. Hence β̂_OLS must be inefficient in this case:

Var(β̂_j,GLS) ≤ Var(β̂_j,OLS)
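Both routes to the GLS estimator (the closed form (X'Σ⁻¹X)⁻¹X'Σ⁻¹Y and OLS on the transformed data HY, HX) give identical answers. A sketch with a known diagonal Σ (all simulation choices illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100
X = np.column_stack([np.ones(n), rng.normal(size=n)])
Sigma = np.diag(np.linspace(1.0, 4.0, n))            # known error covariance
beta = np.array([1.0, -0.5])
eps = rng.normal(size=n) * np.sqrt(Sigma.diagonal())
y = X @ beta + eps

# Route 1: closed form beta_GLS = (X' Sigma^{-1} X)^{-1} X' Sigma^{-1} y
Si = np.linalg.inv(Sigma)
b_gls = np.linalg.solve(X.T @ Si @ X, X.T @ Si @ y)
V_gls = np.linalg.inv(X.T @ Si @ X)                  # VarCov(beta_GLS)

# Route 2: premultiply by H with H'H = Sigma^{-1}, then run OLS on (HX, Hy)
H = np.diag(1.0 / np.sqrt(Sigma.diagonal()))
Xs, ys = H @ X, H @ y
b_ols_star = np.linalg.solve(Xs.T @ Xs, Xs.T @ ys)
```

The agreement of `b_gls` and `b_ols_star` is the algebraic identity derived above, not a coincidence of this sample.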
Example (heteroskedasticity with known variances):

Σ = diag(σ₁², σ₂², ..., σ_n²)  →  Σ⁻¹ = diag(1/σ₁², 1/σ₂², ..., 1/σ_n²), known.

From H'H = Σ⁻¹ we can take H = diag(1/σ₁, 1/σ₂, ..., 1/σ_n), so:

Y* = HY = (Y₁/σ₁, Y₂/σ₂, ..., Y_n/σ_n)'
X* = HX, with row i equal to (1/σ_i, X_i2/σ_i, ..., X_ik/σ_i)

The transformed model has each observation divided by σ_i:

Y_i/σ_i = β₁(1/σ_i) + β₂(X_i2/σ_i) + ... + β_k(X_ik/σ_i) + ε_i/σ_i

Applying OLS to this transformed equation gives "weighted least squares" (WLS).

Let β̂ be the GLS estimator, and let:

ε̂ = Y* − X*β̂_GLS,   σ̂² = ε̂'ε̂/(n − k)

Then to test H₀: Rβ = q (F/Wald test, with r restrictions):

F = [Rβ̂ − q]'[R(X*'X*)⁻¹R']⁻¹[Rβ̂ − q] / (r·σ̂²)  ~  F(r, n−k) if H₀ is true.
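A sketch of this WLS example with the F statistic above; the sample, the σ_i, and the restriction tested (H₀: β₂ = 0) are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)
n, k = 60, 2
sig = np.linspace(1.0, 3.0, n)                   # known sigma_i
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 0.0]) + rng.normal(size=n) * sig   # H0 is true here

# Transformed ("starred") data: divide each observation by sigma_i
Xs, ys = X / sig[:, None], y / sig
XtX_inv = np.linalg.inv(Xs.T @ Xs)
b = XtX_inv @ Xs.T @ ys                          # WLS = GLS estimator
e = ys - Xs @ b
s2 = float(e @ e) / (n - k)                      # sigma_hat^2

R, q, r = np.array([[0.0, 1.0]]), np.array([0.0]), 1   # test H0: beta_2 = 0
d = R @ b - q
F = float(d @ np.linalg.inv(R @ XtX_inv @ R.T) @ d) / (r * s2)
# Under H0, F ~ F(r, n - k); compare with the appropriate critical value.
```

Since (X*'X*)⁻¹ = (X'Σ⁻¹X)⁻¹, this is the same F statistic as in the formula above.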
Equivalently:

F = [(ε̂_c'ε̂_c − ε̂'ε̂)/r] / [ε̂'ε̂/(n − k)]  ~  F(r, n−k)

where ε̂_c = Y* − X*β̂_c,GLS and

β̂_c,GLS = β̂_GLS − (X'Σ⁻¹X)⁻¹R'[R(X'Σ⁻¹X)⁻¹R']⁻¹(Rβ̂_GLS − q)

is the "constrained" GLS estimator of β.

2. Feasible GLS (FGLS) estimation:

In practice, of course, Σ is usually unknown, and so β̂_GLS cannot be constructed: it is not feasible. The obvious solution is to estimate Σ with some Σ̂, then construct:

β̂_FGLS = (X'Σ̂⁻¹X)⁻¹X'Σ̂⁻¹Y

A practical issue: Σ is n×n, so it has n(n+1)/2 distinct parameters (allowing for symmetry), but we only have n observations → we need to constrain Σ. Typically Σ = Σ(θ), where θ contains a small number of parameters.

Example (heteroskedasticity): Var(ε_i) = σ²(θ₁ + θ₂Z_i), so

Σ = σ² · diag(θ₁ + θ₂Z₁, θ₁ + θ₂Z₂, ..., θ₁ + θ₂Z_n)

and just two parameters need to be estimated to form Σ̂.

Example (serial correlation):

Σ = [ρ^{|t−s|}] = Σ(ρ)  (an n×n matrix with 1 on the diagonal, ρ on the first off-diagonals, ..., ρ^{n−1} in the corners)

so only one parameter needs to be estimated.

• If Σ̂ is consistent for Σ, then β̂_FGLS will be asymptotically efficient for β.
• Of course, to apply FGLS we want to know the form of Σ → construct tests.
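The heteroskedasticity example suggests a simple two-step FGLS recipe, sketched below under the assumed skedastic form Var(ε_i) = θ₁ + θ₂Z_i (σ² absorbed into θ): (1) run OLS, (2) regress the squared residuals on (1, Z_i) to estimate θ, (3) run GLS with the fitted Σ̂. All data-generating choices are illustrative.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 300
z = rng.uniform(1.0, 5.0, size=n)
X = np.column_stack([np.ones(n), rng.normal(size=n)])
theta1, theta2 = 0.5, 1.0
y = X @ np.array([2.0, 1.0]) + rng.normal(size=n) * np.sqrt(theta1 + theta2 * z)

# Step 1: OLS and its residuals
b_ols = np.linalg.solve(X.T @ X, X.T @ y)
e2 = (y - X @ b_ols) ** 2

# Step 2: estimate theta = (theta1, theta2) by regressing e_i^2 on (1, z_i)
Z = np.column_stack([np.ones(n), z])
theta_hat = np.linalg.solve(Z.T @ Z, Z.T @ e2)

# Step 3: FGLS with Sigma_hat = diag(theta1_hat + theta2_hat * z_i)
var_hat = np.clip(Z @ theta_hat, 1e-6, None)     # fitted variances, kept positive
Si = np.diag(1.0 / var_hat)                      # Sigma_hat^{-1}
b_fgls = np.linalg.solve(X.T @ Si @ X, X.T @ Si @ y)
```

If θ̂ is consistent for θ, then Σ̂ is consistent for Σ and `b_fgls` inherits the asymptotic efficiency claimed in the bullet points above.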