Lecture Advanced Econometrics (Part II) - Chapter 2: Hypothesis Testing

Nam T. Hoang, UNE Business School, University of New England

Lecture "Advanced Econometrics (Part II) - Chapter 2: Hypothesis testing" presentation of content: Maximum likelihood estimators, wald test, likelihood ratio test, lagrange multiplier test, application of tests procedures to linear models, hausman specification test, power and size of tests.


Chapter 2: HYPOTHESIS TESTING

I. MAXIMUM LIKELIHOOD ESTIMATORS:

The likelihood and log-likelihood of a sample $Z_1, \dots, Z_n$ are

$\ell(\theta) = \prod_{i=1}^{n} f(Z_i, \theta) \;\;\Rightarrow\;\; \hat{\theta}_{MLE} = \arg\max_\theta \ell(\theta)$

$L(\theta) = \ln \ell(\theta) = \sum_{i=1}^{n} \ln f(Z_i, \theta)$

• Asymptotic normality: solve $\partial L/\partial\theta = 0$ for $\hat{\theta}_{MLE}$. Then

$\hat{\theta}_{MLE} \sim N\left(\theta, \left[-E\left(\frac{\partial^2 L}{\partial\theta\,\partial\theta'}\right)\right]^{-1}\right)$

• The information matrix, for a $(k \times 1)$ parameter vector $\theta$, is

$I(\theta) = E\left[\left(\frac{\partial L}{\partial\theta}\right)\left(\frac{\partial L}{\partial\theta}\right)'\right] = -E\left[\frac{\partial^2 L}{\partial\theta\,\partial\theta'}\right]$

where

$\theta = \begin{pmatrix} \theta_1 \\ \vdots \\ \theta_k \end{pmatrix}, \qquad \frac{\partial L}{\partial\theta} = \begin{pmatrix} \partial L/\partial\theta_1 \\ \vdots \\ \partial L/\partial\theta_k \end{pmatrix}, \qquad \frac{\partial^2 L}{\partial\theta\,\partial\theta'} = \begin{pmatrix} \frac{\partial^2 L}{\partial\theta_1^2} & \frac{\partial^2 L}{\partial\theta_1\partial\theta_2} & \cdots & \frac{\partial^2 L}{\partial\theta_1\partial\theta_k} \\ \frac{\partial^2 L}{\partial\theta_2\partial\theta_1} & \frac{\partial^2 L}{\partial\theta_2^2} & \cdots & \frac{\partial^2 L}{\partial\theta_2\partial\theta_k} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{\partial^2 L}{\partial\theta_k\partial\theta_1} & \frac{\partial^2 L}{\partial\theta_k\partial\theta_2} & \cdots & \frac{\partial^2 L}{\partial\theta_k^2} \end{pmatrix}$

(A short numerical sketch of this machinery appears at the end of this section.)

• For the linear model: $Y_{(n\times 1)} = X_{(n\times k)}\,\beta_{(k\times 1)} + \varepsilon_{(n\times 1)}$.
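The following is a minimal numerical sketch, not part of the lecture, of the general MLE machinery above: a simulated i.i.d. normal sample, the log-likelihood $L(\theta) = \sum_i \ln f(Z_i,\theta)$, and an estimate of the asymptotic variance $[-E(\partial^2 L/\partial\theta\,\partial\theta')]^{-1}$ taken from the numerical inverse Hessian. The data, seed, and the $(\mu, \ln\sigma)$ parameterization are all assumptions made for the illustration.

```python
# Sketch: numerical MLE for an i.i.d. N(mu, sigma^2) sample (simulated data).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
z = rng.normal(loc=2.0, scale=1.5, size=500)     # assumed simulated sample

def neg_loglik(theta):
    mu, log_sigma = theta                        # parameterize sigma > 0 via its log
    sigma = np.exp(log_sigma)
    # negative of L(theta) = sum_i ln f(z_i, theta) for the normal density
    return -np.sum(-0.5*np.log(2*np.pi) - np.log(sigma)
                   - 0.5*((z - mu)/sigma)**2)

res = minimize(neg_loglik, x0=np.array([0.0, 0.0]), method="BFGS")
theta_hat = res.x                                # MLE of (mu, ln sigma)
var_hat = res.hess_inv                           # BFGS approximation of [-d2L/dtheta dtheta']^{-1}
print(theta_hat, np.sqrt(np.diag(var_hat)))      # estimates and rough asymptotic std errors
```

The inverse-Hessian matrix returned by BFGS is only an approximation of the information-matrix inverse, but it illustrates where the asymptotic standard errors come from.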
• Under $\varepsilon \sim N(0, \sigma^2 I)$, write $Y = X\hat{\beta} + e$. The log-likelihood is

$L(\beta, \sigma^2) = -\frac{n}{2}\ln 2\pi - \frac{n}{2}\ln\sigma^2 - \frac{1}{2\sigma^2}(Y - X\beta)'(Y - X\beta)$

• First-order conditions:

$\frac{\partial L}{\partial\beta} = -\frac{1}{\sigma^2}\left(-X'Y + X'X\beta\right) = 0$

$\frac{\partial L}{\partial\sigma^2} = -\frac{n}{2\sigma^2} + \frac{1}{2\sigma^4}(Y - X\beta)'(Y - X\beta) = 0$

which give

$\hat{\beta} = (X'X)^{-1}X'Y, \qquad \hat{\sigma}^2 = \frac{(Y - X\hat{\beta})'(Y - X\hat{\beta})}{n} = \frac{e'e}{n}, \qquad e = \begin{pmatrix} e_1 \\ e_2 \\ \vdots \\ e_n \end{pmatrix}$

$\left[-E\left(\frac{\partial^2 L}{\partial\theta\,\partial\theta'}\right)\right]^{-1} = \begin{pmatrix} \sigma^2 (X'X)^{-1} & 0 \\ 0 & \dfrac{2\sigma^4}{n} \end{pmatrix}$

• We consider the maximum likelihood estimator $\hat{\theta}$ and the hypothesis $c(\theta) = q$.

II. WALD TEST

• Let $\hat{\theta}$ be the vector of parameter estimates obtained without restrictions (the unrestricted MLE of $\theta$).

• We test the hypothesis $H_0: c(\theta) = q$.

• If the restriction is valid, then $c(\hat{\theta}) - q$ should be close to zero. We reject the hypothesis if this value is significantly different from zero.

• The Wald statistic is

$W = [c(\hat{\theta}) - q]'\left\{\mathrm{Var}[c(\hat{\theta}) - q]\right\}^{-1}[c(\hat{\theta}) - q]$

• Under $H_0: c(\theta) = q$, $W$ has a chi-squared distribution with degrees of freedom equal to the number of restrictions $J$ (i.e. the number of equations in $c(\hat{\theta}) - q = 0$):

$W \sim \chi^2_{[J]}$
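As a small illustration of the general Wald statistic just defined, the sketch below, which is not from the lecture, tests the single restriction $H_0: \mu = 2$ on a simulated $N(\mu, \sigma^2)$ sample, using $c(\theta) = \mu - 2$, $q = 0$, and $\mathrm{Var}[c(\hat{\theta})] = \hat{\sigma}^2/n$. The data and seed are assumptions.

```python
# Sketch: Wald statistic W = [c(th)-q]' {Var[c(th)-q]}^{-1} [c(th)-q] for one restriction.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(1)
z = rng.normal(loc=2.2, scale=1.5, size=400)   # assumed simulated sample

n = z.size
mu_hat = z.mean()                              # MLE of mu
sigma2_hat = ((z - mu_hat)**2).mean()          # MLE of sigma^2 (divides by n)

c = mu_hat - 2.0                               # c(theta_hat) - q with H0: mu = 2
var_c = sigma2_hat / n                         # estimated Var(mu_hat)
W = c * (1.0 / var_c) * c                      # J = 1 restriction
print(W, chi2.sf(W, df=1))                     # compare W with a chi-square(1)
```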
III. LIKELIHOOD RATIO TEST:

• $H_0: c(\theta) = q$

  - Let $\hat{\theta}_U$ be the maximum likelihood estimator of $\theta$ obtained without restrictions.
  - Let $\hat{\theta}_R$ be the MLE of $\theta$ with the restrictions imposed.
  - Let $\hat{L}_U$ and $\hat{L}_R$ be the likelihood functions evaluated at these two estimates.
  - The likelihood ratio is

$\lambda = \frac{\hat{L}_R}{\hat{L}_U}, \qquad 0 \le \lambda \le 1$

  - If the restriction $c(\theta) = q$ is valid, then $\hat{L}_R$ should be close to $\hat{L}_U$.

• Under $H_0: c(\theta) = q$, $-2\ln\lambda$ is chi-squared with degrees of freedom equal to the number of restrictions imposed:

$LR = -2\ln\lambda \sim \chi^2_{[J]}$

(A numerical sketch of this statistic for the linear model appears at the end of this section.)

IV. LAGRANGE MULTIPLIER TEST (OR SCORE TEST):

$H_0: c(\theta) = q$

Let $\lambda$ be a vector of Lagrange multipliers and define the Lagrangean function

$L^*(\theta) = L(\theta) + \lambda'[c(\theta) - q]$

The first-order conditions are

$\frac{\partial L^*(\theta)}{\partial\theta} = \frac{\partial L(\theta)}{\partial\theta} + \left(\frac{\partial c(\theta)}{\partial\theta'}\right)'\lambda = 0$

$\frac{\partial L^*(\theta)}{\partial\lambda} = c(\theta) - q = 0$
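Returning to the likelihood ratio statistic above, the following sketch, with assumed simulated data, computes $LR = -2\ln\lambda = 2(\ln\hat{L}_U - \ln\hat{L}_R)$ for the normal linear model, restricting one slope coefficient to zero.

```python
# Sketch: LR = 2(ln L_U - ln L_R) for the Gaussian linear model, one restriction.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(2)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])   # intercept + one regressor
Y = X @ np.array([1.0, 0.4]) + rng.normal(size=n)       # assumed simulated data

def max_loglik(X, Y):
    """Gaussian log-likelihood evaluated at the MLEs (beta_hat, sigma2_hat)."""
    b = np.linalg.solve(X.T @ X, X.T @ Y)
    e = Y - X @ b
    s2 = e @ e / len(Y)
    return -0.5*len(Y)*(np.log(2*np.pi) + np.log(s2) + 1)

lnL_U = max_loglik(X, Y)                 # unrestricted: intercept and slope
lnL_R = max_loglik(X[:, :1], Y)          # restricted: slope = 0
LR = 2*(lnL_U - lnL_R)
print(LR, chi2.sf(LR, df=1))             # compare with a chi-square(1)
```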
If the restrictions are valid, then imposing them will not lead to a significant difference in the maximized value of the likelihood function. This means that $\partial L(\hat{\theta}_R)/\partial\hat{\theta}_R$ is close to 0, or equivalently that $\lambda$ is close to 0. Testing this hypothesis, $H_0: c(\theta) = q$, leads to the LM test:

$LM = \left[\frac{\partial L(\hat{\theta}_R)}{\partial\hat{\theta}_R}\right]'\left[-E\left(\frac{\partial^2 L(\hat{\theta}_R)}{\partial\hat{\theta}_R\,\partial\hat{\theta}_R'}\right)\right]^{-1}\left[\frac{\partial L(\hat{\theta}_R)}{\partial\hat{\theta}_R}\right]$

Under the null hypothesis $H_0: \lambda = 0$, LM has a limiting chi-squared distribution with degrees of freedom equal to the number of restrictions.

[Graph]

V. APPLICATION OF TEST PROCEDURES TO LINEAR MODELS

Model: $Y_{(n\times 1)} = X_{(n\times k)}\,\beta_{(k\times 1)} + \varepsilon_{(n\times 1)} = X\hat{\beta} + e$

$H_0: R\beta = q$, where $R$ is $(J \times k)$ and $q$ is $(J \times 1)$.

1. Wald test:

$\hat{\beta}$ is the MLE of $\beta$ (unrestricted):

$W = (R\hat{\beta} - q)'\left[R\,\hat{\sigma}^2 (X'X)^{-1} R'\right]^{-1}(R\hat{\beta} - q) \sim \chi^2_{[J]}$

where $\hat{\beta}$ is the unrestricted estimator of $\beta$ and $\hat{\sigma}^2 = e'e/n$. It can be shown that

$W = \frac{n(e_R'e_R - e'e)}{e'e} \sim \chi^2_{[J]} \qquad (1)$

with $e_R = Y - X\hat{\beta}_R$, where $\hat{\beta}_R$ is the estimator subject to the restriction $R\beta = q$. (A numerical check of the two forms of $W$ is sketched below.)
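The sketch below, on assumed simulated data, computes the Wald statistic for $H_0: R\beta = q$ both as the quadratic form in $R\hat{\beta} - q$ and via formula (1), using the standard restricted least squares expression for $\hat{\beta}_R$; the two values coincide.

```python
# Sketch: two equivalent Wald expressions for H0: R beta = q in the linear model.
import numpy as np

rng = np.random.default_rng(3)
n = 300
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
Y = X @ np.array([1.0, 0.5, 0.0]) + rng.normal(size=n)   # assumed simulated data

R = np.array([[0.0, 0.0, 1.0]])             # H0: third coefficient = 0 (J = 1)
q = np.array([0.0])

XtX_inv = np.linalg.inv(X.T @ X)
b = XtX_inv @ X.T @ Y                        # unrestricted beta_hat
e = Y - X @ b
s2 = e @ e / n                               # MLE sigma2_hat = e'e/n

# Direct quadratic form in (R beta_hat - q)
d = R @ b - q
W_direct = d @ np.linalg.solve(s2 * R @ XtX_inv @ R.T, d)

# Restricted least squares and formula (1)
b_R = b - XtX_inv @ R.T @ np.linalg.solve(R @ XtX_inv @ R.T, d)
e_R = Y - X @ b_R
W_rss = n * (e_R @ e_R - e @ e) / (e @ e)
print(W_direct, W_rss)                       # numerically identical
```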
2. LR test:

$H_0: R\beta = q$

$\lambda = \frac{L(\hat{\beta}_R, X)}{L(\hat{\beta}, X)}$

$LR = -2\ln\lambda = 2\left[\ln L(\hat{\beta}) - \ln L(\hat{\beta}_R)\right] \sim \chi^2_{[J]}$

It can be shown that

$LR = n(\ln e_R'e_R - \ln e'e), \qquad e_R = Y - X\hat{\beta}_R$

3. LM test:

$H_0: R\beta = q$

It can be shown that

$LM = \frac{e_R' X (X'X)^{-1} X' e_R}{\hat{\sigma}_R^2} = \frac{n\, e_R' X (X'X)^{-1} X' e_R}{e_R'e_R} = \frac{n(e_R'e_R - e'e)}{e_R'e_R} \qquad (3)$

where $\hat{\sigma}_R^2 = e_R'e_R/n$. It can also be shown (by a second-order expansion of the logarithm) that

$LR \approx \frac{n(e_R'e_R - e'e)}{e'e} - \frac{n}{2}\left(\frac{e_R'e_R - e'e}{e'e}\right)^2 \approx \frac{n(e_R'e_R - e'e)}{e_R'e_R} + \frac{n}{2}\left(\frac{e_R'e_R - e'e}{e_R'e_R}\right)^2 \qquad (2)$

From (1), (2) and (3) we have, for the linear model:

$W \ge LR \ge LM$

The tests are asymptotically equivalent but in general will give different numerical results in finite samples (see the sketch below). Which test should be used? The choice among Wald, LR and LM is typically made on the basis of ease of computation: LR requires both the restricted and unrestricted estimators, Wald requires only the unrestricted estimator, and LM requires only the restricted estimator.
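The following sketch, again on assumed simulated data, computes W, LR and LM for the linear model directly from the restricted and unrestricted residual sums of squares, so that the ordering $W \ge LR \ge LM$ and the finite-sample differences can be seen numerically.

```python
# Sketch: W, LR and LM from restricted and unrestricted residual sums of squares.
import numpy as np

rng = np.random.default_rng(4)
n = 100
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
Y = X @ np.array([1.0, 0.3, 0.1]) + rng.normal(size=n)   # assumed simulated data

def rss(X, Y):
    """Residual sum of squares e'e from an OLS fit of Y on X."""
    b = np.linalg.lstsq(X, Y, rcond=None)[0]
    e = Y - X @ b
    return e @ e

rss_U = rss(X, Y)            # unrestricted: all regressors
rss_R = rss(X[:, :1], Y)     # restricted: both slopes set to zero (J = 2)

W  = n * (rss_R - rss_U) / rss_U
LR = n * (np.log(rss_R) - np.log(rss_U))
LM = n * (rss_R - rss_U) / rss_R
print(W, LR, LM)             # W >= LR >= LM holds in every sample
```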
VI. HAUSMAN SPECIFICATION TEST:

- Consider a test for endogeneity of a regressor in a linear model.
- Tests based on comparisons between two different estimators are called Hausman tests.
- The two alternative estimators are $\hat{\beta}_{OLS}$ and $\hat{\beta}_{2SLS}$, where $\hat{\beta}_{2SLS}$ uses instruments to control for possible endogeneity of the regressor:

$H_0: \hat{\beta}_{OLS} \approx \hat{\beta}_{2SLS}$

Hausman's statistic (a numerical sketch follows at the end of this section):

$(\hat{\beta}_{2SLS} - \hat{\beta}_{OLS})'\left[\mathrm{VarCov}(\hat{\beta}_{2SLS} - \hat{\beta}_{OLS})\right]^{-1}(\hat{\beta}_{2SLS} - \hat{\beta}_{OLS}) \sim \chi^2_{[r]}$

where $r$ is the number of endogenous regressors.

General model: consider two estimators $\hat{\theta}$ and $\tilde{\theta}$. We consider the test situation where

$H_0: \mathrm{plim}(\hat{\theta} - \tilde{\theta}) = 0$

$H_A: \mathrm{plim}(\hat{\theta} - \tilde{\theta}) \ne 0$

Assume that under $H_0$: $\sqrt{n}(\hat{\theta} - \tilde{\theta}) \rightarrow N\big(0, \mathrm{Var}(\hat{\theta} - \tilde{\theta})\big)$.

The Hausman test statistic is

$H = (\hat{\theta} - \tilde{\theta})'\left[\frac{1}{n}\mathrm{Var}(\hat{\theta} - \tilde{\theta})\right]^{-1}(\hat{\theta} - \tilde{\theta}) \sim \chi^2_{[q]}$

where $q$ is the rank of $\mathrm{Var}(\hat{\theta} - \tilde{\theta})$.

For the linear model:

$\mathrm{VarCov}(\hat{\beta}_{2SLS} - \hat{\beta}_{OLS}) = \mathrm{VarCov}(\hat{\beta}_{2SLS}) - \mathrm{VarCov}(\hat{\beta}_{OLS})$
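The sketch below is a simplified, single-coefficient version of the Hausman comparison above (sometimes presented as a Durbin-Wu-Hausman check): one possibly endogenous regressor, one instrument, $r = 1$, and the same $\hat{\sigma}^2$ used in both variance matrices, which is one common convention but not the only one. The simulated data, seed, and variable names are assumptions for the illustration.

```python
# Sketch: Hausman-type comparison of OLS and 2SLS for one suspect regressor.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(5)
n = 1000
z = rng.normal(size=n)                       # instrument
u = rng.normal(size=n)                       # structural error
x = 0.8*z + 0.6*u + rng.normal(size=n)       # regressor correlated with the error
y = 1.0 + 0.5*x + u                          # assumed data-generating process

X = np.column_stack([np.ones(n), x])
Z = np.column_stack([np.ones(n), z])

# OLS
b_ols = np.linalg.solve(X.T @ X, X.T @ y)
e_ols = y - X @ b_ols
s2 = e_ols @ e_ols / n
V_ols = s2 * np.linalg.inv(X.T @ X)

# 2SLS: beta_2sls = (X' Pz X)^{-1} X' Pz y with Pz the projection onto Z
Pz = Z @ np.linalg.solve(Z.T @ Z, Z.T)
V_2sls_core = np.linalg.inv(X.T @ Pz @ X)
b_2sls = V_2sls_core @ X.T @ Pz @ y
V_2sls = s2 * V_2sls_core                    # same sigma2_hat in both (a convention)

d = b_2sls[1] - b_ols[1]                     # coefficient on the suspect regressor
H = d**2 / (V_2sls[1, 1] - V_ols[1, 1])      # VarCov(2SLS) - VarCov(OLS), scalar case
print(H, chi2.sf(H, df=1))                   # r = 1 endogenous regressor
```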
VII. POWER AND SIZE OF TESTS:

Size of a test:

Size = Pr[Type I error] = Pr[reject $H_0$ | $H_0$ true]

Common choices for the size are $\alpha$ = 0.01, 0.05 or 0.10.

Monte Carlo: set $H_0$ true and simulate; the probability of rejecting $H_0$ gives the size.

Power of a test:

Power = Pr[reject $H_0$ | $H_0$ false] = 1 − Pr[accept $H_0$ | $H_0$ false] = 1 − Pr[Type II error]

Monte Carlo: set $H_0$ false and simulate; the probability of rejecting $H_0$ gives the power.

(A Monte Carlo sketch of both calculations follows.)
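The sketch below implements the Monte Carlo recipe just described for a 5% Wald test of "slope = 0" in a simple regression: simulating under $H_0$ estimates the size, and simulating under a violation of $H_0$ estimates the power. The sample size, number of replications, and the alternative slope of 0.3 are assumptions chosen for the illustration.

```python
# Sketch: Monte Carlo estimates of the size and power of a 5% Wald test.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(6)
crit = chi2.ppf(0.95, df=1)                    # 5% critical value for J = 1

def reject_rate(true_slope, n=100, reps=2000):
    """Fraction of simulated samples in which H0: slope = 0 is rejected."""
    rejections = 0
    for _ in range(reps):
        x = rng.normal(size=n)
        y = true_slope*x + rng.normal(size=n)
        X = np.column_stack([np.ones(n), x])
        b = np.linalg.solve(X.T @ X, X.T @ y)
        e = y - X @ b
        s2 = e @ e / n
        V = s2 * np.linalg.inv(X.T @ X)
        W = b[1]**2 / V[1, 1]                  # Wald statistic for H0: slope = 0
        rejections += (W > crit)
    return rejections / reps

print("size :", reject_rate(true_slope=0.0))   # H0 true: should be near 0.05
print("power:", reject_rate(true_slope=0.3))   # H0 false: probability of rejecting it
```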