Lecture Advanced Econometrics (Part II) - Chapter 1: Review of least squares & likelihood methods

Lecture "Advanced Econometrics (Part II) - Chapter 1: Review of least squares & likelihood methods" presentation of content: Least quares methods, maximum likelihood estimation.

Text content: Lecture Advanced Econometrics (Part II) - Chapter 1: Review of least squares & likelihood methods

Chapter 1
REVIEW OF LEAST SQUARES & LIKELIHOOD METHODS

I. LEAST SQUARES METHODS:

1. Model:
- We have N observations (individuals, firms, ...) drawn randomly from a large population, i = 1, 2, ..., N.
- For observation i we observe $Y_i$ and a K-dimensional column vector of explanatory variables $X_i = (X_{i1}, X_{i2}, \dots, X_{iK})'$, and we assume $X_{i1} = 1$ for all i = 1, 2, ..., N.
- We are interested in explaining the distribution of $Y_i$ in terms of the explanatory variables $X_i$ using the linear model
  $$Y_i = \beta' X_i + \varepsilon_i, \qquad \beta = (\beta_1, \dots, \beta_K)',$$
  that is, $Y_i = \beta_1 + \beta_2 X_{i2} + \beta_3 X_{i3} + \dots + \beta_K X_{iK} + \varepsilon_i$, or in matrix notation $Y = X\beta + \varepsilon$.

Assumption 1: $\{X_i, Y_i\}_{i=1}^{n}$ are independent and identically distributed.
Assumption 2: $\varepsilon_i \mid X_i \sim N(0, \sigma^2)$.
Assumption 3: $\varepsilon_i \perp X_i$ (so that $\sum_{i=1}^{n} \varepsilon_i X_{ij} = 0$).
Assumption 4: $E[\varepsilon_i \mid X_i] = 0$.
Assumption 5: $E[\varepsilon_i X_i] = 0$.
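To fix ideas, a minimal sketch of a data-generating process consistent with this model and Assumptions 1-2 might look as follows in Python; the sample size, coefficient values, and regressor distribution are purely hypothetical choices.

```python
import numpy as np

rng = np.random.default_rng(0)

N, K = 500, 3                      # hypothetical sample size and number of regressors
beta = np.array([1.0, 0.5, -2.0])  # hypothetical true coefficients (beta_1 is the intercept)
sigma = 1.5                        # hypothetical error standard deviation

# X_i = (1, X_i2, X_i3)': first column is the constant term, as assumed in the model
X = np.column_stack([np.ones(N), rng.normal(size=(N, K - 1))])
eps = rng.normal(0.0, sigma, size=N)   # eps_i | X_i ~ N(0, sigma^2)  (Assumption 2)
Y = X @ beta + eps                     # Y = X beta + eps
```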
The Ordinary Least Squares (OLS) estimator for $\beta$ solves
$$\min_{\beta} \sum_{i=1}^{n} (Y_i - \beta' X_i)^2.$$

This leads to
$$\hat{\beta} = \left( \sum_{i=1}^{n} X_i X_i' \right)^{-1} \left( \sum_{i=1}^{n} X_i Y_i \right) = (X'X)^{-1}(X'Y).$$

- The exact distribution of the OLS estimator under the normality assumption is
  $$\hat{\beta} \sim N\left[ \beta, \; \sigma^2 (X'X)^{-1} \right].$$
- Without normality of the $\varepsilon_i$ it is difficult to derive the exact distribution of $\hat{\beta}$. However, we can establish the asymptotic distribution:
  $$\sqrt{N}(\hat{\beta} - \beta) \xrightarrow{d} N\left(0, \; \sigma^2 E[XX']^{-1}\right).$$
- We do not know $\sigma^2$, but we can consistently estimate it as
  $$\hat{\sigma}^2 = \frac{1}{n - k - 1} \sum_{i=1}^{n} (Y_i - \hat{\beta}' X_i)^2.$$
- In practice, whether or not we have exact normality of the error terms, we use the following distribution for $\hat{\beta}$:
  $$\hat{\beta} \approx N(\beta, V), \qquad V = \sigma^2 \left(E(X'X)\right)^{-1},$$
  and we estimate $V$ by $\hat{V} = \hat{\sigma}^2 \left( \sum_{i=1}^{N} X_i X_i' \right)^{-1}$.
- If we are interested in a specific coefficient:
  $$\hat{\beta}_k \approx N(\beta_k, \hat{V}_{kk}),$$
  where $\hat{V}_{ij}$ is the (i, j) element of the matrix $\hat{V}$.
- A 95% confidence interval for $\beta_k$ is
  $$\left[ \hat{\beta}_k - 1.96\sqrt{\hat{V}_{kk}}, \;\; \hat{\beta}_k + 1.96\sqrt{\hat{V}_{kk}} \right].$$
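Continuing the hypothetical simulated example above, a sketch of these formulas (the OLS estimator, $\hat{\sigma}^2$, $\hat{V}$, and a 95% confidence interval) might look as follows; the coefficient index used for the interval is an arbitrary choice.

```python
# OLS estimate: beta_hat = (X'X)^{-1} X'Y
XtX = X.T @ X
beta_hat = np.linalg.solve(XtX, X.T @ Y)

# Residual-based estimate of sigma^2, with the degrees-of-freedom correction used in the notes
resid = Y - X @ beta_hat
n, k = X.shape
sigma2_hat = resid @ resid / (n - k - 1)

# Estimated variance matrix: V_hat = sigma2_hat * (sum_i X_i X_i')^{-1}
V_hat = sigma2_hat * np.linalg.inv(XtX)

# 95% confidence interval for one coefficient (here the second slope, an arbitrary choice)
k_idx = 2
se_k = np.sqrt(V_hat[k_idx, k_idx])
ci = (beta_hat[k_idx] - 1.96 * se_k, beta_hat[k_idx] + 1.96 * se_k)
print(beta_hat, ci)
```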
- To test the hypothesis that $\beta_k = \alpha$, use
  $$t = \frac{\hat{\beta}_k - \alpha}{\sqrt{\hat{V}_{kk}}} \sim N(0, 1).$$

2. Robust Variances:
If we do not have the homoscedasticity assumption, then
$$\sqrt{n}(\hat{\beta} - \beta) \rightarrow N\left(0, \; \left(E[XX']\right)^{-1} \left(E[\varepsilon^2 XX']\right) \left(E[XX']\right)^{-1}\right).$$
We can estimate the heteroskedasticity-consistent variance (White's estimator) as
$$\hat{V} = \left( \frac{1}{N}\sum_{i=1}^{N} X_i X_i' \right)^{-1} \left( \frac{1}{N}\sum_{i=1}^{N} \hat{\varepsilon}_i^2 X_i X_i' \right) \left( \frac{1}{N}\sum_{i=1}^{N} X_i X_i' \right)^{-1}.$$
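A sketch of White's estimator and the t-statistic above, continuing the same hypothetical example (`X`, `Y`, `beta_hat`, `resid`, and `k_idx` come from the earlier sketches). Note that the sandwich formula above is the asymptotic variance of $\sqrt{n}(\hat{\beta} - \beta)$, so an extra division by $n$ is used to approximate the variance of $\hat{\beta}$ itself.

```python
# White's heteroskedasticity-consistent variance:
#   A = (1/N) sum_i X_i X_i',  B = (1/N) sum_i e_i^2 X_i X_i',  Avar = A^{-1} B A^{-1}
A = XtX / n
B = (X * (resid[:, None] ** 2)).T @ X / n
avar = np.linalg.inv(A) @ B @ np.linalg.inv(A)
V_robust = avar / n     # approximate variance of beta_hat itself

# t-test of H0: beta_k = alpha, using the robust standard error and N(0, 1) critical values
alpha0 = 0.0
t_stat = (beta_hat[k_idx] - alpha0) / np.sqrt(V_robust[k_idx, k_idx])
print(t_stat)
```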
II. MAXIMUM LIKELIHOOD ESTIMATION:

1. Introduction:
- Linear regression model: $Y_i = X_i'\beta + \varepsilon_i$ with $\varepsilon_i \mid X_i \sim N(0, \sigma^2)$.
- OLS:
  $$\hat{\beta} = \arg\min_{\beta} \sum_{i=1}^{n} (Y_i - X_i'\beta)^2 \;\Rightarrow\; \hat{\beta} = \left( \sum_{i=1}^{n} X_i X_i' \right)^{-1} \left( \sum_{i=1}^{n} X_i Y_i \right).$$
- Maximum likelihood estimator:
  $$(\hat{\beta}, \hat{\sigma}^2_{MLE}) = \arg\max_{\beta, \sigma^2} L(\beta, \sigma^2),$$
  where
  $$L(\beta, \sigma^2) = \sum_{i=1}^{n} \left[ -\frac{1}{2}\ln(2\pi\sigma^2) - \frac{1}{2\sigma^2}(Y_i - X_i'\beta)^2 \right] = -\frac{n}{2}\ln(2\pi\sigma^2) - \frac{1}{2\sigma^2}\sum_{i=1}^{n} (Y_i - X_i'\beta)^2.$$
  Note: if $X \sim N(\mu, \sigma^2)$, the density function of $X$ is $f(X) = \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-\frac{(X-\mu)^2}{2\sigma^2}}$.
- This leads to the same estimator for $\beta$ as OLS, and the MLE approach is a systematic way to deal with complex nonlinear models. For the error variance it gives
  $$\hat{\sigma}^2_{MLE} = \frac{1}{n} \sum_{i=1}^{n} (Y_i - X_i'\hat{\beta})^2.$$

2. Likelihood function:
- Suppose we have independent and identically distributed random variables $Z_1, \dots, Z_n$ with common density $f(Z_i, \theta)$. The likelihood function given a sample $Z_1, Z_2, \dots, Z_n$ is
  $$\mathcal{L}(\theta) = \prod_{i=1}^{n} f(Z_i, \theta).$$
- The log-likelihood function is
  $$L(\theta) = \ln \mathcal{L}(\theta) = \sum_{i=1}^{n} \ln f(Z_i, \theta).$$
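As one way of illustrating maximum likelihood numerically, the sketch below maximizes the Gaussian log-likelihood $L(\beta, \sigma^2)$ written out above with a general-purpose optimizer and can be checked against OLS; the parameterization by $\ln\sigma^2$ and the use of scipy's BFGS routine are implementation choices, and the data are the hypothetical simulated sample from the earlier sketches.

```python
from scipy.optimize import minimize

def neg_log_likelihood(params, X, Y):
    """Negative Gaussian log-likelihood; params = (beta_1, ..., beta_K, log sigma^2)."""
    beta, log_s2 = params[:-1], params[-1]
    s2 = np.exp(log_s2)                  # parameterize by log(sigma^2) to keep sigma^2 > 0
    e = Y - X @ beta
    n = len(Y)
    ll = -0.5 * n * np.log(2 * np.pi * s2) - e @ e / (2 * s2)
    return -ll

start = np.zeros(X.shape[1] + 1)         # crude starting values
fit = minimize(neg_log_likelihood, start, args=(X, Y), method="BFGS")
beta_mle, sigma2_mle = fit.x[:-1], np.exp(fit.x[-1])
# beta_mle should agree closely with the OLS beta_hat;
# sigma2_mle divides by n rather than using a degrees-of-freedom correction
print(beta_mle, sigma2_mle)
```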
- Building a likelihood function is the first step in estimating, for example, a job-search model. As an example of constructing a likelihood function, consider the following:
  - An unemployed individual is assumed to receive job offers.
  - Offers arrive at rate $\lambda$, such that the expected number of job offers arriving in a short interval of length $dt$ is $\lambda\,dt$.
  - Each offer consists of some wage rate $w$, drawn independently of previous wages, with continuous distribution function $F_w(w)$.
  - If the offer is better than the reservation wage $\bar{w}$, that is with probability $1 - F(\bar{w})$, the offer is accepted.
  - The reservation wage is set to maximize utility.
  - Suppose that the arrival rate is constant over time; then the optimal reservation wage is also constant over time.
  - The probability of receiving an acceptable offer in a short time $dt$ is $\theta\,dt$, with $\theta = \lambda\,(1 - F(\bar{w}))$.
  - The constant acceptance rate $\theta$ implies that the distribution of the unemployment duration $y$ (a random variable) is exponential with mean $1/\theta$ and density function
    $$f(y) = \theta e^{-y\theta}.$$
  - Mean and variance: $1/\theta$ and $1/\theta^2$.
  - $S(y) = 1 - F(y) = e^{-y\theta}$: survivor function.
  - $h(y) = \dfrac{f_Y(y)}{S(y)} = \lim_{dy \to 0} \dfrac{\Pr(y < Y < y + dy)}{dy \cdot \Pr(Y > y)} = \theta$: hazard function (the rate at which a job is offered and accepted).

Likelihood function:
a) If we observe the exact unemployment duration $y_i$:
   $$\mathcal{L}(\theta) = \prod_{i=1}^{n} f(y_i, \theta) = \prod_{i=1}^{n} h(y_i \mid \theta)\, S(y_i \mid \theta).$$
b) If we observe a number of people all becoming unemployed at the same point in time, but we only observe whether they exited unemployment before a fixed point in time, say $c$:
   $$\mathcal{L}(\theta) = \prod_{i=1}^{n} F(c \mid \theta)^{d_i} \left(1 - F(c \mid \theta)\right)^{1-d_i} = \prod_{i=1}^{n} \left(1 - S(c \mid \theta)\right)^{d_i} S(c \mid \theta)^{1-d_i},$$
   where $d_i = 1$ denotes that individual i left unemployment before $c$ and $d_i = 0$ denotes that this individual was still unemployed at time $c$.
c) If we observe the exact exit (failure) time when it occurs before $c$, but only an indicator when exit occurs after $c$:
   $$\mathcal{L}(\theta) = \prod_{i=1}^{n} f(y_i, \theta)^{d_i}\, S(c \mid \theta)^{1-d_i} = \prod_{i=1}^{n} h(y_i \mid \theta)^{d_i}\, S(y_i \mid \theta)^{d_i}\, S(c \mid \theta)^{1-d_i}.$$
d) Denote by $c_i$ the specific censoring time of individual i, and let $t_i$ denote the minimum of the exit time $y_i$ and the censoring time $c_i$, i.e. $t_i = \min(y_i, c_i)$:
   $$\mathcal{L}(\theta) = \prod_{i=1}^{n} f(y_i, \theta)^{d_i}\, S(c_i \mid \theta)^{1-d_i} = \prod_{i=1}^{n} f(t_i \mid \theta)^{d_i}\, S(t_i \mid \theta)^{1-d_i} = \prod_{i=1}^{n} h(t_i \mid \theta)^{d_i}\, S(t_i \mid \theta).$$
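For case (d), a small sketch of the censored exponential likelihood might look as follows; the true hazard, the censoring distribution, and the sample size are hypothetical choices. Since the exponential log-likelihood is $\sum_i \left[ d_i \ln\theta - \theta t_i \right]$, its maximizer has the closed form $\hat{\theta} = \sum_i d_i / \sum_i t_i$ (number of observed exits divided by total exposure time), which the sketch uses as a check.

```python
rng = np.random.default_rng(1)

n_obs = 1000
theta_true = 0.4                               # hypothetical acceptance rate (hazard)
y = rng.exponential(1.0 / theta_true, n_obs)   # true unemployment durations, mean 1/theta
c = rng.uniform(1.0, 6.0, n_obs)               # hypothetical individual-specific censoring times
t = np.minimum(y, c)                           # observed duration t_i = min(y_i, c_i)
d = (y <= c).astype(float)                     # d_i = 1 if the exit is observed before censoring

# Log-likelihood of case (d): sum_i [ d_i ln h(t_i) + ln S(t_i) ]
#                           = sum_i [ d_i ln(theta) - theta t_i ]  for the exponential model
def log_lik(theta):
    return np.sum(d * np.log(theta) - theta * t)

# Closed-form MLE: theta_hat = (number of exits) / (total exposure time)
theta_hat = d.sum() / t.sum()
print(theta_hat, log_lik(theta_hat))
```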
3. Properties of MLE:
$$\hat{\theta}_{MLE} = \arg\max_{\theta \in \Theta} \sum_{i=1}^{n} \ln f(Z_i, \theta).$$

a. Consistency: for all $\varepsilon > 0$,
   $$\lim_{n \to \infty} \Pr\left( \left| \hat{\theta}_{MLE} - \theta \right| > \varepsilon \right) = 0.$$
b. Asymptotic normality:
   $$\sqrt{n}\left(\hat{\theta}_{MLE} - \theta_0\right) \rightarrow N\left(0, \; -\left[ E\left( \frac{\partial^2 L}{\partial\theta\,\partial\theta'}(Z_i, \theta_0) \right) \right]^{-1} \right).$$

4. Computation of the maximum likelihood estimator (Newton-Raphson method):
- Approximate the objective function $Q(\theta) = -L(\theta)$ around some starting value $\theta_0$ by a quadratic function and find the exact minimum of that quadratic approximation. Call this $\theta_1$.
- Redo the quadratic approximation around $\theta_1$ and repeat until the iterations converge.
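A minimal sketch of this iteration, applied to the exponential-duration log-likelihood of the previous example (so $L(\theta) = \sum_i [d_i \ln\theta - \theta t_i]$, with `d` and `t` from that sketch); the starting value and tolerance are arbitrary choices, and a production implementation would add safeguards such as step-halving.

```python
def newton_raphson(theta0, max_iter=50, tol=1e-10):
    """Minimize Q(theta) = -L(theta) for L(theta) = sum_i [d_i ln(theta) - theta t_i]."""
    theta = theta0
    for _ in range(max_iter):
        grad = -(d.sum() / theta - t.sum())   # Q'(theta)  = -dL/dtheta
        hess = d.sum() / theta ** 2           # Q''(theta) = -d2L/dtheta2  (> 0 here)
        step = grad / hess
        theta = theta - step                  # exact minimum of the quadratic approximation
        if abs(step) < tol:
            break
    return theta

print(newton_raphson(theta0=0.1))   # should agree with the closed-form d.sum() / t.sum()
```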