Lecture Advanced Econometrics (Part II) - Chapter 3: Discrete choice analysis - Binary outcome models
This lecture presents the following content: discrete choice models; basic types of discrete variables; the probability models; estimation and inference in binary choice models; and binary choice models for panel data.
Advanced Econometrics - Part II — Chapter 3: Discrete choice analysis: Binary outcome models

Chapter 3: DISCRETE CHOICE ANALYSIS: BINARY OUTCOME MODELS

I. INTRODUCTION:

The simplest model in which the dependent variable takes discrete values is the model in which y is binary.

1. Discrete choice model: a model in which the dependent variable assumes discrete values.
Example:
  y_i = 1 if person i is employed in the labor force
  y_i = 0 otherwise
Regardless of the definition of y, it is traditional to refer to y = 1 as a "success" and y = 0 as a "failure".

2. Basic types of discrete variables:
a) Dichotomous or binary variables: these take a value of one or zero, depending on which of two possible results occurs.
b) Polychotomous variables: these take on a discrete number of possible values, greater than two, and are non-categorical. Example: y = {number of patents issued to a company during a year}.
c) Unordered variables: variables for which there is no natural ranking of the alternatives. Example: for a sample of commuters, we define a variable:

Nam T. Hoang, UNE Business School — University of New England
  y_i = 1 if person i is a lawyer
        2 if person i is a teacher
        3 if person i is a doctor
        4 if person i is a plumber

  y_i = 1 if person i drives to work
        2 if person i takes a bus
        3 if person i takes a train
        4 otherwise

We can define these dependent variables in any order desired → unordered categorical variables.

d) Ordered variables: with these variables, outcomes have a natural ranking. Examples:

  y_i = 1 if person i is in poor health
        2 if person i is in good health
        3 if person i is in excellent health

  y_i = 1 if person i spent less than $1,000
        2 if person i spent $1,000–$2,000
        3 if person i spent $2,000–$4,000
        4 if person i spent more than $4,000

A special case of an ordered variable is a "sequential variable". This occurs when the second event depends on the first event, the third event depends on the previous two events, and so on:

  y_i = 1 if person i has not completed high school
        2 if person i has completed high school, not college
        3 if person i has a college degree, not a higher degree
        4 if person i has a professional degree

In marketing research, one often considers attitudes of preference measured on a scale 1, 2, 3, 4, 5, for instance.
  y_i = 1 if intensely dislike
        2 if moderately dislike
        3 if neutral
        4 if moderately like
        5 if intensely like

II. THE PROBABILITY MODELS:

We assume that there is a latent variable y_i* such that:

  y_i* = X_i β + ε_i

y_i* is the additional utility that individual i would get by choosing y_i = 1 rather than y_i = 0. We do not observe y_i*, but we observe y_i, which takes the value 0 or 1 according to the rule:

  y_i = 1 if y_i* > 0, and y_i = 0 otherwise.

Then, for a distribution of ε_i that is symmetric about zero:

  Prob(y_i = 1) = Prob(y_i* > 0) = Prob(X_i β + ε_i > 0) = Prob(ε_i > −X_i β)
               = 1 − Prob(ε_i < −X_i β) = 1 − F(−X_i β) = F(X_i β)

where F is the cumulative distribution function of ε_i. Similarly:

  Prob(y_i = 0) = Prob(y_i* < 0) = Prob(ε_i < −X_i β) = F(−X_i β) = 1 − F(X_i β)

The likelihood for the sample is the product of the probabilities of the individual observations:

  L = ∏_{y_i=0} F(−X_i β) × ∏_{y_i=1} [1 − F(−X_i β)]

The functional form of F depends on the assumptions made about ε_i.

1. Probit Model:
If ε_i follows the normal distribution, ε_i ~ N(0, σ²), we have the probit model.
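The two parametric choices of F discussed below can be sketched numerically. This fragment is illustrative (not part of the lecture), and the function names are my own:

```python
import math

def probit_prob(xb):
    """P(y=1) = Phi(x'b): standard normal CDF, computed via the error function."""
    return 0.5 * (1.0 + math.erf(xb / math.sqrt(2.0)))

def logit_prob(xb):
    """P(y=1) = e^{x'b} / (1 + e^{x'b}): the logistic CDF."""
    return 1.0 / (1.0 + math.exp(-xb))

# Both satisfy F(0) = 0.5 and the symmetry F(-z) = 1 - F(z)
# used in the derivation Prob(y=1) = 1 - F(-x'b) = F(x'b).
print(probit_prob(0.0), logit_prob(0.0))  # 0.5 0.5
```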
  F(−X_i β) = Prob(ε_i < −X_i β) = Prob(ε_i/σ < −X_i β/σ) = Φ(−X_i β/σ) = 1 − Φ(X_i β/σ)
  1 − F(−X_i β) = Φ(X_i β/σ)

→ Likelihood function:

  L(β/σ) = ∏_{y_i=0} [1 − Φ(X_i β/σ)] ∏_{y_i=1} Φ(X_i β/σ)
         = ∏_{i=1}^{n} [Φ(X_i β/σ)]^{y_i} [1 − Φ(X_i β/σ)]^{1−y_i}

where Φ is the standard normal cumulative distribution function:

  Φ(X_i β/σ) = ∫_{−∞}^{X_i β/σ} (1/√(2π)) e^{−z²/2} dz

Log-likelihood function:

  L(β/σ) = Σ_i { y_i ln Φ(X_i β/σ) + (1 − y_i) ln[1 − Φ(X_i β/σ)] }

Notice that L(β/σ) ≤ 0, because ln Φ(·) ≤ 0 and ln[1 − Φ(·)] ≤ 0.
Another important feature of the likelihood function is that β and σ always appear together: only the ratio β/σ matters. It is therefore convenient to normalize σ to one, so we can talk just about β.

First derivatives:

  ∂L(β)/∂β = Σ_{i=1}^{n} [ y_i φ(X_i β)/Φ(X_i β) − (1 − y_i) φ(X_i β)/(1 − Φ(X_i β)) ] X_i'

2. The Logit Model:
If ε_i follows the logistic distribution, we have the logit model, with probability density function f(ε) = e^ε/(1 + e^ε)² and CDF F(ε) = e^ε/(1 + e^ε).
  F(−X_i β) = exp(−X_i β)/(1 + exp(−X_i β)) = 1/(1 + exp(X_i β))
  1 − F(−X_i β) = exp(X_i β)/(1 + exp(X_i β))

Likelihood function:

  L(β) = ∏_{y_i=0} [1/(1 + exp(X_i β))] ∏_{y_i=1} [exp(X_i β)/(1 + exp(X_i β))]
       = ∏_{i=1}^{n} [exp(X_i β)]^{y_i} / (1 + exp(X_i β))
       = exp(Σ_{i=1}^{n} y_i X_i β) / ∏_{i=1}^{n} [1 + exp(X_i β)]

Log-likelihood function:

  L(β) = Σ_{i=1}^{n} y_i ln[exp(X_i β)/(1 + exp(X_i β))] + Σ_{i=1}^{n} (1 − y_i) ln[1/(1 + exp(X_i β))]

with score:

  ∂L/∂β = Σ_i X_i' [ y_i − exp(X_i β)/(1 + exp(X_i β)) ]

3. Solving the maximum likelihood problem:
Denote φ(·) the density function of the standard normal. For the probit model:

  L = Σ_{i=1}^{n} y_i ln Φ(X_i β) + Σ_{i=1}^{n} (1 − y_i) ln[1 − Φ(X_i β)]

  S(β) = ∂L/∂β = Σ_{i=1}^{n} { [y_i − Φ(X_i β)] φ(X_i β) / (Φ(X_i β)[1 − Φ(X_i β)]) } X_i'

The ML estimator β̂_ML is obtained as a solution of the equation S(β) = 0. These equations are nonlinear in β, so we have to solve them by an iterative procedure.

The information matrix:
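The logit log-likelihood and its score above can be sketched directly; this is a minimal illustration (names and data are mine, not the lecture's):

```python
import math

def logit_loglik_score(beta, X, y):
    """Return (ln L, score) for the logit model:
    ln L = sum_i [ y_i ln F_i + (1 - y_i) ln(1 - F_i) ],
    score = sum_i (y_i - F_i) x_i,  with F_i = e^{x_i'b}/(1 + e^{x_i'b})."""
    k = len(beta)
    lnL, score = 0.0, [0.0] * k
    for xi, yi in zip(X, y):
        xb = sum(b * xj for b, xj in zip(beta, xi))
        F = 1.0 / (1.0 + math.exp(-xb))
        lnL += yi * math.log(F) + (1 - yi) * math.log(1.0 - F)
        for j in range(k):
            score[j] += (yi - F) * xi[j]
    return lnL, score

# At beta = 0 every F_i = 0.5, so ln L = n ln(1/2).
X = [[1.0, 0.2], [1.0, -0.4], [1.0, 1.1]]
y = [1, 0, 1]
lnL, S = logit_loglik_score([0.0, 0.0], X, y)
```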
  I(β) = E[−∂²L/∂β∂β'] = Σ_{i=1}^{n} { [φ(X_i β)]² / (Φ(X_i β)[1 − Φ(X_i β)]) } X_i X_i'

We start with some initial value of β, say β₀, compute S(β₀) and I(β₀), and obtain a new estimate of β by the method of scoring.

For the logit model:

  L = Σ_{i=1}^{n} y_i X_i β − Σ_{i=1}^{n} ln[1 + exp(X_i β)]

  S(β) = ∂L/∂β = Σ_{i=1}^{n} X_i' y_i − Σ_{i=1}^{n} [exp(X_i β)/(1 + exp(X_i β))] X_i' = 0

These equations are nonlinear → we use the Newton–Raphson method to solve them. Information matrix:

  I(β) = E[−∂²L/∂β∂β'] = Σ_{i=1}^{n} { exp(X_i β)/[1 + exp(X_i β)]² } X_i X_i'

→ start with some initial value of β, say β₀, and iterate.

4. The linear probability model:
Assume:

  y_i = X_i β + ε_i = β₁ + β₂ X_{2i} + β₃ X_{3i} + ... + β_k X_{ki} + ε_i

where y_i takes values 0 or 1 and the X_{ji} are non-stochastic. With E(ε_i) = 0:

  E(y_i) = β₁ + β₂ X_{2i} + ... + β_k X_{ki}

Also:

  E(y_i) = 0·P(y_i = 0) + 1·P(y_i = 1) = P(y_i = 1)

and

  E(ε_i²) = E[y_i − E(y_i)]² = E(y_i²) − [E(y_i)]² = E(y_i) − [E(y_i)]²
          = E(y_i)[1 − E(y_i)] = P(y_i = 1)[1 − P(y_i = 1)]

which varies with i → heteroskedasticity.
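The heteroskedasticity of the linear probability model follows directly from Var(ε_i) = P_i(1 − P_i): the error variance changes with the fitted probability. A small illustrative check (values are made up):

```python
# LPM error variance Var(eps_i) = P_i (1 - P_i) depends on the observation's
# fitted probability, so it differs across i -- heteroskedasticity by construction.
def lpm_error_variance(p):
    """Var(eps_i) = P(y_i = 1) * [1 - P(y_i = 1)]."""
    return p * (1.0 - p)

fitted = [0.05, 0.30, 0.50, 0.90]          # illustrative fitted probabilities
variances = [lpm_error_variance(p) for p in fitted]
# The variance peaks at p = 0.5 and shrinks toward the endpoints 0 and 1.
```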
Problems of the linear probability model:
- Heteroskedasticity → OLS is not efficient.
- In some cases the fitted value ŷ_i > 1 or ŷ_i < 0 → difficulties of interpretation (we cannot constrain X β to the (0, 1) interval).
- In many cases E(y_i | X_i) can lie outside the limits (0, 1).

III. PROBIT vs LOGIT:

Why do we choose the logistic distribution? → Simplicity: the logistic CDF has a simple closed-form expression, while the normal CDF involves an unevaluated integral (for multivariate models this is important).

  f(x) = e^x/(1 + e^x)²,  F(x) = e^x/(1 + e^x)

  F(ε_i < X_i β) = F(X_i β) = ∫_{−∞}^{X_i β} f(x) dx

[Graph: CDF of the logistic distribution]

Properties of the logistic CDF:
  lim_{x→+∞} F(x) = 1,  lim_{x→−∞} F(x) = 0
  F(·) is continuous and strictly monotonic: x₁ < x₂ → F(x₁) < F(x₂)
- The logistic distribution is very close to the normal distribution.
- It is symmetric.

Normal distribution:

  f(x) = (1/(σ√(2π))) e^{−(x−µ)²/(2σ²)},  F(x) = ∫_{−∞}^{x} f(t) dt
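The closeness of the two distributions can be checked numerically. The standard logistic has standard deviation π/√3 ≈ 1.81, so the CDFs line up only after rescaling; the factor 1.6 below is a common rule of thumb (an assumption of this sketch, not from the lecture):

```python
import math

def logistic_cdf(x):
    return 1.0 / (1.0 + math.exp(-x))

def normal_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Compare Lambda(1.6 z) with Phi(z) on a grid over [-3, 3]:
max_gap = max(abs(logistic_cdf(1.6 * z) - normal_cdf(z))
              for z in [i / 10.0 for i in range(-30, 31)])
# max_gap stays below 0.02 -- the two CDFs are hard to tell apart,
# which is why logit and probit usually give very similar fitted probabilities.
```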
Standard normal:

  f(x) = (1/√(2π)) e^{−x²/2},  F(x) = Φ(x) = ∫_{−∞}^{x} (1/√(2π)) e^{−t²/2} dt

IV. ESTIMATION AND INFERENCE IN BINARY CHOICE MODELS:

1. Likelihood function:
Estimation of binary choice models is usually based on the method of maximum likelihood. Each observation is treated as a single draw from a Bernoulli distribution. Suppose F is a symmetric distribution:

  L = ∏_{y_i=0} F(−X_i β) ∏_{y_i=1} [1 − F(−X_i β)]
    = ∏_{y_i=0} [1 − F(X_i β)] ∏_{y_i=1} F(X_i β)
    = ∏_{i=1}^{n} [F(X_i β)]^{y_i} [1 − F(X_i β)]^{1−y_i}

  ln L = Σ_{i=1}^{n} { y_i ln F(X_i β) + (1 − y_i) ln[1 − F(X_i β)] }

The likelihood equations are:

  ∂L/∂β = Σ_{i=1}^{n} [ y_i f_i/F_i − (1 − y_i) f_i/(1 − F_i) ] X_i' = 0   (k×1)

where f_i is the density:

  f_i = dF(X_i β)/d(X_i β) = f(X_i β)
The likelihood equations are nonlinear and require a numerical solution.

For the logit model:

  ∂L/∂β = Σ_{i=1}^{n} [ y_i − e^{X_i β}/(1 + e^{X_i β}) ] X_i' = 0   (k×1)

If X_i contains a constant term, the first of these equations implies

  Σ_{i=1}^{n} y_i = Σ_{i=1}^{n} e^{X_i β}/(1 + e^{X_i β})

→ the average of the predicted probabilities must equal the proportion of ones in the sample.

For the normal distribution (probit):

  L = Σ_{y_i=0} ln[1 − Φ(X_i β)] + Σ_{y_i=1} ln Φ(X_i β)

  ∂L/∂β = Σ_{y_i=0} [−φ(X_i β)/(1 − Φ(X_i β))] X_i' + Σ_{y_i=1} [φ(X_i β)/Φ(X_i β)] X_i'

Let q_i = 2y_i − 1. Then:

  ∂L/∂β = Σ_{i=1}^{n} λ_i X_i' = 0,  where λ_i = q_i φ(X_i β)/Φ(q_i X_i β)

The second derivatives for the logit model are simple. With Λ_i = e^{X_i β}/(1 + e^{X_i β}):

  H_(logit) = ∂²L/∂β∂β' = −Σ_{i=1}^{n} Λ_i (1 − Λ_i) X_i X_i'

The Hessian matrix H is always negative definite, so the log-likelihood is globally concave.

For the probit:
  H_(probit) = ∂²L/∂β∂β' = −Σ_{i=1}^{n} λ_i (λ_i + X_i β) X_i X_i'

This matrix is also negative definite for all values of β, so L(probit) is also globally concave.

2. Newton–Raphson method for calculating the MLE:
Suppose we need to find max L(θ), with θ (k×1). Define the score function:

  U(θ) = ∂L(θ)/∂θ   (k×1)

If L(θ) is concave, the maximum likelihood estimate is obtained by solving U(θ̂) = 0.
Consider expanding the score function evaluated at the MLE θ̂ around a trial value θ₀ using a first-order Taylor expansion:

  U(θ̂) ≈ U(θ₀) + [∂U(θ₀)/∂θ] (θ̂ − θ₀)

The Hessian matrix of L(θ) is:

  H(θ) = ∂²L(θ)/∂θ∂θ' = ∂U(θ)/∂θ   (k×k),  so H(θ₀) = ∂U(θ₀)/∂θ

Setting U(θ̂) = 0:

  U(θ₀) + H(θ₀)(θ̂ − θ₀) = 0
  → θ̂ = θ₀ − H⁻¹(θ₀) U(θ₀)
This result provides the basis for an iterative approach to computing the MLE, known as the Newton–Raphson method:
- Give a trial value θ₀.
- Use the equation to get an improved estimate and repeat the process with the new estimate.
- Stop when the differences between successive estimates are sufficiently close to zero (or when the elements of U(θ̂) are sufficiently close to zero).
This procedure tends to converge quickly if the log-likelihood function L(θ) is concave and if the starting value is reasonably close to the optimal value.
We can replace the Hessian matrix by the (negative of the) information matrix — the method of scoring:

  θ̂ = θ₀ + I⁻¹(θ₀) U(θ₀),  I(θ₀) = −E[∂²L/∂θ∂θ']

3. Marginal effects:
• After estimating β, we can get the estimated probability that the i-th observation equals 1:

  Logit:  Pr(y_i = 1 | X_i) = exp(X_i β̂)/(1 + exp(X_i β̂)),  i = 1,...,n
  Probit: Pr(y_i = 1 | X_i) = Φ(X_i β̂) = ∫_{−∞}^{X_i β̂} (1/√(2π)) e^{−z²/2} dz,  i = 1,...,n

• The coefficients from the probit and logit models are difficult to interpret because they measure the change in the unobservable y_i* associated with a change in one of the explanatory variables.
• A more useful measure is the marginal effect:

  ME_j = ∂Pr(y_i = 1 | X_i)/∂x_{ij} = F'(X_i β) β_j = f(X_i β) β_j

For the probit: ME_j = φ(X_i β) β_j.
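The iteration θ̂ = θ₀ − H⁻¹(θ₀)U(θ₀) can be sketched for a one-regressor logit, where score and Hessian are scalars and no matrix inversion is needed. This is a toy illustration under made-up data (the lecture itself works with the general k×1 case):

```python
import math

def newton_logit(x, y, beta0=0.0, tol=1e-10, max_iter=100):
    """Newton-Raphson for a one-parameter logit: beta <- beta - U/H."""
    beta = beta0
    for _ in range(max_iter):
        U, H = 0.0, 0.0
        for xi, yi in zip(x, y):
            F = 1.0 / (1.0 + math.exp(-beta * xi))
            U += (yi - F) * xi             # score
            H -= F * (1.0 - F) * xi * xi   # Hessian, always < 0 (concave ln L)
        step = U / H
        beta = beta - step
        if abs(step) < tol:
            break
    return beta

# Small illustrative sample; outcomes are not separated by x,
# so the MLE exists and is finite.
x = [1.0, -1.0, 2.0, -2.0, 1.0, -1.0]
y = [1, 0, 1, 1, 0, 0]
beta_hat = newton_logit(x, y)
```

Because the logit log-likelihood is globally concave, the iteration converges from essentially any starting value.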
• For the logit:

  ME_j = ∂Pr(y_i = 1 | X_i)/∂x_{ij} = [e^{X_i β}/(1 + e^{X_i β})²] β_j

i.e., the change in the probability that person i chooses y_i = 1 when x_j changes by one unit.
The variance of the estimated probability F̂ = F(X_i β̂) follows from the delta method, with ∂F̂/∂β̂ = f(X_i β̂) X_i':

  VarCov(F̂) = [∂F̂/∂β̂]' VarCov(β̂) [∂F̂/∂β̂]

4. Average Partial Effects:

  APE_j = E_x[ ∂Pr(y_i = 1 | X)/∂x_j ]

In practice:

  APE_j = (1/n) Σ_{i=1}^{n} f(X_i β̂) β̂_j,  j = 1,...,k

Let γ_{ij} = f(X_i β̂) β̂_j be the marginal effect of x_{ij} on the probability that person i takes the action. Then the average partial effect of x_j is:

  APE_j = γ̄_j = (1/n) Σ_{i=1}^{n} γ_{ij}

If x_{ij} changes by one unit, the average probability that an individual takes the action changes by APE_j.

Notes:  V = VarCov(β̂) = [I(β̂)]⁻¹ = {−E[∂²L(β̂)/∂β̂∂β̂']}⁻¹

  Var(γ̄_j) = (1/n) · [1/(n−1)] Σ_{i=1}^{n} (γ_{ij} − γ̄_j)²
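The APE formula above can be sketched for the logit, where the density has the convenient form f(X_i β) = F(1 − F). A minimal sketch with illustrative names and data:

```python
import math

def logit_ape(X, beta, j):
    """Average partial effect of regressor j in a logit model:
    APE_j = (1/n) sum_i f(x_i'b) b_j, with logistic density f = F(1 - F)."""
    total = 0.0
    for xi in X:
        xb = sum(b * xj for b, xj in zip(beta, xi))
        F = 1.0 / (1.0 + math.exp(-xb))
        total += F * (1.0 - F) * beta[j]
    return total / len(X)

# At x'b = 0 the logistic density is 0.25, so with b_j = 1 the
# partial effect for that single observation is 0.25.
ape = logit_ape([[0.0]], [1.0], 0)
```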
For the vector γ̄ = (γ̄₁, ..., γ̄_k)' of average partial effects, a delta-method approximation is:

  Var(γ̄) = f̄(Xβ̂) V f̄(Xβ̂)',  where f̄(Xβ̂) = (1/n) Σ_{i=1}^{n} f(X_i β̂)

and V is the estimated covariance matrix of β̂.

• Reporting estimation results for probit and logit: report the coefficients (SE), the marginal effects at X̄ (SE), and the average partial effects (SE).

5. Hypothesis tests:

Wald test: for a set of restrictions Rβ = q, the statistic is

  W = (Rβ̂ − q)' [R VarCov(β̂) R']⁻¹ (Rβ̂ − q) ~ χ²_[J]

  VarCov(β̂) = [I(β̂)]⁻¹ = {−E[∂²L(β̂)/∂β̂∂β̂']}⁻¹

Likelihood ratio test:

  LR = −2[ln L̂_R − ln L̂_U]

Example: to test that all the slope coefficients in the probit or logit model are zero, note that the restricted log-likelihood is the same for both models:

  L₀ = n[P̄ ln P̄ + (1 − P̄) ln(1 − P̄)]

where P̄ is the proportion of observations with y_i = 1.

Lagrange multiplier test:
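The restricted log-likelihood L₀ and the LR statistic are straightforward to compute; a small sketch (data and names illustrative):

```python
import math

def intercept_only_loglik(y):
    """ln L0 = n [ P ln P + (1 - P) ln(1 - P) ],  P = proportion of ones.
    This is the restricted log-likelihood when all slopes are zero,
    identical for probit and logit."""
    n = len(y)
    p = sum(y) / n
    return n * (p * math.log(p) + (1.0 - p) * math.log(1.0 - p))

def lr_stat(lnL_unrestricted, lnL_restricted):
    """LR = -2 (ln L_R - ln L_U), chi-squared with J degrees of freedom."""
    return -2.0 * (lnL_restricted - lnL_unrestricted)

y = [1, 0, 1, 0, 1, 1]            # P-bar = 2/3
lnL0 = intercept_only_loglik(y)
```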
  LM = G(β̂) VarCov(β̂) G'(β̂) ~ χ²_[J]
     (1×k)   (k×k)     (k×1)

where G(β̂) is the gradient evaluated at the restricted parameter vector β̂_R.

6. Specification tests for binary choice models:
We consider two important specification problems: the effect of omitted variables and the effect of heteroskedasticity.

In the classical model Y = X₁β₁ + X₂β₂ + ε, if we omit X₂:

  E[β̂₁] = β₁ + (X₁'X₁)⁻¹X₁'X₂β₂

Unless X₁ and X₂ are orthogonal or β₂ = 0, β̂₁ is a biased estimator; if they are orthogonal, omitting X₂ does no harm.

In the context of a binary choice model, we have:
a) If x₂ is omitted from a model containing x₁ and x₂:

  plim β̂₁ = c₁β₁ + c₂β₂

where c₁ and c₂ are complicated functions of the unknown parameters. The coefficient on the included variable will be inconsistent even if x₁ and x₂ are orthogonal (trouble).
b) If the disturbances are heteroskedastic, the maximum likelihood estimators are inconsistent and the covariance matrix is inappropriate (trouble).

The second result is particularly troublesome because probit and logit models are most often used with micro data, which are frequently heteroskedastic.

Test for omitted variables:

  H₀: y* = X₁β₁ + ε
  H₁: y* = X₁β₁ + X₂β₂ + ε

So the test is of the null hypothesis that β₂ = 0.
Test for heteroskedasticity:
Assume that the heteroskedasticity takes the form:

  Var(ε) = [e^{Zγ}]²  →  σ_i = e^{Z_i γ}

where Z is a subset of X. The model is y* = Xβ + ε with Var(ε|X) = [e^{Zγ}]². Then:

  ∂Pr(y_i = 1 | X)/∂x_k = φ(X_i β / e^{Z_i γ}) · [β_k − (X_i β) γ_k] / e^{Z_i γ}

The log-likelihood is:

  ln L = Σ_{i=1}^{n} { y_i ln F(X_i β / e^{Z_i γ}) + (1 − y_i) ln[1 − F(X_i β / e^{Z_i γ})] }

We can use the LM test constructed at the restriction γ = 0.

7. Measuring goodness of fit:
There are several ways to measure the goodness of fit.

a) Percentage correctly predicted: for each i, compute the predicted probability that y_i = 1:

  If Pr̂(y_i = 1 | X_i) > 0.5 → ŷ_i = 1;  otherwise ŷ_i = 0.

The percentage of times the predicted ŷ_i matches the actual y_i is the percentage correctly predicted.
We can compute the percentage correctly predicted for each outcome, y = 0 and y = 1, separately. The overall percentage correctly predicted is a weighted average of the two, with the weights being the fractions of zero and one outcomes.

b) Pseudo R-squared:

  Pseudo R² = 1 − ln L̂ / ln L̂₀

where L̂₀ is the value of the log-likelihood function in the model with only an intercept (i.e., under the hypothesis that all slope coefficients are zero) and L̂ is the value of the log-likelihood function of the estimated model. If all the slope coefficients are zero, the pseudo R-squared is zero. Since ln L̂₀ ≤ ln L̂ < 0, we always have:

  0 ≤ Pseudo R² < 1

  (ln L̂₀ = n[P̄ ln P̄ + (1 − P̄) ln(1 − P̄)])

V. BINARY CHOICE MODELS FOR PANEL DATA:

  y_it* = X_it β + ε_it,  i = 1,...,n;  t = 1,...,T_i
  y_it = 1 if y_it* > 0,  y_it = 0 if y_it* ≤ 0

1. Random Effects Models:
Specify ε_it = v_it + u_i, so:

  y_it* = X_it β + v_it + u_i,  y_it = 1 if y_it* > 0 and 0 otherwise

where v_it and u_i are independent random variables with:

  E(v_it | X) = 0;  Cov(v_it, v_js | X) = Var(v_it | X) = 1 if i = j and t = s, 0 otherwise
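Both fit measures are simple to compute once the predicted probabilities and log-likelihoods are in hand; a minimal sketch (function names and data are mine):

```python
def percent_correctly_predicted(p_hat, y):
    """Share of observations where the 0.5-threshold prediction matches y."""
    hits = sum(1 for p, yi in zip(p_hat, y) if (1 if p > 0.5 else 0) == yi)
    return hits / len(y)

def pseudo_r2(lnL, lnL0):
    """Pseudo R-squared: 1 - ln L / ln L0 (both log-likelihoods are negative)."""
    return 1.0 - lnL / lnL0

pcp = percent_correctly_predicted([0.9, 0.2, 0.6, 0.4], [1, 0, 0, 0])
# 3 of the 4 threshold predictions match the actual outcomes -> 0.75
```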
  E(u_i | X) = 0;  Cov(u_i, u_j | X) = Var(u_i | X) = σ_u² if i = j, 0 otherwise
  Cov(v_it, u_j | X) = 0 for all i, t, j

→ E(ε_it | X) = 0

  Var(ε_it | X) = σ_v² + σ_u² = 1 + σ_u²

  Corr(ε_it, ε_is | X) = ρ = σ_u² / (1 + σ_u²)  →  σ_u² = ρ/(1 − ρ)

In the cross-section case:

  P(y_i | X_i) = ∫_{−∞}^{−X_i β} f(ε_i) dε_i  if y_i = 0
  P(y_i | X_i) = ∫_{−X_i β}^{+∞} f(ε_i) dε_i  if y_i = 1

• The contribution of group i to the likelihood is the joint probability of all T_i observations:

  L_i = P(y_i1, ..., y_iT_i | X) = ∫_{L_i1}^{U_i1} ... ∫_{L_iT_i}^{U_iT_i} f(ε_i1, ε_i2, ..., ε_iT_i) dε_i1 dε_i2 ... dε_iT_i

Because the joint density factors as f(ε_i1, ..., ε_iT_i, u_i) = f(ε_i1, ..., ε_iT_i | u_i) f(u_i), we have:

  f(ε_i1, ..., ε_iT_i) = ∫_{−∞}^{+∞} [∏_{t=1}^{T_i} f(ε_it | u_i)] f(u_i) du_i

So:

  L_i = P(y_i1, ..., y_iT_i | X) = ∫_{−∞}^{+∞} [∏_{t=1}^{T_i} Prob(Y_it = y_it | X_it β + u_i)] f(u_i) du_i

Likelihood function:

  L = ∏_{i=1}^{n} L_i,  ln L = ln L₁ + ln L₂ + ... + ln L_n
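The one-dimensional integral defining L_i can be approximated numerically. Software typically uses Gauss–Hermite quadrature; the sketch below uses a plain trapezoid rule with a logit kernel just to show the structure (all names, grid sizes, and data are my own assumptions):

```python
import math

def normal_pdf(u, sd):
    return math.exp(-0.5 * (u / sd) ** 2) / (sd * math.sqrt(2.0 * math.pi))

def re_logit_group_lik(xb, y, sd_u, n_grid=2001, half_width=8.0):
    """L_i = integral over u of [ prod_t P(y_it | x_it'b + u) ] f(u) du,
    approximated by the trapezoid rule on [-8 sd_u, 8 sd_u]."""
    lo = -half_width * sd_u
    h = 2.0 * half_width * sd_u / (n_grid - 1)
    total = 0.0
    for g in range(n_grid):
        u = lo + g * h
        val = normal_pdf(u, sd_u)
        for xbt, yt in zip(xb, y):
            F = 1.0 / (1.0 + math.exp(-(xbt + u)))   # logit kernel
            val *= F if yt == 1 else 1.0 - F
        weight = 0.5 if g in (0, n_grid - 1) else 1.0
        total += weight * val
    return total * h

# With a tiny sd_u the common effect vanishes and the integral collapses
# to the pooled product of logit probabilities: P(1)P(0) at x'b = 0 is 0.25.
L_i = re_logit_group_lik([0.0, 0.0], [1, 0], sd_u=0.01)
```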
Assume that u_i is normally distributed and use MLE to estimate β.

2. Fixed Effects Models:
The fixed effects model is:

  y_it* = α_i d_it + X_it β + ε_it,  i = 1,...,n;  t = 1,...,T_i
  y_it = 1 if y_it* > 0,  y_it = 0 otherwise

where d_it is a dummy variable that takes the value one for individual i and zero otherwise.

  ln L = Σ_{i=1}^{n} Σ_{t=1}^{T_i} ln P(y_it | α_i + X_it β)

where P(·) is the probability of the observed outcome. For the logit:

  Prob(y_it = 1 | X_it) = F_it = e^{α_i + X_it β}/(1 + e^{α_i + X_it β})

Likelihood function:

  L = ∏_i ∏_t F_it^{y_it} (1 − F_it)^{1−y_it}

Reading: 15.8.2, 15.8.3.

3. Pooled Models:
Suppose the model is:

  P(y_it = 1 | X_it) = F(X_it β),  i = 1,...,n;  t = 1,...,T_i

Log-likelihood function:

  ln L = Σ_{i=1}^{n} Σ_{t=1}^{T_i} { y_it ln F(X_it β) + (1 − y_it) ln[1 − F(X_it β)] }