Advanced Econometrics - Part II - Chapter 3: Discrete choice analysis: Binary outcome models


Nam T. Hoang, UNE Business School, University of New England

Chapter 3
DISCRETE CHOICE ANALYSIS: BINARY OUTCOME MODELS

I. INTRODUCTION:

The simplest model in which the dependent variable takes discrete values is the model in which y is binary.

1. Discrete choice model: a model in which the dependent variable assumes discrete values.

Example:

    y_i = 1 if person i is employed in the labor force
        = 0 otherwise

Regardless of the definition of y, it is traditional to refer to y = 1 as a "success" and y = 0 as a "failure".

2. Basic types of discrete variables:

a) Dichotomous or binary variables: these take on a value of one or zero, depending on which of two possible results occurs.

b) Polychotomous variables: these take on a discrete number, greater than two, of possible values. Non-categorical example: y = {number of patents issued to a company during a year}.

c) Unordered variables: these are variables for which there is no natural ranking of the alternatives. Examples:

    y_i = 1 if person i is a lawyer
        = 2 if person i is a teacher
        = 3 if person i is a doctor
        = 4 if person i is a plumber

or, for a sample of commuters:

    y_i = 1 if person i drives to work
        = 2 if person i takes a bus
        = 3 if person i takes a train
        = 4 otherwise

We can define these dependent variables in any order desired → unordered categorical variables.

d) Ordered variables: with these variables, outcomes have a natural ranking.
Examples:

    y_i = 1 if person i is in poor health
        = 2 if person i is in good health
        = 3 if person i is in excellent health

    y_i = 1 if person i spent less than $1,000
        = 2 if person i spent $1,000 - $2,000
        = 3 if person i spent $2,000 - $4,000
        = 4 if person i spent more than $4,000

A special case of an ordered variable is a "sequential variable". This occurs when the second event is dependent on the first event, the third event is dependent on the previous two events, and so on:

    y_i = 1 if person i has not completed high school
        = 2 if person i has completed high school but not college
        = 3 if person i has a college degree but not a higher degree
        = 4 if person i has a professional degree

In marketing research, one often considers attitudes of preference that are measured on a scale of 1, 2, 3, 4, 5, for instance:

    y_i = 1 if intensely dislike
        = 2 if moderately dislike
        = 3 if neutral
        = 4 if moderately like
        = 5 if intensely like

II. THE PROBABILITY MODELS:

We assume that there is a latent variable y*_i such that:

    y*_i = X_i β + ε_i

y*_i is the additional utility that individual i would get by choosing y_i = 1 rather than y_i = 0. We do not observe y*_i, but we observe the variable y_i, which takes on the value 0 or 1 according to the rule:

    y_i = 1 if y*_i > 0
        = 0 otherwise

Then:

    Prob(y_i = 1) = Prob(y*_i > 0) = Prob(X_i β + ε_i > 0)
                  = Prob(ε_i > -X_i β) = 1 - Prob(ε_i < -X_i β)
                  = 1 - F(-X_i β) = F(X_i β)

where F is the cumulative distribution function of ε_i (assumed symmetric). Similarly:

    Prob(y_i = 0) = Prob(y*_i < 0) = Prob(X_i β + ε_i < 0)
                  = Prob(ε_i < -X_i β) = F(-X_i β) = 1 - F(X_i β)

The likelihood for the sample is the product of the probabilities of each observation:

    L = ∏_{y_i=0} F(-X_i β) · ∏_{y_i=1} [1 - F(-X_i β)]

The functional form of F will depend on the assumptions made about ε_i.
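As a quick check of the observation rule above, one can simulate the latent-variable model directly. The sketch below uses a hypothetical index value X_i β = 0.8 and standard normal ε_i (the probit case developed next); the fraction of simulated successes should approach F(X_i β) = Φ(0.8):

```python
import numpy as np
from math import erf, sqrt

def Phi(z):                        # standard normal CDF via erf
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

rng = np.random.default_rng(0)
n = 200_000
xb = 0.8                           # hypothetical value of X_i * beta
eps = rng.standard_normal(n)       # eps_i ~ N(0, 1)
y = (xb + eps > 0).astype(int)     # y_i = 1 iff y*_i = X_i*beta + eps_i > 0

p_hat = y.mean()                   # empirical Prob(y_i = 1)
p_theory = Phi(xb)                 # = 1 - F(-X_i*beta) = F(X_i*beta)
print(round(p_hat, 3), round(p_theory, 3))
```

With 200,000 draws the Monte Carlo error is well under a percentage point, so the empirical frequency and F(X_i β) agree closely.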
1. Probit Model:

If the cumulative distribution of ε_i is the normal distribution, i.e. ε_i ~ N(0, σ²), then we have the probit model:

    F(-X_i β) = Prob(ε_i < -X_i β) = Prob(ε_i/σ < -X_i β/σ) = Φ(-X_i β/σ) = 1 - Φ(X_i β/σ)

    1 - F(-X_i β) = Φ(X_i β/σ)

→ Likelihood function:

    L = ∏_{y_i=0} [1 - Φ(X_i β/σ)] · ∏_{y_i=1} Φ(X_i β/σ)
      = ∏_{i=1}^{n} [Φ(X_i β/σ)]^{y_i} [1 - Φ(X_i β/σ)]^{1-y_i}

where

    Φ(X_i β/σ) = ∫_{-∞}^{X_i β/σ} (1/√(2π)) e^{-z²/2} dz

is the standard normal cumulative distribution function. The log-likelihood function is:

    ln L(β/σ) = Σ_i { y_i ln Φ(X_i β/σ) + (1 - y_i) ln[1 - Φ(X_i β/σ)] }

Notice that ln L(β/σ) ≤ 0, because ln Φ(·) ≤ 0 and ln[1 - Φ(·)] ≤ 0.

Another important feature of the likelihood function is that the parameters β and σ always appear together: only the ratio β/σ matters. It is therefore convenient to normalize σ to one, so that we can talk just about β.

First derivatives:

    ∂ ln L(β)/∂β = Σ_{i=1}^{n} [ y_i φ(X_i β)/Φ(X_i β) - (1 - y_i) φ(X_i β)/(1 - Φ(X_i β)) ] X_i

2. The Logit Model:

If the cumulative distribution of ε_i is the logistic, we have the logit model, with probability density function

    f(ε) = e^ε / (1 + e^ε)²

and CDF

    F(ε) = e^ε / (1 + e^ε)
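The probit log-likelihood and its first-derivative formula can be verified numerically. A minimal sketch with made-up data (the design matrix, outcomes, and β below are arbitrary illustrations, not estimates), comparing the analytic score against a finite-difference gradient:

```python
import numpy as np
from math import erf, sqrt, pi

def Phi(z): return 0.5 * (1.0 + erf(z / sqrt(2.0)))        # standard normal CDF
def phi(z): return np.exp(-z * z / 2.0) / sqrt(2.0 * pi)   # its density

def loglik(beta, X, y):
    z = X @ beta
    p = np.array([Phi(zi) for zi in z])
    return float(np.sum(y * np.log(p) + (1 - y) * np.log(1 - p)))

def score(beta, X, y):
    z = X @ beta
    p = np.array([Phi(zi) for zi in z])
    w = y * phi(z) / p - (1 - y) * phi(z) / (1 - p)        # bracketed term
    return X.T @ w                                          # sum of w_i * X_i

X = np.array([[1.0, 0.5], [1.0, -1.2], [1.0, 2.0], [1.0, 0.1]])
y = np.array([1.0, 0.0, 1.0, 0.0])
b = np.array([0.2, 0.3])

num = np.array([(loglik(b + 1e-6 * e, X, y) - loglik(b - 1e-6 * e, X, y)) / 2e-6
                for e in np.eye(2)])                        # central differences
print(np.allclose(score(b, X, y), num, atol=1e-5))
```

The log-likelihood is negative, as the text notes, and the analytic derivative matches the numerical one.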
Then:

    F(-X_i β) = e^{-X_i β}/(1 + e^{-X_i β}) = 1/(1 + exp(X_i β))

    1 - F(-X_i β) = exp(X_i β)/(1 + exp(X_i β))

Likelihood function:

    L(β) = ∏_{y_i=1} [exp(X_i β)/(1 + exp(X_i β))] · ∏_{y_i=0} [1/(1 + exp(X_i β))]
         = ∏_{i=1}^{n} [exp(X_i β)/(1 + exp(X_i β))]^{y_i} [1/(1 + exp(X_i β))]^{1-y_i}
         = exp(Σ_i y_i X_i β) / ∏_{i=1}^{n} [1 + exp(X_i β)]

Log-likelihood function:

    ln L(β) = Σ_{i=1}^{n} y_i ln[exp(X_i β)/(1 + exp(X_i β))] + Σ_{i=1}^{n} (1 - y_i) ln[1/(1 + exp(X_i β))]

with derivative:

    ∂ ln L/∂β = Σ_i [ y_i - exp(X_i β)/(1 + exp(X_i β)) ] X_i

3. Solving the maximum likelihood function:

Denote by φ(·) the density function of the standard normal. For the probit model:

    ln L(β) = Σ_{i=1}^{n} y_i ln Φ(X_i β) + Σ_{i=1}^{n} (1 - y_i) ln[1 - Φ(X_i β)]

    S(β) = ∂ ln L/∂β = Σ_{i=1}^{n} { [y_i - Φ(X_i β)] φ(X_i β) / (Φ(X_i β)[1 - Φ(X_i β)]) } X_i

The ML estimator β̂_ML can be obtained as a solution of the equation S(β) = 0. These equations are nonlinear in β, thus we have to solve them by an iterative procedure. The information matrix is:

    I(β) = -E[∂² ln L/∂β∂β'] = Σ_{i=1}^{n} { [φ(X_i β)]² / (Φ(X_i β)[1 - Φ(X_i β)]) } X_i X_i'

We start with some initial value of β, say β₀, compute the values S(β₀) and I(β₀), and obtain a new estimate of β by the method of scoring.

For the logit model:

    ln L(β) = Σ_{i=1}^{n} y_i X_i β - Σ_{i=1}^{n} ln[1 + exp(X_i β)]

    S(β) = ∂ ln L/∂β = Σ_{i=1}^{n} X_i [ y_i - exp(X_i β)/(1 + exp(X_i β)) ] = 0

These equations are nonlinear → we use the Newton-Raphson method to solve them. The information matrix is:

    I(β) = -E[∂² ln L/∂β∂β'] = Σ_{i=1}^{n} { exp(X_i β)/[1 + exp(X_i β)]² } X_i X_i'

→ starting with some initial value of β, say β₀, iterate.
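For the logit model the Hessian does not involve y_i, so the information matrix equals minus the Hessian exactly and a scoring step coincides with a Newton-Raphson step. A sketch on simulated data (made-up design and coefficients) checking that the information matrix is positive definite and that one scoring update raises the log-likelihood:

```python
import numpy as np

def lam(z): return 1.0 / (1.0 + np.exp(-z))            # logistic CDF

def loglik(b, X, y):
    z = X @ b
    return float(np.sum(y * z - np.log1p(np.exp(z))))  # sum y_i X_i b - ln(1 + e^{X_i b})

def score(b, X, y):
    return X.T @ (y - lam(X @ b))                      # S(b) = sum X_i (y_i - Lambda_i)

def info(b, X):
    w = lam(X @ b) * (1.0 - lam(X @ b))                # e^{Xb}/(1 + e^{Xb})^2
    return X.T @ (w[:, None] * X)                      # I(b) = sum w_i X_i X_i'

rng = np.random.default_rng(1)
X = np.column_stack([np.ones(50), rng.standard_normal(50)])
y = (rng.random(50) < lam(0.5 + 0.8 * X[:, 1])).astype(float)

b0 = np.zeros(2)
b1 = b0 + np.linalg.solve(info(b0, X), score(b0, X, y))  # one method-of-scoring update
print(loglik(b1, X, y) > loglik(b0, X, y))
```

Since I(β) is positive definite, -H is as well, which is the global concavity claimed in the text.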
4. The linear probability model:

Assume:

    y_i = X_i β + ε_i = β₁ + β₂ X_{2i} + β₃ X_{3i} + ... + β_k X_{ki} + ε_i

where y_i takes values 0 or 1 and the X_{ji} are non-stochastic, with E(ε_i) = 0, so that:

    E(y_i) = β₁ + β₂ X_{2i} + ... + β_k X_{ki}

Since

    E(y_i) = 0 · P(y_i = 0) + 1 · P(y_i = 1) = P(y_i = 1)

we have (using y_i² = y_i for a 0/1 variable):

    E(ε_i²) = E{[y_i - E(y_i)]²} = E(y_i²) - [E(y_i)]² = E(y_i) - [E(y_i)]²
            = P(y_i = 1)[1 - P(y_i = 1)]

→ the variance varies with i → heteroskedasticity.

Problems of the linear probability model:
- Heteroskedasticity → OLS will not be efficient.
- In some cases ŷ_i > 1 or ŷ_i < 0 → difficulties of interpretation (we cannot constrain X β to the (0, 1) interval).
- In many cases E(y_i | X_i) can be outside the limits (0, 1).

III. PROBIT vs LOGIT:

Why do we choose the logistic distribution? → Simplicity: the equation of the logistic CDF is very simple, while the normal CDF involves an unevaluated integral (for multivariate problems this will be important). In addition:

    f(x) = e^x / (1 + e^x)²,    F(x) = e^x / (1 + e^x)

    Prob(ε_i < X_i β) = F(X_i β) = ∫_{-∞}^{X_i β} f(x) dx

Properties of the logistic CDF F(x):

    lim_{x→+∞} F(x) = 1,    lim_{x→-∞} F(x) = 0

F(·) is continuous and strictly monotonic: x₁ < x₂ → F(x₁) < F(x₂).

- The logistic distribution is almost the same as the normal distribution.
- It is symmetric.

Normal distribution:

    f(x) = (1/(σ√(2π))) e^{-(x-μ)²/(2σ²)},    F(x) = ∫_{-∞}^{x} f(t) dt

Standard normal:

    f(x) = (1/√(2π)) e^{-x²/2},    F(x) = Φ(x) = ∫_{-∞}^{x} (1/√(2π)) e^{-t²/2} dt

IV. ESTIMATION AND INFERENCE IN BINARY CHOICE MODELS:

1. Likelihood function:

- Estimation of binary choice models is usually based on the method of maximum likelihood.
- Each observation is treated as a single draw from a Bernoulli distribution. Suppose F is a symmetric distribution.
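The closeness of the two distributions can be checked numerically. Because the logistic distribution has variance π²/3, its index must be rescaled before comparing it with the standard normal; the factor 1.6 used below is a conventional rule of thumb, not an exact constant:

```python
import numpy as np
from math import erf, sqrt

def Phi(z): return 0.5 * (1.0 + erf(z / sqrt(2.0)))   # standard normal CDF
def Lam(z): return 1.0 / (1.0 + np.exp(-z))           # logistic CDF

grid = np.linspace(-4.0, 4.0, 801)
gap = max(abs(Lam(1.6 * x) - Phi(x)) for x in grid)   # logistic index scaled by ~1.6

print(Lam(0.0), Phi(0.0))   # both symmetric around 0: F(0) = 0.5
print(round(gap, 3))        # the rescaled curves stay within ~0.02 everywhere
```

This is why probit and logit fits are usually very similar, with logit coefficients roughly 1.6 times the probit ones.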
Then:

    L = ∏_{y_i=0} F(-X_i β) · ∏_{y_i=1} [1 - F(-X_i β)]
      = ∏_{y_i=0} [1 - F(X_i β)] · ∏_{y_i=1} F(X_i β)
      = ∏_{i=1}^{n} [F(X_i β)]^{y_i} [1 - F(X_i β)]^{1-y_i}

    ln L = Σ_{i=1}^{n} { y_i ln F(X_i β) + (1 - y_i) ln[1 - F(X_i β)] }

The likelihood equations are:

    ∂ ln L/∂β = Σ_{i=1}^{n} [ y_i f_i/F_i - (1 - y_i) f_i/(1 - F_i) ] X_i = 0    (k×1)

where f_i is the density:

    f_i = dF(X_i β)/d(X_i β) = f(X_i β)

The likelihood equations are nonlinear and require a numerical solution. For the logit model:

    ∂ ln L/∂β = Σ_{i=1}^{n} [ y_i - e^{X_i β}/(1 + e^{X_i β}) ] X_i = 0    (k×1)

If X_i contains a constant term (a column of ones), the corresponding first-order condition gives:

    Σ_{i=1}^{n} y_i = Σ_{i=1}^{n} e^{X_i β}/(1 + e^{X_i β})

→ The average of the predicted probabilities must equal the proportion of ones in the sample.

For the normal distribution (probit):

    ln L = Σ_{y_i=0} ln[1 - Φ(X_i β)] + Σ_{y_i=1} ln Φ(X_i β)

    ∂ ln L/∂β = Σ_{y_i=0} [ -φ(X_i β)/(1 - Φ(X_i β)) ] X_i + Σ_{y_i=1} [ φ(X_i β)/Φ(X_i β) ] X_i

Let q_i = 2y_i - 1. Then the likelihood equations can be written compactly as:

    ∂ ln L/∂β = Σ_{i=1}^{n} λ_i X_i = 0,    λ_i = q_i φ(q_i X_i β)/Φ(q_i X_i β)

- The second derivatives for the logit model are simple:

    H = ∂² ln L/∂β∂β' = -Σ_{i=1}^{n} [ e^{X_i β}/(1 + e^{X_i β}) ] [ 1/(1 + e^{X_i β}) ] X_i X_i'

The Hessian matrix H is always negative definite, so the log-likelihood is globally concave.
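The property that the average predicted probability equals the sample proportion of ones holds exactly at the logit MLE when X contains a constant. A sketch (made-up data-generating process) that fits the logit by Newton-Raphson and checks the first-order condition for the constant term:

```python
import numpy as np

def lam(z): return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(2)
n = 500
X = np.column_stack([np.ones(n), rng.standard_normal(n)])  # constant + one regressor
y = (rng.random(n) < lam(-0.3 + 1.0 * X[:, 1])).astype(float)

b = np.zeros(2)
for _ in range(25):                                        # Newton-Raphson iterations
    p = lam(X @ b)
    g = X.T @ (y - p)                                      # score
    H = -X.T @ ((p * (1 - p))[:, None] * X)                # Hessian (negative definite)
    b = b - np.linalg.solve(H, g)

print(round(lam(X @ b).mean(), 6), round(y.mean(), 6))     # the two must agree
```

At convergence the score for the intercept is Σ(y_i - p̂_i) = 0, which forces the two means to coincide.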
For the probit model:

    H = ∂² ln L/∂β∂β' = -Σ_{i=1}^{n} λ_i (λ_i + X_i β) X_i X_i'

This matrix is also negative definite for all values of β → ln L for the probit is also globally concave.

2. Newton-Raphson method for calculating the MLE:

Suppose we need to maximize L(θ), with θ a (k×1) vector. Define the score function:

    U(θ) = ∂L(θ)/∂θ    (k×1)

If L(θ) is concave, the maximum likelihood estimator is obtained by solving:

    U(θ̂) = 0

Consider expanding the score function evaluated at the MLE θ̂ around a trial value θ₀ using a first-order Taylor expansion:

    U(θ̂) ≈ U(θ₀) + [∂U(θ₀)/∂θ](θ̂ - θ₀)

The Hessian matrix of L(θ) is:

    H(θ) = ∂²L(θ)/∂θ∂θ' = ∂U(θ)/∂θ    (k×k)

Setting U(θ̂) = 0:

    0 = U(θ₀) + H(θ₀)(θ̂ - θ₀)
    → θ̂ = θ₀ - [H(θ₀)]⁻¹ U(θ₀)

This result provides the basis for an iterative approach to computing the MLE known as the Newton-Raphson method. Given a trial value θ₀:
- Use the equation to get an improved estimate and repeat the process with the new estimate.
- Stop when the differences between successive estimates are sufficiently close to zero (or when the elements of U(θ̂) are sufficiently close to zero).

This procedure tends to converge quickly if the log-likelihood function L(θ) is concave and if the starting value is reasonably close to the optimal value. We can replace the Hessian matrix by the information matrix (the method of scoring):

    θ̂ = θ₀ + [I(θ₀)]⁻¹ U(θ₀),    I(θ₀) = -E[∂²L(θ)/∂θ∂θ']

3. Marginal effects:

- After estimating β, we can get the estimated value of the probability that the i-th observation equals 1, Pr(y_i = 1).
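The update θ̂ = θ₀ - H(θ₀)⁻¹U(θ₀) can be illustrated on a one-parameter problem with a known answer: the Bernoulli log-likelihood L(θ) = s ln θ + (n - s) ln(1 - θ), whose MLE is s/n. A minimal sketch:

```python
# Bernoulli log-likelihood: L(theta) = s*ln(theta) + (n-s)*ln(1-theta); MLE = s/n
s, n = 30, 100

def U(t): return s / t - (n - s) / (1 - t)            # score dL/dtheta
def H(t): return -s / t**2 - (n - s) / (1 - t)**2     # Hessian d2L/dtheta2 (< 0: concave)

theta = 0.5                                           # trial value theta_0
for _ in range(20):
    step = -U(theta) / H(theta)                       # theta_new = theta - U/H
    theta += step
    if abs(step) < 1e-12:                             # stop when updates are tiny
        break

print(round(theta, 6))   # converges to s/n = 0.3
```

Because H(θ) < 0 everywhere on (0, 1), the log-likelihood is globally concave and the iteration converges rapidly, exactly as the text describes.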
Logit:

    Pr(y_i = 1 | X_i) = exp(X_i β̂)/(1 + exp(X_i β̂)),    i = 1, ..., n

Probit:

    Pr(y_i = 1 | X_i) = Φ(X_i β̂) = ∫_{-∞}^{X_i β̂} (1/√(2π)) e^{-z²/2} dz,    i = 1, ..., n

- The coefficients of the probit and logit models are difficult to interpret because they measure the change in the unobservable y*_i associated with a change in one of the explanatory variables.
- A more useful measure is what we call the marginal effects:

    ME_j = ∂ Pr(y_i = 1 | X_i)/∂x_{ij} = ∂F(X_i β)/∂x_{ij} = F'(X_i β) β_j = f(X_i β) β_j

For the probit model:

    ME_j = φ(X_i β) β_j

→ a one-unit change in x_j changes the probability that person i chooses y_i = 1 by ME_j.

- For the logit model:

    ME_j = ∂ Pr(y_i = 1 | X_i)/∂x_{ij} = { e^{X_i β}/[1 + e^{X_i β}]² } β_j

Evaluated at the sample mean X̄, the estimated marginal effect vector is F̂' = f(X̄ β̂) β̂, with variance obtained by the delta method:

    VarCov(F̂') = [∂F̂'/∂β̂]' VarCov(β̂) [∂F̂'/∂β̂]

These are partial effects.

4. Average Partial Effects:

    APE_x = E[ ∂ Pr(y_i = 1 | X)/∂x ]

In practice:

    APE_j = (1/n) Σ_{i=1}^{n} f(X_i β̂) β̂_j,    j = 1, ..., k

Let γ_{ij} = f(X_i β̂) β̂_j, the marginal effect of x_{ij} on the probability that person i takes the action. Then APE_j (the average partial effect of x_{ij}), for j = 1, ..., k, is:

    APE_j = (1/n) Σ_{i=1}^{n} f(X_i β̂) β̂_j = (1/n) Σ_{i=1}^{n} γ_{ij} = γ̄_j

If x_{ij} changes by one unit, the average probability that an individual takes the action changes by APE_j.

Notes:

    V = VarCov(β̂) = [Î(β̂)]⁻¹ = { -E[∂² ln L/∂β̂∂β̂'] }⁻¹

    Var(γ̄_j) = (1/n) · (1/(n-1)) Σ_{i=1}^{n} (γ_{ij} - γ̄_j)²
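A sketch computing logit marginal effects and the APE (the coefficients below are assumed for illustration, not estimated), with the analytic formula ME_j = Λ(X_i β)[1 - Λ(X_i β)]β_j checked against a numerical derivative:

```python
import numpy as np

def lam(z): return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(3)
X = np.column_stack([np.ones(200), rng.standard_normal(200)])
b = np.array([-0.2, 0.7])                    # illustrative "estimated" coefficients

p = lam(X @ b)
me_i = p * (1 - p) * b[1]                    # ME for x_2, observation by observation
ape = me_i.mean()                            # APE_j = (1/n) sum_i f(X_i b) b_j

# numerical check at observation 0: d Pr(y=1)/d x_2 by a finite difference
h = 1e-6
Xp = X[0].copy(); Xp[1] += h
num = (lam(Xp @ b) - lam(X[0] @ b)) / h

print(np.isclose(me_i[0], num, atol=1e-5), round(ape, 4))
```

Note that each ME is bounded by 0.25·β_j for the logit, since Λ(1 - Λ) ≤ 1/4.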
By the delta method, the variance of the APE vector can be written as:

    Var(γ̂) = (1/n²) Σ_{i=1}^{n} Σ_{j=1}^{n} f(X_i β̂) V [f(X_j β̂)]'

where V is the estimated covariance matrix of β̂. Equivalently:

    Var(γ̂) = f̄(Xβ̂) V [f̄(Xβ̂)]',    f̄(Xβ̂) = (1/n) Σ_{i=1}^{n} f(X_i β̂)

with γ = (γ₁, γ₂, ..., γ_k)'.

- Reporting estimation results for probit and logit: for each variable, report the coefficient (SE), the marginal effect evaluated at X̄ (SE), and the APE (SE).

5. Hypothesis tests:

Wald test: for a set of restrictions Rβ = q, the statistic is:

    W = (Rβ̂ - q)' { R VarCov(β̂) R' }⁻¹ (Rβ̂ - q) ~ χ²[J]

    VarCov(β̂) = [Î(β̂)]⁻¹ = { -E[∂² ln L/∂β̂∂β̂'] }⁻¹

Likelihood ratio test:

    LR = -2[ln L̂_R - ln L̂_U]

Example: to test that all the slope coefficients in the probit or logit model are zero, note that the restricted log-likelihood is the same for both probit and logit models:

    ln L₀ = n[P ln P + (1 - P) ln(1 - P)]

where P is the proportion of observations with y_i = 1.

Lagrange multiplier test:

    LM = G(β̂)' VarCov(β̂) G(β̂) ~ χ²[J]

where G(β̂) = ∂ ln L/∂β (k×1) is the score evaluated at the restricted parameter vector β̂.

6. Specification tests for binary choice models:

We consider two important specification problems: the effect of omitted variables and the effect of heteroskedasticity.

In the classical regression model:

    Y = X₁β₁ + X₂β₂ + ε

if we omit X₂:

    E[β̂₁] = β₁ + (X₁'X₁)⁻¹X₁'X₂ β₂

Unless X₁ and X₂ are orthogonal or β₂ = 0, β̂₁ is a biased (and inconsistent) estimator; but omitting an orthogonal regressor does no harm. In the context of a binary choice model, however:

a) If x₂ is omitted from a model containing x₁ and x₂, then:

    plim β̂₁ = c₁β₁ + c₂β₂

where c₁, c₂ are complicated functions of the unknown parameters → the coefficient on the included variable is inconsistent even when the omitted variable is orthogonal to x₁ (trouble).
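The closed form for ln L₀ follows because the intercept-only MLE predicts p̂ = P for every observation. A sketch (made-up data) computing ln L₀ and the LR statistic for the hypothesis that the slope is zero:

```python
import numpy as np
from math import log

def lam(z): return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(4)
n = 400
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
y = (rng.random(n) < lam(0.2 + 0.9 * X[:, 1])).astype(float)

def loglik(b):
    z = X @ b
    return float(np.sum(y * z - np.log1p(np.exp(z))))

# restricted model (slope zero): phat = P for every i, closed-form lnL0
P = y.mean()
lnL0 = n * (P * log(P) + (1 - P) * log(1 - P))

# unrestricted model: fit the logit by Newton-Raphson
b = np.zeros(2)
for _ in range(25):
    p = lam(X @ b)
    b = b + np.linalg.solve(X.T @ ((p * (1 - p))[:, None] * X), X.T @ (y - p))

LR = -2.0 * (lnL0 - loglik(b))      # LR = -2(lnL_R - lnL_U), nonnegative
print(LR > 0.0)
```

The intercept-only logit with β₁ = ln[P/(1 - P)] attains exactly ln L₀, which the test below verifies.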
b) If the disturbances in the regression are heteroskedastic, then the maximum likelihood estimators are inconsistent and the covariance matrix is inappropriate (trouble).

The second result is particularly troublesome because probit and logit models are most often used with micro data, which are frequently heteroskedastic.

Test for omitted variables:

    H₀: y* = X₁β₁ + ε
    H₁: y* = X₁β₁ + X₂β₂ + ε

So the test is of the null hypothesis that β₂ = 0.

Test for heteroskedasticity:

Assume that the heteroskedasticity takes the form:

    Var(ε) = [e^{Zγ}]²    →    σ = e^{Zγ}

where Z is a subset of X. The model is:

    y* = Xβ + ε,    Var(ε | X) = [e^{Zγ}]²

    → ∂ Pr(y_i = 1 | X)/∂x_k = φ(Xβ/e^{Zγ}) [ β_k/e^{Zγ} - (Xβ) γ_k/e^{Zγ} ]

(for a variable x_k appearing in both X and Z). The log-likelihood is:

    ln L = Σ_{i=1}^{n} { y_i ln F(X_i β/e^{Z_i γ}) + (1 - y_i) ln[1 - F(X_i β/e^{Z_i γ})] }

We can use the LM test constructed at the restricted estimate with γ = 0.

7. Measuring goodness of fit:

There are several ways to measure goodness of fit.

a) Percentage correctly predicted: for each i, compute the predicted probability that y_i = 1:

    if Pr(y_i = 1 | X_i) > .5 → ŷ_i = 1
    if Pr(y_i = 1 | X_i) < .5 → ŷ_i = 0

The percentage of times the predicted ŷ_i matches the actual y_i is the percentage correctly predicted.
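A minimal sketch of the prediction rule on made-up fitted probabilities, also verifying the weighted-average decomposition of the overall hit rate:

```python
import numpy as np

# illustrative fitted probabilities and actual outcomes (made up)
p_hat = np.array([0.9, 0.2, 0.6, 0.4, 0.7, 0.3])
y     = np.array([1,   0,   0,   1,   1,   0  ])

y_pred = (p_hat > 0.5).astype(int)             # yhat_i = 1 iff Pr(y_i = 1) > .5
overall = (y_pred == y).mean()                 # overall percentage correctly predicted

correct_ones  = (y_pred[y == 1] == 1).mean()   # hit rate among y = 1
correct_zeros = (y_pred[y == 0] == 0).mean()   # hit rate among y = 0

# overall rate = weighted average, weights = fractions of ones and zeros
check = (y == 1).mean() * correct_ones + (y == 0).mean() * correct_zeros
print(overall, check)
```

Reporting the two outcome-specific hit rates separately is informative when the sample is unbalanced, since a model can score well overall while predicting the rarer outcome poorly.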
We can compute the percentage correctly predicted for each outcome, y = 0 and y = 1. The overall percentage correctly predicted is a weighted average of the two, with the weights being the fractions of zero and one outcomes.

b) Pseudo R-squared:

    Pseudo R² = 1 - ln L̂ / ln L̂₀

where ln L̂₀ is the value of the log-likelihood function of the model with only an intercept (i.e. under the hypothesis that all slope coefficients are zero) and ln L̂ is the value of the log-likelihood function of the estimated model. If all the slope coefficients are zero, then the pseudo R-squared is zero. Since ln L̂₀ < ln L̂ < 0 (both log-likelihoods are negative and ln L̂ > ln L̂₀), we always have:

    0 < Pseudo R² < 1

with

    ln L̂₀ = n[P ln P + (1 - P) ln(1 - P)]

V. BINARY CHOICE MODELS FOR PANEL DATA:

    y*_it = X_it β + ε_it,    X_it (1×k), β (k×1)

    y_it = 1 if y*_it > 0
    y_it = 0 if y*_it ≤ 0

for i = 1, 2, ..., n and t = 1, 2, ..., T_i.

1. Random Effects Models:

Specify:

    ε_it = v_it + u_i    →    y*_it = X_it β + v_it + u_i

with y_it = 1 if y*_it > 0 and 0 otherwise, where v_it and u_i are independent random variables with:

    E(v_it | X) = 0,    Cov(v_it, v_js | X) = Var(v_it | X) = 1 if i = j and t = s, 0 otherwise
    E(u_i | X) = 0,     Cov(u_i, u_j | X) = Var(u_i | X) = σ_u² if i = j, 0 otherwise
    Cov(v_it, u_j | X) = 0 for all i, t, j

Then:

    E(ε_it | X) = 0
    Var(ε_it | X) = σ_v² + σ_u² = 1 + σ_u²
    ρ = Corr(ε_it, ε_is | X) = σ_u²/(1 + σ_u²)    →    σ_u² = ρ/(1 - ρ)

In the cross-section case:

    P(y_i | X_i) = ∫_{L_i}^{U_i} f(ε_i) dε_i

with limits (-∞, -X_i β) if y_i = 0 and (-X_i β, +∞) if y_i = 1.

- The contribution of group i to the likelihood is the joint probability of all T_i observations:

    L_i = P(y_i1, ..., y_iT_i) = ∫_{L_i1}^{U_i1} ... ∫_{L_iT_i}^{U_iT_i} f(ε_i1, ..., ε_iT_i) dε_i1 ... dε_iT_i

Because the joint density can be written as:

    f(ε_i1, ..., ε_iT_i) = ∫_{-∞}^{+∞} [ ∏_{t=1}^{T_i} f(ε_it | u_i) ] f(u_i) du_i

we have:

    L_i = ∫_{-∞}^{+∞} [ ∏_{t=1}^{T_i} Prob(Y_it = y_it | X_it β + u_i) ] f(u_i) du_i

The likelihood function is:

    L = ∏_{i=1}^{n} L_i,    ln L = Σ_{i=1}^{n} ln L_i = ln L₁ + ln L₂ + ... + ln L_n

Assume that u_i is normally distributed → use MLE to estimate β.
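The group-level integral over u_i generally has no closed form; software approximates it with Gauss-Hermite quadrature, but a simple Monte Carlo average over draws of u_i illustrates the idea. All values below (index values, outcomes, σ_u) are made up for one group with T_i = 3:

```python
import numpy as np
from math import erf, sqrt

def Phi(z): return 0.5 * (1.0 + erf(z / sqrt(2.0)))   # standard normal CDF

# one group i with T_i = 3 observations (made-up index values and outcomes)
xb = np.array([0.4, -0.2, 1.0])                       # X_it * beta
y  = np.array([1, 0, 1])
sigma_u = 0.8                                         # sd of the random effect u_i

rng = np.random.default_rng(5)
u = sigma_u * rng.standard_normal(50_000)             # draws of u_i ~ N(0, sigma_u^2)

def group_lik(u_draws):
    """L_i = E_u[ prod_t P(y_it | u) ], with P = Phi(X_it b + u) if y_it = 1."""
    prod = np.ones_like(u_draws)
    for xbt, yt in zip(xb, y):
        p = np.array([Phi(xbt + ui) for ui in u_draws])
        prod *= p if yt == 1 else (1.0 - p)
    return prod.mean()

Li = group_lik(u)
print(0.0 < Li < 1.0)
```

As a sanity check, with σ_u = 0 (a single draw u = 0) the integral collapses to the product of independent probit probabilities, which the test below verifies.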
2. Fixed Effects Models:

The fixed effects model is:

    y*_it = α_i d_it + X_it β + ε_it,    i = 1, 2, ..., n;  t = 1, 2, ..., T_i

    y_it = 1 if y*_it > 0, 0 otherwise

where d_it is a dummy variable that takes the value one for individual i and zero otherwise. Then:

    ln L = Σ_{i=1}^{n} Σ_{t=1}^{T_i} ln P(y_it | α_i + X_it β)

where P(·) is the probability of the observed outcome. For the logit model:

    Prob(y_it = 1 | X_it) = e^{α_i + X_it β}/(1 + e^{α_i + X_it β}) = F_it

Likelihood function:

    L = ∏_i ∏_t F_it^{y_it} (1 - F_it)^{1-y_it}

Reading: 15.8.2, 15.8.3.

3. Pooled Models:

Suppose the model is:

    P(y_it = 1 | X_it) = F(X_it β),    i = 1, 2, ..., n;  t = 1, 2, ..., T_i

Log-likelihood function:

    ln L = Σ_{i=1}^{n} Σ_{t=1}^{T_i} { y_it ln F(X_it β) + (1 - y_it) ln[1 - F(X_it β)] }

Matlab programming work: simulate the model with

    β = (β_IQ, β_ME, β_FE)' = (0.045, 0.06, 0.04)',    ε ~ N(0, 1)
