Bad control

In statistics, bad controls are variables that introduce an unintended discrepancy between regression coefficients and the effects those coefficients are supposed to measure. They are contrasted with confounders, which are "good controls" that need to be included to remove omitted-variable bias. The problem arises when a bad control is an outcome variable (or close to one) in a causal model, so that adjusting for it eliminates part of the causal path of interest. In other words, bad controls might as well be dependent variables in the model under consideration. Angrist and Pischke (2008) further distinguish two types of bad controls: a simple bad-control scenario, and a proxy-control scenario in which the included variable partially controls for omitted factors but is itself partially affected by the variable of interest. Pearl (1995) provides a graphical method for determining good controls using causal diagrams together with the back-door and front-door criteria.
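
As a concrete illustration of the confounder ("good control") case, the following minimal sketch (the variable names and effect sizes are illustrative assumptions, not from the source; plain least squares with numpy) shows how omitting a confounder biases the coefficient of interest, while adjusting for it recovers the assumed true effect:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    # Assumed data-generating process with a confounder C: C -> X, C -> Y,
    # plus a direct effect X -> Y of size 1.0 (all numbers are illustrative).
    C = rng.normal(size=n)
    X = 0.8 * C + rng.normal(size=n)
    Y = 1.0 * X + 1.5 * C + rng.normal(size=n)

    def ols(y, *regressors):
        # Ordinary least squares with an intercept; returns the coefficient vector.
        Z = np.column_stack([np.ones_like(y)] + list(regressors))
        return np.linalg.lstsq(Z, y, rcond=None)[0]

    print("Y ~ X     :", ols(Y, X)[1])     # ~1.7: omitted-variable bias from C
    print("Y ~ X + C :", ols(Y, X, C)[1])  # ~1.0: the confounder is a good control

The examples below show the opposite situation, where adding a control worsens rather than removes the bias.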


Examples

= Simple bad control =

A simplified example studies the effect of education on wages W. In this thought experiment, two levels of education E are possible (lower and higher) and two types of jobs T are performed (white-collar and blue-collar work). When considering the causal effect of education on an individual's wages, it might be tempting to control for the work type T. However, work type is a mediator (E → T → W) in the causal relationship between education and wages (see causal diagram), so controlling for it precludes causal inference about the total effect from the regression coefficients.
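
A minimal simulation of this thought experiment (the 0/1 codings and effect sizes below are illustrative assumptions, not from the source) shows the consequence: regressing wages on education alone recovers the total effect, while adding the mediating job type as a regressor leaves only the direct effect:

    import numpy as np

    rng = np.random.default_rng(1)
    n = 100_000

    # Assumed data-generating process for E -> T -> W plus a direct E -> W path.
    E = rng.integers(0, 2, size=n)                      # education: 0 = lower, 1 = higher
    T = (rng.random(n) < 0.2 + 0.6 * E).astype(float)   # job type: 1 = white-collar
    W = 1.0 * E + 2.0 * T + rng.normal(size=n)          # wages

    def ols(y, *regressors):
        # Ordinary least squares with an intercept; returns the coefficient vector.
        Z = np.column_stack([np.ones_like(y)] + list(regressors))
        return np.linalg.lstsq(Z, y, rcond=None)[0]

    # Total effect of E under these assumptions: 1.0 + 2.0 * 0.6 = 2.2.
    print("W ~ E     :", ols(W, E)[1])     # ~2.2: total causal effect
    print("W ~ E + T :", ols(W, E, T)[1])  # ~1.0: the mediator strips the indirect path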


= Bad proxy-control =

Another example of bad control arises when attempting to control for innate ability when estimating the effect of education E on wages W. In this example, innate ability I (thought of as, for example, IQ at pre-school age) is a variable that influences wages W, but its value is unavailable to researchers at the time of estimation. Instead they choose before-work IQ test scores L, or "late ability", as a proxy variable for innate ability and regress wages on education adjusting for late ability. Unfortunately, late ability (in this thought experiment) is causally determined by education and innate ability, so controlling for it introduces collider bias by opening the non-causal path E → L ← I → W, which was previously blocked by the collider L. On the other hand, if both links E → L and I → L are strong, one can expect a strong (non-causal) correlation between I and E and thus a large omitted-variable bias if I is not controlled for. This issue, however, is separate from the causality problem.
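
The following sketch simulates this proxy-control scenario under assumed linear effect sizes (all numbers are illustrative): the naive regression of wages on education suffers omitted-variable bias, adjusting for the unobservable innate ability would recover the assumed direct effect, and adjusting for the late-ability proxy introduces a different bias through the opened collider path:

    import numpy as np

    rng = np.random.default_rng(2)
    n = 200_000

    # Assumed data-generating process (all effect sizes illustrative):
    # I -> E, I -> W, E -> W (direct effect 1.0), and a proxy L with E -> L <- I.
    I = rng.normal(size=n)                       # innate ability (unobservable in practice)
    E = 0.5 * I + rng.normal(size=n)             # education, partly driven by ability
    L = 1.0 * E + 1.0 * I + rng.normal(size=n)   # late-ability test score (the proxy)
    W = 1.0 * E + 1.0 * I + rng.normal(size=n)   # wages

    def ols(y, *regressors):
        # Ordinary least squares with an intercept; returns the coefficient vector.
        Z = np.column_stack([np.ones_like(y)] + list(regressors))
        return np.linalg.lstsq(Z, y, rcond=None)[0]

    print("W ~ E     :", ols(W, E)[1])     # ~1.4: omitted-variable bias from I
    print("W ~ E + I :", ols(W, E, I)[1])  # ~1.0: the (infeasible) ideal adjustment
    print("W ~ E + L :", ols(W, E, L)[1])  # ~0.78: proxy control opens E -> L <- I -> W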


References

Angrist, Joshua D.; Pischke, Jörn-Steffen (2008). Mostly Harmless Econometrics: An Empiricist's Companion. Princeton University Press.
Pearl, Judea (1995). "Causal diagrams for empirical research". Biometrika. 82 (4): 669–688.