This chapter provides a discussion of the econometrics literature on partial identification.
It first reviews what can be learned about (functionals of) probability distributions in the absence of parametric restrictions, under various scenarios of ''data incompleteness''.
It then reviews what can be learned about functionals characterizing semiparametric structural economic models, under various scenarios of ''model incompleteness''.
Finally, it discusses finite sample inference, the consequences of misspecification, and the computational challenges that a researcher needs to face when implementing partial identification methods.
Taking stock, I argue that several areas emerge where more progress is needed to bring the partial identification approach to empirical research to full fruition.
Whereas the last twenty years have seen the development of a burgeoning theoretical literature on the topic, empirical applications of the methods still lag behind.
I conjecture that part of the reason for this discrepancy is the lack of easy-to-implement procedures for computation of estimators and confidence sets (or intervals) in complex structural models.
While the literature so far has aimed at developing methods that have desirable asymptotic properties for very general classes of models, there is arguably scope for more problem-specific methods that exploit the particularities of a certain model to obtain easy-to-implement statistical procedures.
It would also seem desirable that portable software accompany the proposed methodologies, perhaps more in line with the current practice in the Statistics literature.
However, computational concerns cannot be the cause of the relative paucity of applications of partial identification methods such as the ones reviewed in [[guide:Ec36399528#sec:prob:distr |Section]], e.g., bounds on treatment effects.
These bounds are extremely easy to estimate and confidence intervals covering them can readily be computed.
I therefore conjecture that the lack of applications might be due to a misconception, whereby nonparametric bounds are perceived as “always too wide to learn anything”.
While it is true that, for example, worst-case nonparametric bounds on the average treatment effect cover zero by construction, the partial identification approach to empirical research proposes a wide array of assumptions that can be brought to bear to augment the empirical evidence and tighten the bounds.
The philosophy of the method is that the systematic reporting of bounds obtained under an increasingly strong set of assumptions illuminates the relative role played by assumptions and data in shaping the conclusions that the researcher draws.
Point identification is the limit of this process, and carefully assessing how this limit is reached is key to learning about the quantities of interest.
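As a concrete illustration of how little computation the worst-case bounds mentioned above require, the following sketch estimates Manski-type bounds on the average treatment effect for an outcome with known support [0, 1] and forms a confidence interval for the partially identified parameter in the spirit of the Imbens–Manski construction. The data-generating process and all function names are hypothetical and are not taken from the chapter.
<syntaxhighlight lang="python">
# Minimal illustration (not from the chapter): worst-case bounds on the average
# treatment effect for an outcome known to lie in [0, 1], with an Imbens-Manski
# style confidence interval for the partially identified parameter.
import numpy as np
from scipy.stats import norm
from scipy.optimize import brentq

def worst_case_ate_bounds(y, d):
    """Bounds on E[Y(1)] - E[Y(0)] using only the fact that y lies in [0, 1]."""
    y, d = np.asarray(y, float), np.asarray(d, float)
    lb1, ub1 = np.mean(y * d), np.mean(y * d + (1 - d))          # bounds on E[Y(1)]
    lb0, ub0 = np.mean(y * (1 - d)), np.mean(y * (1 - d) + d)    # bounds on E[Y(0)]
    return lb1 - ub0, ub1 - lb0

def imbens_manski_ci(y, d, alpha=0.05):
    """Interval covering the true ATE (not the whole identified set) with asymptotic probability 1 - alpha."""
    y, d = np.asarray(y, float), np.asarray(d, float)
    n = len(y)
    lb, ub = worst_case_ate_bounds(y, d)
    s_lb = np.std(y * d - (y * (1 - d) + d), ddof=1) / np.sqrt(n)      # s.e. of the lower-bound estimator
    s_ub = np.std(y * d + (1 - d) - y * (1 - d), ddof=1) / np.sqrt(n)  # s.e. of the upper-bound estimator
    s = max(s_lb, s_ub)
    # Critical value c solves Phi(c + (ub - lb)/s) - Phi(-c) = 1 - alpha.
    c = brentq(lambda c: norm.cdf(c + (ub - lb) / s) - norm.cdf(-c) - (1 - alpha), 0.0, 10.0)
    return lb - c * s_lb, ub + c * s_ub

rng = np.random.default_rng(0)
d = rng.integers(0, 2, size=1000)
y = rng.binomial(1, 0.3 + 0.2 * d).astype(float)   # binary outcome, so y is in [0, 1]
print(worst_case_ate_bounds(y, d))
print(imbens_manski_ci(y, d))
</syntaxhighlight>
Stronger assumptions tighten these bounds; under a monotone treatment response assumption, for instance, the lower bound on the average treatment effect can be replaced by zero whenever the worst-case lower bound is negative, which is one concrete instance of the progression from worst-case bounds toward point identification described above.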
In [[guide:Ec36399528#sec:prob:distr |Section]] and [[guide:521939d27a#sec:structural |Section]], special attention is devoted to characterizing “sharp” identification regions.
Sharpness often requires “many” moment inequalities, the number of which can exceed the available sample size.
Hence, there is a need for appropriate statistical inference methods.
As briefly mentioned in [[guide:6d1a428897#sec:inference |Section]] and [[guide:A85a6b6ff1#sec:computations |Section]], methods designed to provide valid tests of hypotheses and confidence sets in this scenario already exist.
However, I would argue that there is a need to better understand the trade-off between the sharpness of the population identification region and statistical efficiency, especially in the context of conditional moment inequalities, where instrument functions are needed to transform the inequalities into unconditional ones.
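To make concrete what is meant by instrument functions turning a conditional moment inequality into unconditional ones, the sketch below multiplies a stylized conditional moment by indicators of dyadic boxes in the conditioning variable, in the spirit of the countable collections of instruments used in this literature. The box construction, the toy moment function, and all names are illustrative assumptions rather than any specific published procedure.
<syntaxhighlight lang="python">
# Hypothetical sketch (not from the chapter): turning E[m(W, theta) | X] >= 0 into many
# unconditional inequalities E[m(W, theta) g_j(X)] >= 0 via nonnegative "instrument
# functions" g_j, here indicators of dyadic boxes on [0, 1]^k.
import itertools
import numpy as np

def box_instruments(x, depth=3):
    """Indicators 1{x in box} for all dyadic boxes with side length 2**-r, r = 1, ..., depth."""
    x = np.asarray(x, float)
    if x.ndim == 1:
        x = x[:, None]
    n, k = x.shape
    columns = []
    for r in range(1, depth + 1):
        cuts = np.linspace(0.0, 1.0, 2 ** r + 1)
        for cell in itertools.product(range(2 ** r), repeat=k):
            ind = np.ones(n, dtype=bool)
            for dim, c in enumerate(cell):
                ind &= (x[:, dim] >= cuts[c]) & (x[:, dim] < cuts[c + 1])
            columns.append(ind.astype(float))
    return np.column_stack(columns)                    # n x (number of instruments)

def studentized_moments(m_values, instruments):
    """Sample analogues of E[m(W, theta) g_j(X)], studentized; a candidate theta looks
    inconsistent with the data when some of these are significantly negative."""
    prods = m_values[:, None] * instruments
    mbar = prods.mean(axis=0)
    se = prods.std(axis=0, ddof=1) / np.sqrt(len(m_values))
    se = np.where(se > 0, se, np.inf)                  # guard against empty boxes
    return mbar / se

# Toy example: a stylized conditional inequality E[y_upper - theta | x] >= 0,
# evaluated at the candidate value theta = 1.2.
rng = np.random.default_rng(0)
x = rng.uniform(size=500)
y_upper = 1.0 + x + rng.uniform(size=500)
g = box_instruments(x, depth=3)
t = studentized_moments(y_upper - 1.2, g)
print(g.shape[1], "instrument functions; most negative studentized moment:", t.min())
</syntaxhighlight>
Even this toy construction makes the tension plain: finer boxes bring the unconditional inequalities closer to exhausting the informational content of the conditional ones, but they also multiply the number of moments to be studentized and aggregated, which is exactly where the sharpness versus statistical efficiency trade-off discussed above arises.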
Similarly, there is a need for more research on data-driven procedures for the choice of tuning parameters in the construction of confidence sets, in particular in the case of projection inference, where the question has not yet been addressed.
Another open, and arguably important, question in the literature is how to build confidence sets for general moment inequality models that do not exhibit spurious precision (i.e., that do not become arbitrarily small) when the model is misspecified.
==General references==
{{cite arXiv|last1=Molinari|first1=Francesca|year=2020|title=Microeconometrics with Partial Identification|eprint=2004.11751|class=econ.EM}}
==References==
{{reflist}}