Revision as of 13:39, 30 May 2024 by Admin

Introduction

[math] \newcommand{\edis}{\stackrel{d}{=}} \newcommand{\fd}{\stackrel{f.d.}{\rightarrow}} \newcommand{\dom}{\operatorname{dom}} \newcommand{\eig}{\operatorname{eig}} \newcommand{\epi}{\operatorname{epi}} \newcommand{\lev}{\operatorname{lev}} \newcommand{\card}{\operatorname{card}} \newcommand{\comment}{\textcolor{Green}} \newcommand{\B}{\mathbb{B}} \newcommand{\C}{\mathbb{C}} \newcommand{\G}{\mathbb{G}} \newcommand{\M}{\mathbb{M}} \newcommand{\N}{\mathbb{N}} \newcommand{\Q}{\mathbb{Q}} \newcommand{\T}{\mathbb{T}} \newcommand{\R}{\mathbb{R}} \newcommand{\E}{\mathbb{E}} \newcommand{\W}{\mathbb{W}} \newcommand{\bU}{\mathfrak{U}} \newcommand{\bu}{\mathfrak{u}} \newcommand{\bI}{\mathfrak{I}} \newcommand{\cA}{\mathcal{A}} \newcommand{\cB}{\mathcal{B}} \newcommand{\cC}{\mathcal{C}} \newcommand{\cD}{\mathcal{D}} \newcommand{\cE}{\mathcal{E}} \newcommand{\cF}{\mathcal{F}} \newcommand{\cG}{\mathcal{G}} \newcommand{\cg}{\mathcal{g}} \newcommand{\cH}{\mathcal{H}} \newcommand{\cI}{\mathcal{I}} \newcommand{\cJ}{\mathcal{J}} \newcommand{\cK}{\mathcal{K}} \newcommand{\cL}{\mathcal{L}} \newcommand{\cM}{\mathcal{M}} \newcommand{\cN}{\mathcal{N}} \newcommand{\cO}{\mathcal{O}} \newcommand{\cP}{\mathcal{P}} \newcommand{\cQ}{\mathcal{Q}} \newcommand{\cR}{\mathcal{R}} \newcommand{\cS}{\mathcal{S}} \newcommand{\cT}{\mathcal{T}} \newcommand{\cU}{\mathcal{U}} \newcommand{\cu}{\mathcal{u}} \newcommand{\cV}{\mathcal{V}} \newcommand{\cW}{\mathcal{W}} \newcommand{\cX}{\mathcal{X}} \newcommand{\cY}{\mathcal{Y}} \newcommand{\cZ}{\mathcal{Z}} \newcommand{\sF}{\mathsf{F}} \newcommand{\sM}{\mathsf{M}} \newcommand{\sG}{\mathsf{G}} \newcommand{\sT}{\mathsf{T}} \newcommand{\sB}{\mathsf{B}} \newcommand{\sC}{\mathsf{C}} \newcommand{\sP}{\mathsf{P}} \newcommand{\sQ}{\mathsf{Q}} \newcommand{\sq}{\mathsf{q}} \newcommand{\sR}{\mathsf{R}} \newcommand{\sS}{\mathsf{S}} \newcommand{\sd}{\mathsf{d}} \newcommand{\cp}{\mathsf{p}} \newcommand{\cc}{\mathsf{c}} \newcommand{\cf}{\mathsf{f}} 
\newcommand{\eU}{{\boldsymbol{U}}} \newcommand{\eb}{{\boldsymbol{b}}} \newcommand{\ed}{{\boldsymbol{d}}} \newcommand{\eu}{{\boldsymbol{u}}} \newcommand{\ew}{{\boldsymbol{w}}} \newcommand{\ep}{{\boldsymbol{p}}} \newcommand{\eX}{{\boldsymbol{X}}} \newcommand{\ex}{{\boldsymbol{x}}} \newcommand{\eY}{{\boldsymbol{Y}}} \newcommand{\eB}{{\boldsymbol{B}}} \newcommand{\eC}{{\boldsymbol{C}}} \newcommand{\eD}{{\boldsymbol{D}}} \newcommand{\eW}{{\boldsymbol{W}}} \newcommand{\eR}{{\boldsymbol{R}}} \newcommand{\eQ}{{\boldsymbol{Q}}} \newcommand{\eS}{{\boldsymbol{S}}} \newcommand{\eT}{{\boldsymbol{T}}} \newcommand{\eA}{{\boldsymbol{A}}} \newcommand{\eH}{{\boldsymbol{H}}} \newcommand{\ea}{{\boldsymbol{a}}} \newcommand{\ey}{{\boldsymbol{y}}} \newcommand{\eZ}{{\boldsymbol{Z}}} \newcommand{\eG}{{\boldsymbol{G}}} \newcommand{\ez}{{\boldsymbol{z}}} \newcommand{\es}{{\boldsymbol{s}}} \newcommand{\et}{{\boldsymbol{t}}} \newcommand{\ev}{{\boldsymbol{v}}} \newcommand{\ee}{{\boldsymbol{e}}} \newcommand{\eq}{{\boldsymbol{q}}} \newcommand{\bnu}{{\boldsymbol{\nu}}} \newcommand{\barX}{\overline{\eX}} \newcommand{\eps}{\varepsilon} \newcommand{\Eps}{\mathcal{E}} \newcommand{\carrier}{{\mathfrak{X}}} \newcommand{\Ball}{{\mathbb{B}}^{d}} \newcommand{\Sphere}{{\mathbb{S}}^{d-1}} \newcommand{\salg}{\mathfrak{F}} \newcommand{\ssalg}{\mathfrak{B}} \newcommand{\one}{\mathbf{1}} \newcommand{\Prob}[1]{\P\{#1\}} \newcommand{\yL}{\ey_{\mathrm{L}}} \newcommand{\yU}{\ey_{\mathrm{U}}} \newcommand{\yLi}{\ey_{\mathrm{L}i}} \newcommand{\yUi}{\ey_{\mathrm{U}i}} \newcommand{\xL}{\ex_{\mathrm{L}}} \newcommand{\xU}{\ex_{\mathrm{U}}} \newcommand{\vL}{\ev_{\mathrm{L}}} \newcommand{\vU}{\ev_{\mathrm{U}}} \newcommand{\dist}{\mathbf{d}} \newcommand{\rhoH}{\dist_{\mathrm{H}}} \newcommand{\ti}{\to\infty} \newcommand{\comp}[1]{#1^\mathrm{c}} \newcommand{\ThetaI}{\Theta_{\mathrm{I}}} \newcommand{\crit}{q} \newcommand{\CS}{CS_n} \newcommand{\CI}{CI_n} \newcommand{\cv}[1]{\hat{c}_{n,1-\alpha}(#1)} 
\newcommand{\idr}[1]{\mathcal{H}_\sP[#1]} \newcommand{\outr}[1]{\mathcal{O}_\sP[#1]} \newcommand{\idrn}[1]{\hat{\mathcal{H}}_{\sP_n}[#1]} \newcommand{\outrn}[1]{\mathcal{O}_{\sP_n}[#1]} \newcommand{\email}[1]{\texttt{#1}} \newcommand{\possessivecite}[1]{\ltref name="#1"\gt\lt/ref\gt's \citeyear{#1}} \newcommand\xqed[1]{% \leavevmode\unskip\penalty9999 \hbox{}\nobreak\hfill \quad\hbox{#1}} \newcommand\qedex{\xqed{$\triangle$}} \newcommand\independent{\perp\!\!\!\perp} \DeclareMathOperator{\Int}{Int} \DeclareMathOperator{\conv}{conv} \DeclareMathOperator{\cov}{Cov} \DeclareMathOperator{\var}{Var} \DeclareMathOperator{\Sel}{Sel} \DeclareMathOperator{\Bel}{Bel} \DeclareMathOperator{\cl}{cl} \DeclareMathOperator{\sgn}{sgn} \DeclareMathOperator{\essinf}{essinf} \DeclareMathOperator{\esssup}{esssup} \newcommand{\mathds}{\mathbb} \renewcommand{\P}{\mathbb{P}} [/math]

Why Partial Identification?

Knowing the population distribution that data are drawn from, what can one learn about a parameter of interest? It has long been understood that assumptions about the data generating process (DGP) play a crucial role in answering this identification question at the core of all empirical research. Inevitably, assumptions brought to bear enjoy a varying degree of credibility. Some are rooted in economic theory (e.g., optimizing behavior) or in information available to the researcher on the DGP (e.g., randomization mechanisms). These assumptions can be argued to be highly credible. Others are driven by concerns for tractability and the desire to answer the identification question with a certain level of precision (e.g., functional form and distributional assumptions). These are arguably less credible. Early on, [1] highlighted the importance of imposing restrictions based on prior knowledge of the phenomenon under analysis and some criteria of simplicity, but not for the purpose of identifiability of a parameter that the researcher happens to be interested in, stating (p.169):

One might regard problems of identifiability as a necessary part of the specification problem. We would consider such a classification acceptable, provided the temptation to specify models in such a way as to produce identifiability of relevant characteristics is resisted.

Much work, spanning multiple fields, has been devoted to putting forward strategies to carry out empirical research while relaxing distributional, functional form, or behavioral assumptions. One example, embodied in the research program on semiparametric and nonparametric methods, is to characterize sufficient sets of assumptions, excluding many suspect ones (sometimes as many as possible), that guarantee point identification of specific economically interesting parameters. This literature is reviewed in, e.g., [2][3], and is not discussed here.

Another example, embodied in the research program on Bayesian model uncertainty, is to specify multiple models (i.e., multiple sets of assumptions), put a prior on the parameters of each model and on each model, embed the various separate models within one large hierarchical mixture model, and obtain model posterior probabilities which can be used for a variety of inferences and decisions. This literature is reviewed in, e.g., [4] and [5], and is not discussed here.

The approach considered here fixes a set of assumptions and a parameter of interest a priori, in the spirit of [1], and asks what can be learned about that parameter given the available data, recognizing that even partial information can be illuminating for empirical research, while enjoying wider credibility thanks to the weaker assumptions imposed. The bounding methods at the core of this approach appeared in the literature nearly a century ago. Arguably, the first exemplar that leverages economic reasoning is given by the work of [6]. They provided bounds on Cobb-Douglas production functions in models of supply and demand, building on optimization principles and restrictions from microeconomic theory. [7] revisited their analysis to obtain bounds on the elasticities of demand and supply in a linear simultaneous equations system with uncorrelated errors. The first exemplars that do not rely on specific economic models appear in [8], [9], and [10], who bounded the coefficient of a simple linear regression in the presence of measurement error. These results were extended to the general linear regression model with errors in all variables by [11] and [12].
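The classical errors-in-variables bounds just mentioned can be illustrated numerically: with classical measurement error in the regressor, the direct regression slope is attenuated toward zero while the inverted "reverse" regression slope is biased away from zero, so the two bracket the true coefficient. A minimal simulation sketch (the parameter values and variable names are hypothetical, chosen only for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

# Classical errors-in-variables: y = beta * x_star + eps, but we only
# observe x = x_star + u (hypothetical parameter values for illustration).
n, beta = 500_000, 2.0
x_star = rng.normal(size=n)
x = x_star + rng.normal(scale=0.7, size=n)   # mismeasured regressor
y = beta * x_star + rng.normal(scale=0.5, size=n)

# Direct regression of y on x: slope attenuated toward zero.
b_direct = np.cov(x, y)[0, 1] / np.var(x)
# Reverse regression of x on y, inverted: slope biased away from zero.
b_reverse = np.var(y) / np.cov(x, y)[0, 1]

print(f"direct {b_direct:.3f} <= beta = {beta} <= reverse {b_reverse:.3f}")
assert b_direct <= beta <= b_reverse
```

With these choices the population values are b_direct = 2/1.49 ≈ 1.34 and b_reverse = 4.25/2 ≈ 2.13, so the interval [b_direct, b_reverse] indeed contains beta = 2.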

This chapter surveys some of the methods proposed over the last thirty years in the microeconometrics literature to further this approach. These methods belong to the systematic program on partial identification analysis started with [13][14][15][16][17][18] and developed by several authors since the early 1990s. Within this program, the focus shifts from points to sets: the researcher aims to learn the set of values for the parameters of interest that can generate the same distribution of observables as the one in the data, for some DGP consistent with the maintained assumptions. In other words, the focus is on the set of observationally equivalent values, which henceforth I refer to as the parameters' sharp identification region. In the partial identification paradigm, empirical analysis begins with characterizing this set using the data alone. This is a nonparametric approach that dispenses with all assumptions, except basic restrictions on the sampling process such that the distribution of the observable variables can be learned as data accumulate. In subsequent steps, one incorporates additional assumptions into the analysis, reporting how each assumption (or set of assumptions) affects what one can learn about the parameters of interest, i.e., how it modifies and possibly shrinks the sharp identification region. Point identification may result from the process of increasingly strengthening the maintained assumptions, but it is not the goal in itself. Rather, the objective is to make transparent the relative role played by the data and the assumptions in shaping the inference that one draws.

There are several strands of independent, but thematically related literatures that are not discussed in this chapter. As a consequence, many relevant contributions are left out of the presentation and the references. One example is the literature in finance. [19] developed nonparametric bounds for the admissible set for means and standard deviations of intertemporal marginal rates of substitution (IMRS) of consumers. The bounds were developed exploiting the condition, satisfied in many finance models, that the equilibrium price of any traded security equals the expectation (conditional on current information) of the product of the security's future payoff and the IMRS of any consumer.[Notes 1] [20] extended the analysis to economies with frictions. [21] developed econometric tools to estimate the regions, to assess asset pricing models, and to provide nonparametric characterizations of asset pricing anomalies. Earlier on, the existence of volatility bounds on IMRSs was noted by [22] and [23]. The bounding arguments that build on the minimum-volatility frontier for stochastic discount factors proposed by [19] have become a litmus test to detect anomalies in asset pricing models (see, e.g., [24](p. 89)). I refer to the textbook presentations in [25](Chapter 13) and [26](Chapters 5 and 21), and the review articles by [27] and [28], for a careful presentation of this literature.

In macroeconomics, [29], [30], and [31] proposed bounds for impulse response functions in sign-restricted structural vector autoregression models, and carried out Bayesian inference with a non-informative prior for the non-identified parameters. I refer to [32](Chapter 13) for a careful presentation of this literature.

In microeconomic theory, bounds were derived from inequalities resulting as necessary and sufficient conditions that data on an individual's choice need to satisfy in order to be consistent with optimizing behavior, as in the research pioneered by [33] and advanced early on by [34] and [35]. [36] and [37] extended this research program to revealed preference extrapolation. Notably, in this work no stochastic terms enter the analysis. [38], [39], [40], [41], [42], and [43], extended revealed preference arguments to random utility models, and obtained bounds on the distributions of preferences. I refer to the survey articles by [44] and [45](Chapter XXX in this Volume) for a careful presentation of this literature.

A complementary approach to partial identification is given by sensitivity analysis, advocated for in different ways by, e.g., [46], [47], [48], [49], [50], and others. Within this approach, the analysis begins with a fully parametric model that point identifies the parameter of interest. One then reports the set of values for this parameter that result when the more suspect assumptions are relaxed.

Related literatures, not discussed in this chapter, abound also outside Economics. For example, in probability theory, [51] and [52] put forward bounds on the joint distributions of random variables, and [53], [54], and [55] on the sum of random variables, when only marginal distributions are observed. The literature on probability bounds is discussed in the textbook by [56](Appendix A). Addressing problems faced in economics, sociology, epidemiology, geography, history, political science, and more, [57] derived bounds on correlations among variables measured at the individual level based on observable correlations among variables measured at the aggregate level. The so-called ecological inference problem they studied, and the associated literature, is discussed in the survey article by [58] and references therein.

Goals and Structure of this Chapter

To carry out econometric analysis with partial identification, one needs: (1) computationally feasible characterizations of the parameters' sharp identification region; (2) methods to estimate this region; and (3) methods to test hypotheses and construct confidence sets. The goal of this chapter is to provide insights into the challenges posed by each of these desiderata, and into some of their solutions. In order to discuss the partial identification literature in microeconometrics with some level of detail while keeping this chapter to a manageable length, I focus on a selection of papers and not on a complete survey of the literature. As a consequence, many relevant contributions are left out of the presentation and the references. I also do not discuss the important but separate topic of statistical decisions in the presence of partial identification, for which I refer to the textbook treatments in [59][17] and to the review by [60](Chapter XXX in this Volume).

The presumption in identification analysis that the distribution from which the data are drawn is known allows one to keep separate the identification question from the distinct question of statistical inference from a finite sample. I use the same separation in this chapter. I assume solid knowledge of the topics covered in first year Economics PhD courses in econometrics and microeconomic theory.

I begin in Section with the analysis of what can be learned about features of probability distributions that are well defined in the absence of an economic model, such as moments, quantiles, cumulative distribution functions, etc., when one faces measurement problems. Specifically, I focus on cases where the data are incomplete, either due to selectively observed data or to interval measurements. I refer to [15][16][17] for textbook treatments of many other cases. I lay out formally the maintained assumptions for several examples, and then discuss in detail the source of the identification problem. I conclude by providing tractable characterizations of what can be learned about the parameters of interest, with formal proofs. I show that even in simple problems, great care may be needed to obtain the sharp identification region. It is often easier to characterize an outer region, i.e., a collection of values for the parameter of interest that contains the sharp one but may also contain additional values. Outer regions are useful because of their simplicity and because in certain applications they may suffice to answer questions of great interest, e.g., whether a policy intervention has a nonnegative effect. However, compared to the sharp identification region, they may afford the researcher less useful predictions and a lower ability to test for misspecification, because they do not harness all the information in the observed data and maintained assumptions.
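To fix ideas, worst-case bounds on a mean with selectively observed data can be computed directly: every missing outcome is replaced by the smallest and then the largest logically possible value. A minimal sketch, assuming the outcome is known to lie in [0, 1]; the simulated missingness mechanism is arbitrary and no assumption about it is used in forming the bounds:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated population: outcome y in [0, 1], observed only when d == True.
n = 100_000
y = rng.uniform(size=n)
d = rng.uniform(size=n) < 0.8  # roughly 20% of outcomes are missing

# Worst-case (no-assumptions) bounds on E[y]: impute every missing
# outcome first with 0, then with 1.
lower = np.mean(np.where(d, y, 0.0))
upper = np.mean(np.where(d, y, 1.0))

print(f"E[y] is only known to lie in [{lower:.3f}, {upper:.3f}]")
assert lower <= y.mean() <= upper  # the true mean lies in the region
```

The width of this sharp identification region equals the missing-data probability times the width of the logical range, making transparent how the severity of the data problem maps into the informativeness of the bounds.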

In Section I use the same approach to study what can be learned about features of parameters of structural econometric models when the model is incomplete [61][62][63]. Specifically, I discuss single agent discrete choice models under a variety of challenging situations (interval-measured as well as endogenous explanatory variables; unobserved as well as counterfactual choice sets); finite discrete games with multiple equilibria; auction models under weak assumptions on bidding behavior; and network formation models. Again I formally derive sharp identification regions for several examples.

I conclude each of these sections with a brief discussion of further theoretical advances and empirical applications that is meant to give a sense of the breadth of the approach, but not to be exhaustive. I refer to the recent survey by [64] for a thorough discussion of empirical applications of partial identification methods.

In Section I discuss finite sample inference. I limit myself to highlighting the challenges that one faces for consistent estimation when the identified object is a set, and several coverage notions and requirements that have been proposed over the last 20 years. I refer to the recent survey by [65] for a thorough discussion of methods to test hypotheses and build confidence sets in moment inequality models.

In Section I discuss the distinction between refutable and non-refutable assumptions, and how model misspecification may be detectable in the presence of the former, even within the partial identification paradigm. I then highlight certain challenges that model misspecification presents for the interpretation of sharp identification (as well as outer) regions, and for the construction of confidence sets.

In Section I highlight that while most of the sharp identification regions characterized in Section can be easily computed, many of the ones in Section are more challenging. This is because the latter are obtained as level sets of criterion functions in moderately dimensional spaces, and tracing out these level sets or their boundaries is a non-trivial computational problem. In Section I conclude by providing some considerations on what I view as open questions for future research. I refer to [66] for an earlier review of this literature, and to [67] for a careful presentation of the many notions of identification that are used across the econometrics literature, including an important historical account of how these notions developed over time.

Random Set Theory as a Tool for Partial Identification Analysis

Throughout Sections and Section, a simple organizing principle for much of partial identification analysis emerges. The cause of the identification problems discussed can be traced back to a collection of random variables that are consistent with the available data and maintained assumptions. For the problems studied in Section, this set is often a simple function of the observed variables. The incompleteness of the data stems from the fact that instead of observing the singleton variables of interest, one observes set-valued variables to which these belong, but one has no information on their exact value within the sets. For the problems studied in Section, the collection of random variables consistent with the maintained assumptions comprises what the model predicts for the endogenous variable(s). The incompleteness of the model stems from the fact that instead of making a singleton prediction for the variable(s) of interest, the model makes multiple predictions but does not specify how one is chosen.

The central role of set-valued objects, both stochastic and nonstochastic, in partial identification renders random set theory a natural toolkit to aid the analysis.[Notes 2] This theory originates in the seminal contributions of [68], [69], and [70], with the first self-contained treatment of the theory given by [71]. I refer to [72] for a textbook presentation, and to [73][74] for a treatment focusing on its applications in econometrics.

[75] introduce the use of random set theory in econometrics to carry out identification analysis and statistical inference with incomplete data. [76][77] propose it to characterize sharp identification regions both with incomplete data and with incomplete models. [78] propose the use of optimal transportation methods that in some applications deliver the same characterizations as the random set methods. I do not discuss optimal transportation methods in this chapter, but refer to [79] for a thorough treatment.

Over the last ten years, random set methods have been used to unify a number of specific results in partial identification, and to produce a general methodology for identification analysis that dispenses completely with case-by-case distinctions. In particular, as I show throughout the chapter, the methods allow for simple and tractable characterizations of sharp identification regions. The collection of these results establishes that indeed this is a useful tool to carry out econometrics with partial identification, as exemplified by its prominent role both in this chapter and in Chapter XXX in this Volume by [80], which focuses on general classes of instrumental variable models. The random sets approach complements the more traditional one, based on mathematical tools for (single valued) random vectors, that proved extremely productive since the beginning of the research program in partial identification.

This chapter shows that to fruitfully apply random set theory for identification and inference, the econometrician needs to carry out three fundamental steps. First, she needs to define the random closed set that is relevant for the problem under consideration using all information given by the available data and maintained assumptions. This is a delicate task, but one that is typically carried out in identification analysis regardless of whether random set theory is applied. Indeed, throughout the chapter I highlight how relevant random closed sets were characterized in partial identification analysis since the early 1990s, although the connection to the theory of random sets was not made. As a second step, the econometrician needs to determine how the observable random variables relate to the random closed set. Often, one of two cases occurs: either the observable variables determine a random set to which the unobservable variable of interest belongs with probability one, as in incomplete data scenarios; or the (expectation of the) (un)observable variable belongs to (the expectation of) a random set determined by the model, as in incomplete model scenarios. Finally, the econometrician needs to determine which tool from random set theory should be utilized. To date, new applications of random set theory to econometrics have fruitfully exploited (Aumann) expectations and their support functions, (Choquet) capacity functionals, and laws of large numbers and central limit theorems for random sets. Appendix reports basic definitions of these concepts from random set theory, as well as some useful theorems. The chapter explains in detail through applications to important identification problems how these steps can be carried out.
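For concreteness, the capacity functional T_X(K) = P{X hits K} and containment functional C_X(F) = P{X is a subset of F} defined in the notation table, together with the inequalities any selection of X must satisfy, can be approximated by simulation for a random interval. An illustrative sketch only; the distributional choices and the test set are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)

# A random closed set: the interval X = [c - r, c + r] with random
# center c and radius r (arbitrary distributions for illustration).
n = 200_000
c = rng.normal(size=n)
r = rng.uniform(0.0, 1.0, size=n)
lo, hi = c - r, c + r

a, b = -0.5, 0.5  # test set K = F = [a, b]

# Capacity functional T_X(K) = P{X hits K}: two intervals on the line
# overlap if and only if lo <= b and hi >= a.
capacity = np.mean((lo <= b) & (hi >= a))
# Containment functional C_X(F) = P{X is a subset of F}.
containment = np.mean((lo >= a) & (hi <= b))
# A selection of X is a random point that lies in X with probability one,
# e.g. the midpoint c.  Every selection satisfies
#   C_X(F) <= P{selection in F} <= T_X(K).
p_midpoint = np.mean((c >= a) & (c <= b))

print(f"C_X = {containment:.3f} <= {p_midpoint:.3f} <= T_X = {capacity:.3f}")
assert containment <= p_midpoint <= capacity
```

The two endpoints of the displayed inequality are exactly the bounds that Artstein-type selection arguments turn into sharp identification regions in the incomplete-data problems discussed in the chapter.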

Notation Used
[math](\Omega,\salg,\P)[/math] Nonatomic probability space
[math]\R^d,\|\cdot\|[/math] Euclidean space equipped with the Euclidean norm
[math]\cF,\cG,\cK[/math] Collection of closed, open, and compact subsets of [math]\R^d[/math] (respectively)
[math]\Sphere = \{x \in \R^d: \|x\| = 1\}[/math] Unit sphere in [math]\R^d[/math]
[math]\Ball = \{x \in \R^d: \|x\| \leq 1\}[/math] Unit ball in [math]\R^d[/math]
[math]\conv(A),\cl(A),|B|[/math] Convex hull and closure of a set [math]A\subset\R^d[/math] (respectively), and cardinality of a finite set [math]B\subset\R^d[/math]
[math]\ex,\ey,\ez,\dots[/math] Random vectors
[math]x,y,z,\dots[/math] Realizations of random vectors or deterministic vectors
[math]\eX,\eY,\eZ,\dots[/math] Random sets
[math]X,Y,Z,\dots[/math] Realizations of random sets or deterministic sets
[math]\epsilon,\eps,\nu,\zeta[/math] Unobserved random variables (heterogeneity)
[math]\Theta,\theta,\vartheta[/math] Parameter space, data generating value for the parameter vector, and a generic element of [math]\Theta[/math]
[math]\sR[/math] Joint distribution of all variables (observable and unobservable)
[math]\sP[/math] Joint distribution of the observable variables
[math]\sQ[/math] Joint distribution whose features one wants to learn
[math]\sM[/math] A joint distribution of observed variables implied by the model
[math]\sq_{\tau}(\alpha)[/math] Quantile function at level [math]\alpha \in (0,1)[/math] for a random variable distributed [math]\tau\in\{\sR,\sP,\sQ\}[/math]
[math]\E_\tau[/math] Expectation operator associated with distribution [math]\tau\in\{\sR,\sP,\sQ\}[/math]
[math]\sT_\eX(K)=\Prob{\eX\cap K\neq\emptyset},\, K\in\cK[/math] Capacity functional of random set [math]\eX[/math]
[math]\sC_\eX(F)=\Prob{\eX\subset F},\, F\in\cF[/math] Containment functional of random set [math]\eX[/math]
[math]\stackrel{p}{\rightarrow},\stackrel{\text{a.s.}}{\rightarrow},\Rightarrow[/math] Convergence in probability, convergence almost surely, and weak convergence (respectively)
[math]\ex\edis\ey[/math] [math]\ex[/math] and [math]\ey[/math] have the same distribution
[math]\ex\independent\ey[/math] Statistical independence between random variables [math]\ex[/math] and [math]\ey[/math]
[math]x^\top y[/math] Inner product between vectors [math]x[/math] and [math]y[/math], [math]x,y\in\R^d[/math]
[math]\bU,\bu[/math] Family of utility functions and one of its elements
[math]\crit_\sP[/math] Criterion function that aggregates violations of the population moment inequalities
[math]\crit_n[/math] Criterion function that aggregates violations of the sample moment inequalities
[math]\idr{\cdot}[/math] Sharp identification region of the functional in square brackets (a function of [math]\sP[/math])
[math]\outr{\cdot}[/math] An outer region of the functional in square brackets (a function of [math]\sP[/math])

Notation

This chapter employs consistent notation that is summarized in Table. Some important conventions are as follows: [math]\ey[/math] denotes outcome variables, [math](\ex,\ew)[/math] denote explanatory variables, and [math]\ez[/math] denotes instrumental variables (i.e., variables that satisfy some form of independence with the outcome or with the unobservable variables, possibly conditional on [math]\ex,\ew[/math]). I denote by [math]\sP[/math] the joint distribution of all observable variables. Identification analysis is carried out using the information contained in this distribution, and finite sample inference is carried out under the presumption that one draws a random sample of size [math]n[/math] from [math]\sP[/math]. I denote by [math]\sQ[/math] the joint distribution whose features the researcher wants to learn. If [math]\sQ[/math] were identified given the observed data (e.g., if it were a marginal of [math]\sP[/math]), point identification of the parameter or functional of interest would attain. I denote by [math]\sR[/math] the joint distribution of all variables, observable and unobservable ones; both [math]\sP[/math] and [math]\sQ[/math] can be obtained from it. In the context of structural models, I denote by [math]\sM[/math] a distribution for the observable variables that is consistent with the model. I note that model incompleteness typically implies that [math]\sM[/math] is not unique. I let [math]\idr{\cdot}[/math] denote the sharp identification region of the functional in square brackets, and [math]\outr{\cdot}[/math] an outer region. In both cases, the regions are indexed by [math]\sP[/math], because they depend on the distribution of the observed data.

General references

Molinari, Francesca (2020). "Microeconometrics with Partial Identification". arXiv:2004.11751 [econ.EM].

Notes

  1. [1] deduce a duality relation with the mean variance theory of [2] and [3], but the relation does not apply to the sharp bounds they derive. In the Arbitrage Pricing Model [4], bounds on extensions of existing pricing functions, consistent with the absence of arbitrage opportunities, were considered by [5] and [6].
  2. The first idea of a general random set in the form of a region that depends on chance appears in [7], originally published in 1933. For another early example where confidence regions are explicitly described as random sets, see [8](p. 67). The role of random sets in this chapter is different.

References

  1. koo:rei50
  2. mat07
  3. mat13
  4. was00
  5. cly:geo04
  6. mar:and44
  7. lea81
  8. gin21
  9. fri34
  10. rei41
  11. kle:lea84
  12. lea87
  13. man89
  14. man90
  15. man95
  16. man03
  17. man07a
  18. man13book
  19. han:jag91
  20. lut96
  21. han:hea:lut95
  22. shi82
  23. han82comment
  24. shi03
  25. lju:sar04
  26. coc05
  27. fer03
  28. cam14
  29. fau98
  30. can:den02
  31. uhl05
  32. kil:lut17
  33. sam38
  34. hou50
  35. ric66
  36. afr67
  37. var82
  38. blo:mar60
  39. mar60
  40. hal73
  41. mcf75
  42. fal78
  43. mcf:ric91
  44. cra:der14
  45. blu19
  46. gil:lea83
  47. ros:rub83
  48. lea85
  49. ros95
  50. imb03
  51. hoe40
  52. fre51
  53. mak81
  54. rus82
  55. fra:nel:sch87
  56. sho:wel09
  57. dun:dav53
  58. cho:man09
  59. man05
  60. hir:por19
  61. tam03
  62. hai:tam03
  63. cil:tam09
  64. ho:ros17
  65. can:sha17
  66. tam10
  67. lew18
  68. cho53
  69. aum65
  70. deb67
  71. mat75
  72. mo1
  73. mol:mol14
  74. mol:mol18
  75. ber:mol08
  76. ber:mol:mol11
  77. ber:mol:mol12
  78. gal:hen11
  79. gal16
  80. che:ros19