Introduction
Why Partial Identification?
Knowing the population distribution that data are drawn from, what can one learn about a parameter of interest? It has long been understood that assumptions about the data generating process (DGP) play a crucial role in answering this identification question at the core of all empirical research. Inevitably, assumptions brought to bear enjoy a varying degree of credibility. Some are rooted in economic theory (e.g., optimizing behavior) or in information available to the researcher on the DGP (e.g., randomization mechanisms). These assumptions can be argued to be highly credible. Others are driven by concerns for tractability and the desire to answer the identification question with a certain level of precision (e.g., functional form and distributional assumptions). These are arguably less credible. Early on, [1] highlighted the importance of imposing restrictions based on prior knowledge of the phenomenon under analysis and some criteria of simplicity, but not for the purpose of identifiability of a parameter that the researcher happens to be interested in, stating (p.169):
One might regard problems of identifiability as a necessary part of the specification problem. We would consider such a classification acceptable, provided the temptation to specify models in such a way as to produce identifiability of relevant characteristics is resisted.
Much work, spanning multiple fields, has been devoted to putting forward strategies to carry out empirical research while relaxing distributional, functional form, or behavioral assumptions. One example, embodied in the research program on semiparametric and nonparametric methods, is to characterize sufficient sets of assumptions that exclude many suspect ones (sometimes as many as possible) while still guaranteeing point identification of specific economically interesting parameters. This literature is reviewed in, e.g., [2][3], and is not discussed here.
Another example, embodied in the research program on Bayesian model uncertainty, is to specify multiple models (i.e., multiple sets of assumptions), put a prior on the parameters of each model and on each model, embed the various separate models within one large hierarchical mixture model, and obtain model posterior probabilities which can be used for a variety of inferences and decisions. This literature is reviewed in, e.g., [4] and [5], and is not discussed here.
The approach considered here fixes a set of assumptions and a parameter of interest a priori, in the spirit of [1], and asks what can be learned about that parameter given the available data, recognizing that even partial information can be illuminating for empirical research, while enjoying wider credibility thanks to the weaker assumptions imposed. The bounding methods at the core of this approach appeared in the literature nearly a century ago. Arguably, the first exemplar that leverages economic reasoning is given by the work of [6]. They provided bounds on Cobb-Douglas production functions in models of supply and demand, building on optimization principles and restrictions from microeconomic theory. [7] revisited their analysis to obtain bounds on the elasticities of demand and supply in a linear simultaneous equations system with uncorrelated errors. The first exemplars that do not rely on specific economic models appear in [8], [9], and [10], who bounded the coefficient of a simple linear regression in the presence of measurement error. These results were extended to the general linear regression model with errors in all variables by [11] and [12].
This chapter surveys some of the methods proposed over the last thirty years in the microeconometrics literature to further this approach. These methods belong to the systematic program on partial identification analysis started by [13][14][15][16][17][18] and developed by several authors since the early 1990s. Within this program, the focus shifts from points to sets: the researcher aims to learn the set of values for the parameters of interest that can generate the same distribution of observables as the one in the data, for some DGP consistent with the maintained assumptions. In other words, the focus is on the set of observationally equivalent values, which henceforth I refer to as the parameters' sharp identification region. In the partial identification paradigm, empirical analysis begins with characterizing this set using the data alone. This is a nonparametric approach that dispenses with all assumptions, except basic restrictions on the sampling process such that the distribution of the observable variables can be learned as data accumulate. In subsequent steps, one incorporates additional assumptions into the analysis, reporting how each assumption (or set of assumptions) affects what one can learn about the parameters of interest, i.e., how it modifies and possibly shrinks the sharp identification region. Point identification may result from the process of increasingly strengthening the maintained assumptions, but it is not a goal in itself. Rather, the objective is to make transparent the relative role played by the data and the assumptions in shaping the inference that one draws.
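To fix ideas, here is arguably the best-known example of bounds delivered by the data alone: the worst-case bounds of [13][14] on the mean of a selectively observed outcome. If the outcome [math]y[/math] has known support [math][0,1][/math] and [math]d[/math] indicates whether it is observed (notation chosen here purely for illustration), then

[math]
\E[y \mid d=1]\,\Prob(d=1) \;\leq\; \E[y] \;\leq\; \E[y \mid d=1]\,\Prob(d=1) + \Prob(d=0),
[/math]

and these bounds are sharp. The width of the interval equals [math]\Prob(d=0)[/math]: the data alone bound the mean, the bound shrinks with the fraction of missing observations, and credible assumptions on the missing data can only narrow it further.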
There are several strands of independent, but thematically related, literatures that are not discussed in this chapter. As a consequence, many relevant contributions are left out of the presentation and the references. One example is the literature in finance. [19] developed nonparametric bounds for the admissible set of means and standard deviations of intertemporal marginal rates of substitution (IMRS) of consumers. The bounds were developed exploiting the condition, satisfied in many finance models, that the equilibrium price of any traded security equals the expectation (conditioned on current information) of the product of the security's future payoff and the IMRS of any consumer.[Notes 1] [20] extended the analysis to economies with frictions. [21] developed econometric tools to estimate the regions, to assess asset pricing models, and to provide nonparametric characterizations of asset pricing anomalies. Earlier on, the existence of volatility bounds on IMRSs was noted by [22] and [23]. The bounding arguments that build on the minimum-volatility frontier for stochastic discount factors proposed by [19] have become a litmus test to detect anomalies in asset pricing models (see, e.g., [24](p. 89)). I refer to the textbook presentations in [25](Chapter 13) and [26](Chapters 5 and 21), and the review articles by [27] and [28], for a careful presentation of this literature.
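For context, the volatility bound in question can be stated in one line. For any excess return [math]R^e[/math] priced by an IMRS [math]m[/math], so that [math]\E[mR^e]=0[/math] (notation here is generic), the Cauchy-Schwarz inequality yields

[math]
\frac{\sigma(m)}{\E[m]} \;\geq\; \frac{|\E[R^e]|}{\sigma(R^e)},
[/math]

i.e., any admissible IMRS must be at least as volatile, per unit of mean, as the largest Sharpe ratio attainable from traded assets; this is the frontier used as a diagnostic in the asset pricing literature cited above.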
In macroeconomics, [29], [30], and [31] proposed bounds for impulse response functions in sign-restricted structural vector autoregression models, and carried out Bayesian inference with a non-informative prior for the non-identified parameters. I refer to [32](Chapter 13) for a careful presentation of this literature.
In microeconomic theory, bounds were derived from inequalities that constitute necessary and sufficient conditions for data on an individual's choices to be consistent with optimizing behavior, as in the research pioneered by [33] and advanced early on by [34] and [35]. [36] and [37] extended this research program to revealed preference extrapolation. Notably, in this work no stochastic terms enter the analysis. [38], [39], [40], [41], [42], and [43] extended revealed preference arguments to random utility models, and obtained bounds on the distributions of preferences. I refer to the survey articles by [44] and [45](Chapter XXX in this Volume) for a careful presentation of this literature.
A complementary approach to partial identification is given by sensitivity analysis, advocated for in different ways by, e.g., [46], [47], [48], [49], [50], and others. Within this approach, the analysis begins with a fully parametric model that point identifies the parameter of interest. One then reports the set of values for this parameter that results when the more suspect assumptions are relaxed.
Related literatures, not discussed in this chapter, abound also outside Economics. For example, in probability theory, [51] and [52] put forward bounds on the joint distributions of random variables, and [53], [54], and [55] on the sum of random variables, when only marginal distributions are observed. The literature on probability bounds is discussed in the textbook by [56](Appendix A). Addressing problems faced in economics, sociology, epidemiology, geography, history, political science, and more, [57] derived bounds on correlations among variables measured at the individual level based on observable correlations among variables measured at the aggregate level. The so-called ecological inference problem they studied, and the associated literature, are discussed in the survey article by [58] and references therein.
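In their simplest bivariate form, the bounds on a joint distribution given its marginals state that if [math]x[/math] and [math]y[/math] have marginal distribution functions [math]F_x[/math] and [math]F_y[/math], then any joint distribution function [math]F_{x,y}[/math] consistent with these marginals satisfies

[math]
\max\{F_x(t_1)+F_y(t_2)-1,\,0\} \;\leq\; F_{x,y}(t_1,t_2) \;\leq\; \min\{F_x(t_1),\,F_y(t_2)\}
[/math]

for all [math](t_1,t_2)[/math], and both bounds are themselves attained by valid joint distributions.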
Goals and Structure of this Chapter
To carry out econometric analysis with partial identification, one needs: (1) computationally feasible characterizations of the parameters' sharp identification region; (2) methods to estimate this region; and (3) methods to test hypotheses and construct confidence sets. The goal of this chapter is to provide insights into the challenges posed by each of these desiderata, and into some of their solutions. In order to discuss the partial identification literature in microeconometrics with some level of detail while keeping this chapter to a manageable length, I focus on a selection of papers and not on a complete survey of the literature. As a consequence, many relevant contributions are left out of the presentation and the references. I also do not discuss the important but separate topic of statistical decisions in the presence of partial identification, for which I refer to the textbook treatments in [59][17] and to the review by [60](Chapter XXX in this Volume).
The presumption in identification analysis that the distribution from which the data are drawn is known allows one to keep separate the identification question from the distinct question of statistical inference from a finite sample. I use the same separation in this chapter. I assume solid knowledge of the topics covered in first year Economics PhD courses in econometrics and microeconomic theory.
I begin in Section with the analysis of what can be learned about features of probability distributions that are well defined in the absence of an economic model, such as moments, quantiles, cumulative distribution functions, etc., when one faces measurement problems. Specifically, I focus on cases where the data are incomplete, either due to selectively observed data or to interval measurements. I refer to [15][16][17] for textbook treatments of many other cases. I lay out formally the maintained assumptions for several examples, and then discuss in detail the source of the identification problem. I conclude by providing tractable characterizations of what can be learned about the parameters of interest, with formal proofs. I show that even in simple problems, great care may be needed to obtain the sharp identification region. It is often easier to characterize an outer region, i.e., a collection of values for the parameter of interest that contains the sharp one but may also contain additional values. Outer regions are useful because of their simplicity and because in certain applications they may suffice to answer questions of great interest, e.g., whether a policy intervention has a nonnegative effect. However, compared to the sharp identification region they may afford the researcher less useful predictions, and a lower ability to test for misspecification, because they do not harness all the information in the observed data and maintained assumptions.
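As a concrete illustration of the bounds discussed in that section, the sketch below computes sample analogs of the worst-case bounds on [math]\E[y][/math] under the two measurement problems just mentioned. The simulated data, variable names, and bracket width are illustrative assumptions made here, not objects taken from the chapter.

```python
# A minimal sketch: worst-case bounds on E[y] with (i) selectively observed
# data and a known outcome support, and (ii) interval-measured outcomes.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# (i) Missing data: y has known support [0, 1]; d = 1 if y is observed.
y = rng.uniform(0, 1, size=n)
d = rng.binomial(1, 0.7, size=n)                     # roughly 30% of outcomes missing
y_lo, y_hi = 0.0, 1.0
p_obs = d.mean()
mean_obs = y[d == 1].mean()
lb_missing = mean_obs * p_obs + y_lo * (1 - p_obs)   # fill missing values with y_lo
ub_missing = mean_obs * p_obs + y_hi * (1 - p_obs)   # fill missing values with y_hi
print(f"Missing data:  E[y] in [{lb_missing:.3f}, {ub_missing:.3f}]"
      f"  (width = P(d=0)*(y_hi - y_lo) = {(1 - p_obs):.3f})")

# (ii) Interval data: only brackets [yL, yU] containing y are observed.
yL = np.floor(y * 5) / 5                             # y reported in brackets of width 0.2
yU = yL + 0.2
print(f"Interval data: E[y] in [{yL.mean():.3f}, {yU.mean():.3f}]")
```

In both cases the bounds use the data alone: the missing observations are replaced by the endpoints of the known support, and the interval-measured outcomes by the endpoints of their brackets.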
In Section I use the same approach to study what can be learned about features of parameters of structural econometric models when the model is incomplete [61][62][63]. Specifically, I discuss single-agent discrete choice models under a variety of challenging situations (interval-measured as well as endogenous explanatory variables; unobserved as well as counterfactual choice sets); finite discrete games with multiple equilibria; auction models under weak assumptions on bidding behavior; and network formation models. Again, I formally derive sharp identification regions for several examples.
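To give a flavor of how an incomplete model generates bounds, the sketch below works through a stylized two-player entry game with pure-strategy Nash behavior, competitive effects [math]\delta_i<0[/math], and independent standard normal shocks. The payoff specification, parameter values, and use of simulation are simplifying assumptions made here for illustration; they are not the chapter's treatment of these models.

```python
# A minimal sketch: probability bounds implied by an incomplete two-player
# entry game (player i enters iff y_{-i} * delta_i + eps_i >= 0, delta_i < 0).
import numpy as np

def outcome_10_bounds(delta1, delta2, n_sim=1_000_000, seed=0):
    """Model-implied lower/upper probabilities of observing the outcome (1, 0).

    With delta_i < 0, both (1,0) and (0,1) are pure-strategy Nash equilibria
    when 0 <= eps_1 < -delta1 and 0 <= eps_2 < -delta2; the model is silent on
    which is selected, so it only bounds P(y = (1,0)).
    """
    rng = np.random.default_rng(seed)
    e1 = rng.standard_normal(n_sim)
    e2 = rng.standard_normal(n_sim)

    is_ne_10 = (e1 >= 0) & (e2 < -delta2)                    # (1,0) is an equilibrium
    multiple = (e1 >= 0) & (e1 < -delta1) & (e2 >= 0) & (e2 < -delta2)
    unique_10 = is_ne_10 & ~multiple                         # (1,0) is the unique equilibrium

    lower = unique_10.mean()   # (1,0) must occur: containment-functional-type bound
    upper = is_ne_10.mean()    # (1,0) can occur: capacity-functional-type bound
    return lower, upper

lo, up = outcome_10_bounds(delta1=-0.5, delta2=-0.5)
print(f"At these parameter values, P(y = (1,0)) is bounded in [{lo:.3f}, {up:.3f}]")
```

A candidate parameter value is retained if the observed frequency of each outcome lies within the corresponding model-implied bounds; collecting such inequalities across outcomes (and covariate values) yields the kind of moment inequality characterizations that recur throughout the chapter.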
I conclude each of these sections with a brief discussion of further theoretical advances and empirical applications that is meant to give a sense of the breadth of the approach, but not to be exhaustive. I refer to the recent survey by [64] for a thorough discussion of empirical applications of partial identification methods.
In Section I discuss finite sample inference. I limit myself to highlighting the challenges that one faces for consistent estimation when the identified object is a set, and several coverage notions and requirements that have been proposed over the last 20 years. I refer to the recent survey by [65] for a thorough discussion of methods to test hypotheses and build confidence sets in moment inequality models.
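Loosely stated, the two leading coverage requirements for a confidence set [math]CS_n[/math] are coverage of each point in the sharp identification region versus coverage of the region as a whole:

[math]
\liminf_{n\to\infty}\,\inf_{\vartheta\in\idr{\theta}}\Prob(\vartheta\in CS_n)\geq 1-\alpha
\qquad\text{versus}\qquad
\liminf_{n\to\infty}\,\Prob(\idr{\theta}\subseteq CS_n)\geq 1-\alpha.
[/math]

The second requirement is the more demanding of the two, and they coincide when the region is a singleton.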
In Section I discuss the distinction between refutable and non-refutable assumptions, and how model misspecification may be detectable in the presence of the former, even within the partial identification paradigm. I then highlight certain challenges that model misspecification presents for the interpretation of sharp identification (as well as outer) regions, and for the construction of confidence sets.
In Section I highlight that while most of the sharp identification regions characterized in Section can be easily computed, many of the ones in Section are more challenging. This is because the latter are obtained as level sets of criterion functions in moderately dimensional spaces, and tracing out these level sets or their boundaries is a non-trivial computational problem. In Section I conclude by providing some considerations on what I view as open questions for future research. I refer to [66] for an earlier review of this literature, and to [67] for a careful presentation of the many notions of identification that are used across the econometrics literature, including an important historical account of how these notions developed over time.
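For instance, when the sharp identification region is characterized by a finite number of moment inequalities [math]\E_\sP[m_j(\cdot;\vartheta)]\leq 0[/math], [math]j=1,\dots,J[/math], with [math]m_j[/math] denoting moment functions of the observable variables, one common (though by no means the only) choice of criterion function is

[math]
\crit_\sP(\vartheta)=\sum_{j=1}^{J}\big(\max\{\E_\sP[m_j(\cdot;\vartheta)],0\}\big)^2,
\qquad
\idr{\theta}=\{\vartheta\in\Theta:\crit_\sP(\vartheta)=0\},
[/math]

so that computing the region amounts to tracing out the zero-level set of [math]\crit_\sP[/math] (or of its sample analog [math]\crit_n[/math]) over [math]\Theta[/math].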
Random Set Theory as a Tool for Partial Identification Analysis
Throughout the sections just described, a simple organizing principle for much of partial identification analysis emerges. The cause of the identification problems discussed can be traced back to a collection of random variables that are consistent with the available data and maintained assumptions. For the measurement problems treated first, this collection is often a simple function of the observed variables. The incompleteness of the data stems from the fact that, instead of observing the singleton variables of interest, one observes set-valued variables to which these belong, but one has no information on their exact value within the sets. For the structural models treated next, the collection of random variables consistent with the maintained assumptions comprises what the model predicts for the endogenous variable(s). The incompleteness of the model stems from the fact that, instead of making a singleton prediction for the variable(s) of interest, the model makes multiple predictions but does not specify how one is chosen.
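Schematically, and with a slight abuse of the chapter's notation, the two cases can be written as

[math]
\text{incomplete data:}\quad \ey\in\eY \ \text{a.s.,} \quad \eY \ \text{a set-valued function of the observables (e.g., an observed interval);}
[/math]

[math]
\text{incomplete model:}\quad \ey\in\eY_\vartheta(\eps) \ \text{a.s.,} \quad \eY_\vartheta(\eps) \ \text{the set of outcomes the model predicts at } \vartheta \ \text{(e.g., the set of equilibrium outcomes).}
[/math]

In both cases the variable of interest is a measurable selection of a random closed set, which is precisely the structure that random set theory is designed to handle.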
The central role of set-valued objects, both stochastic and nonstochastic, in partial identification renders random set theory a natural toolkit to aid the analysis.[Notes 2] This theory originates in the seminal contributions of [68], [69], and [70], with the first self-contained treatment given by [71]. I refer to [72] for a textbook presentation, and to [73][74] for a treatment focusing on its applications in econometrics.
[75] introduce the use of random set theory in econometrics to carry out identification analysis and statistical inference with incomplete data. [76][77] propose it to characterize sharp identification regions both with incomplete data and with incomplete models. [78] propose the use of optimal transportation methods that in some applications deliver the same characterizations as the random set methods. I do not discuss optimal transportation methods in this chapter, but refer to [79] for a thorough treatment.
Over the last ten years, random set methods have been used to unify a number of specific results in partial identification, and to produce a general methodology for identification analysis that dispenses completely with case-by-case distinctions. In particular, as I show throughout the chapter, the methods allow for simple and tractable characterizations of sharp identification regions. The collection of these results establishes that this is indeed a useful tool to carry out econometrics with partial identification, as exemplified by its prominent role both in this chapter and in Chapter XXX in this Volume by [80], which focuses on general classes of instrumental variable models. The random sets approach complements the more traditional one, based on mathematical tools for (single-valued) random vectors, which has proved extremely productive since the beginning of the research program in partial identification.
This chapter shows that to fruitfully apply random set theory for identification and inference, the econometrician needs to carry out three fundamental steps. First, she needs to define the random closed set that is relevant for the problem under consideration using all information given by the available data and maintained assumptions. This is a delicate task, but one that is typically carried out in identification analysis regardless of whether random set theory is applied. Indeed, throughout the chapter I highlight how relevant random closed sets have been characterized in partial identification analysis since the early 1990s, although the connection to the theory of random sets was not made. As a second step, the econometrician needs to determine how the observable random variables relate to the random closed set. Often, one of two cases occurs: either the observable variables determine a random set to which the unobservable variable of interest belongs with probability one, as in incomplete data scenarios; or the (expectation of the) (un)observable variable belongs to (the expectation of) a random set determined by the model, as in incomplete model scenarios. Finally, the econometrician needs to determine which tool from random set theory should be utilized. To date, new applications of random set theory to econometrics have fruitfully exploited (Aumann) expectations and their support functions, (Choquet) capacity functionals, and laws of large numbers and central limit theorems for random sets. The Appendix reports basic definitions of these concepts from random set theory, as well as some useful theorems. The chapter explains in detail, through applications to important identification problems, how these steps can be carried out.
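These three steps can be illustrated with the simplest possible random closed set, a random interval containing an interval-measured outcome. The sketch below (with simulated data and names chosen for illustration, not the chapter's code) defines the set, states the selection property, and applies the support function of the Aumann expectation to recover the sharp bounds on [math]\E[\ey][/math].

```python
# A minimal sketch of the three steps, for a scalar outcome y observed only
# through brackets [yL, yU]: the relevant random closed set is the interval
# Y = [yL, yU], y is a selection of Y, and the Aumann expectation of Y
# (characterized by its support function) gives the sharp bounds on E[y].
import numpy as np

rng = np.random.default_rng(0)
n = 50_000

# Step 1: define the random closed set implied by the data and assumptions.
y = rng.normal(size=n)                               # unobserved outcome (simulated)
yL = y - rng.uniform(0, 1, size=n)                   # observed lower bracket
yU = y + rng.uniform(0, 1, size=n)                   # observed upper bracket

# Step 2: y lies in [yL, yU] with probability one, so E[y] belongs to the
# Aumann expectation of the random interval, namely [E[yL], E[yU]].

# Step 3: use the support function h_Y(u) = max(u*yL, u*yU) for u in {-1, +1};
# its expectation is the support function of the Aumann expectation.
def support_function(u, lo, hi):
    return np.maximum(u * lo, u * hi)

upper = support_function(+1.0, yL, yU).mean()        # equals the sample mean of yU
lower = -support_function(-1.0, yL, yU).mean()       # equals the sample mean of yL
print(f"Sharp bounds on E[y]: [{lower:.3f}, {upper:.3f}]")
```

In one dimension the support function only needs to be evaluated at [math]u=\pm 1[/math]; in higher-dimensional problems the same logic applies with [math]u[/math] ranging over the unit sphere [math]\Sphere[/math], which is how support functions deliver tractable characterizations in the applications discussed later in the chapter.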
Notation

This chapter employs consistent notation that is summarized in Table. Some important conventions are as follows: [math]\ey[/math] denotes outcome variables, [math](\ex,\ew)[/math] denote explanatory variables, and [math]\ez[/math] denotes instrumental variables (i.e., variables that satisfy some form of independence with the outcome or with the unobservable variables, possibly conditional on [math]\ex,\ew[/math]). I denote by [math]\sP[/math] the joint distribution of all observable variables. Identification analysis is carried out using the information contained in this distribution, and finite sample inference is carried out under the presumption that one draws a random sample of size [math]n[/math] from [math]\sP[/math]. I denote by [math]\sQ[/math] the joint distribution whose features the researcher wants to learn. If [math]\sQ[/math] were identified given the observed data (e.g., if it were a marginal of [math]\sP[/math]), point identification of the parameter or functional of interest would attain. I denote by [math]\sR[/math] the joint distribution of all variables, observable and unobservable ones; both [math]\sP[/math] and [math]\sQ[/math] can be obtained from it. In the context of structural models, I denote by [math]\sM[/math] a distribution for the observable variables that is consistent with the model. I note that model incompleteness typically implies that [math]\sM[/math] is not unique. I let [math]\idr{\cdot}[/math] denote the sharp identification region of the functional in square brackets, and [math]\outr{\cdot}[/math] an outer region. In both cases, the regions are indexed by [math]\sP[/math], because they depend on the distribution of the observed data.

Symbol | Description
[math](\Omega,\salg,\P)[/math] | Nonatomic probability space
[math]\R^d,\|\cdot\|[/math] | Euclidean space equipped with the Euclidean norm
[math]\cF,\cG,\cK[/math] | Collection of closed, open, and compact subsets of [math]\R^d[/math] (respectively)
[math]\Sphere = \{x \in \R^d: \|x\| = 1\}[/math] | Unit sphere in [math]\R^d[/math]
[math]\Ball = \{x \in \R^d: \|x\| \leq 1\}[/math] | Unit ball in [math]\R^d[/math]
[math]\conv(A),\cl(A),|B|[/math] | Convex hull and closure of a set [math]A\subset\R^d[/math] (respectively), and cardinality of a finite set [math]B\subset\R^d[/math]
[math]\ex,\ey,\ez,\dots[/math] | Random vectors
[math]x,y,z,\dots[/math] | Realizations of random vectors or deterministic vectors
[math]\eX,\eY,\eZ,\dots[/math] | Random sets
[math]X,Y,Z,\dots[/math] | Realizations of random sets or deterministic sets
[math]\epsilon,\eps,\nu,\zeta[/math] | Unobserved random variables (heterogeneity)
[math]\Theta,\theta,\vartheta[/math] | Parameter space, data generating value for the parameter vector, and a generic element of [math]\Theta[/math]
[math]\sR[/math] | Joint distribution of all variables (observable and unobservable)
[math]\sP[/math] | Joint distribution of the observable variables
[math]\sQ[/math] | Joint distribution whose features one wants to learn
[math]\sM[/math] | A joint distribution of observed variables implied by the model
[math]\sq_{\tau}(\alpha)[/math] | Quantile function at level [math]\alpha \in (0,1)[/math] for a random variable distributed [math]\tau\in\{\sR,\sP,\sQ\}[/math]
[math]\E_\tau[/math] | Expectation operator associated with distribution [math]\tau\in\{\sR,\sP,\sQ\}[/math]
[math]\sT_\eX(K)=\Prob{\eX\cap K\neq\emptyset},\, K\in\cK[/math] | Capacity functional of random set [math]\eX[/math]
[math]\sC_\eX(F)=\Prob{\eX\subset F},\, F\in\cF[/math] | Containment functional of random set [math]\eX[/math]
[math]\stackrel{p}{\rightarrow},\stackrel{\text{a.s.}}{\rightarrow},\Rightarrow[/math] | Convergence in probability, convergence almost surely, and weak convergence (respectively)
[math]\ex\edis\ey[/math] | [math]\ex[/math] and [math]\ey[/math] have the same distribution
[math]\ex\independent\ey[/math] | Statistical independence between random variables [math]\ex[/math] and [math]\ey[/math]
[math]x^\top y[/math] | Inner product between vectors [math]x[/math] and [math]y[/math], [math]x,y\in\R^d[/math]
[math]\bU,\bu[/math] | Family of utility functions and one of its elements
[math]\crit_\sP[/math] | Criterion function that aggregates violations of the population moment inequalities
[math]\crit_n[/math] | Criterion function that aggregates violations of the sample moment inequalities
[math]\idr{\cdot}[/math] | Sharp identification region of the functional in square brackets (a function of [math]\sP[/math])
[math]\outr{\cdot}[/math] | An outer region of the functional in square brackets (a function of [math]\sP[/math])
General references
Molinari, Francesca (2020). "Microeconometrics with Partial Identification". arXiv:2004.11751 [econ.EM].
Notes
- [1] deduce a duality relation with the mean-variance theory of [2] and [3], but the relation does not apply to the sharp bounds they derive. In the Arbitrage Pricing Model [4], bounds on extensions of existing pricing functions, consistent with the absence of arbitrage opportunities, were considered by [5] and [6].
- The first idea of a general random set in the form of a region that depends on chance appears in [7], originally published in 1933. For another early example where confidence regions are explicitly described as random sets, see [8](p. 67). The role of random sets in this chapter is different.
References
- [1] koo:rei50
- [2] mat07
- [3] mat13
- [4] was00
- [5] cly:geo04
- [6] mar:and44
- [7] lea81
- [8] gin21
- [9] fri34
- [10] rei41
- [11] kle:lea84
- [12] lea87
- [13] man89
- [14] man90
- [15] man95
- [16] man03
- [17] man07a
- [18] man13book
- [19] han:jag91
- [20] lut96
- [21] han:hea:lut95
- [22] shi82
- [23] han82comment
- [24] shi03
- [25] lju:sar04
- [26] coc05
- [27] fer03
- [28] cam14
- [29] fau98
- [30] can:den02
- [31] uhl05
- [32] kil:lut17
- [33] sam38
- [34] hou50
- [35] ric66
- [36] afr67
- [37] var82
- [38] blo:mar60
- [39] mar60
- [40] hal73
- [41] mcf75
- [42] fal78
- [43] mcf:ric91
- [44] cra:der14
- [45] blu19
- [46] gil:lea83
- [47] ros:rub83
- [48] lea85
- [49] ros95
- [50] imb03
- [51] hoe40
- [52] fre51
- [53] mak81
- [54] rus82
- [55] fra:nel:sch87
- [56] sho:wel09
- [57] dun:dav53
- [58] cho:man09
- [59] man05
- [60] hir:por19
- [61] tam03
- [62] hai:tam03
- [63] cil:tam09
- [64] ho:ros17
- [65] can:sha17
- [66] tam10
- [67] lew18
- [68] cho53
- [69] aum65
- [70] deb67
- [71] mat75
- [72] mo1
- [73] mol:mol14
- [74] mol:mol18
- [75] ber:mol08
- [76] ber:mol:mol11
- [77] ber:mol:mol12
- [78] gal:hen11
- [79] gal16
- [80] che:ros19