key: cord-1043907-k06eh53k
authors: Toulis, Panos
title: Estimation of Covid-19 prevalence from serology tests: A partial identification approach
date: 2020-10-20
journal: J Econom
DOI: 10.1016/j.jeconom.2020.10.005
sha: 397f8b6889f6fd9120a0ad908d2163aa5b4166a4
doc_id: 1043907
cord_uid: k06eh53k

We propose a partial identification method for estimating disease prevalence from serology studies. Our data are results from antibody tests in some population sample, where the test parameters, such as the true/false positive rates, are unknown. Our method scans the entire parameter space, and rejects parameter values using the joint data density as the test statistic. The proposed method is conservative for marginal inference, in general, but its key advantage over more standard approaches is that it is valid in finite samples even when the underlying model is not point identified. Moreover, our method requires only independence of serology test results, and does not rely on asymptotic arguments, normality assumptions, or other approximations. We use recent Covid-19 serology studies in the US, and show that the parameter confidence set is generally wide, and cannot support definite conclusions. Specifically, recent serology studies from California suggest a prevalence anywhere in the range 0%-2% (at the time of study), and are therefore inconclusive. However, this range could be narrowed down to 0.7%-1.5% if the actual false positive rate of the antibody test was indeed near its empirical estimate (approximately 0.5%). In another study from New York state, Covid-19 prevalence is confidently estimated in the range 13%-17% in mid-April of 2020, which also suggests significant geographic variation in Covid-19 exposure across the US. Combining all datasets yields a 5%-8% prevalence range. Our results overall suggest that serology testing on a massive scale can give crucial information for future policy design, even when such tests are imperfect and their parameters unknown.

Table 1 presents a non-exhaustive summary of such studies around the world. For example, in Germany, serology tests in early April showed a 14% prevalence in a sample of 500 people. In the Netherlands, a study in mid-April showed a lower prevalence at 3.5% in a small sample of blood donors. In the US, in a recent and relatively large study in Santa Clara, California, Bendavid et al. (2020) estimated an in-sample prevalence of 1.5% from 50 positive test results in a sample of 3330 patients. Using a reweighing technique, the authors extrapolated this estimate to 2-4% prevalence in the general population. A follow-up study in LA County found 35 positives out of 846 tests. What is unique about these last two studies is that data from a prior validation study are also available, where, for example, 401 "true negatives" were tested with 2 positive results, implying a false positive rate of 0.5%. Upon publication, these studies received intense criticism because the false positive rate of the test may be comparable in magnitude to the underlying disease prevalence. For example, the Agresti-Coull and Clopper-Pearson 95% confidence intervals for the false positive rate are [0.014%, 1.92%] and [0.06%, 1.79%], respectively. These intervals for the false positive rate are compatible even with a 0% prevalence, since a 1.5% false positive rate yields 0.015 × 3330 ≈ 50 (false) positives on average, the same as the observed value in the sample.
Such standard methods, however, are justified based on approximations, asymptotic arguments, prior specifications (for Bayesian methods), or normality assumptions, which are always suspect in small samples. In this paper, we develop a method that can assess finite-sample statistical significance in a robust way. The key idea is to treat all unknown quantities as parameters, and explore the entire parameter space to assess agreement with the observed data. Our method adopts the partial identification framework, where the goal is not to produce point estimates, but to identify sets of plausible parameter values (Wooldridge and Imbens, 2007; Tamer, 2010; Chernozhukov et al., 2007; Manski, 2003, 2007, 2010; Romano and Shaikh, 2008, 2010; Honoré and Tamer, 2006; Imbens and Manski, 2004; Beresteanu et al., 2012; Stoye, 2009; Kaido et al., 2019). Within that literature, our proposed method appears to be unique in the sense that it constructs a procedure that is valid in finite samples given the correct distribution of the test statistic. Importantly, the choice of the test statistic can affect only the power of our method, but not its validity. Such flexibility may be especially valuable in choosing a test statistic that is both powerful and easy to compute. Thus, the main benefit of our approach is that it is valid with enough computation, whereas more standard methods are only valid with enough samples.

[Table 1 excerpt: 13.7% prevalence, USA, 03/22, PCR, sample of 215 pregnant women in NYC (Sutton et al., 2020); 0.34%, USA, 03/17, model-based (Yadlowsky et al., 2020); see also Spellberg et al. (2020).]

The rest of this paper is structured as follows. In Section 2 we describe the problem formally. In Section 3.1 we describe the proposed method at a high level. A more detailed analysis along with a modicum of theory is given in Section 3.2. In Section 4 we apply the proposed method on data from the Santa Clara study, the LA County study, and a recent study from New York state.

A medical antibody test can be represented by a function t : {0, 1} → {0, 1} of the true disease status x ∈ {0, 1}, and determines whether someone tests positive or negative. As usual, the categorization of the test results can be described through the following table:

              true status, x = 0      x = 1
  t(x) = 0    true negative           false negative
  t(x) = 1    false positive          true positive

We will assume that each test result is an independent random outcome, such that the true positive rate and false positive rate, denoted respectively by q and p, are constant:

  q = P(t(x_i) = 1 | x_i = 1),   (A1)
  p = P(t(x_i) = 1 | x_i = 0),   (A2)

for every tested individual i. This assumption may be untenable in practice. In general, patient characteristics, or test target and delivery conditions, can affect the test results. For example, Bendavid et al. (2020) report slightly different test performance characteristics depending on which antibody (either IgM or IgG) was being detected. We note, however, that this assumption is not strictly necessary for the validity of our proposed inference procedure. It is only useful in order to obtain a precise calculation for the distribution of the test statistic (see Theorem 1 and remarks). To determine test performance characteristics, and gain information about the true/false positive rates of the antibody test, there is usually a validation study where the underlying status of participating individuals is known. In the Covid-19 case, for example, such a validation study could include pre-Covid-19 blood samples that have been preserved, and are thus "true negatives".
To simplify, we assume that in the validation study there is a set I_c^- of participating individuals, where it is known that everyone is a true negative, and a set I_c^+ where everyone is positive; i.e., x_i = 0 for all i ∈ I_c^-, and x_i = 1 for all i ∈ I_c^+. There is also the main study with a set I_m of participating individuals, where the true status is not known. We assume no overlap between the sets I_c^-, I_c^+ and I_m, which is a realistic assumption. We define N_c^- = |I_c^-| and N_c^+ = |I_c^+| as the respective numbers of participants in the validation study, and N_m = |I_m| as the number of participants in the main study. These numbers are observed, but the full patient sets or the patient characteristics may not be observed. We also observe the positive test results in both studies:

  S_c^- = Σ_{i ∈ I_c^-} t(x_i),   S_c^+ = Σ_{i ∈ I_c^+} t(x_i),   S_m = Σ_{i ∈ I_m} t(x_i).

Thus, S_c^- is the number of false positives in the validation study, since we know that all individuals in I_c^- are true negatives. Similarly, S_c^+ is the number of true positives in the validation study, since all individuals in I_c^+ are known to be positive. These numbers offer some simple estimates of the false positive rate and true positive rate of the medical test, respectively:

  p̂ = S_c^- / N_c^-,   q̂ = S_c^+ / N_c^+.

We use (s_{c,obs}^-, s_{c,obs}^+, s_{m,obs}) to denote the observed values of the test positives (S_c^-, S_c^+, S_m), respectively, which are integer-valued random variables. The statistical task is therefore to use the observed data {(N_c^-, N_c^+, N_m), (s_{c,obs}^-, s_{c,obs}^+, s_{m,obs})} and do inference on the quantity

  π = (1 / N_m) Σ_{i ∈ I_m} x_i,   (2)

i.e., the unknown disease prevalence in the main study. We emphasize that π is a finite-population estimand; we discuss (briefly) the issue of extrapolation to the general population in Section 5. The challenge here is that S_m generally includes both false positives and true positives, which depends on the unknown test parameters, namely the true/false positive rates q and p. Since πN_m is the (unknown) number of infected individuals in the main study, we can use Assumptions (A1) and (A2) to write down this decomposition formally:

  S_m = Binom(πN_m, q) + Binom((1 − π)N_m, p),

where Binom(n, ρ) denotes a binomial random variable with n trials and success probability ρ. For brevity, we define S = (S_c^-, S_c^+, S_m) as our joint data statistic, and θ = (p, q, π) as the joint parameter value. The independence of tests implies that the density of S can be computed exactly as follows:

  f(s | θ) = d(s_c^-; N_c^-, p) · d(s_c^+; N_c^+, q) · Σ_k d(k; πN_m, q) · d(s_m − k; (1 − π)N_m, p),   (4)

where d(k; n, s) denotes the probability of k successes in a binomial experiment with n trials and s probability of success. There are several ways to implement Equation (4) efficiently; we defer discussions on computational issues to Section 5.

We begin with an illustrative example to describe the proposed method at a high level. We give more details along with some theoretical guarantees in the section that follows. Let us consider the Santa Clara study (Bendavid et al., 2020) with observed data (N_c^-, N_c^+, N_m) = (401, 197, 3330) and (s_{c,obs}^-, s_{c,obs}^+, s_{m,obs}) = (2, 178, 50). The unknown quantities in our analysis are q, p and π: the true positive rate of the test, the false positive rate, and the unknown prevalence in the main study, respectively. Assume zero prevalence (π = 0%), 90% true positive rate (q = 0.9), and 1.5% false positive rate (p = 0.015). We ask the question: "Is the combination (p, q, π) = (0.015, 0.90, 0) compatible with the data?". Naturally, this can be framed in statistical terms as a null hypothesis:

  H_0 : (p, q, π) = (0.015, 0.90, 0).

A short computational sketch of Equation (4), evaluated at this null value, is given below.
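The following is a minimal illustrative sketch, not the implementation used for the results reported in this paper; the function names, the use of Python/scipy, and the rounding of πN_m to an integer are our own illustrative choices.

import numpy as np
from scipy.stats import binom

def density_Sm(s_m, N_m, p, q, pi):
    # P(S_m = s_m), where S_m = Binom(pi*N_m, q) + Binom((1 - pi)*N_m, p).
    n_inf = int(round(pi * N_m))        # number of infected individuals in the main study
    k = np.arange(0, s_m + 1)           # possible numbers of true positives among the infected
    return float(np.sum(binom.pmf(k, n_inf, q) * binom.pmf(s_m - k, N_m - n_inf, p)))

def joint_density(s, theta, N):
    # f(s | theta) as in Equation (4); s = (s_c_neg, s_c_pos, s_m),
    # theta = (p, q, pi), N = (N_c_neg, N_c_pos, N_m).
    (s_c_neg, s_c_pos, s_m), (p, q, pi), (N_c_neg, N_c_pos, N_m) = s, theta, N
    return (binom.pmf(s_c_neg, N_c_neg, p)
            * binom.pmf(s_c_pos, N_c_pos, q)
            * density_Sm(s_m, N_m, p, q, pi))

# Santa Clara data and the null value of the illustrative example:
N_obs, s_obs = (401, 197, 3330), (2, 178, 50)
print(joint_density(s_obs, (0.015, 0.90, 0.0), N_obs))   # density of the observed data under H_0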
To test this hypothesis, the next step is to decide whether the observed value of S, namely s_obs = (2, 178, 50), is compatible with the distribution of Figure 1. We see that the mode of the distribution is around the point (S_c^-, S_m) = (5, 45), whereas the point (2, 50) is at the lower edge of the distribution. If the observed values were even further out, say at (S_c^-, S_m) = (2, 80), then we could confidently reject H_0, since the density at (2, 80) is basically zero. Here, we have to be careful because the actual observed values are still somewhat likely under H_0. Our method essentially accepts H_0 when the density of this distribution at the observed value s_obs of the statistic S is above some threshold c_0; that is, we decide based on the following rule:

  accept H_0 if f(s_obs | θ_0) > c_0, and reject H_0 otherwise.   (6)

To test H_0 we therefore need to calculate some kind of "p-value" for the observed point. In our construction, we simply test whether the density at the observed values exceeds an appropriately chosen threshold (see Section 3.2). The rule in (6) is reminiscent of the likelihood ratio test, the key difference being that our test does not require maximization of the likelihood function over the parameter space, which is computationally intensive and frequently unstable numerically; we make a concrete comparison in the application of Section 4.3. Our test essentially uses the density of S as the test statistic for H_0, while the threshold c_0 generally depends on the particular null values being tested. Assuming that the test of Equation (6) has been defined, we can then test all possible combinations of our parameter values, θ ∈ Θ, in some large enough parameter space Θ, and then invert this procedure in order to construct the confidence set. As usual, we would like this confidence set to cover the true parameters with some minimum probability (e.g., 95%). In the following section, we show that this is possible through an appropriate construction of the test in Equation (6), which takes into account the level sets of the density function depicted in Figure 1. The overall procedure is computationally intensive, but is valid in finite samples without the need of asymptotic or normality assumptions. The details of this construction, including the appropriate selection of the test threshold and the proof of validity, are presented in the following section.

Let S denote the sample space of the joint statistic S = (S_c^-, S_c^+, S_m), i.e., S = {0, . . . , N_c^-} × {0, . . . , N_c^+} × {0, . . . , N_m}, and let θ = (p, q, π) ∈ Θ be the model parameters. We take Θ to be finite and discrete; e.g., for probabilities we take a grid of values between 0 and 1. Let s_obs = (s_{c,obs}^-, s_{c,obs}^+, s_{m,obs}) denote the observed value of S in the sample. Let f(S | θ) denote the density of the joint statistic conditional on the model parameter value θ, as defined in Equation (4). Suppose that θ_0 is the true unknown parameter value, and assume that

  θ_0 ∈ Θ.   (A4)

Assumption (A4) basically posits that our discretization is fine enough to include the true parameter value with probability one. In our application, this assumption is rather mild as we are dealing with parameters that are either probabilities or integers, and so bounded within well-defined ranges. Moreover, this assumption is implicit in essentially all empirical work since computers operate with finite precision. Our goal is to construct a confidence set Θ_{1−α} ⊆ Θ such that

  P(θ_0 ∈ Θ_{1−α}) ≥ 1 − α,

where α is some desired level (e.g., α = 0.05). Trivially, Θ_{1−α} = Θ satisfies this criterion, so we will aim to make Θ_{1−α} as narrow as possible.
We will also need the following definition:

  ν(z, θ) = Σ_{s ∈ S} I{ 0 < f(s | θ) ≤ z }.   (7)

Function ν depends on the level sets of f, and counts the number of sample points (over the sample space S) with positive likelihood at θ that is no larger than z; evaluated at z = f(s_obs | θ), it counts the points whose likelihood does not exceed the observed likelihood at θ. We can now prove the following theorem.

Theorem 1. Suppose that Assumption (A4) holds. Consider the following construction for the confidence set:

  Θ_{1−α} = { θ ∈ Θ : f(s_obs | θ) · ν(f(s_obs | θ), θ) > α }.   (8)

Then P(θ_0 ∈ Θ_{1−α}) ≥ 1 − α.

Proof. For any fixed θ ∈ Θ consider the function g(z, θ) = z · ν(z, θ), z ∈ [0, 1]. Note that, for fixed θ, the function g(z, θ) is monotone increasing and generally not continuous with respect to z. Let F_θ = { f(s | θ) : s ∈ S }, and define z*_θ as the unique point for which g(z*_θ, θ) − α = 0, if that point exists; if not, define the point as z*_θ = max{ z ∈ F_θ : g(z, θ) ≤ α }. It follows that g(z*_θ, θ) ≤ α, for any θ, and so the event { f(s_obs | θ) · ν(f(s_obs | θ), θ) ≤ α } = { g(f(s_obs | θ), θ) ≤ α } is the same as the event { f(s_obs | θ) ≤ z*_θ }. Now we can bound the coverage probability as follows:

  P(θ_0 ∉ Θ_{1−α}) = P( g(f(s_obs | θ_0), θ_0) ≤ α )
                   = P( f(s_obs | θ_0) ≤ z*_{θ_0} ) = Σ_{s ∈ S} I{ f(s | θ_0) ≤ z*_{θ_0} } f(s | θ_0)
                   ≤ z*_{θ_0} · ν(z*_{θ_0}, θ_0) = g(z*_{θ_0}, θ_0) ≤ α.

In the first line we used the test definition and Assumption (A4); in the second line, we used the monotonicity of g, and the fact that θ_0 is the true parameter value; in the last line, we used the definition of z*_θ and the uniform bound on g.

When z*_θ is not a discontinuity point of g, for all θ, then our test is exact in the sense that P(θ_0 ∈ Θ_{1−α}) = 1 − α. In general, however, this condition will not hold for all Θ, and so the confidence set of Equation (8) may be conservative and lose power. We could potentially achieve more power if instead we define the confidence set as follows:

  Θ^alt_{1−α} = { θ ∈ Θ : Σ_{s ∈ S} I{ f(s | θ) ≤ f(s_obs | θ) } f(s | θ) > α }.   (10)

Theorem 2. Suppose that Assumption (A4) holds. Consider the construction of the confidence set Θ^alt_{1−α} as defined in Equation (10). Then P(θ_0 ∈ Θ^alt_{1−α}) ≥ 1 − α.

Proof. The proof is almost identical to Theorem 1, if we replace the definition of g with g(z, θ) = Σ_{s ∈ S} I{ f(s | θ) ≤ z } f(s | θ). With this definition g is a smoother function, which explains intuitively why this construction will generally lead to more power.

It is straightforward to see that Θ^alt_{1−α} ⊆ Θ_{1−α} almost surely, since Σ_{s ∈ S} I{ f(s | θ) ≤ z } f(s | θ) ≤ z · ν(z, θ) for all z and θ. Since both constructions are valid in finite samples, the choice between Θ_{1−α} and Θ^alt_{1−α} should be mainly based on computational feasibility. The construction of Θ_{1−α} may be easier to compute in practice as it depends on a summary of the distribution f(s | θ) through the level-set function ν, while the construction of Θ^alt_{1−α} requires full knowledge of the entire distribution. If it is computationally feasible, however, Θ^alt_{1−α} should be preferred because it is contained in Θ_{1−α} with probability one, as argued above. This leads to sharper inference. See also the applications on serology studies in Section 4 for more details, where the construction of Θ^alt_{1−α} is feasible.

Theorems 1 and 2 imply the following simple procedure to construct a 95% confidence set.

Procedure 1
1. Observe the value s_obs = (s_{c,obs}^-, s_{c,obs}^+, s_{m,obs}) of test positives in the validation study and the main study. Create a grid Θ ⊂ [0, 1]^3 for the unknown parameters θ = (p, q, π).
2. For every θ in Θ, calculate f(s_obs | θ) as in Equation (4), and ν(f(s_obs | θ), θ) as in Equation (7).
3. Reject all values θ ∈ Θ for which f(s_obs | θ) · ν(f(s_obs | θ), θ) ≤ 0.05; alternatively, reject all values θ ∈ Θ for which Σ_{s ∈ S} I{ f(s | θ) ≤ f(s_obs | θ) } f(s | θ) ≤ 0.05.
4. The remaining values in Θ form the 95% confidence sets Θ_{0.95} or Θ^alt_{0.95}, respectively.

Remark 3.1 (Computation). Procedure 1 is fully parallelizable over θ, and so the main computational difficulty is the need to sum over the sample space S; a sketch of steps 2-3 for a single θ is given below.
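The following sketch (again an illustration of ours, not the implementation behind the reported results) carries out steps 2-3 of Procedure 1 for one parameter value θ. It reuses joint_density() from the sketch above, and restricts the sums over the sample space to points where the binomial factors are numerically non-negligible; the tolerance TOL is an illustrative choice and mildly affects the computed value of ν.

import numpy as np
from scipy.stats import binom

TOL = 1e-12   # pruning tolerance for "numerically zero" pmf values (our choice)

def marginal_pmfs(theta, N):
    # pmfs of the three independent components of S under theta = (p, q, pi).
    (p, q, pi), (Nc_neg, Nc_pos, Nm) = theta, N
    a = binom.pmf(np.arange(Nc_neg + 1), Nc_neg, p)           # pmf of S_c^-
    b = binom.pmf(np.arange(Nc_pos + 1), Nc_pos, q)           # pmf of S_c^+
    n_inf = int(round(pi * Nm))
    c = np.convolve(binom.pmf(np.arange(n_inf + 1), n_inf, q),
                    binom.pmf(np.arange(Nm - n_inf + 1), Nm - n_inf, p))   # pmf of S_m
    return a, b, c

def test_single_theta(s_obs, theta, N, alpha=0.05):
    # True if theta is retained in Theta_{1-alpha} (basic construction), False if rejected.
    z = joint_density(s_obs, theta, N)              # f(s_obs | theta), Equation (4)
    a, b, c = marginal_pmfs(theta, N)
    a, b, c = a[a > TOL], b[b > TOL], c[c > TOL]    # prune numerically-zero factors
    f_all = a[:, None, None] * b[None, :, None] * c[None, None, :]
    nu = int(np.sum(f_all <= z))                    # nu(f(s_obs|theta), theta), Equation (7)
    return z * nu > alpha                           # step 3: keep theta iff f * nu > alpha

# Example: the null value of the illustrative example in Section 3.1 (expected to be kept).
print(test_single_theta((2, 178, 50), (0.015, 0.90, 0.0), (401, 197, 3330)))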
Note, however, that Procedure 1 can work for any choice of S given its density f(S | θ). Thus, our method offers valuable flexibility for inference; for instance, f could be simulated, or S could be a simple statistic (e.g., sample averages) and not necessarily an "arg max" estimator. See Section 5 for more discussion on computation.

Remark 3.2 (Identification). Procedure 1 is not a typical partial identification method in the sense that there are settings under which the model of Equation (4) is point identified (i.e., when N_m, N_c^-, N_c^+ → ∞). However, we choose to describe Procedure 1 as a partial identification method for two main reasons. First, it is more plausible, in practice, that the calibration studies are small and finite (N_c^-, N_c^+ < ∞), since a calibration study needs to have high-quality, ground-truth data. Second, it can happen that we do not have both kinds of calibration studies available (i.e., it could be that either N_c^- = 0 or N_c^+ = 0). In both of these settings, the underlying model is no longer point identified, and so Procedure 1 is technically a partial identification method.

Remark 3.3 (Conservativeness). Procedure 1 generally produces conservative confidence intervals. The slack in the coverage bound is very small, in general (e.g., ∼10^-3 in the Santa Clara study). In this case, the alternative construction is "approximately exact" in the sense that the coverage probability of Θ^alt_{1−α} is almost equal to (1 − α).

Remark 3.4 (Marginal inference). The parameter of interest in our application may be only the disease prevalence, whereas the true/false positive rates of the antibody test may be considered "nuisance" parameters. In this paper, we directly project Θ_{1−α} (or Θ^alt_{1−α}) on a single dimension to perform marginal inference (see Section 4), but this is generally conservative, especially at the boundary of the parameter space (Stoye, 2009; Kaido et al., 2019; Chen et al., 2018). A sharper way to do marginal inference with our procedures is an interesting direction for future work.

How does our method compare to a more standard frequentist or Bayesian approach? Here, we discuss two key differences. First, as we have repeatedly emphasized in this paper, our method is valid in finite samples under only independence of test results, which is a mild assumption. In contrast, a standard frequentist approach, say based on the bootstrap, is inherently approximate and relies on asymptotics, while a Bayesian method requires the specification of priors and posterior sampling. Of course, our procedure requires more computation, mainly compared to the bootstrap, and can be conservative for marginal inference (see Remark 3.4), but this is arguably a small price to pay in a critical application such as the estimation of Covid-19 prevalence. A second, more subtle, difference is the way our method performs inference. Specifically, we decide whether any θ ∈ Θ is in the confidence set based on the entire density f(s | θ) over all s ∈ S, whereas both frequentist and Bayesian methods typically perform inference "around the mode" of the likelihood function f(s_obs | θ) with fixed s_obs ∈ S (we ignore how the prior specification affects Bayesian inference to simplify exposition). This can explain, on an intuitive level, how the inferences of the respective methods may differ. Figure 2 illustrates the difference. On the left panel, we plot the likelihood, f(s_obs | θ), as a function of θ ∈ Θ. Typically, in frequentist or Bayesian methods, the confidence set is around the mode, say θ̂.
We see that a parameter value, say θ_1, with a likelihood value, f(s_obs | θ_1), that is low in absolute terms will generally not be included in the confidence set. However, in our approach, the value f(s_obs | θ_1) is not important in absolute terms for doing inference, but is only important relative to all other values {f(s | θ_1) : s ∈ S} of the test statistic distribution f(s | θ_1). Such inference will typically include the mode, θ̂, but will also include parameter values at the tails of the likelihood function, such as θ_1. As such, our method is expected to give more accurate inference in small-sample problems, or in settings with poor identifiability where the likelihood is non-smooth and multimodal. We argue that we actually see these effects in the application on Covid-19 serology studies analyzed in the following section; see also Section 4.3 and Appendix D for concrete numerical examples.

In this section, we apply the inference procedure of Section 3.3 to several serology test datasets in the US. Moreover, we present results for combinations of these datasets, assuming that the tests have identical specifications. This is likely an untenable assumption, but it helps to illustrate how we can use our approach to flexibly combine all evidence. Before we present the analysis, we first discuss some data on serology test performance to inform our inference.

Figure 2: Illustration of the main difference between standard methods of inference and the partial identification method in this paper. Left: In standard methods, inference is typically based on the likelihood f(s_obs | θ) as a function over Θ, and around some mode θ̂. Parameter values with low likelihood, such as θ_1, are not included in the confidence set. Middle & right: In our method, inference is based on the entire distribution function f(s | θ) over the sample space S of the statistic. As such, θ̂ will usually be in the confidence set (middle plot). Moreover, θ_1 will also be in the confidence set if f(s_obs | θ_1) is high relative to the rest of f(s | θ_1), even when f(s_obs | θ_1) is small relative to the mode value f(s_obs | θ̂) (right plot).

An important aspect of serology studies is the test performance characteristics. As of May 2020, there are perhaps more than a hundred commercial serology tests in the US, but they can differ substantially across manufacturers and technologies. In our application, we use data from Bendavid et al. (2020), who applied a serology testing kit distributed by Premier Biotech. Bendavid et al. (2020) used validation test results provided by the test manufacturer, and also performed a local validation study in the lab. The combined validation study estimated a true positive rate of 80.3% (95% CI: 72.1%-87%), and a false positive rate of 0.5% (95% CI: 0.1%-1.7%). To get an idea about how these performance characteristics relate to other available serology tests, we use a dataset published by the FDA based on benchmarking 12 other testing kits to grant emergency use authorization (EUA) status. The dataset is summarized in Table 2.

In the Santa Clara study, Bendavid et al. (2020) report a validation study and a main study, with (N_c^-, N_c^+, N_m) = (401, 197, 3330) participants, respectively. The observed test positives are s_obs = (s_{c,obs}^-, s_{c,obs}^+, s_{m,obs}) = (2, 178, 50), respectively. Given these data, we produce the 95% confidence sets for (p, q, π) following both procedures in (8) and (10) described in Section 3.3; a usage sketch of this scan is given below.
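As a usage sketch (ours, with an illustrative and fairly coarse grid; the actual analysis uses a finer grid), the scan of Procedure 1 and the projection of Remark 3.4 for the Santa Clara data can be written as follows, reusing test_single_theta() from the sketch in Section 3.3.

import numpy as np

N_obs, s_obs = (401, 197, 3330), (2, 178, 50)
grid_p  = np.arange(0.000, 0.0301, 0.001)   # false positive rate: 0% to 3% (illustrative range)
grid_q  = np.arange(0.70, 1.0001, 0.01)     # true positive rate: 70% to 100%
grid_pi = np.arange(0.000, 0.0301, 0.001)   # prevalence: 0% to 3%

kept = [(p, q, pi)
        for p in grid_p for q in grid_q for pi in grid_pi
        if test_single_theta(s_obs, (p, q, pi), N_obs)]     # Theta_0.95 on this grid

pis = [pi for (_, _, pi) in kept]
print("projected 95%% prevalence range: %.1f%% - %.1f%%" % (100 * min(pis), 100 * max(pis)))

The loop is embarrassingly parallel over θ, which is what the cluster implementation of Section 5.1 exploits.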
In Figure 11 of Appendix C, we jointly plot all triples in the 3-dimensional space Θ_{0.95} of Equation (8), with additional coloring based on prevalence values. We see that the confidence set is a convex region tilting to higher prevalence values as the false positive rate of the test decreases. The true positive rate does not materially affect the identified prevalence values, as long as it stays in the range 80%-95%. To better visualize the pairwise relationships between the model parameters, we also provide Figure 3, which breaks down Figure 11 into two subplots, one visualizing the pairs (π, p) and another visualizing the pairs (π, q). The figure visualizes both Θ_{0.95} and Θ^alt_{0.95} to illustrate the differences between the two constructions. From Figure 3, we see that the Santa Clara study is not conclusive about Covid-19 prevalence. A prevalence of 0% is plausible, given a high enough false positive rate. However, if the true false positive rate is near its empirical value of 0.5%, as estimated by Bendavid et al. (2020), then the identified prevalence rate is estimated in the range 0.4%-1.8% in Θ_{0.95}. Under this assumption, we see that Θ^alt_{0.95} offers a sharper inference, as expected, with an estimated prevalence in the range 0.7%-1.5%. Even though, strictly speaking, the statistical evidence is not sufficient here for definite inference on prevalence, we tend to favor the latter interval because (i) common sense precludes 0% prevalence in Santa Clara County (total pop. of about 2 million); (ii) the interval generally agrees with the test performance data presented earlier; and (iii) it is still in the low end compared to prevalence estimates from other serology studies (see Table 1). Regardless, pinning down the false positive rate is important for estimating prevalence, especially when prevalence is as low as it appears to be in the Santa Clara study. Roughly speaking, a decrease of 1% in the false positive rate implies an increase of 1.3% in prevalence.

In this section, we discuss how our method practically compares to more standard methods using data from the Santa Clara study. In our comparison we include Bayesian methods, a classical likelihood ratio-based test, and the Monte Carlo-based approach to partial identification proposed by Chen et al. (2018). Due to initial criticism, the authors of the original Santa Clara study published a revision of their work, where they use a bootstrap procedure to calculate confidence intervals for prevalence in the range 0.7%-1.8%. Some recent Bayesian analyses report wider prevalence intervals in the range 0.3%-2.1% (Gelman and Carpenter, 2020). In another Bayesian multi-level analysis, Levesque and Maybury (2020) report similar findings but mention that posterior summarization here may be subtle, since the posterior density of prevalence in their specification includes 0%. These results are in agreement with our analysis in the previous section only if we assume that the true false positive rate of the serology test was near its empirical estimate (∼0.5%). We discussed intuitively the reason for this discrepancy in Section 3.4, where we argued that standard methods typically do inference "around the mode" of the likelihood, and may thus miscalculate the amount of statistical information hidden in the tails. For a numerical illustration, consider two parameter values, namely θ_1 = (0.5%, 90%, 1.2%) and θ_2 = (1.5%, 80%, 0%), where the components denote the false positive rate, true positive rate, and prevalence, respectively; a small computational sketch of this comparison is given below.
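This sketch is our own code, reusing joint_density(), marginal_pmfs() and TOL from the sketches above; the exact values it prints depend mildly on the pruning tolerance.

import numpy as np

N_obs, s_obs = (401, 197, 3330), (2, 178, 50)
theta_1 = (0.005, 0.90, 0.012)   # (false positive rate, true positive rate, prevalence)
theta_2 = (0.015, 0.80, 0.000)

for theta in (theta_1, theta_2):
    z = joint_density(s_obs, theta, N_obs)
    a, b, c = (x[x > TOL] for x in marginal_pmfs(theta, N_obs))
    f_all = a[:, None, None] * b[None, :, None] * c[None, None, :]
    tail_mass = float(np.sum(f_all[f_all <= z]))   # statistic of the alternative construction (10)
    print(theta, "likelihood at s_obs:", z, "mass at or below it:", tail_mass)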
In the Santa Clara study, f(s_obs | θ_1) = 2.2 × 10^-3 and f(s_obs | θ_2) = 9.58 × 10^-8; that is, θ_2 (which implies 0% prevalence) maps to a likelihood value that is many orders of magnitude smaller than θ_1. In fact, θ_1 is close to the mode of the likelihood, and so frequentist or Bayesian inference is mostly based around that mode, ignoring the tails of the likelihood function, such as θ_2. For our method, however, the small value of f(s_obs | θ_2) is more or less irrelevant; what matters is how this value compares to the entire distribution f(s | θ_2). It turns out that f(s_obs | θ_2) ν(f(s_obs | θ_2), θ_2) = 0.137, that is, 13.7% of the mass of f(s | θ_2) is below the observed level f(s_obs | θ_2) = 9.58 × 10^-8. As such, θ_2 cannot be rejected at the 5% level (see also Appendix D). This highlights the key difference of our procedure compared to frequentist or Bayesian procedures. More generally, we expect to see such important differences between the inference from our method and the inference from other, more standard methods in settings with small samples or poor identification (e.g., non-separable, multimodal likelihood).

As briefly described in Section 3.1, our test is related to the likelihood ratio test (Lehmann and Romano, 2006, Chapter 3). Here, we study the similarities and differences between the two tests, both theoretically and empirically through the Santa Clara study. Specifically, consider testing the null hypothesis that the true parameter is equal to some value θ_0 using the likelihood ratio statistic

  t(S | θ_0) = f(S | θ_0) / max_θ f(S | θ).   (11)

Since f is known analytically from Equation (4), the null distribution of t(S | θ_0) can be fully simulated. An exact p-value can then be obtained by comparing this null distribution with the observed value t(s_obs | θ_0). We can see that this method is similar to ours in the sense that both methods use the full density f(S | θ_0) in the test, and both are exact. The main difference, however, is that our method uses a summary of the density values f(S | θ_0) that are below the observed value f(s_obs | θ_0), which avoids the expensive (and sometimes numerically unstable) maximization in the denominator of the likelihood ratio test in (11). Our proposed method turns out to be orders of magnitude faster than the likelihood ratio approach, as we get 50-fold to 200-fold speedups in our setup; see Section 5.1 for a more detailed comparison of computational efficiency. To compare the inference between the two tests efficiently, we sampled 5,000 different parameter values from inside Θ_{0.95}, i.e., the 95% confidence set from the basic test in Equation (8), and 5,000 values from outside that set, i.e., from Θ \ Θ_{0.95}, and applied the likelihood ratio test to each value. The likelihood ratio test rejected 3% of the values from the first set, and 98% of the values from the second set, indicating a good amount of overlap between the two tests. The correlation between the p-value from the likelihood ratio test and the values f(s_obs | θ) ν(f(s_obs | θ), θ), which our basic test uses to make a decision in Equation (8), was equal to 0.94. The correlation with the alternative confidence set construction is 0.90, using instead the values Σ_{s ∈ S} I{ f(s | θ) ≤ f(s_obs | θ) } f(s | θ) in the above calculation. Since the likelihood ratio test is exact, these results suggest that our test procedures are generally high-powered. In Figures 7 and 8 of Appendix A, we plot the 95% confidence sets from the likelihood ratio test described above for the Santa Clara study and the LA county study (of the following section). A sketch of the simulation-based likelihood ratio test is given below.
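This sketch is our own simplification: the maximization in the denominator is approximated by a grid search over Θ (Appendix A instead uses BFGS on a logit re-parameterization), and the grid and number of simulations are illustrative choices. It reuses joint_density() from the earlier sketches.

import numpy as np

def simulate_S(theta, N, rng):
    # Draw one value of S = (S_c^-, S_c^+, S_m) under theta.
    (p, q, pi), (Nc_neg, Nc_pos, Nm) = theta, N
    n_inf = int(round(pi * Nm))
    return (rng.binomial(Nc_neg, p),
            rng.binomial(Nc_pos, q),
            rng.binomial(n_inf, q) + rng.binomial(Nm - n_inf, p))

def lr_pvalue(s_obs, theta_0, N, grid, n_sim=200, seed=0):
    rng = np.random.default_rng(seed)
    max_f = lambda s: max(joint_density(s, th, N) for th in grid)   # grid-based approximation
    t = lambda s: joint_density(s, theta_0, N) / max_f(s)           # likelihood ratio statistic (11)
    t_obs = t(s_obs)
    t_sim = [t(simulate_S(theta_0, N, rng)) for _ in range(n_sim)]
    return float(np.mean([tj <= t_obs for tj in t_sim]))            # one-sided p-value

# Illustration on the Santa Clara data, testing the null value of Section 3.1.
grid = [(p, q, pi) for p in np.arange(0.0, 0.031, 0.005)
                   for q in np.arange(0.70, 1.01, 0.05)
                   for pi in np.arange(0.0, 0.031, 0.005)]
print(lr_pvalue((2, 178, 50), (0.015, 0.90, 0.0), (401, 197, 3330), grid))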
The estimated prevalence from the likelihood ratio test is 0%-1.9% for Santa Clara, which is narrower than the projection of Θ_{0.95} but wider than that of Θ^alt_{0.95}, as reported earlier; the same holds for LA county. As with our method, prevalence here is estimated through direct projection of the confidence set, which may be conservative. It is also possible that with more samples the likelihood ratio test could achieve the same interval as Θ^alt_{0.95} (we used only 100 samples here), but this would come at an increased computational cost. Overall, the likelihood ratio test produces very similar results to our method, but it is not as efficient computationally.

Next, we consider the Monte Carlo approach of Chen et al. (2018) for partially identified models. The idea is to sample from a quasi-posterior distribution, and then calculate q_n, the 95th percentile of { f(s_obs | θ^(j)), j = 1, . . . }, where θ^(j) denotes the j-th sample from the posterior. The 95% confidence set is then defined in terms of this cutoff on the likelihood values. We implemented this procedure with an MCMC chain that appears to be mixing well; see Appendix B and Figure 9 for details. The resulting 95% confidence set is given in Figure 10 of Appendix B. Simple projection yields a prevalence in the range 0.9%-1.43%. This relatively narrow range suggests that the MCMC "spends more time" around the mode of the likelihood, which we back up with numerical evidence in Appendix B. Finally, we also tried Procedure 3 of Chen et al. (2018), which does not require MCMC simulations but is generally more conservative. Prevalence was estimated in the range 0.12%-1.65%, which is comparable to our method and the likelihood ratio test.

Next, we analyze the results from a recent serology study in Los Angeles county, which estimated a prevalence of 4.1% over the entire county population. We use the same validation study as before, since this study was executed by the same team as the Santa Clara one. Here, the main study had N_m = 846 participants with s_{m,obs} = 35 positives. For inference, we only use the alternative construction, Θ^alt_{0.95}, of Equation (10) to simplify exposition. The results are shown in Figure 4. In contrast to the Santa Clara study, we see that the results from this study are conclusive. The prevalence rate is estimated in the range 1.7%-5.2%. If the false positive rate is, for example, close to its empirical estimate (0.5%), then the identified prevalence is relatively high, somewhere in the range 3%-5.2%. We also see that the true positive rate is estimated in the range 85%-95%, which is higher than the empirical point estimate of 80% provided by Bendavid et al. (2020). In fact, the empirical point estimate is not even in the 95% confidence set. Finally, as an illustration, we combine the data from the Santa Clara and LA county studies. The assumption is that the characteristics of the tests used in both studies were identical. The results are shown in Figure 12 of Appendix C. We see that 0% prevalence is consistent with the combined study as well. Furthermore, prevalence values higher than 2.5% do not seem plausible in the combined data.

Figure 4: Visualization of Θ^alt_{0.95} for the LA county study. The evidence for Covid-19 prevalence is stronger than in the Santa Clara study. Prevalence is estimated in the range 1.7%-5.2%; this narrows to 3%-5.2% if we assume a 0.5% false positive rate for the antibody test.

Recently, a quasi-randomized study was conducted in New York state, including NYC, which sampled individuals shopping in grocery stores. Details about this study were not made available.
Here, we assume that the medical testing technology used was the same as in the Santa Clara and LA county studies, or at least similar enough that the comparison remains informative. Under this assumption, we can use the same validation study as before, with (N_c^-, N_c^+) = (401, 197) participants in the validation study, and (s_{c,obs}^-, s_{c,obs}^+) = (2, 178) positives, respectively. The main study in New York had N_m = 3000 participants with s_{m,obs} = 420 observed test positives. The Θ^alt_{0.95} confidence set on this dataset is shown in Figure 5. We see that the evidence in this study is much stronger than in the Santa Clara/LA county studies, with an estimated prevalence in the range 12.9%-16.6%. The true positive rate is now an important identifying parameter in the sense that knowing its true value could narrow down the confidence set even further. We see that this study gives strong evidence for prevalence in the range 13%-16.6%.

Finally, in Figure 13 of Appendix C we present prevalence estimates for a combination of all datasets presented so far, while using both constructions, Θ_{0.95} and Θ^alt_{0.95}, to illustrate their differences. As mentioned earlier, this requires the assumption that the antibody testing kits used in all three studies had identical specifications, or at least very similar ones, so that the comparison remains informative. This assumption is most likely untenable given the available knowledge. However, we present the results there for illustration and completeness. The general picture in the combined study is a juxtaposition of earlier findings. For example, both the false and true positive rates are now important for identification. The identified prevalence is in the range 5.2%-8.2% in Θ^alt_{0.95} (and 3.2%-8.9% in Θ_{0.95}). These numbers are larger than in the Santa Clara/LA county studies but smaller than in the New York study.

The procedure described in Section 3.3 is computationally intensive for two main reasons. First, we need to consider all values of θ ∈ Θ, which is a three-dimensional grid. Second, given some θ, we need to calculate f(s | θ) for each s ∈ S, which is also a three-dimensional grid. To deal with the first problem we can use parallelization, since the test decisions in step 3 of our procedure are independent of each other. For instance, the results in Section 4 were obtained in a computing cluster (managed by Slurm) comprising 500 nodes, each with x86 architecture, 64-bit processors, and 16GB of memory. The total wall clock time to produce all results of the previous section was about 1 hour. The results for, say, the Santa Clara study can be obtained in much shorter time (a few minutes) because the study contains few positive test results. To address the second computational bottleneck we can exploit the independence property between S_c^-, S_c^+, and S_m, as shown in the product of Equation (4). Since any zero term in this product implies a zero value for f, we can ignore all individual term values that are very small. Through numerical experiments, we estimate that this computational trick prunes on average 97% of S, leading to a significant computational speedup. For example, testing one single value θ_0 ∈ Θ takes about 0.25 seconds on a typical high-end laptop, which is a 200-fold speedup compared to the 50 seconds required by the likelihood ratio test of Section 4.3.2; see Appendix A for more details. A sketch of the pruning idea is given below.
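The sketch reuses marginal_pmfs() and TOL from the sketch in Section 3.3; the reported fraction is ours and depends on the chosen tolerance and on θ, so it need not match the 97% figure quoted above exactly.

def pruned_fraction(theta, N):
    # Fraction of the sample space S that can be ignored because at least one of the three
    # binomial factors in Equation (4) is numerically negligible.
    Nc_neg, Nc_pos, Nm = N
    a, b, c = marginal_pmfs(theta, N)
    kept = int((a > TOL).sum()) * int((b > TOL).sum()) * int((c > TOL).sum())
    total = (Nc_neg + 1) * (Nc_pos + 1) * (Nm + 1)
    return 1.0 - kept / total

print(pruned_fraction((0.005, 0.90, 0.012), (401, 197, 3330)))   # fraction of S pruned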
As mentioned earlier, prevalence π in Equation (2) is a finite-population estimand, that is, it is a number that refers to the particular population in the study. Theorem 1 shows that our procedure is valid for π only under Assumption (A4). However, to extrapolate to the general population we generally need to assume that I_c^-, I_c^+, I_m are random samples from the population. This is currently an untenable assumption. For example, in the Santa Clara study the population of middle-aged white women was overrepresented, while Asian and Latino communities were underrepresented. The impact of such selection bias on the inferential task is very hard to ascertain in the available studies. Techniques such as post-stratification or reweighing can help, but at this early stage any extrapolation using distributional assumptions would be too speculative. However, selection bias is a well-known issue among researchers, and can be addressed as widespread and carefully designed antibody testing catches on. We leave this for future work.

In this paper, we presented a partial identification method for estimating prevalence of Covid-19 from randomized serology studies. The benefit of our method is that it is valid in finite samples, as it does not rely on asymptotics, approximations or normality assumptions. We show that some recent serology studies in the US are not conclusive (0% prevalence is in the 95% confidence set). However, the New York study gives strong evidence for high prevalence in the range 12.9%-16.6%. A combination of all datasets shifts this range down to 5.2%-8.2%, under a test uniformity assumption. Looking ahead, we hope that the method developed here can contribute to a more robust analysis of future Covid-19 serology tests.

Here, we illustrate the differences between our proposed inferential method and likelihood-based methods (both Bayesian and frequentist) through a simple numerical example. Suppose that S is such that |S| = N, with N extremely large, and fix some parameter value θ_1 to test. Suppose also that the conditional density f(S | θ_1) is defined as: f(s_0 | θ_1) = 0.95 − ε_1, f(s_1 | θ_1) = ε, and f(s | θ_1) = (0.05 − ε + ε_1)/(N − 2) for all s ∈ S \ {s_0, s_1}. Set both ε and ε_1 to be infinitesimal values. As such, under θ_1 we observe s_0 with probability roughly equal to 0.95, or s_1 with some very small probability ε, or observe any other remaining value from S uniformly at random. Suppose we observe s_obs = s_1 in the data. Should we reject or accept θ_1? Since we can make ε arbitrarily small, an inferential method that focuses only on the likelihood function would conclude that any θ ∈ Θ is more plausible than θ_1, as long as f(s_obs | θ) >> ε. Both frequentist and Bayesian methods would agree with such a conclusion, and typically would perform inference around the mode of f(s_obs | θ) with respect to θ. However, our procedure reaches a different conclusion, and actually accepts θ_1 (at the 5% level)! The reason is that

  Σ_{s ∈ S} I{ f(s | θ_1) ≤ f(s_obs | θ_1) } f(s | θ_1) = ε + ((0.05 − ε + ε_1)/(N − 2)) · (N − 2) = 0.05 + ε_1 > 0.05,

since, for N large enough, the density at each remaining point is smaller than ε. That is, even though f(s_obs | θ_1) is equal to a tiny value, there is still 5% of the mass of f(s | θ_1) at or below that value. A small numerical check of this example is given below.
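The check is our own: we replace the "infinitesimal" quantities with concrete values and pick N large enough that every remaining point has density below the observed one.

N, eps, eps1 = 10**8, 1e-9, 1e-9         # need 0.05 / (N - 2) < eps for the argument to apply
f_s0 = 0.95 - eps1                       # density at s_0 (above the observed level, so not counted)
f_s1 = eps                               # density at the observed point s_1
f_rest = (0.05 - eps + eps1) / (N - 2)   # density at each of the N - 2 remaining points

# Mass of f(.|theta_1) at or below the observed density f(s_1|theta_1):
mass_below = f_s1 + f_rest * (N - 2)     # s_1 contributes eps; all remaining points also count
print(mass_below)                        # equals 0.05 + eps1, which exceeds 0.05
assert mass_below > 0.05                 # so theta_1 is not rejected at the 5% level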
References
Partial identification in econometrics.
Estimating the Covid-19 infection rate: Anatomy of an inference problem.
Inference for identifiable parameters in partially identified econometric models.
Inference for the identified set in partially identified econometric models.
Community prevalence of SARS-CoV-2 among patients with influenza-like illnesses presenting to a Los Angeles medical center.
More on confidence intervals for partially identified parameters.
Repeated seroprevalence of anti-SARS-CoV-2 IgG antibodies in a population-based sample from Geneva.
Universal screening for SARS-CoV-2 in women admitted for delivery.
Partial identification in econometrics.
Estimate of Covid-19 case prevalence in India based on surveillance data of patients with severe acute respiratory illness.
What's new in econometrics? Lecture 9: Partial identification.
Estimation of SARS-CoV-2 infection prevalence in Santa Clara County.

A. More details on the likelihood ratio test

The concrete testing procedure for the likelihood ratio test of Section 4.3.2 is as follows.
1. Define the test statistic t(s | θ_0) = f(s | θ_0) / max_θ f(s | θ), as in Equation (11).
2. Calculate the observed value t_obs = t(s_obs | θ_0).
3. Sample {s^(j), j = 1, . . . , r} from f(S | θ_0) (we set r = 1,000 samples).
4. Calculate the one-sided p-value as (1/r) Σ_j I{ t(s^(j) | θ_0) ≤ t_obs }.

For the maximization in the denominator of t(s | θ_0) we use the standard BFGS algorithm on a natural re-parameterization. Specifically, we define the natural parameters as ψ = (ψ_0, ψ_1, ψ_2) = (logit(p), logit(q), logit(π)), where (p, q, π) ≡ θ are the original model parameters, i.e., the false positive rate, true positive rate, and prevalence, respectively, and logit(z) = log(z/(1 − z)), z ∈ (0, 1). To avoid numerical instabilities we define logit(0) = log(ε/(1 − ε)) and logit(1) = log((1 − ε)/ε), where ε is a small constant, e.g., ε = 1e-8. Since the natural parameter ψ = (ψ_0, ψ_1, ψ_2) is unconstrained, the optimization routine becomes faster and easier; mapping back from ψ to θ is also straightforward. The maximization takes about 0.05 seconds of wall-clock time on a typical high-end laptop. It therefore takes a total of 50 seconds to test one single hypothesis based on 1,000 samples of the likelihood ratio. In contrast, our partial identification method takes 0.25 seconds of wall-clock time to test the same single null hypothesis, a 200-fold speedup. As explained in Section 5.1, this is because the computation of f(s | θ) can be done very efficiently due to the decomposition of f into three independent terms in Equation (4). Since the likelihood ratio test cannot feasibly be run over the entire grid Θ, we chose to sample randomly 5,000 parameter values from the basic confidence set, Θ_{0.95}, and 5,000 values from Θ \ Θ_{0.95}, and then test each value using the likelihood ratio test. The idea is to explore the agreement of the two tests. The overlap between the likelihood ratio test decisions and the basic construction is 97.3% for the values from Θ_{0.95}, and 97.7% for the values from Θ \ Θ_{0.95}. There was even more