key: cord-134926-dk28wutc
title: Scalable Estimation of Epidemic Thresholds via Node Sampling
authors: Dasgupta, Anirban; Sengupta, Srijan
date: 2020-07-28
doc_id: 134926
cord_uid: dk28wutc

Abstract: Infectious or contagious diseases can be transmitted from one person to another through social contact networks. In today's interconnected global society, such contagion processes can cause global public health hazards, as exemplified by the ongoing Covid-19 pandemic. It is therefore of great practical relevance to investigate the network transmission of contagious diseases from the perspective of statistical inference. An important and widely studied boundary condition for contagion processes over networks is the so-called epidemic threshold. The epidemic threshold plays a key role in determining whether a pathogen introduced into a social contact network will cause an epidemic or die out. In this paper, we investigate epidemic thresholds from the perspective of statistical network inference. We identify two major challenges that are caused by high computational and sampling complexity of the epidemic threshold. We develop two statistically accurate and computationally efficient approximation techniques to address these issues under the Chung-Lu modeling framework. The second approximation, which is based on random walk sampling, further enjoys the advantage of requiring data on a vanishingly small fraction of nodes. We establish theoretical guarantees for both methods and demonstrate their empirical superiority.

Infectious diseases are caused by pathogens, such as bacteria, viruses, fungi, and parasites. Many infectious diseases are also contagious, which means the infection can be transmitted from one person to another when there is some interaction (e.g., physical proximity) between them. Today, we live in an interconnected world where such contagious diseases could spread through social contact networks to become global public health hazards.
A recent example of this phenomenon is the Covid-19 outbreak caused by the so-called novel coronavirus (SARS-CoV-2), which has spread to many countries (Zhu et al., 2020; Wang et al., 2020; Sun et al., 2020). This global outbreak has caused serious social and economic repercussions, such as massive restrictions on movement and share market declines (Chinazzi et al., 2020). It is therefore of great practical relevance to investigate the transmission of contagious diseases through social contact networks from the perspective of statistical inference. Consider an infection being transmitted through a population of n individuals. According to the susceptible-infected-recovered (SIR) model of disease spread, the pathogen can be transmitted from an infected person (I) to a susceptible person (S) with an infection rate given by β, and an infected individual becomes recovered (R) with a recovery rate given by µ. This can be modeled as a Markov chain whose state at time t is given by a vector (X_1^t, . . . , X_n^t), where X_i^t denotes the state of the i-th individual at time t, i.e., X_i^t ∈ {S, I, R}. For a population of n individuals, the state space of this Markov chain becomes extremely large, with 3^n possible configurations, which makes it impractical to study the exact system. This problem was addressed in a series of three seminal papers by Kermack and McKendrick (Kermack and McKendrick, 1927, 1932, 1933). Instead of modeling the disease state of each individual at a given point of time, they proposed compartmental models, where the goal is to model the number of individuals in a particular disease state (e.g., susceptible, infected, recovered) at a given point of time. Since their classical papers, there has been a tremendous amount of work on compartmental modeling of contagious diseases over the last ninety years (Hethcote, 2000; Van den Driessche and Watmough, 2002; Brauer et al., 2012).
Compartmental models make the unrealistic assumption of homogeneity, i.e., each individual is assumed to have the same probability of interacting with any other individual. In reality, individuals interact with each other in a highly heterogeneous manner, depending upon various factors such as age, cultural norms, lifestyle, weather, etc. The contagion process can be significantly impacted by this heterogeneity of interactions (Rocha et al., 2011; Galvani and May, 2005; Woolhouse et al., 1997), and therefore compartmental modeling of contagious diseases can lead to substantial errors. In recent years, contact networks have emerged as a preferred alternative to compartmental models (Keeling, 2005). Here, a node represents an individual, and an edge between two nodes represents social contact between them. An edge connecting an infected node and a susceptible node represents a potential path for pathogen transmission. This framework can realistically represent the heterogeneous nature of social contacts, and therefore provides much more accurate modeling of the contagion process than compartmental models. Notable examples where the use of contact networks has led to improvements in prediction or understanding of infectious diseases include Bengtsson et al. (2015) and Kramer et al. (2016). Consider the scenario where a pathogen is introduced into a social contact network and spreads according to an SIR model. It is of particular interest to know whether the pathogen will die out or lead to an epidemic. This is dictated by a set of boundary conditions known as the epidemic threshold, which depends on the SIR parameters β and µ as well as the network structure itself. Above the epidemic threshold, the pathogen invades and infects a finite fraction of the population. Below the epidemic threshold, the prevalence (total number of infected individuals) remains infinitesimally small in the limit of large networks (Pastor-Satorras et al., 2015).
There is growing evidence that such thresholds exist in real-world host-pathogen systems, and intervention strategies are formulated and executed based on estimates of the epidemic threshold (Dallas et al., 2018; Shulgin et al., 1998; Wallinga et al., 2005; Pourbohloul et al., 2005; Meyers et al., 2005). Fittingly, the last two decades have seen a significant emphasis on studying epidemic thresholds of contact networks in several disciplines, such as computer science, physics, and epidemiology (Newman, 2002; Wang et al., 2003; Colizza and Vespignani, 2007; Chakrabarti et al., 2008; Gómez et al., 2010; Wang et al., 2016). See Leitch et al. (2019) for a comprehensive survey on the topic of epidemic thresholds. Concurrently but separately, network data has rapidly emerged as a significant area in statistics. Over the last two decades, a substantial amount of methodological advancement has been accomplished on several topics in this area, such as community detection (Bickel and Chen, 2009; Zhao et al., 2012; Rohe et al., 2011; Sengupta and Chen, 2015), model fitting and model selection (Hoff et al., 2002; Handcock et al., 2007; Krivitsky et al., 2009; Wang and Bickel, 2017; Yan et al., 2014; Bickel and Sarkar, 2016; Sengupta and Chen, 2018), hypothesis testing (Ghoshdastidar and von Luxburg, 2018; Tang et al., 2017a,b; Bhadra et al., 2019), and anomaly detection (Zhao et al., 2018; Sengupta, 2018; Komolafe et al., 2019), to name a few. The state-of-the-art toolbox of statistical network inference includes a range of random graph models and a suite of estimation and inference techniques. However, there has not been any work at the intersection of these two areas, in the sense that the problem of estimating epidemic thresholds has not been investigated from the perspective of statistical network inference. Furthermore, the task of computing the epidemic threshold based on existing results can be computationally infeasible for massive networks.
In this paper, we address these gaps by developing a novel sampling-based method to estimate the epidemic threshold under the widely used Chung-Lu model (Aiello et al., 2000), also known as the configuration model. We prove that our proposed method has theoretical guarantees for both statistical accuracy and computational efficiency. We also provide empirical results demonstrating our method on both synthetic and real-world networks. The rest of the paper is organized as follows. In Section 2, we formally set up the problem statement and formulate our proposed methods for approximating the epidemic threshold. In Section 3, we describe the theoretical properties of our estimators. In Section 4, we report numerical results from synthetic as well as real-world networks. We conclude the paper with discussion and next steps in Section 5.

Table 1: Notation.
    λ(A)              spectral radius of the matrix A
    d_i               degree of node i of the network
    δ_i               expected degree of node i of the network
    S(t), I(t), R(t)  number of susceptible (S), infected (I), and recovered/removed (R) individuals in the population at time t
    β                 infection rate: probability of transmission of the pathogen from an infected individual to a susceptible individual per effective contact (e.g., contact per unit time in continuous-time models, or per time step in discrete-time models)
    µ                 recovery rate: probability that an infected individual recovers per unit time (in continuous-time models) or per time step (in discrete-time models)

Consider a set of n individuals labelled 1, . . . , n, and an undirected network (with no self-loops) representing interactions between them. This can be represented by an n-by-n symmetric adjacency matrix A, where A(i, j) = 1 if individuals i and j interact and A(i, j) = 0 otherwise. Consider a pathogen spreading through this contact network according to an SIR model.
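To make the SIR dynamics over a contact network concrete, here is a minimal discrete-time simulation sketch (the synchronous update order, the parameter values, and the adjacency-list representation are illustrative choices, not taken from the paper):

```python
import random

def simulate_sir(adj, beta, mu, seed_node=0, steps=50, seed=0):
    """Discrete-time SIR on a contact network given as adjacency lists.

    In each step, every currently infected node transmits to each
    susceptible neighbor independently with probability beta, and then
    recovers with probability mu. Returns the number of nodes that were
    ever infected (i.e., not susceptible at the end).
    """
    rng = random.Random(seed)
    state = {v: "S" for v in adj}
    state[seed_node] = "I"
    for _ in range(steps):
        infected = [v for v in adj if state[v] == "I"]
        if not infected:
            break  # the pathogen has died out
        for v in infected:
            for u in adj[v]:
                if state[u] == "S" and rng.random() < beta:
                    state[u] = "I"
            if rng.random() < mu:
                state[v] = "R"
    return sum(state[v] != "S" for v in adj)

# A 6-node ring as a toy contact network.
ring = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
outbreak_size = simulate_sir(ring, beta=0.8, mu=0.2)
```

Two sanity checks pin down the dynamics: with beta = 0 only the seed node is ever infected, and with beta = 1, mu = 0 the infection eventually reaches every node of a connected graph.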
From existing work (Chakrabarti et al., 2008; Gómez et al., 2010; Prakash et al., 2010; Wang et al., 2016), we know that the boundary condition for the pathogen to become an epidemic is given by

    β/µ > 1/λ(A),    (1)

where λ(A) is the spectral radius of the adjacency matrix A. The left-hand side of Equation (1) is the ratio of the infection rate to the recovery rate, which is purely a function of the pathogen and independent of the network. As this ratio grows larger, an epidemic becomes more likely, as new infections outpace recoveries. The right-hand side of Equation (1) is the inverse of the spectral radius of the adjacency matrix, which is purely a function of the network and independent of the pathogen. The larger the spectral radius, the more connected the network, and therefore the more likely an epidemic becomes. Thus, the boundary condition in Equation (1) connects the two aspects of the contagion process: the pathogen transmissibility, which is quantified by β/µ, and the social contact network, which is quantified by the spectral radius. If β/µ < 1/λ(A), the pathogen dies out, and if β/µ > 1/λ(A), the pathogen becomes an epidemic. Given a social contact network, the inverse of the spectral radius of its adjacency matrix represents the epidemic threshold for the network. Any pathogen whose transmissibility ratio is greater than this threshold is going to cause an epidemic, whereas any pathogen whose transmissibility ratio is less than this threshold is going to die out. Therefore, a key problem in network epidemiology is to compute the spectral radius of the social contact network. Realistic urban social networks that are used in modeling contagion processes have millions of nodes (Eubank et al., 2004; Barrett et al., 2008). To compute the epidemic threshold of such networks, we need to find the largest (in absolute value) eigenvalue of the adjacency matrix A. This is challenging for two reasons.
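As a concrete illustration of the boundary condition above, the following sketch computes the spectral radius and epidemic threshold of a toy 4-node network with NumPy (the graph and the rates β, µ below are made up for illustration):

```python
import numpy as np

# Toy 4-node contact network (undirected, no self-loops):
# edges 0-1, 1-2, 2-3, 3-0, 0-2 (a square with one diagonal).
A = np.zeros((4, 4))
for i, j in [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]:
    A[i, j] = A[j, i] = 1.0

# Spectral radius: largest eigenvalue in absolute value.
lam = float(np.max(np.abs(np.linalg.eigvalsh(A))))
threshold = 1.0 / lam  # epidemic threshold of this network

# A pathogen with beta/mu above the threshold invades; below it, it dies out.
beta, mu = 0.5, 1.0
epidemic = (beta / mu) > threshold
```

Since `eigvalsh` exploits the symmetry of the adjacency matrix, this is the natural routine for undirected networks; for massive sparse networks an iterative method would replace the dense eigendecomposition.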
First, from a computational perspective, eigenvalue algorithms have computational complexity of Ω(n²) or higher. For massive social contact networks with millions of nodes, this can become too burdensome. Second, from a statistical perspective, eigenvalue algorithms require the entire adjacency matrix for the full network of n individuals. It can be challenging or expensive to collect interaction data on all n individuals of a massive population (e.g., an urban metropolis). Furthermore, eigenvalue algorithms typically require the full matrix to be stored in the random-access memory of the computer, which can be infeasible for massive social contact networks that are too large to be stored. The first issue could be resolved if we could compute the epidemic threshold in a computationally efficient manner. The second issue could be resolved if we could compute the epidemic threshold using data on only a small subset of the population. In this paper, we aim to resolve both issues by developing two approximation methods for computing the spectral radius. To address these problems, let us look at the spectral radius, λ(A), from the perspective of random graph models. The statistical model is given by A ∼ P, which is shorthand for A(i, j) ∼ Bernoulli(P(i, j)) for 1 ≤ i < j ≤ n. Then λ(A) converges to λ(P) in probability under some mild conditions (Chung and Radcliffe, 2011; Benaych-Georges et al., 2019; Bordenave et al., 2020). To make a formal statement regarding this convergence, we reproduce below a slightly paraphrased version (for notational consistency) of an existing result in this context.

Lemma 1 (Theorem 1 of Chung and Radcliffe (2011)). Let ∆ = max_{1≤i≤n} Σ_j P(i, j) be the maximum expected degree, and suppose that for some ε > 0, ∆ > (4/9) log(2n/ε) for sufficiently large n.
Then with probability at least 1 − ε, for sufficiently large n,

    |λ(A) − λ(P)| ≤ 2 √( ∆ log(2n/ε) ).

To make note of a somewhat subtle point: from an inferential perspective, it is tempting to view the above result as a consistency result, where λ(P) is the population quantity or parameter of interest and λ(A) is its estimator. However, in the context of epidemic thresholds, we are interested in the random variable λ(A) itself, as we want to study the contagion spread conditional on a given social contact network. Therefore, in the present context, the above result should not be interpreted as a consistency result. Rather, we can use the convergence result in a different way. For massive networks, the random variable λ(A), which we wish to compute but find infeasible to do so, is close to the parameter λ(P). Suppose we can find a random variable T(A) which also converges in probability to λ(P), and is computationally efficient. Since T(A) and λ(A) both converge in probability to λ(P), we can use T(A) as an accurate proxy for λ(A). This would address the first of the two issues described at the beginning of this subsection. Furthermore, if T(A) can be computed from a small subset of the data, that would also solve the second issue. This is our central heuristic, which we are going to formalize next. So far, we have not made any structural assumptions on P; we have simply considered the generic inhomogeneous random graph model. Under such a general model, it is very difficult to formulate a statistic T(A) which is cheap to compute and converges to λ(P). Therefore, we now introduce a structural assumption on P, in the form of the well-known Chung-Lu model that was introduced by Aiello et al. (2000) and subsequently studied in many papers (Chung and Lu, 2002; Chung et al., 2003; Decreusefond et al., 2012; Pinar et al., 2012; Zhang et al., 2017). For a network with n nodes, let δ = (δ_1, . . . , δ_n) be the vector of expected degrees.
Then, under the Chung-Lu model,

    P(i, j) = δ_i δ_j / Σ_k δ_k.    (2)

This formulation preserves E[d_i] = δ_i, where d_i is the degree of the i-th node, and is very flexible with respect to degree heterogeneity. Under model (2), note that rank(P) = 1, and we have

    λ(P) = Σ_i δ_i² / Σ_i δ_i.    (3)

Recall that we are looking for some computationally efficient T(A) which converges in probability to λ(P). We now know that under the Chung-Lu model, λ(P) is equal to the ratio of the second moment to the first moment of the degree distribution. Therefore, a simple estimator of λ(P) is given by the sample analogue of this ratio, i.e.,

    T_1(A) = Σ_i d_i² / Σ_i d_i.

We now want to demonstrate that approximating λ(A) by T_1(A) provides us with very substantial computational savings with little loss of accuracy. The approximation error can be quantified as

    e_1(A) = |T_1(A) − λ(A)| / λ(A),    (4)

and our goal is to show that e_1(A) → 0 in probability, while the computational cost of T_1(A) is much smaller than that of λ(A). We will show this both from a theoretical perspective and an empirical perspective. We next describe the empirical results from a simulation study, and we postpone the theoretical discussion to Section 3 for organizational clarity. We used n = 5000, 10000, and constructed a Chung-Lu random graph model where P(i, j) = θ_i θ_j. The model parameters θ_1, . . . , θ_n were uniformly sampled from (0, 0.25). Then, we randomly generated 100 networks from the model, and computed λ(A) and T_1(A). The results are reported in Table 2. The average runtime for the moment-based estimator, T_1(A), is only 0.07 seconds for n = 5000 and 0.35 seconds for n = 10000, whereas for the spectral radius, λ(A), it is 78.2 seconds and 606.44 seconds respectively, which makes the latter 1100-1700 times more computationally burdensome. The average error for T_1(A) is very small, and so is the SD of the errors. Thus, even for moderately sized networks where n = 5000 or n = 10000, using T_1(A) as a proxy for λ(A) can reduce the computational cost to a great extent, and the corresponding loss in accuracy is very small.
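The moment-based approximation can be sketched in a few lines: generate a Chung-Lu graph with P(i, j) = θ_i θ_j, compute T_1(A) from the observed degrees, and compare it with the exact spectral radius. Here n is kept small so that the exact eigenvalue computation is feasible; the seed and sample size are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
theta = rng.uniform(0, 0.25, size=n)  # Chung-Lu parameters, P(i,j) = theta_i * theta_j

# Sample the upper triangle of A and symmetrize (undirected, no self-loops).
P = np.outer(theta, theta)
upper = np.triu(rng.random((n, n)) < P, k=1)
A = (upper | upper.T).astype(float)

d = A.sum(axis=1)                        # observed degrees
T1 = (d ** 2).sum() / d.sum()            # moment-based estimate of lambda(P)
lam = float(np.max(np.abs(np.linalg.eigvalsh(A))))  # exact spectral radius (expensive for large n)

rel_err = abs(T1 - lam) / lam            # approximation error e_1(A)
```

Computing T_1 touches each edge once (to form the degrees), while the dense eigendecomposition costs O(n³), which is the gap the runtime comparison in the text illustrates.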
For massive networks where n is in the millions, this advantage of T_1(A) over λ(A) is even greater; however, the computational burden for λ(A) becomes so large that this case is difficult to illustrate using standard computing equipment. Thus, T_1(A) provides us with a computationally efficient and statistically accurate method for finding the epidemic threshold. This addresses the first issue pointed out at the beginning of Section 2.1. However, computing T_1(A) requires data on the degrees of all n nodes of the network. Therefore, it does not solve the second issue pointed out at the beginning of Section 2.1. We now propose a second alternative, T_2, to address the second issue. The idea behind this approximation is based on the same heuristic that was laid out in Section 2.2. Since λ(P) is a function of degree moments, we can estimate these moments using observed node degrees. In defining T_1(A), we used the observed degrees of all n nodes in the network. However, we can also estimate the degree moments from a small sample of nodes, based on random walk sampling. The algorithm for computing T_2 is given in Algorithm 1.

Algorithm 1 RandomWalkEstimate
 1: procedure Estimate(G, r, t*)
 2:     x ← 1, t ← 0.
 3:     while t ≤ t* do
 4:         x ← random neighbor of x, chosen uniformly; t ← t + 1.
 5:     v ← 0, i ← 0.
 6:     while i ≤ r do
 7:         x ← random neighbor of x, chosen uniformly.
 8:         v ← v + d_x; i ← i + 1.
 9:     return T_2 = v/r.

Note that we only use (t* + r) randomly sampled nodes for computing T_2, which implies that we do not need to collect or store data on all n individuals. Therefore, this method overcomes the second issue pointed out at the beginning of Section 2.1. The approximation error arising from this method can be defined as

    e_2(A) = |T_2(A) − λ(A)| / λ(A),    (5)

and we want to show that e_2(A) → 0 in probability, while the data-collection cost of T_2(A) is much less than that of T_1(A). In the next section, we are going to formalize this.
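A runnable sketch of the random-walk estimator follows. It assumes that, after a burn-in of t* steps, the sampling phase adds the degree of each of the next r visited nodes to v, so that v/r estimates the degree-weighted mean degree Σ_i d_i² / Σ_i d_i (the walk's stationary distribution weights node i proportionally to d_i); the function name and the adjacency-list representation are our own choices:

```python
import random

def random_walk_estimate(adj, r, t_star, seed=0):
    """Estimate T2 = (sum of d_i^2) / (sum of d_i) via a simple random walk.

    adj maps each node to its list of neighbors (undirected graph).
    The walk's stationary distribution weights node x proportionally to
    its degree d_x, so averaging the degrees of visited nodes estimates
    the degree-weighted mean degree, i.e., lambda(P) under Chung-Lu.
    """
    rng = random.Random(seed)
    x = next(iter(adj))              # arbitrary starting node
    for _ in range(t_star):          # burn-in: let the walk mix
        x = rng.choice(adj[x])
    v = 0.0
    for _ in range(r):               # sampling phase
        x = rng.choice(adj[x])
        v += len(adj[x])             # accumulate the degree of the visited node
    return v / r

# On a 6-cycle every node has degree 2, so the estimate is exactly 2.
cycle = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
t2 = random_walk_estimate(cycle, r=100, t_star=10)
```

On regular graphs the estimator is exact regardless of mixing, which makes them convenient sanity checks; heterogeneous graphs are where the burn-in matters.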
In this section, we establish that the approximation errors e_1(A) and e_2(A), defined in Equations (4) and (5), converge to zero in probability. From Theorem 2.1 of Chung et al. (2003), we know that when

    Σ_i δ_i² / Σ_i δ_i = ω(log² n)    (6)

holds, then for any ε > 0,

    P( |λ(A) − λ(P)| > ε λ(P) ) → 0.

Therefore, under (6), it suffices to show that T_1(A) also converges to λ(P). That is, we would like to show that, under reasonable conditions, for any ε > 0,

    P( |T_1(A) − λ(P)| > ε λ(P) ) → 0.    (7)

We will show that, for any ε > 0,

    P( |Σ_i d_i − Σ_i δ_i| > ε Σ_i δ_i ) → 0  and  P( |Σ_i d_i² − Σ_i δ_i²| > ε Σ_i δ_i² ) → 0.    (8)

We first prove that (8) implies (7). Write m_1 = Σ_i d_i and m_2 = Σ_i d_i², and consider the event where (1 − ε) Σ_i δ_i ≤ m_1 ≤ (1 + ε) Σ_i δ_i and (1 − ε) Σ_i δ_i² ≤ m_2 ≤ (1 + ε) Σ_i δ_i². Note that m_2/m_1 is a strictly increasing function of m_2 and a strictly decreasing function of m_1. Therefore, for outcomes belonging to the above event,

    ((1 − ε)/(1 + ε)) λ(P) ≤ T_1(A) ≤ ((1 + ε)/(1 − ε)) λ(P).

Note that 1 − (1 − ε)/(1 + ε) = 2ε/(1 + ε) < 2ε, and (1 + ε)/(1 − ε) − 1 = 2ε/(1 − ε) < 4ε, given that ε < 1/2. Now, fix ε > 0 and let ε′ = ε/4. Then, applying (8) with ε′ in place of ε shows that |T_1(A) − λ(P)| ≤ ε λ(P) with probability tending to one. Thus, proving (8) is sufficient for proving (7). Next, we state and prove the theorem which will establish (8).

Theorem 2. If the average of the expected degrees goes to infinity, i.e., (1/n) Σ_i δ_i → ∞, and the spectral radius dominates log²(n), i.e., Σ_i δ_i² / Σ_i δ_i = ω(log² n), then for any ε > 0, both convergences in (8) hold.

Proof. We will use Hoeffding's inequality (Hoeffding, 1994) for the first part, and we begin by stating the inequality for the sum of Bernoulli random variables. Let B_1, . . . , B_m be m independent (but not necessarily identically distributed) Bernoulli random variables, and S_m = Σ_{i=1}^m B_i. Then for any t > 0,

    P( |S_m − E[S_m]| ≥ t ) ≤ 2 exp(−2t²/m).

In our case, Σ_i d_i = 2 Σ_{i<j} A(i, j), and we know that {A(i, j) : 1 ≤ i < j ≤ n} are independent Bernoulli random variables. Fix ε > 0 and note that E[Σ_i d_i] = Σ_i δ_i.

For the number of nodes queried by the random-walk estimator, write D for the diagonal degree matrix and Q = D^{−1/2} A D^{−1/2}, and consider the smallest nontrivial eigenvalue λ_min(L) of the Laplacian of G. It follows from the above that the spectral gap satisfies ε(Q) = 1 − λ_2(Q) = 1 − λ_2(D^{−1/2} A D^{−1/2}) = λ_{n−1}(I − D^{−1/2} A D^{−1/2}) = 1 − o(1). Putting these together, we get the following corollary on the total number of node queries.

Corollary 6.1. For a graph generated from the expected degrees model, with probability 1 − 1/n, Algorithm 1 needs to query a number of nodes of order at most 6 d_max/d_min; this is a loose bound, and better bounds can be derived for power-law degree distributions, for instance.
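The Hoeffding bound used in the proof can be checked numerically; the sketch below simulates sums of independent Bernoulli variables and compares the empirical tail frequency with the bound 2 exp(−2t²/m). The values of m, p, t, and the number of trials are arbitrary illustrative choices:

```python
import math
import random

# Hoeffding's inequality for m independent Bernoulli variables B_1..B_m
# with S_m = sum B_i:  P(|S_m - E[S_m]| >= t) <= 2 * exp(-2 t^2 / m).
rng = random.Random(1)
m, p, t = 1000, 0.3, 50
trials = 1000

exceed = 0
for _ in range(trials):
    s = sum(rng.random() < p for _ in range(m))  # Binomial(m, p) draw
    if abs(s - m * p) >= t:
        exceed += 1

empirical = exceed / trials
bound = 2 * math.exp(-2 * t ** 2 / m)
# The empirical tail frequency should sit below the Hoeffding bound
# (up to Monte Carlo noise); here the true tail is far below it.
```

For these parameters the bound is about 0.013 while the true tail probability is an order of magnitude smaller, reflecting the fact that Hoeffding's inequality is distribution-free and hence conservative.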
Thus, we have proved that the approximation error for T_2(A) goes to zero in probability. In addition, Corollary 6.1 shows that the number of nodes we need to query in order to obtain an accurate approximation is much smaller than n. Furthermore, computing T_2 only requires node sampling and counting degrees, and therefore the runtime is much smaller than that of eigenvalue algorithms. Therefore, T_2(A) is a computationally efficient and statistically accurate approximation of the epidemic threshold, while also requiring a much smaller data budget than T_1(A). In this section, we characterize the empirical performance of our sampling algorithm on two synthetic networks, one generated from the Chung-Lu model and the second generated from the preferential attachment model. Our first dataset is a graph generated from the Chung-Lu model of expected degrees. We generated a power-law sequence (i.e., the fraction of nodes with expected degree d is proportional to d^{−β}) with exponent β = 2.5 and then generated a graph with this sequence as the expected degrees.

    Data         Nodes   Edges   λ(A)    T_1(A)
    Chung-Lu     50k     72k     43.83   48.33
    Pref-Attach  50k     250k    37      32.8
    Table 3: Statistics of the two synthetic datasets used.

As Table 3 shows, the first eigenvalue λ(A) is, as expected, close to T_1(A) for the Chung-Lu graph. The second dataset is generated from the preferential attachment model, where each incoming node adds 5 edges to the existing nodes, the probability of choosing a specific node as a neighbor being proportional to the current degree of that node. While the preferential attachment model naturally gives rise to a directed graph, we convert the graph to an undirected one before running our algorithm. It is interesting to note that even though the Chung-Lu model does not hold in this case, our first approximation, T_1(A), is close to λ(A). In each of the networks, the random walk algorithm presented in Algorithm 1 was used for sampling.
The random walk was started from an arbitrary node, and every 10th node from the walk was sampled (to account for the mixing time). These samples were then used to calculate T_2(A). This experiment was repeated 10 times, giving estimates T_2^1, . . . , T_2^10. We then calculate two relative errors for each i ∈ {1, 2, . . . , 10}:

    e_{T_1}^i = |T_1(A) − T_2^i| / T_1(A)  and  e_λ^i = |λ(A) − T_2^i| / λ(A).

We plot the averages of {e_{T_1}^i} and {e_λ^i} against the actual number of nodes seen by the random walk. Note that the x-axis accurately reflects how many times the algorithm actually queried the network, not just the number of samples used. Measuring the cost of uniform node sampling in this setting, for instance, would need to keep track of how many nodes are touched by a Metropolis-Hastings walk that implements the uniform distribution. Figure 1 demonstrates the results. For the two synthetic networks, the algorithm is able to get a 10% approximation to the statistic T_1(A) by exploring at most 10% of the network. With more samples from the random walk, the mean relative errors settle to around 4-5%. However, once we measure the mean relative errors with respect to λ(A), it becomes clear that the estimator T_2(A) does better when the graph is closer to the assumed (i.e., Chung-Lu) model. For the Chung-Lu graph, the mean error with respect to λ(A) is essentially the same as the mean error with respect to T_1(A), which is to be expected. For the preferential attachment graph too, the estimate T_2 achieves better than 10% relative error in approximating λ(A). Note that, if we were instead counting only the nodes whose degrees were actually used for estimation, the fraction of the network used would be roughly 1-2% in all cases; the majority of the node cost actually goes into making the random walk mix. In this work, we investigated the problem of computing SIR epidemic thresholds of social contact networks from the perspective of statistical inference.
We considered the two challenges that arise in this context due to the high computational and data-collection complexity of the spectral radius. Under the Chung-Lu network generative model, the spectral radius can be characterized in terms of the degree moments. We utilized this fact to develop two approximations of the spectral radius. The first approximation is computationally efficient and statistically accurate, but requires data on the observed degrees of all nodes. The second approximation retains the computational efficiency and statistical accuracy of the first approximation, while also reducing the number of queries or the sample size quite substantially. The results seem very promising for networks arising from the Chung-Lu and preferential attachment generative models. There are several interesting and important future directions. The methods proposed in this paper have provable guarantees only under the Chung-Lu model, although they work very well under the preferential attachment model. This seems to indicate that the degree-based approximation might be applicable to a wider class of models. On the other hand, this leaves open the question of developing a better "model-free" estimator, as well as asking similar questions about other network features. In this work we only considered the problem of accurate approximation of the epidemic threshold. From a statistical as well as a real-world perspective, there are several related inference questions. These include uncertainty quantification, confidence intervals, one-sample and two-sample testing, etc. Social interaction patterns vary dynamically over time, and such network dynamics can have significant impacts on the contagion process (Leitch et al., 2019). In this paper we only considered static social contact networks, and in the future we hope to study epidemic thresholds for time-varying or dynamic networks.
We do realize that, in the face of the current pandemic, while it is important to pursue research relevant to it, it is also important to be responsible in following the proper scientific process. We would like to state that in this work, the question of epidemic threshold estimation has been formalized from a theoretical viewpoint in a much-used, but simple, random graph model. We are not yet in a position to give any guarantees about the performance of our estimator on real social networks. We do hope, however, that the techniques developed here can be further refined to give reliable estimators in practical settings.

References
- A random graph model for massive graphs
- Emergence of scaling in random networks
- Episimdemics: an efficient algorithm for simulating the spread of infectious disease over large realistic social networks
- Largest eigenvalues of sparse inhomogeneous Erdős-Rényi graphs
- Using mobile phone data to predict the spatial spread of cholera
- A bootstrap-based inference framework for testing similarity of paired networks
- A nonparametric view of network models and Newman-Girvan and other modularities
- Hypothesis testing for automated community detection in networks
- Spectral radii of sparse random matrices. Annales de l'Institut Henri Poincare (B) Probability and Statistics
- Mathematical models in population biology and epidemiology
- Epidemic thresholds in real networks
- The effect of travel restrictions on the spread of the 2019 novel coronavirus
- The average distances in random graphs with given expected degrees
- Eigenvalues of random power law graphs
- On the spectra of general random graphs. The Electronic Journal of Combinatorics
- Invasion threshold in heterogeneous metapopulation networks
- Experimental evidence of a pathogen invasion threshold
- Large graph limit for an SIR process in random network with heterogeneous connectivity
- Modelling disease outbreaks in realistic urban social networks
- Dimensions of superspreading
- Practical methods for graph two-sample testing
- Discrete-time Markov chain approach to contact-based disease spreading in complex networks
- Model-based clustering for social networks
- The mathematics of infectious diseases
- Probability inequalities for sums of bounded random variables
- Latent space approaches to social network analysis
- Clinical features of patients infected with 2019 novel coronavirus in Wuhan, China. The Lancet
- The implications of network structure for epidemic dynamics
- A contribution to the mathematical theory of epidemics. Proceedings of the Royal Society of London A, containing papers of a mathematical and physical character
- Contributions to the mathematical theory of epidemics. II. The problem of endemicity
- Contributions to the mathematical theory of epidemics. III. Further studies of the problem of endemicity
- Statistical evaluation of spectral methods for anomaly detection in static networks
- Spatial spread of the West Africa Ebola epidemic
- Representing degree distributions, clustering, and homophily in social networks with latent cluster random effects models
- Toward epidemic thresholds on temporal networks: a review and open questions
- Chernoff-type bound for finite Markov chains
- Network theory and SARS: predicting outbreak diversity
- Spread of epidemic disease on networks
- Epidemic processes in complex networks
- The similarity between stochastic Kronecker and Chung-Lu graph models
- Modeling control strategies of respiratory pathogens
- Got the Flu (or Mumps)? Check the Eigenvalue!
- Simulated Epidemics in an Empirical Spatiotemporal Network of 50,185 Sexual Contacts
- Spectral clustering and the high-dimensional stochastic blockmodel
- Anomaly detection in static networks using egonets
- Spectral clustering in heterogeneous networks. Statistica Sinica
- A block model for node popularity in networks with community structure
- Pulse vaccination strategy in the SIR epidemic model
- Early epidemiological analysis of the coronavirus disease 2019 outbreak based on crowdsourced data: a population-level observational study. The Lancet Digital Health
- A semiparametric two-sample hypothesis testing problem for random graphs
- A nonparametric two-sample hypothesis testing problem for random graphs
- Reproduction numbers and sub-threshold endemic equilibria for compartmental models of disease transmission
- A measles epidemic threshold in a highly vaccinated population
- A novel coronavirus outbreak of global health concern
- Predicting the epidemic threshold of the susceptible-infected-recovered model
- Unification of theoretical approaches for epidemic spreading on complex networks
- Epidemic spreading in real networks: an eigenvalue viewpoint
- Likelihood-based model selection for stochastic block models
- Heterogeneities in the transmission of infectious agents: implications for the design of control programs
- Model selection for degree-corrected block models
- Random graph models for dynamic networks
- Performance evaluation of social network anomaly detection using a moving window-based scan method
- Consistency of community detection in networks under degree-corrected stochastic block models
- A novel coronavirus from patients with pneumonia in China