key: cord-0004321-osol7wdp
authors: Ma, Junling
title: Estimating epidemic exponential growth rate and basic reproduction number
date: 2020-01-08
journal: Infect Dis Model
DOI: 10.1016/j.idm.2019.12.009
sha: 0d83a837839b75104b3925a1a0be27ea2883b419
doc_id: 4321
cord_uid: osol7wdp

The initial exponential growth rate of an epidemic is an important measure of the severity of the epidemic, and is also closely related to the basic reproduction number. Estimating the growth rate from the epidemic curve can be a challenge, because it decays with time. For fast epidemics, the estimation is subject to over-fitting due to the limited number of data points available, which also limits our choice of models for the epidemic curve. We discuss the estimation of the growth rate using the maximum likelihood method and simple models.

This is a series of lecture notes for a summer school at Shanxi University, China, in 2019. The contents are based on Ma et al. (Ma, Dushoff, Bolker, & Earn, 2013). We will study the initial exponential growth rate of an epidemic in Section 1, the relationship between the exponential growth rate and the basic reproduction number in Section 2, an introduction to least squares estimation and its limitations in Section 3, an introduction to maximum likelihood estimation in Section 4, and the maximum likelihood estimation of the growth rate in Section 5.

Epidemic curves are time series data of the number of cases per unit time. Common choices for the time unit include a day, a week, a month, etc. The epidemic curve is an important indication of the severity of an epidemic as a function of time. For example, Fig. 1 shows the cumulative number of Ebola cases during the 2014-16 Ebola outbreak in western Africa. The cumulative cases during the initial growth phase form an approximately linear relationship with time on a log-linear scale. Thus, on a linear scale, the cumulative number of cases increases exponentially with time. The mortality curve (the number of deaths per unit time) shows a similar pattern, as demonstrated by the daily influenza deaths in Philadelphia during the 1918 influenza pandemic shown in Fig. 2. In fact, most epidemics grow approximately exponentially during their initial phase. This can be illustrated by the following examples.

Example 1. Consider the Susceptible-Infectious-Recovered (SIR) model
$$\frac{dS}{dt} = -\beta S I, \qquad \frac{dI}{dt} = \beta S I - \gamma I, \qquad \frac{dR}{dt} = \gamma I, \tag{1}$$
where $S$ is the fraction of susceptible individuals, $I$ is the fraction of infectious individuals, and $R$ is the fraction of recovered individuals; $\beta$ is the transmission rate per infectious individual, and $\gamma$ is the recovery rate, i.e., the infectious period is exponentially distributed with mean $1/\gamma$. Linearizing about the disease-free equilibrium (DFE) $(1,0,0)$,
$$\frac{dI}{dt} \approx (\beta - \gamma)\, I. \tag{2}$$
Thus, if $\beta - \gamma > 0$, then $I(t)$ grows exponentially near the DFE. In addition, initially $S \approx 1$, so the incidence rate (the number of new cases per unit time) $C = \beta S I$ also increases exponentially.

It is similar for a Susceptible-Exposed-Infectious-Recovered (SEIR) model, as illustrated by the following example.
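To make Example 1 concrete, the short sketch below (in Python, with illustrative parameter values that are not taken from the text) integrates the SIR model numerically and checks that the early incidence grows at a rate close to $\beta - \gamma$.

```python
# Minimal sketch: verify numerically that SIR incidence grows at rate beta - gamma
# early in an epidemic.  Parameter values and initial conditions are illustrative only.
import numpy as np
from scipy.integrate import solve_ivp

beta, gamma = 0.4, 0.2          # transmission and recovery rates (per day)

def sir(t, y):
    S, I, R = y
    return [-beta * S * I, beta * S * I - gamma * I, gamma * I]

t_eval = np.linspace(0, 30, 301)
sol = solve_ivp(sir, (0, 30), [0.999, 0.001, 0.0], t_eval=t_eval, rtol=1e-8)
incidence = beta * sol.y[0] * sol.y[1]            # C(t) = beta * S * I

# Fit a straight line to log C(t) over the early phase (first 10 days).
early = t_eval <= 10
slope = np.polyfit(t_eval[early], np.log(incidence[early]), 1)[0]
print(f"fitted early growth rate {slope:.3f} vs beta - gamma = {beta - gamma:.3f}")
```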
Example 2. Let us consider an SEIR model,
$$\frac{dS}{dt} = -\beta S I, \qquad \frac{dE}{dt} = \beta S I - \sigma E, \qquad \frac{dI}{dt} = \sigma E - \gamma I, \qquad \frac{dR}{dt} = \gamma I, \tag{3}$$
where $E$ is the fraction of latent individuals (infected but not yet infectious) and $\sigma$ is the rate at which latent individuals leave the latent class, i.e., the latent period is exponentially distributed with mean $1/\sigma$; $S$, $I$, $R$, $\beta$ and $\gamma$ are defined as in Example 1. Again, $(1,0,0,0)$ is a disease-free equilibrium representing a completely susceptible population. Linearizing about this equilibrium, the equations for $E$ and $I$ decouple from those for $S$ and $R$, and become
$$\frac{dE}{dt} \approx \beta I - \sigma E, \qquad \frac{dI}{dt} \approx \sigma E - \gamma I.$$
Note that the Jacobian matrix
$$J = \begin{pmatrix} -\sigma & \beta \\ \sigma & -\gamma \end{pmatrix}$$
has two real eigenvalues, namely,
$$\lambda_1 = \frac{-(\sigma+\gamma) + \sqrt{(\sigma-\gamma)^2 + 4\sigma\beta}}{2}, \qquad
\lambda_2 = \frac{-(\sigma+\gamma) - \sqrt{(\sigma-\gamma)^2 + 4\sigma\beta}}{2}.$$
Thus, near the DFE, the solution of the model is asymptotically exponential with rate $\lambda_1$. As in Example 1, the incidence rate also grows exponentially initially.

In general, suppose the infection states of an individual can be characterized by a vector $(\vec S, \vec I)$, where $\vec S$ represents multiple susceptible states and $\vec I$ represents multiple infectious (or latent) states. We also use $\vec S$ and $\vec I$ to represent the number of individuals in each state. Assume that the epidemic can be modeled by a generic system
$$\frac{d\vec S}{dt} = f(\vec S, \vec I), \qquad \frac{d\vec I}{dt} = g(\vec S, \vec I).$$
If $(\vec S_0, \vec 0)$ is a DFE, and the initial number of infectious individuals $\vec I(0)$ is very small, then, initially, the dynamics of $\vec I$ is governed by the linearized system
$$\frac{d\vec I}{dt} = \frac{\partial g}{\partial \vec I}(\vec S_0, \vec 0)\, \vec I.$$
If the DFE is unstable, then $\vec I(t)$ grows asymptotically exponentially.

2. The exponential growth rate and the basic reproduction number

The exponential growth rate is, by itself, an important measure of the speed of spread of an infectious disease. A zero growth rate is a disease threshold, just like the basic reproduction number $R_0 = 1$: the disease can invade a population if the growth rate is positive, and cannot invade (with a few initially infectious individuals) if it is negative. In fact, the growth rate can be used to infer $R_0$. There are two approaches to infer $R_0$ from the exponential growth rate, a parametric one and a non-parametric one. For the parametric approach, we need an underlying model that gives both the growth rate and $R_0$.

Example 3. Consider the SIR model (1) in Example 1. Note that $(1,0,0)$ is a disease-free equilibrium representing a completely susceptible population. As we discussed above, the exponential growth rate is $\lambda = \beta - \gamma$, and the basic reproduction number is $R_0 = \beta/\gamma$. If, for example, $\gamma$ is estimated independently of $\lambda$, then
$$R_0 = \frac{\lambda + \gamma}{\gamma} = 1 + \frac{\lambda}{\gamma}.$$

Let us look at a more complicated example.

Example 4. Consider the SEIR model (3) in Example 2, whose exponential growth rate is the dominant eigenvalue $\lambda = \lambda_1$, and whose basic reproduction number is $R_0 = \beta/\gamma$. Express $\beta$ in terms of $\lambda$ and substitute it into $R_0$; then
$$R_0 = \frac{(\lambda+\sigma)(\lambda+\gamma)}{\sigma\gamma} = \left(1 + \frac{\lambda}{\sigma}\right)\left(1 + \frac{\lambda}{\gamma}\right).$$
Thus, if the mean infectious period $1/\gamma$ and the mean latent period $1/\sigma$ can be estimated independently of $\lambda$, then $R_0$ can be inferred from $\lambda$.

Typically, for an epidemic model that contains a single transmission rate $\beta$, if all other parameters can be estimated independently of the exponential growth rate $\lambda$, then $\lambda$ determines $\beta$, and thus determines $R_0$.

Models can be overly simplified for mathematical tractability. For example, both the SIR model in Example 1 and the SEIR model in Example 2 assume an exponentially distributed infectious period. However, the infectious period and the latent period are most likely not exponential.
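The parametric approach of Examples 3 and 4 can be checked numerically. The sketch below computes the SEIR growth rate as the dominant eigenvalue of the linearized $(E, I)$ subsystem and recovers $R_0$ from it; the parameter values are illustrative assumptions, not estimates from data.

```python
# Minimal sketch of the parametric approach in Examples 3-4: compute the SEIR
# growth rate as the dominant eigenvalue of the linearization at the DFE, then
# recover R0 from it.  Parameter values are illustrative only.
import numpy as np

beta, sigma, gamma = 0.3, 1.0, 0.2       # rates per day (assumed values)

J = np.array([[-sigma, beta],
              [sigma, -gamma]])           # Jacobian of the (E, I) subsystem at the DFE
lam = np.max(np.linalg.eigvals(J).real)   # exponential growth rate lambda

R0_direct = beta / gamma                          # R0 = beta / gamma for this model
R0_from_lambda = (1 + lam / gamma) * (1 + lam / sigma)
print(f"lambda = {lam:.4f}/day, R0 = {R0_direct:.2f}, inferred R0 = {R0_from_lambda:.2f}")
```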
Wallinga and Lipsitch (Wallinga & Lipsitch, 2006) developed a non-parametric method to infer the basic reproduction number from the exponential growth rate without assuming a specific model. Let $h(a)$ be the probability that a random individual remains infectious $a$ time units after being infected (i.e., $a$ is the infection age), and let $\beta(a)$ be the rate of transmission at infection age $a$. Then $\tau(a) = h(a)\beta(a)$ is the transmissibility of a random infectious individual at infection age $a$, assuming that the whole population is susceptible. Thus,
$$R_0 = \int_0^\infty \tau(a)\, da.$$
In addition, we assume that the population is randomly mixed, i.e., every pair of individuals has an identical contact rate.

Let $c(t)\,dt$ be the number of new infections during the time interval $[t, t+dt]$, that is, $c(t)$ is the incidence rate, and let $S(t)$ be the average susceptibility of the population, i.e., the expected susceptibility of a randomly selected individual. The new infections at time $t$ are the sum of all infections caused by individuals infected $a$ time units ago (i.e., at time $t-a$), provided they remain infectious at time $t$ (with infection age $a$) and their contact is susceptible. That is,
$$c(t) = S(t) \int_0^\infty \tau(a)\, c(t-a)\, da.$$
To compute $R_0$, we need to normalize $\tau(a)$ as a probability density function,
$$w(a) = \frac{\tau(a)}{\int_0^\infty \tau(u)\, du} = \frac{\tau(a)}{R_0}.$$
Note that $w(a)\,da$ is the probability that a secondary infection occurs during the infection age interval $[a, a+da]$. That is, $w(a)$ is the probability density function of the generation time, i.e., the time from being infected to generating a secondary infection. This generation time is also called the serial interval. With the serial interval distribution $w(a)$,
$$c(t) = R_0\, S(t) \int_0^\infty c(t-a)\, w(a)\, da. \tag{4}$$
This means that $c(t)$ is determined only by $R_0$, $w(a)$ and $S(t)$. At the beginning of an epidemic, where the epidemic grows exponentially (with exponential growth rate $\lambda$), $S(t) \approx 1$ and $c(t) = c_0 e^{\lambda t}$, where $c_0$ is the initial number of cases at time $t = 0$. Substituting into (4) gives $1 = R_0 \int_0^\infty e^{-\lambda a} w(a)\, da$, i.e.,
$$R_0 = \frac{1}{M(-\lambda)}, \tag{5}$$
where $M(x) = \int_0^\infty e^{xa} w(a)\, da$ is the moment generating function of the serial interval distribution $w(a)$. Equation (5) links the exponential growth rate to the basic reproduction number through the serial interval distribution only. That is, if we can estimate the serial interval distribution and the exponential growth rate independently, then we can infer the basic reproduction number. Note that the serial interval distribution $w(a)$ can be estimated independently of the exponential growth rate; for example, it can be estimated empirically using contact tracing. Alternatively, one can also assume an epidemic model. Here we discuss a few simple examples.

Example 5. Consider an SIR model with a constant transmission rate $\beta$, and let $F(a)$ be the cumulative distribution function of the infectious period. The probability that an infected individual remains infectious $a$ time units after being infected is
$$h(a) = 1 - F(a),$$
and thus the transmissibility is $\tau(a) = \beta[1 - F(a)]$, and the serial interval distribution is
$$w(a) = \frac{1 - F(a)}{m},$$
where $m$ is the mean infectious period. For the special case that the infectious period is exponentially distributed with rate $\gamma$, i.e., $F(a) = 1 - e^{-\gamma a}$, this model becomes Model (1). The density function of the serial interval distribution is then
$$w(a) = \gamma e^{-\gamma a},$$
which is identical to the density function of the infectious period distribution. The moment generating function is
$$M(x) = \frac{\gamma}{\gamma - x}.$$
Note that the exponential growth rate is $\lambda = \beta - \gamma$; then
$$R_0 = \frac{1}{M(-\lambda)} = \frac{\gamma + \lambda}{\gamma} = 1 + \frac{\lambda}{\gamma}.$$

Let us consider a more complex example with multiple infected states.

Example 6. Consider an SEIR model with a constant transmission rate $\beta$. Let $F(a)$ and $G(a)$ be the cumulative distribution functions of the infectious period and the latent period, respectively. Given a latent period $T_L = \ell \le a$, the probability that an infected individual is infectious $a$ time units after being infected is $1 - F(a - \ell)$. Thus,
$$h(a) = \int_0^a \left[1 - F(a - \ell)\right] dG(\ell),$$
and $\tau(a) = \beta h(a)$. Hence, the serial interval distribution is
$$w(a) = \frac{1}{m} \int_0^a \left[1 - F(a - \ell)\right] dG(\ell),$$
where $m$ is the mean infectious period. For the special case that the infectious period is exponentially distributed with rate $\gamma$ (i.e., $F(a) = 1 - e^{-\gamma a}$) and the latent period is exponentially distributed with rate $\sigma$ (i.e., $G(a) = 1 - e^{-\sigma a}$), this model becomes Model (3), and
$$w(a) = \gamma\sigma e^{-\gamma a} \int_0^a e^{(\gamma - \sigma)s}\, ds = \left(\gamma e^{-\gamma a}\right) * \left(\sigma e^{-\sigma a}\right).$$
That is, if both distributions are exponential, the serial interval distribution is the convolution of the latent period distribution and the infectious period distribution. In this case, the basic reproduction number is
$$R_0 = \frac{1}{M(-\lambda)} = \frac{1}{M_I(-\lambda)\, M_L(-\lambda)} = \left(1 + \frac{\lambda}{\gamma}\right)\left(1 + \frac{\lambda}{\sigma}\right),$$
where $M_I(x)$ and $M_L(x)$ are the moment generating functions of the infectious period and latent period distributions, respectively.
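As a numerical sanity check of Equation (5), the sketch below evaluates $M(-\lambda)$ by numerical integration for the SEIR serial interval of Example 6 (the convolution of two exponentials) and compares $1/M(-\lambda)$ with $\beta/\gamma$. The parameter values are illustrative assumptions.

```python
# Minimal sketch checking R0 = 1 / M(-lambda) (Equation (5)) for the SEIR serial
# interval: the convolution of an Exp(sigma) latent period and an Exp(gamma)
# infectious period.  Parameter values are illustrative only.
import numpy as np
from scipy.integrate import quad

beta, sigma, gamma = 0.3, 1.0, 0.2
lam = (-(sigma + gamma) + np.sqrt((sigma - gamma) ** 2 + 4 * sigma * beta)) / 2

def w(a):
    # serial interval density: convolution of the latent and infectious period densities
    if np.isclose(sigma, gamma):
        return sigma ** 2 * a * np.exp(-sigma * a)
    return sigma * gamma * (np.exp(-gamma * a) - np.exp(-sigma * a)) / (sigma - gamma)

M_minus_lambda, _ = quad(lambda a: np.exp(-lam * a) * w(a), 0, np.inf)
print(f"1 / M(-lambda) = {1 / M_minus_lambda:.3f}, beta / gamma = {beta / gamma:.3f}")
```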
In Equation (4), $R(t) = R_0 S(t)$ is the reproduction number, and thus this equation can be used to estimate the reproduction number at any time $t$ during the epidemic, given the incidence curve $c(t)$, namely,
$$R(t) = \frac{c(t)}{\int_0^\infty c(t-a)\, w(a)\, da}.$$
This is similar to, but different from, the non-parametric method developed by Wallinga and Teunis (Wallinga & Teunis, 2004).

The least squares method is one of the most commonly used methods for parameter estimation in mathematical biology. It is in fact a purely mathematical method: for a family of curves $f(t; \vec q)$, where $\vec q \in \mathbb{R}^m$ is a vector of parameters, the method finds the curve $f(t; \hat{\vec q})$ in the family that minimizes the distance between the curve and a set of observed points $\{(t_i, x_i)\}_{i=0}^{n-1}$. Let $\|\cdot\|$ be the Euclidean norm in $\mathbb{R}^n$; then the mathematical formulation of the least squares method is
$$\hat{\vec q} = \operatorname*{argmin}_{\vec q} \sum_{i=0}^{n-1} \left[x_i - f(t_i; \vec q)\right]^2, \tag{6}$$
where argmin gives the parameter $\vec q$ that minimizes the objective function. For our purpose, the observations $\{(t_i, x_i)\}_{i=0}^{n-1}$ form the epidemic curve, i.e., $x_0$ is the number of initially observed cases, and $x_i$ is the number of new cases during the time interval $(t_{i-1}, t_i]$. We aim to find an exponential function $f(t; c_0, \lambda) = c_0 e^{\lambda t}$ that minimizes its distance to the epidemic curve, i.e., the parameters are $\vec q = (c_0, \lambda)$. There are two commonly used methods to estimate the exponential growth rate $\lambda$:
1. nonlinear least squares, fitting $f(t; c_0, \lambda) = c_0 e^{\lambda t}$ directly;
2. linear least squares, fitting $\{(t_i, \ln x_i)\}$ to $\ln f(t; c_0, \lambda) = \ln c_0 + \lambda t$.
The nonlinear least squares method does not have an analytic solution; numerical optimization is needed to solve the minimization problem (6). The linear least squares method does have an analytic solution. Let $\ell_0 = \ln c_0$; then the least squares problem becomes
$$(\hat\ell_0, \hat\lambda) = \operatorname*{argmin}_{\ell_0, \lambda} \sum_{i=0}^{n-1} \left(\ln x_i - \ell_0 - \lambda t_i\right)^2.$$
The objective function is a quadratic function of $\ell_0$ and $\lambda$, so the minimum is achieved where the partial derivatives vanish. Write $\bar y = \frac{1}{n}\sum_{i=0}^{n-1} y_i$ for the average of a sequence $\{y_i\}_{i=0}^{n-1}$; then the best-fit exponential growth rate is the linear regression slope
$$\hat\lambda = \frac{\overline{t \ln x} - \bar t\, \overline{\ln x}}{\overline{t^2} - \bar t^{\,2}}.$$

Do these two methods yield the same answer? To compare, we simulate an epidemic curve of the stochastic SEIR model in Example 2, using the Gillespie method (Gillespie, 1976). The simulated daily cases (the number of individuals showing symptoms on a day) are then aggregated into weekly cases. Then, we use both methods to fit an exponential curve to the simulated epidemic curve. The simulated epidemic curve and the fitting results are shown in Fig. 3.

This exercise illustrates a challenge of fitting an exponential model to an epidemic curve: how to determine the time period over which to fit the exponential model. The exponential growth rate of an SEIR model decreases with time as the susceptible population decreases. In Fig. 3, the epidemic curve peaks in week 13. We choose a sequence of nested fitting windows starting in the first week and ending in week $w$, for $w = 3, 4, \dots, 13$. The SEIR model has an asymptotic exponential growth, so the fitted exponential growth rate is not monotonic near the beginning of the epidemic. We need more data points to reduce the influence of stochasticity. However, for larger fitting windows, both methods give an exponential growth rate that decreases with the length of the fitting window, so using more data points also risks obtaining an estimate that deviates too much from the true exponential growth rate. There is no reliable method to choose a proper fitting window.
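The sketch below illustrates the two least squares fits on a small weekly case series; the counts are a made-up placeholder, not the simulated data of Fig. 3.

```python
# Minimal sketch of the two least squares fits described above.  The weekly case
# counts below are a made-up placeholder, not the simulated series of Fig. 3.
import numpy as np
from scipy.optimize import curve_fit

t = np.arange(8)                                    # weeks 0..7
x = np.array([3, 5, 9, 16, 27, 44, 80, 130])         # hypothetical weekly cases

# 1. Linear least squares on (t_i, ln x_i): the slope of the regression line is lambda.
lam_linear, ln_c0 = np.polyfit(t, np.log(x), 1)

# 2. Nonlinear least squares fit of c0 * exp(lambda * t) to (t_i, x_i).
(c0_nl, lam_nonlinear), _ = curve_fit(lambda t, c0, lam: c0 * np.exp(lam * t),
                                      t, x, p0=(x[0], 0.5))

print(f"linear fit:    lambda = {lam_linear:.3f} per week")
print(f"nonlinear fit: lambda = {lam_nonlinear:.3f} per week")
```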
Fig. 3 also shows that the linear and nonlinear least squares methods may not yield the same estimate. This is because of a major limitation of both least squares methods: they implicitly assume that the deviations $|x_i - f(t_i; \vec q)|$ carry identical weights. With the nonlinear method, later data points (at larger times) deviate more from the exponential curve than the earlier data points, because the exponential growth slows down with time. Thus, the method is biased toward the later data points. With the linear method, the deviations in $\ln x_i$ are more even than those in $x_i$, and thus the linear method is less biased toward the later data points than the nonlinear method is.

The least squares method, as mentioned above, is a mathematical problem. It does not explicitly assume any error distribution, and thus cannot give us statistical information about the inference. For example, if we use two slightly different fitting windows and get two slightly different estimates, is the difference between the two estimates statistically significant? Such a question cannot easily be answered by the least squares method. Interestingly, the least squares method makes many implicit assumptions about the deviations. We have mentioned the implicit equal-weight assumption above. It also implicitly assumes that the order of the observations does not matter, and that positive and negative deviations are equivalent. Thus, it implicitly assumes that the deviations are independently, identically and symmetrically distributed. In statistics, the least squares method is commonly used in linear and nonlinear regression with the additional assumption that the errors are independently and identically normally distributed. However, these assumptions on the errors may not be appropriate. For example, the new cases at time $t+1$ may have been infected by those who were infected at time $t$; thus, the numbers of new cases at different times may not be independent. Also, the number of cases is a counting variable, and thus its mean and variance may be closely related, meaning that the errors may not be identically normally distributed. In the next section, we address some of these problems using the maximum likelihood method.

Fig. 3. The simulated SEIR epidemic curve (upper) and the fitted exponential growth rate as a function of the end of the fitting window (lower). The epidemic curve is simulated stochastically from the SEIR model in Example 2 using the Gillespie method (Gillespie, 1976) with the parameters $\beta = 0.3$, $\sigma = 1$, $\gamma = 0.2$; the rates have a time unit of a day. The daily cases are then aggregated by week. The data points are taken at times $t_i = i$, $i = 0, 1, 2, \dots, 13$ weeks. The theoretical exponential growth rate is $\lambda = 0.547$ per week.
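A sketch of the simulation behind Fig. 3 might look as follows. It is only an outline of a Gillespie simulation of the SEIR model in Example 2: the population size, initial condition and random seed are assumptions, since the text does not specify them.

```python
# Minimal sketch of a Gillespie simulation of the SEIR model in Example 2, counting
# the E -> I transitions ("new symptomatic cases") per day and aggregating by week.
# The rates match the Fig. 3 caption; N, I0 and the seed are assumed values.
import numpy as np

def simulate_seir(beta=0.3, sigma=1.0, gamma=0.2, N=10000, I0=5, t_end=91, seed=1):
    rng = np.random.default_rng(seed)
    S, E, I = N - I0, 0, I0
    t, daily = 0.0, np.zeros(t_end, dtype=int)
    while t < t_end and E + I > 0:
        rates = np.array([beta * S * I / N,   # infection: S -> E
                          sigma * E,          # onset:     E -> I
                          gamma * I])         # recovery:  I -> R
        total = rates.sum()
        t += rng.exponential(1.0 / total)
        if t >= t_end:
            break
        event = rng.choice(3, p=rates / total)
        if event == 0:
            S, E = S - 1, E + 1
        elif event == 1:
            E, I = E - 1, I + 1
            daily[int(t)] += 1                # record a new case on this day
        else:
            I -= 1
    return daily.reshape(-1, 7).sum(axis=1)   # aggregate daily counts into weeks

weekly_cases = simulate_seir()
print(weekly_cases)
```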
The maximum likelihood method is a commonly used statistical method for parameter inference; see, e.g., (Bolker, 2008). To construct the likelihood function, we need to make assumptions about the error distribution. There are two types of error: the process error and the observation error. The observation error is the error in the observation process. For example, most people with influenza do not go to see a doctor, and thus there is no record of these cases, resulting in under-reporting of the number of influenza cases. Also, many influenza-related deaths are caused by complications such as pneumonia, and influenza may not be recorded as the cause. Typos, miscommunication, etc., can all result in observation errors. The process error originates from the stochasticity of the system itself, independent of the observation process. For example, the disease dynamics is intrinsically stochastic: the time at which an infectious individual recovers, and the time at which a susceptible individual is infected, are random variables that affect the number of new infections at any time, even if we eliminate all observation errors. These two types of errors have very different natures, and thus need very different assumptions. For example, it is reasonable to assume that observation errors are independent of each other, but process errors at a later time commonly depend on the process errors at earlier times.

If observation errors are large and process errors are negligible, then we assume that the random variable $X_i$ corresponding to the observation $x_i$ is independently distributed with a probability mass function $p_i(k; \vec q)$, where $k$ ranges over the values that $X_i$ can take. Then, the likelihood function is
$$L(\vec q) = \prod_{i=0}^{n-1} p_i(x_i; \vec q).$$
The maximization of this likelihood function rarely has an analytic solution, and commonly needs to be solved numerically. Note that each factor (a probability) can be very small, and thus the product may be very difficult to maximize numerically because of rounding errors (from the binary representation of real numbers in computers). It is common practice to maximize the log-likelihood function
$$\ell(\vec q) = \ln L(\vec q) = \sum_{i=0}^{n-1} \ln p_i(x_i; \vec q)$$
instead. For example, assume that the number of cases $x_i$ at time $t_i$ is independently Poisson distributed with mean $\mu_i = c_0 e^{\lambda t_i}$. Then the log-likelihood function is
$$\ell(c_0, \lambda) = \sum_{i=0}^{n-1} \left[x_i \ln \mu_i - \mu_i - \ln(x_i!)\right].$$
Note that the observed cases $x_i$ are constants, and thus the last term can be ignored for the maximization. Thus,
$$(\hat c_0, \hat\lambda) = \operatorname*{argmax}_{c_0, \lambda} \sum_{i=0}^{n-1} \left[x_i (\ln c_0 + \lambda t_i) - c_0 e^{\lambda t_i}\right].$$
This maximization problem can only be solved numerically. We choose the Poisson distribution because its simple form greatly simplifies the log-likelihood function. In addition, it does not introduce more parameters, which is valuable to avoid over-fitting when the number of available data points is small. If the process error is not completely negligible, then choosing an over-dispersed distribution, such as the negative binomial distribution, may be desirable. A negative binomial distribution has two parameters, the success probability $q$ and the shape parameter $r > 0$. For simplicity, we assume that the shape parameter $r$ is the same at each time $t_i$ and will be estimated together with the model parameters $\vec q$, while $q_i$ depends on $t_i$. The probability mass function is
$$p_i(k) = \binom{k + r - 1}{k}\, q_i^{\,r}\, (1 - q_i)^k,$$
which has mean $r(1 - q_i)/q_i$; setting this mean equal to $\mu_i = c_0 e^{\lambda t_i}$ gives $q_i = r/(r + \mu_i)$. The log-likelihood function is
$$\ell(c_0, \lambda, r) = \sum_{i=0}^{n-1} \left[\ln\Gamma(x_i + r) - \ln\Gamma(r) + r \ln q_i + x_i \ln(1 - q_i) - \ln(x_i!)\right].$$
Again, the last term can be ignored for the optimization problem. In addition, there is a constraint $r > 0$.

If process errors are large and observation errors are negligible, then we cannot assume that the observed values $X_{i+1}$ and $X_i$ are independent of each other. Instead, for all $i = 0, 1, \dots, n-2$, we compute the probability mass function of $X_{i+1}$ given $\{X_j = x_j\}_{j=0}^{i}$, namely $q_{i+1}(k; \vec q \mid \{x_j\}_{j=0}^{i})$. Then, the likelihood function is
$$L(\vec q) = \prod_{i=0}^{n-2} q_{i+1}\!\left(x_{i+1}; \vec q \mid \{x_j\}_{j=0}^{i}\right).$$
For simplicity, assume that $X_{i+1}$ is Poisson distributed with mean $\mu_{i+1} = X_i e^{\lambda(t_{i+1} - t_i)}$. Note that, since we assumed no observation error, the initial condition $c_0 = x_0$ is exact, and thus there is a single parameter $\lambda$ in the model. Thus,
$$q_{i+1}(x_{i+1}; \lambda \mid x_i) = \frac{\left(x_i e^{\lambda(t_{i+1}-t_i)}\right)^{x_{i+1}}}{x_{i+1}!}\, e^{-x_i e^{\lambda(t_{i+1}-t_i)}},$$
and thus the log-likelihood function is
$$\ell(\lambda) = \sum_{i=1}^{n-1} \left[-x_{i-1} e^{\lambda(t_i - t_{i-1})} + x_i \lambda (t_i - t_{i-1}) + x_i \ln x_{i-1} - \ln(x_i!)\right].$$
Again, the last two terms can be ignored in the maximization because they are constants. Thus,
$$\hat\lambda = \operatorname*{argmax}_{\lambda} \sum_{i=1}^{n-1} \left[-x_{i-1} e^{\lambda(t_i - t_{i-1})} + (t_i - t_{i-1})\, x_i\, \lambda\right].$$
It is much harder to formulate the likelihood function if process errors and observation errors must both be considered.
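The observation-error-only Poisson fit above can be implemented in a few lines by numerically maximizing the log-likelihood; the weekly counts below are again a made-up placeholder.

```python
# Minimal sketch of the observation-error-only Poisson maximum likelihood fit:
# maximize sum_i [ x_i * (ln c0 + lambda * t_i) - c0 * exp(lambda * t_i) ] numerically.
# The weekly counts are a made-up placeholder series.
import numpy as np
from scipy.optimize import minimize

t = np.arange(8.0)
x = np.array([3, 5, 9, 16, 27, 44, 80, 130])

def neg_log_lik(params):
    ln_c0, lam = params
    mu = np.exp(ln_c0 + lam * t)             # Poisson mean c0 * exp(lambda * t_i)
    return -(x * np.log(mu) - mu).sum()       # constant ln(x_i!) terms dropped

res = minimize(neg_log_lik, x0=(np.log(x[0]), 0.5), method="Nelder-Mead")
ln_c0_hat, lam_hat = res.x
print(f"MLE: c0 = {np.exp(ln_c0_hat):.2f}, lambda = {lam_hat:.3f} per week")
```

Parametrizing by $\ln c_0$ keeps $c_0$ positive without adding an explicit constraint.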
We can simplify the problem by ignoring the process error and using an over-dispersed observation error distribution as compensation. Note that this simplification mainly affects the confidence intervals.

The maximum likelihood method gives a point estimate, i.e., one set of parameter values that makes the observed data most likely. However, it is not clear how close the point estimates are to the real values. To answer this question we use an interval estimate, commonly known as a confidence interval. A confidence interval with confidence level $a$ is an interval that contains the true parameter value with probability $a$. A commonly used confidence level is 95%, which originates from the normal distribution: if a random variable $X$ is normally distributed with mean $\mu$ and standard deviation $\sigma$, then the probability that $X \in [\mu - 2\sigma, \mu + 2\sigma]$ is approximately 95%.

The confidence interval can be estimated using the likelihood ratio test [(Bolker, 2008), p. 192]. Let $\hat{\vec q}$ be the point estimate of the parameters. A value $\lambda_0$ lies in the 95% confidence interval if the hypothesis that $\lambda_0$ is the true growth rate cannot be rejected at the 5% significance level. To determine this, we fit a nested model obtained by fixing the growth rate $\lambda = \lambda_0$; suppose its point estimate is $\hat{\vec q}_0$. We then compute the likelihood ratio
$$\Lambda = \frac{L(\hat{\vec q}_0)}{L(\hat{\vec q})}.$$
Wilks' theorem (Wilks, 1938) guarantees that, as the sample size becomes large, the statistic $-2\ln\Lambda = 2[\ell(\hat{\vec q}) - \ell(\hat{\vec q}_0)]$ is $\chi^2$ distributed with one degree of freedom. We can thus compare $-2\ln\Lambda$ with the 95% quantile of the $\chi^2$ distribution to determine whether $\lambda_0$ should be in the confidence interval or not, and then perform a linear search on both sides of the point estimate to determine the boundaries of the confidence interval.

We still have not addressed the problem of choosing a fitting window for an exponential model. Recall that the challenge arises because the exponential growth rate of an epidemic decreases with time. Instead of finding heuristic conditions for choosing the fitting window, we circumvent this problem by incorporating the decrease of the exponential growth rate into our model. We have two choices: using either a mechanistic model, such as an SIR or SEIR model, or a phenomenological model.

Naturally, if we know that a mechanistic model is a good description of the disease dynamics, fitting such a model to the epidemic curve is a good option (see, e.g., Chowell, Ammon, Hengartner, & Hyman, 2006; Pourabbas, d'Onofrio, & Rafanelli, 2001). We use an SIR model as an example. For simplicity, we assume that the process error is negligible, and that the incidence rate is Poisson distributed with mean $C(t)$ given by an SIR model, i.e., $C(t) = \beta S I N$, where $N$ is the population size. To construct the log-likelihood function, we need to calculate $C(t)$, i.e., numerically solve the SIR model. To do so, we need the transmission rate $\beta$, the recovery rate $\gamma$, and the initial fraction of infectious individuals $I(0) = I_0$ (with the assumption that $R(0) = 0$ and $S(0) = 1 - I_0$, so that $I_0$ determines the initial conditions), in addition to the population size $N$. Thus, the parameters of the model are $\vec q = (\beta, \gamma, I_0, N)$, and the log-likelihood function is (ignoring the constant terms)
$$\ell(\vec q) = \sum_{i} \left[x_i \ln c(t_i) - c(t_i)\right],$$
where the expected number of new cases $c(t_i)$ in the time interval $[t_i, t_{i+1}]$ is
$$c(t_i) = N\left[S(t_i) - S(t_{i+1})\right],$$
and $S(t_i)$ is solved numerically from the SIR model. Thus, $\ell$ implicitly depends on $\beta$, $\gamma$ and $I_0$ through $S(t)$.
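A sketch of this SIR-based likelihood is given below. The observed counts are a placeholder, the rates are taken per week, and only a single likelihood evaluation is shown; in practice this function would be passed to a numerical optimizer.

```python
# Minimal sketch of the SIR-based Poisson log-likelihood described above: solve the
# SIR model numerically, convert the drop in S to expected weekly cases, and score
# the observed counts.  The observed series and parameter values are placeholders.
import numpy as np
from scipy.integrate import solve_ivp

x = np.array([3, 5, 9, 16, 27, 44, 80, 130])        # observed weekly cases (placeholder)
t_obs = np.arange(len(x) + 1, dtype=float)          # week boundaries 0, 1, ..., n

def log_lik(theta):
    beta, gamma, I0, N = theta                       # rates per week
    sir = lambda t, y: [-beta * y[0] * y[1], beta * y[0] * y[1] - gamma * y[1]]
    sol = solve_ivp(sir, (0, t_obs[-1]), [1 - I0, I0], t_eval=t_obs, rtol=1e-8)
    S = sol.y[0]
    mu = N * (S[:-1] - S[1:])                        # expected new cases per week
    return (x * np.log(mu) - mu).sum()               # Poisson log-likelihood, constants dropped

print(log_lik((2.0, 1.0, 1e-4, 1e4)))                # one illustrative evaluation
```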
One drawback of using such a mechanistic model is its high computational cost, since each evaluation of the log-likelihood function requires solving the model numerically, and numerical optimization algorithms can be very hungry for function evaluations, especially if the algorithm depends on numerical differentiation. Another drawback is that these mechanistic models can be overly simplified, and may not be a good approximation to the real disease dynamics. For example, for seasonal influenza, due to the fast evolution of the influenza virus, individuals have different histories of infection, and thus have different susceptibilities to a new strain, yet simple SIR and SEIR models assume a population with homogeneous susceptibility. Thus, using a simple SIR model to fit an influenza epidemic may be an oversimplification. However, realistic mechanistic models can be overly complicated, and involve too many parameters that are, at best, difficult to estimate. For example, a multi-group SIR model depends on a contact matrix consisting of transmission rates between groups, which contains a large number of parameters if the model uses many groups.

If all we need to estimate is the exponential growth rate, we only need a model that describes an exponential growth that gradually slows down. Most cumulative epidemic curves grow exponentially initially, and then saturate at the final epidemic size. A simple phenomenological model can be used to describe the shape of the cumulative epidemic curve, even though the model itself may not have a realistic biological interpretation. If simple mechanistic models cannot faithfully describe the epidemic process, using a simple phenomenological model with an analytical formula may be a better choice, at least numerically, because repeatedly solving a system of differential equations numerically, and differentiating the log-likelihood function numerically, can both be avoided with the analytical formula. Here we discuss some examples of such models.

The logistic model is the simplest model that shows an initial exponential growth followed by a gradual slowing down and a saturation. The cumulative incidence $C(t)$ (the total number of cases by time $t$) can be approximated by
$$\frac{d}{dt} C(t) = r C(t)\left[1 - \frac{C(t)}{K}\right],$$
where $r$ is the exponential growth rate, and $K = \lim_{t\to\infty} C(t)$. Let $C_0 = C(0)$; the solution is
$$C(t) = \frac{K}{1 + \left(\frac{K}{C_0} - 1\right) e^{-rt}}.$$
The number of new cases $c(t_i)$ in a time period $[t_i, t_{i+1}]$ is thus
$$c(t_i) = C(t_{i+1}) - C(t_i). \tag{8}$$
The model parameters are $\vec q = (r, K, C_0)$. Note that this is fewer parameters than the simplest mechanistic model (i.e., the SIR model) requires.

The logistic model has a fixed rate of slowing down of the exponential growth. To be more flexible, we can use the Richards model (Richards, 1959) for the cumulative incidence curve. The Richards model, also called the power-law logistic model, can be written as
$$\frac{d}{dt} C(t) = r C(t)\left[1 - \left(\frac{C(t)}{K}\right)^{a}\right],$$
where $a$ is a parameter that controls the steepness of the curve. Note that the logistic model is the special case $a = 1$. Its solution is
$$C(t) = K\left[1 + \left(\left(\frac{K}{C_0}\right)^{a} - 1\right) e^{-rat}\right]^{-1/a}.$$
The new cases $c(t_i)$ in a time period $[t_i, t_{i+1}]$ are also given by (8). The parameters are $\vec q = (r, K, C_0, a)$.

To compare the performance of the SIR model and the phenomenological models, we fit these models to the stochastically simulated SEIR epidemic curve of weekly cases introduced in Section 3 (Fig. 3). We assume that the process error is negligible, and that the observations are Poisson distributed about the mean given by the corresponding model. We use the maximum likelihood method. The results are shown in Fig. 4.

Fig. 4. The comparison of the results of fitting the SIR, exponential, logistic, and Richards models to a simulated weekly incidence curve, as a function of the end point of the fitting window (upper). The epidemic curve (lower) is shown as a reference. The epidemic curve and the theoretical exponential …
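The sketch below shows a Poisson maximum likelihood fit of the logistic model, using its closed-form solution so that no differential equation needs to be solved. The weekly counts are a made-up placeholder series, and the log-parametrization is a convenience, not something prescribed by the text.

```python
# Minimal sketch of the phenomenological fit: Poisson maximum likelihood for the
# logistic model, using the closed-form cumulative incidence C(t).  The weekly
# counts are a made-up placeholder series.
import numpy as np
from scipy.optimize import minimize

x = np.array([3, 5, 9, 16, 27, 44, 80, 130, 160, 150, 110, 70, 40])
t = np.arange(len(x) + 1, dtype=float)              # week boundaries

def C(t, r, K, C0):
    # closed-form logistic cumulative incidence
    return K / (1 + (K / C0 - 1) * np.exp(-r * t))

def neg_log_lik(theta):
    r, K, C0 = np.exp(theta)                        # log-parametrization keeps them positive
    mu = np.diff(C(t, r, K, C0))                    # expected new cases per week, Equation (8)
    mu = np.maximum(mu, 1e-12)                      # guard against numerical underflow
    return -(x * np.log(mu) - mu).sum()

res = minimize(neg_log_lik, x0=np.log([0.5, x.sum(), x[0]]), method="Nelder-Mead")
r_hat, K_hat, C0_hat = np.exp(res.x)
print(f"estimated growth rate r = {r_hat:.3f} per week, final size K = {K_hat:.0f}")
```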
The predictions of the exponential model, as discussed before, quickly decrease as more data points are used. Both the logistic model and the Richards model give robust estimates for fitting windows ending anywhere up to the peak of the epidemic. The SIR model gives a robust estimate for all fitting windows, up to the whole epidemic curve. Thus, the SIR model is a good model for fitting the exponential growth rate, even if it may not be the correct mechanistic model (e.g., it ignores the latent period in this example). However, it requires more computational power, because the epidemic curve lacks an analytic formula and needs to be solved numerically from a system of ordinary differential equations. The logistic model and the Richards model can be used for all data points up to the peak of the epidemic.

Fig. 4 also shows that the SIR model and the logistic model give the narrowest confidence intervals. However, a narrower confidence interval may not be desirable if there is a large chance that it does not contain the true value. Due to errors, especially process errors, each realization of the underlying stochastic epidemic process yields a different epidemic curve. These epidemic curves may exhibit different exponential growth rates even if the underlying parameter values are the same. An observed epidemic curve is just a single realization of the epidemic process. Do the estimated confidence intervals contain the theoretical exponential growth rate of the epidemic process? This question is answered by the "coverage probability", which is the probability that the confidence interval contains the true value. If the confidence interval properly accounts for all sources of stochasticity, then the coverage probability should be equal to its confidence level. To illustrate this, we numerically compute the coverage of the confidence intervals by simulating the SEIR model 400 times, computing the confidence interval of the exponential growth rate for each realization, and computing the fraction of the confidence intervals that contain the theoretical value $\lambda = 0.547$. The results are summarized below:

                           logistic model    Richards model
    coverage probability        43%               65%

That is, even though the logistic model gives a narrow confidence interval, its coverage probability is low. The coverage probability of the confidence interval given by the Richards model is also significantly lower than the confidence level. This is caused by treating process errors as observation errors. If there is under-reporting, that is, if only a fraction $p$ of the cases are observed, then the observation error becomes larger as $p$ decreases (i.e., as under-reporting becomes more severe), and the coverage becomes larger as a result. For example, the case fatality ratio of the 1918 pandemic influenza was about 2% (Frost, 1920). Thus, the mortality curve can be treated as an epidemic curve with a large under-reporting ratio, and thus the observation error dominates. In this case, ignoring the process error is appropriate.
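The idea of checking coverage can be illustrated in a deliberately simplified setting: data generated as Poisson counts around $c_0 e^{\lambda t}$ (observation error only), with the 95% confidence interval for $\lambda$ obtained from the likelihood ratio test. This is not the SEIR experiment described above; because the model is correctly specified here, the empirical coverage should be close to the nominal 95%.

```python
# Minimal sketch of a coverage-probability check in a simplified setting: Poisson
# observation error only, with a likelihood-ratio (Wilks) confidence interval for
# lambda.  All parameter values and the number of replicates are assumptions.
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import chi2

rng = np.random.default_rng(0)
t = np.arange(8.0)
c0_true, lam_true = 3.0, 0.5
crit = chi2.ppf(0.95, df=1)                       # likelihood ratio threshold

def profile_log_lik(x, lam):
    # For fixed lambda, the Poisson MLE of c0 is available in closed form.
    c0_hat = x.sum() / np.exp(lam * t).sum()
    mu = c0_hat * np.exp(lam * t)
    return (x * np.log(mu) - mu).sum()            # constants ln(x_i!) dropped

covered, n_rep = 0, 200
for _ in range(n_rep):
    x = rng.poisson(c0_true * np.exp(lam_true * t))
    best = -minimize_scalar(lambda l: -profile_log_lik(x, l),
                            bounds=(-1.0, 2.0), method="bounded").fun
    # lambda_true is inside the 95% CI iff the likelihood ratio statistic is below
    # the chi-square critical value (Wilks' theorem, one degree of freedom).
    if 2 * (best - profile_log_lik(x, lam_true)) <= crit:
        covered += 1
print(f"coverage of the nominal 95% interval: {covered / n_rep:.2f}")
```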
References

Bolker, B. M. (2008). Ecological models and data in R. Princeton University Press.
Chowell, G., Ammon, C. E., Hengartner, N. W., & Hyman, J. M. (2006). Transmission dynamics of the great influenza pandemic of 1918 in Geneva, Switzerland: Assessing the effects of hypothetical interventions. Journal of Theoretical Biology.
Frost, W. H. (1920). Statistics of influenza morbidity: With special reference to certain factors in case incidence and case-fatality. Public Health Reports.
Gillespie, D. T. (1976). A general method for numerically simulating the stochastic time evolution of coupled chemical reactions. Journal of Computational Physics.
Pourabbas, E., d'Onofrio, A., & Rafanelli, M. (2001). A method to estimate the incidence of communicable diseases under seasonal fluctuations with application to cholera. Applied Mathematics and Computation.
Richards, F. J. (1959). A flexible growth function for empirical use. Journal of Experimental Botany.
Wallinga, J., & Lipsitch, M. (2006). How generation intervals shape the relationship between growth rates and reproductive numbers. Proceedings of the Royal Society B: Biological Sciences.
Wallinga, J., & Teunis, P. (2004). Different epidemic curves for severe acute respiratory syndrome reveal similar impacts of control measures. American Journal of Epidemiology.

This research is partially supported by a Natural Sciences and Engineering Research Council of Canada discovery grant, and by the National Natural Science Foundation of China (No. 11771075).