key: cord-0509195-ium7t8ye
authors: Li, Wentao; Tong, Jiayi; Anjum, Md. Monowar; Mohammed, Noman; Chen, Yong; Jiang, Xiaoqian
title: Federated Learning Algorithms for Generalized Mixed-effects Model (GLMM) on Horizontally Partitioned Data from Distributed Sources
date: 2021-09-28
journal: nan
DOI: nan
sha: 101e4f496f06cb1a6ab8fad03d75dedc94140c45
doc_id: 509195
cord_uid: ium7t8ye

Objectives: This paper develops two algorithms for federated generalized linear mixed-effects models (GLMM) and compares their outcomes with each other and with those from the standard R package (`lme4'). Methods: The log-likelihood function of the GLMM is approximated by two numerical methods (Laplace approximation and Gauss-Hermite approximation), which support a federated decomposition of the GLMM that brings computation to the data. Results: Our method can handle GLMMs that accommodate hierarchical data with multiple non-independent levels of observations in a federated setting. The experimental results demonstrate comparable (Laplace) and superior (Gauss-Hermite) performance on simulated and real-world data. Conclusion: We developed and compared federated GLMMs with different approximations, which can support researchers in analyzing biomedical data with mixed effects and non-independence due to hierarchical structures (e.g., institute, region, country).

There is a surge of interest in analyzing biomedical data to improve health. Biostatisticians and machine learning researchers are keen to access personal health information for a deeper understanding of diagnostics, disease development, and potential preventive or treatment options [1]. In the US, healthcare and clinical data are often collected by local institutions.
In many situations, combining these datasets would increase statistical power in hypothesis testing and provide better means to investigate regional differences and subpopulation bias (e.g., due to differences in disease prevalence or social determinants). However, such an information harmonization process needs to respect the privacy of individuals, as healthcare data contain sensitive information about personal characteristics and health conditions. As a minimum requirement, HIPAA (Health Insurance Portability and Accountability Act) [2] specifies PHI (protected health information) and regulations to de-identify sensitive information (i.e., the Safe Harbor mechanism). But HIPAA compliance does not mean full protection of the data, as several studies have demonstrated the re-identifiability of HIPAA de-identified data [3, 4, 5]. Ethical healthcare data sharing and analysis should also respect the "minimum necessary" principle to reduce the risk of unnecessary data exposure. The recent development of federated learning, which builds a shared global model without moving local data from their host institutions (Fig. 1), shows promise in addressing the data-sharing challenges mentioned above. Despite this exciting progress, an important limitation remains: existing models cannot effectively handle mixed effects (i.e., both fixed and random effects), which are essential for analyzing non-independent, multilevel/hierarchical, longitudinal, or correlated data. The goal of this paper is to fill this technology gap and provide practical solutions with an open-source implementation, allowing ordinary biomedical/healthcare researchers to build federated mixed-effects learning models for their studies. Federated learning for healthcare data analysis is not a new topic, and there have been many previous studies.
However, most existing methods assume the observations are independent and identically distributed [6, 7]. In the presence of non-independence due to hierarchical structures (e.g., institutional or regional differences), existing federated models have strong limitations because they ignore these regional differences. The generalized linear mixed model (GLMM), which takes such heterogeneous factors into consideration, is better suited to accommodating the heterogeneity across healthcare systems. There have been very few studies in this area. One relevant work is a privacy-preserving Bayesian GLMM model [8], which proposed an Expectation-Maximization (EM) algorithm to fit the model collaboratively on horizontally partitioned data. Its convergence is relatively slow (due to the Metropolis-Hastings sampling in the E-step) and not very stable (likely to be trapped in local optima [9] in high-dimensional data). In their experiments, a loose threshold (i.e., 0.08) was used as the convergence condition [8], whereas typical federated learning algorithms [10] in healthcare use a much more stringent convergence threshold (i.e., 10^{-6}). Another related work fitting GLMMs in a federated manner is the distributed penalized quasi-likelihood (dPQL) algorithm [11]. This algorithm reduces computational complexity by targeting the penalized quasi-likelihood, which is motivated by the Laplace approximation. It is more communication-efficient than the EM approach and can converge in a few rounds. However, the PQL target function can have first-order asymptotic bias [12] due to the Laplace approximation of the integrated likelihood. There is an alternative strategy, Gauss-Hermite (GH) quadrature, which supports higher-order approximations.
It is computationally more intensive, however, and requires special techniques to handle the numerical instability of the logSumExp operation (due to overflow when the dimensionality grows in the sum of exponential terms). We explain both models in this manuscript and compare their performance on simulated and real-world data.

In this section, we discuss the statistical model along with the challenges to be tackled. A high-level schema of the method is shown in Algorithm 1. Before introducing the formulation of the GLMM, let us define some notation. Let P be the distribution of interest, which depends on the patient-level data X_ij, y_ij, and let φ be the distribution of the random effects. We can compose the joint likelihood for site i as

$$L_i = \int \prod_j P(y_{ij} \mid X_{ij}, \beta, \mu_i)\,\phi(\mu_i; \theta)\, d\mu_i,$$

so the log-likelihood of the joint distribution is

$$l(\theta) = \sum_{i=1}^{m} \log L_i. \qquad (1)$$

From the log-likelihood function Eq. (1), one can see that it does not support direct linear decomposition. To support federated learning, we leverage approximation strategies that make the objective linearly decomposable into simple summary statistics. We compare the Laplace approximation and the Gauss-Hermite approximation in the following sections.

Let us first explain the Laplace approximation. Write the integrand in exponential form,

$$L_i = \int e^{g(\mu_i, \theta)}\, d\mu_i, \qquad (2)$$

and apply a Taylor expansion to g(μ_i, θ) around the point μ̂_i that maximizes g(μ_i, θ). Since μ̂_i satisfies g_μ(μ̂_i, θ) = 0 and g_μμ(μ̂_i, θ) < 0, plugging the second-order expansion into Eq. (2) and applying the Laplace approximation [13] yields

$$\log L_i \approx g(\hat{\mu}_i, \theta) + \frac{1}{2} \log \frac{2\pi}{-g_{\mu\mu}(\hat{\mu}_i, \theta)}.$$

Finally, the objective is to maximize the resulting formula with respect to θ, whose terms are linearly decomposable into summary statistics from the local sites. Site i needs to calculate the following aggregated data:
• scalar of random effect: μ̂_i

The Gauss-Hermite approximation [14] applies Gauss-Hermite quadrature to Eq. (2).
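To make the Laplace step concrete before turning to Gauss-Hermite, here is a minimal one-site sketch (hypothetical data and values, not the paper's implementation): find the mode μ̂ of g by Newton's method, apply log ∫ e^{g(μ)} dμ ≈ g(μ̂) + ½ log(2π / −g″(μ̂)), and check the result against brute-force numerical integration.

```python
import numpy as np

# g(mu) combines a Bernoulli (logit) log-likelihood with a N(0, sigma2)
# random-effect density, matching the one-dimensional integrand in Eq. (2).
# All numbers below are illustrative.
def g(mu, y, eta, sigma2):
    z = eta + mu                       # linear predictor plus random effect
    loglik = np.sum(y * z - np.log1p(np.exp(z)))
    return loglik - 0.5 * mu**2 / sigma2 - 0.5 * np.log(2 * np.pi * sigma2)

def g_grad_hess(mu, y, eta, sigma2):
    p = 1.0 / (1.0 + np.exp(-(eta + mu)))
    return np.sum(y - p) - mu / sigma2, -np.sum(p * (1 - p)) - 1.0 / sigma2

def laplace_log_integral(y, eta, sigma2):
    mu = 0.0
    for _ in range(50):                # Newton's method; g is concave in mu
        grad, hess = g_grad_hess(mu, y, eta, sigma2)
        mu -= grad / hess
    _, hess = g_grad_hess(mu, y, eta, sigma2)
    # log ∫ e^g dmu ≈ g(mu_hat) + 0.5 * log(2*pi / -g''(mu_hat))
    return g(mu, y, eta, sigma2) + 0.5 * np.log(2 * np.pi / -hess), mu

y = np.array([1, 0, 1, 1, 0, 1])
eta = np.full(6, 0.3)                  # X @ beta for this site's 6 records
approx, mu_hat = laplace_log_integral(y, eta, sigma2=1.0)

# brute-force check on a fine grid
grid = np.linspace(-10.0, 10.0, 20001)
vals = np.array([g(m, y, eta, 1.0) for m in grid])
exact = np.log(np.sum(np.exp(vals)) * (grid[1] - grid[0]))
print(approx, exact)                   # the two values agree closely
```

With only six observations the integrand is already nearly Gaussian around μ̂, so the first-order Laplace value sits within a few hundredths of the numerically integrated log-likelihood; the residual gap is exactly the bias that the higher-order Gauss-Hermite rule targets.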
For a K-point rule, the nodes x_k are the roots of the Hermite polynomial and the weights h_k are defined as

$$H_K(x) = (-1)^K e^{x^2} \frac{d^K}{dx^K} e^{-x^2}, \qquad h_k = \frac{2^{K-1}\, K!\, \sqrt{\pi}}{K^2\, [H_{K-1}(x_k)]^2}.$$

Thus, with the Gauss-Hermite approximation, Eq. (2) can be approximated by

$$L_i \approx \sqrt{\frac{2}{-g_{\mu\mu}(\hat{\mu}_i, \theta)}} \sum_{k=1}^{K} h_k\, e^{x_k^2}\, \exp\!\Big(g\big(\hat{\mu}_i + \sqrt{2 / -g_{\mu\mu}(\hat{\mu}_i, \theta)}\, x_k,\ \theta\big)\Big); \qquad (5)$$

notice that when K = 1, this reduces to the Laplace approximation. Because GH is more general, we describe the distributed federated learning model for the GLMM problem using the Gauss-Hermite formulation Eq. (5). Given our final GH objective function (Eq. (6)), site i needs to calculate and transmit the following aggregated data:
• p × p matrix
• scalar of random effect: μ̂_i

The convergence of the approximated likelihood may be compromised by over-fitting, and for spatially correlated data convergence may lead to an overly complex model. Hence, an L2 penalty is added to the local log-likelihood in its Gauss-Hermite form,

$$l_i^{\text{pen}}(\beta, \lambda) = l_i(\beta) - \lambda \lVert \beta \rVert_2^2;$$

note that when K = 1, this is the regularized Laplace approximation of the problem. To find the optimum λ, we steadily increase λ over the range [0, 10] in steps of 1, set λ_opt to the value with the largest Σ_i l_i, and choose β̂_opt as the corresponding estimator for β.

Because of finite floating-point precision, computers cannot directly evaluate the local log-likelihood l_i in its Gauss-Hermite form as written above. This is the well-known log-sum-exp problem, which is solved by shifting the center of the exponential sum for stable computation,

$$\log \sum_k e^{a_k} = a + \log \sum_k e^{a_k - a},$$

where a is an arbitrary number (in practice, a = max_k a_k). Thus, the global problem of maximizing Σ_i l_i can be divided into several local maximization problems (8). Each local site i updates its regression intermediates, and these are combined to update the iteration status. Specifically, in each iteration of the federated GLMM algorithm, the following statistics are exchanged from each site to contribute aggregated data for the global model (Gauss-Hermite, Eqs. 6 and 7); 3.
Send the intermediate statistics to the central server, which aggregates them; 4. Update β and l_approx(β, λ) at the central server and send them back to each client i. Return: the largest l_approx(β̂, λ̂), the coefficients β̂_global = β̂, and the regularization term λ̂.

To test the performance of our proposed methods, we first designed a stress test based on a group of synthetic datasets covering 8 different settings (Tab. 1), with 20 datasets per setting. Each dataset consists of: 4 categorical variables with values in {0, 1}; 6 continuous variables with values in the range [−1, 1.5] ⊂ R; one outcome variable with values in {0, 1}; a site ID, indicating which site each entry belongs to; the site sample size for the specific setting; the log-odds ratio for each sample; and the numbers of true positives, true negatives, false positives, and false negatives. To evaluate which method performs better, we used the following measurements: the discrimination of the estimated coefficients β̂, the test power for each coefficient, and the precision and recall of identifying significant coefficients. The evaluation experiments compared federated GLMM with Laplace approximation, federated GLMM with Gauss-Hermite approximation, and centralized GLMM (all data stored on a single host) in the R package, run on the 160 datasets across the 8 settings of Tab. 1. Although we tested the datasets with the state-of-the-art benchmark algorithm for centralized GLMM in R, the regression does not perfectly recover the ground-truth coefficients used to generate the data (Fig. 2). It is therefore also important to take the P-values of the variables into consideration when interpreting the model. Thus, we compared the centralized GLMM, the Laplace method, and the Gauss-Hermite method with respect to the p-values of the coefficients.
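To make the statistics exchange in Algorithm 1 concrete, here is a minimal, self-contained sketch (not the authors' implementation; all data and dimensions are synthetic). Each site finds its random-intercept mode with an inner Newton loop, evaluates its local log-likelihood by adaptive Gauss-Hermite quadrature stabilized with the log-sum-exp shift described above, and ships only a p × p matrix, a p-vector, and the scalar mode to the server, which aggregates them and updates β. For brevity, the β update plugs in the mode (a Laplace/PQL-style simplification) rather than the paper's full GH derivatives.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logsumexp(a):
    # shift by the max so the exponentials cannot overflow:
    # log sum_k e^{a_k} = m + log sum_k e^{a_k - m}
    m = np.max(a)
    return m + np.log(np.sum(np.exp(a - m)))

class Site:
    """One hospital holding (X, y); only summary statistics leave the site."""
    def __init__(self, X, y, sigma2=1.0):
        self.X, self.y, self.sigma2 = X, y, sigma2
        self.mu = 0.0  # mode of this site's random intercept

    def local_stats(self, beta, K=2):
        X, y, s2 = self.X, self.y, self.sigma2
        # inner Newton iterations for the mode of
        # g(mu) = loglik(beta, mu) - mu^2 / (2 * sigma2)
        for _ in range(25):
            p = sigmoid(X @ beta + self.mu)
            grad = np.sum(y - p) - self.mu / s2
            hess = -np.sum(p * (1.0 - p)) - 1.0 / s2
            self.mu -= grad / hess
        p = sigmoid(X @ beta + self.mu)
        hess = -np.sum(p * (1.0 - p)) - 1.0 / s2
        # adaptive Gauss-Hermite value of the local log-likelihood
        # (up to an additive constant), stabilized with logsumexp
        nodes, wts = np.polynomial.hermite.hermgauss(K)
        scale = np.sqrt(2.0 / -hess)
        terms = []
        for x_k, w_k in zip(nodes, wts):
            mu_k = self.mu + scale * x_k
            z = X @ beta + mu_k
            ll = np.sum(y * z - np.logaddexp(0.0, z)) - 0.5 * mu_k**2 / s2
            terms.append(np.log(w_k) + x_k**2 + ll)
        loglik = np.log(scale) + logsumexp(np.array(terms))
        # p x p matrix and p-vector for the beta update (mode plugged in,
        # a simplification of the paper's full GH update)
        W = p * (1.0 - p)
        return X.T @ (W[:, None] * X), X.T @ (y - p), loglik, self.mu

def federated_round(sites, beta):
    stats = [s.local_stats(beta) for s in sites]
    H = sum(st[0] for st in stats)   # aggregated p x p matrices
    g = sum(st[1] for st in stats)   # aggregated p-vectors
    ll = sum(st[2] for st in stats)  # aggregated approximate log-likelihood
    return beta + np.linalg.solve(H, g), ll

# simulate 3 sites sharing beta_true but with site-specific random intercepts
beta_true = np.array([-0.5, 0.8, 0.3])
sites = []
for _ in range(3):
    X = np.column_stack([np.ones(400), rng.normal(size=(400, 2))])
    y = rng.binomial(1, sigmoid(X @ beta_true + rng.normal(scale=0.3)))
    sites.append(Site(X, y))

beta = np.zeros(3)
for _ in range(15):
    beta, ll = federated_round(sites, beta)
print(beta)  # should land reasonably near beta_true
```

Note that no row-level data crosses a site boundary: per round, each site transmits only O(p²) numbers, which is what makes the approach compliant with the "minimum necessary" principle discussed in the introduction.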
Tables in the appendix capture the performance of the different methods. Fig. 3 shows the precision and recall results of the centralized, Laplace, and Gauss-Hermite methods. Note that we set our Gauss-Hermite approximation to degree 2. The simulation results show that the federated Gauss-Hermite approximation performed better than the Laplace-based method on every variable, and the federated Gauss-Hermite method achieved higher test power (Fig. 4). In sum, with a one-degree increase of the approximation over LA, our developed GH method outperformed the LA method for the federated GLMM implementation.

We analyzed COVID-19 electronic health records collected by Optum from February 2020 to January 28, 2021, from a network of healthcare providers. The dataset has been de-identified based on HIPAA statistical de-identification rules and is managed under the Optum customer data use agreement. The database contains 56,898 unique COVID-19-positive patients. After removing patients with missing data, the final cohort contains 4,531 patients who died, while the remaining 41,781 survived. The database contains a regional variable with five levels (Midwest, Northwest, South, West, Others/unknown) to provide privacy-preserving area information indicating where the samples were collected. We fitted a GLMM (with a region-specific random effect) using this dataset with the following predictors: age, gender, race, ethnicity, chronic obstructive pulmonary disease (COPD), congestive heart failure (CHF), chronic kidney disease (CKD), multiple sclerosis (MS), rheumatoid arthritis (RA), other lung diseases (LU), high blood pressure (HTN), ischemic heart disease (IHD), diabetes (DIAB), asthma (ASTH), and obesity (Obese). Our proposed method with GH approximation performed best, with both the smallest AIC and the smallest BIC in the goodness-of-fit table (Tab. 2).
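For reference, the goodness-of-fit criteria reported in Tab. 2 follow the standard definitions AIC = 2k − 2l and BIC = k ln(n) − 2l, where l is the maximized log-likelihood, k the number of estimated parameters, and n the sample size. A minimal sketch (the log-likelihood value and parameter count below are illustrative, not taken from Tab. 2):

```python
import math

def aic(loglik, k):
    # Akaike information criterion: 2k - 2 * log-likelihood
    return 2 * k - 2 * loglik

def bic(loglik, k, n):
    # Bayesian information criterion: k * ln(n) - 2 * log-likelihood
    return k * math.log(n) - 2 * loglik

# illustrative values: 16 fixed-effect terms plus one random-effect
# variance (k = 17), n = 4,531 + 41,781 = 46,312 patients
print(aic(-12000.0, 17))  # 24034.0
print(bic(-12000.0, 17, 46312))
```

Lower values indicate better fit; because BIC multiplies k by ln(n), it penalizes extra parameters more heavily than AIC at this cohort size.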
We also compared the ROC curves (Fig. 5) of our proposed GH method and the centralized method to check their performance. The result shows that the GH approximation (AUC = 0.72) outperforms the centralized method without regularization (AUC = 0.68), indicating that the GH-based GLMM has better classification performance than the GLMM based on the LA approximation. In our proposed model, the variables Unknown race, chronic kidney disease (CKD), multiple sclerosis (MS), and other lung diseases (LU) were not significant for COVID-19 mortality. The full regression results are in the Appendix (Tab. B.6, B.7, B.8).

We compared two federated GLMM algorithms (LA vs. GH) and demonstrated that the GH-based federated GLMM surpasses the LA-based method in estimation accuracy, power of tests, and AUC. Although the GH method requires slightly more computation than the LA method, the cost is acceptable given the more accurate results; for example, in predicting COVID-19 mortality rates, the predictions are more reliable, as shown in the previous section. During the optimization iterations, we noticed that some sites achieve convergence in very few steps. If those sites stop communicating with the central server, they can be released from extra computation. We will investigate more efficient algorithms based on such a 'lazy regression' strategy to minimize communication in federated learning models. Another limitation is that the proposed federated GLMM model is not yet differentially private, and the iterative exchange of summary statistics can lead to incremental information disclosure, which might increase the re-identification risk over time. Several strategies based on secure operations, such as homomorphic encryption and differential privacy, which we have previously studied for GLM models [15], could improve the model.
Finally, in practice there can be extra heterogeneity that cannot be explained by random intercepts alone; it is therefore of interest to further develop our algorithms toward GLMMs that allow multiple random effects, including random coefficients in the regression models.

Appendix A.1. When the distribution P follows a logit density and φ is a univariate normal, π_ij is the sigmoid function of μ_i, defined as π_ij = 1 / (1 + exp(−(X_ij β + μ_i))); with the Gauss-Hermite setup, the objective function can be approximated as in Eq. (5).

Appendix A.1.1. Step 1: maximize g(μ_i). To maximize g(μ_i), we take its derivatives and observe that the problem is a concave maximization, so Newton's method yields the global optimum μ̂_i = arg max_{μ_i} g(μ_i).

Appendix A.1.2. Step 2: maximization preparation of β at the LOCAL site. Taking the first and second derivatives of the local objective with respect to β, note that μ_i in Eqs. (3), (4), and (5) is replaced by μ̂_i + √(2π) ω x_k, where μ̂_i is the maximizer of g(·) with respect to μ_i. Recalling that L = Σ_{i=1}^m log L_i, another Newton iteration is applied to the global log-likelihood,

$$\beta^{(n+1)} = \beta^{(n)} - \frac{L'(\beta^{(n)})}{L''(\beta^{(n)})}.$$

Appendix A.2. Synthetic data generation. Eight settings of datasets were generated by the following process, summarized in Table 1; Fig. A.6 shows the distribution of each variable under each setting. We set the true sensitivity and specificity to sen = 0.6 and sp = 0.9, and β = (−1.5, 0.1, −0.5, −0.3, 0.4, −0.2, …). We define X_1 = 1_N as the intercept; X_2, X_3, X_4 are generated from Bernoulli distributions with probabilities p = 0.1, 0.3, 0.5, respectively; X_5, X_6, X_7 are generated from normal distributions N(0, 0.5), N(0, 1), N(0, 1.5), respectively; and X_8, X_9, X_10 are generated from uniform distributions U(−0.5, 0.5), U(−0.7, 0.7), U(−1, 1), respectively.
We also generate the random effects μ using a trivariate normal distribution. With these settings, we can deduce the log-odds ratio as f(Xβ + μ_1), where f is the sigmoid function defined as f(x) = 1 / (1 + e^{−x}). We then generate the outcome y for each sample from a Bernoulli distribution with the log-odds ratio serving as the probability p. The sensitivity and specificity are realized with binomial distributions with probabilities sen + μ_2 and sp + μ_3, respectively.

References
[1] Treating medical data as a durable asset
[2] HIPAA Privacy Rule, 45 C.F.R. §
[3] Linking temporal medical records using non-protected health information data
[4] Re-identification risk in HIPAA de-identified datasets: the MVA attack
[5] Re-identification risks in HIPAA Safe Harbor data: a study of data from one environmental health study
[6] Generalized linear models
[7] Applied regression analysis
[8] Privacy-preserving construction of generalized linear mixed model for biomedical computation
[9] On convergence properties of the EM algorithm for Gaussian mixtures
[10] Grid binary logistic regression (GLORE): building shared models without sharing data
[11] dPQL: a lossless distributed algorithm for generalized linear mixed model with application to privacy-preserving hospital profiling, medRxiv
[12] Bias correction in generalized linear mixed models with multiple components of dispersion
[13] Generalized linear models with clustered data: fixed and random effects models
[14] A note on Gauss-Hermite quadrature