title: FedComm: Federated Learning as a Medium for Covert Communication
authors: Hitaj, Dorjan; Pagnotta, Giulio; Hitaj, Briland; Perez-Cruz, Fernando; Mancini, Luigi V.
date: 2022-01-21

Abstract: Proposed as a solution to mitigate the privacy implications related to the adoption of deep learning, Federated Learning (FL) enables large numbers of participants to successfully train deep neural networks without having to reveal the actual private training data. To date, a substantial amount of research has investigated the security and privacy properties of FL, resulting in a plethora of innovative attack and defense strategies. This paper thoroughly investigates the communication capabilities of an FL scheme. In particular, we show that a party involved in the FL learning process can use FL as a covert communication medium to send an arbitrary message. We introduce FedComm, a novel multi-system covert-communication technique that enables robust sharing and transfer of targeted payloads within the FL framework. Our extensive theoretical and empirical evaluations show that FedComm provides a stealthy communication channel, with minimal disruptions to the training process. Our experiments show that FedComm successfully delivers 100% of a payload in the order of kilobits before the FL procedure converges. Our evaluation also shows that FedComm is independent of the application domain and the neural network architecture used by the underlying FL scheme.

The single biggest problem in communication is the illusion that it has taken place.

Deep Learning (DL) is the key factor for an increased interest in research and development in the area of Artificial Intelligence (AI), resulting in a surge of Machine Learning (ML) based applications that are reshaping entire fields and seeding new ones. Variations of Deep Neural Networks (DNN), the algorithms residing at the core of DL, have successfully been implemented in a plethora of domains, including but not limited to image classification [20], [47], [99], natural language processing [6], [28], [92], speech recognition [44], [48], data (image, text, audio) generation [8], [54], [58], [78], cyber-security [21], [25]-[27], [50]-[52], [86], and even aiding with the COVID-19 pandemic [68], [73]. DNNs can ingest large quantities of training data and autonomously extract and learn relevant features while constantly improving at a given task. However, DNN models require significant amounts of information-rich data and demand hardware to support these models' computation needs. These requirements limit the use of DNNs to institutions that can satisfy them, and push entities that do not have the necessary resources to pool their data into third-party resources. Strategies like transfer learning alleviate these drawbacks, but their adoption is not always possible. Also, as highlighted and emphasized by prior research [49], [97], sharing the data in third-party resources is not a viable solution for certain entities, because they would risk potential privacy violations, infringing on current laws designed to protect privacy and ensure data security. To address the issues described above, Shokri and Shmatikov [97] introduce collaborative learning, a DL scheme that allows multiple participants to train a DNN without needing to share their proprietary training data.
In the collaborative learning scheme, participants train replicas of the target DNN model on their local private data and share the model's updated parameters with the other participants via a global parameter server. This process allows the participants to train a DNN without having access to other participants' data, or pooling their data in third-party resources. In the same line of thought, McMahan et al. propose federated learning (FL), a decentralized learning scheme that scales the capabilities of the collaborative learning scheme to thousands of devices [75], and has successfully been incorporated into the Android Gboard [76]. Additionally, FL introduces the concept of secure aggregation, which adds an additional layer of security to the process [12], [75]. Currently, a growing body of work proposes variations of FL schemes [71], [82], [106], novel attacks on existing schemes [5], [49], [77], [108], and approaches for mitigating such adversarial threats [1]. In this paper, our investigation focuses on the extent to which FL schemes can be exploited. To this end, we ask the following question: Is it possible for a subset of participants to use the shared model's parameters as a medium to establish a covert-communication channel to other members of an FL training process?

Figure 1: High-level overview of the FedComm communication scheme.

We demonstrate that the transmission of information "hidden" within the model's parameters is feasible by encoding such information using Code-Division Multiple Access (CDMA). Our proposed covert-communication scheme, FedComm, stealthily transmits information that is hidden from the global parameter server and from the other participants who are not senders or receivers of the communication. Figure 1 shows the high-level operation of the FedComm scheme. First, the sender encodes the message in its model's updated parameters and then sends them to the global parameter server. The receiver obtains the global model from the parameter server and can precisely decode the message that was sent. We demonstrate the transmission of covert content varying from simple text messages, e.g., "HELLO WORLD!", to complex files, such as images. Furthermore, we prove theoretically and empirically that the FedComm scheme causes no disruptions to the FL process and has a negligible effect on the performance of the learned model. We bring to the attention of our readers that the work proposed here differs from the work on backdoors [5], [45], [102], [108], trojanning [67], [103], and watermarking mechanisms [2], [80], [112]. Our proposed approach does not aim at changing the behavior of the DNN model in the presence of triggers. FedComm uses an FL scheme as a medium to covertly transmit additional content to other participants without altering the behavior of the resulting ML model. Our contributions include the following:

This paper is organized as follows: Section 2 provides the necessary background information. Section 3 introduces the threat model. Section 4 describes FedComm, the covert communication channel procedure for FL proposed in this paper. Section 5 and Section 6 provide details about the experimental setup and the evaluation of FedComm. Section 7 discusses relevant information and the implications of our covert communication technique. Section 8 covers the related work, and Section 9 concludes the paper and presents future directions.
Deep Learning relies heavily on the use of neural networks (NNs), which are ML algorithms inspired by the human brain and designed to resemble the interactions amongst neurons [81]. While standard ML algorithms require the presence of handcrafted features to operate, NNs determine relevant features on their own, learning them directly from the input data during the training process [42]. Despite this crucial advantage, it was not until 2012 that the seminal work by Krizhevsky et al. (the resulting NN model is referred to by the research community as AlexNet [61]) won the ImageNet classification challenge using convolutional neural networks (CNNs), a NN variant widely adopted in image-related tasks [20], [47], [99]. Two main requirements underline the success of AlexNet and NNs in general: 1) substantial quantities of rich training data, and 2) powerful computational resources. Large amounts of diverse training data enable NNs to learn features suitable for the task at hand, while simultaneously preventing them from memorizing (i.e., overfitting) the training data. Such features are better learned when NNs have multiple layers, hence the term deep neural networks (DNNs). Research has shown that single-layer, shallow counterparts are not good at learning meaningful features and are often outperformed by other ML algorithms [43]. DNN training translates to vast numbers of computations requiring powerful resources, with graphics processing units (GPUs) being a prime example.

Federated learning (FL) enables multiple parties to jointly contribute to the training of a DNN without the need to share the actual training data [12], [75]. The participants locally train replicas of the target DNN model and only share aggregated model updates with other participants. In doing so, the DNN models get trained without actually "seeing" the other participants' data, thus making FL an attractive alternative for entities that are interested in benefiting from DL but do not possess large quantities of data and powerful resources, or that possess sensitive data that cannot be easily distributed. A typical ML scenario requires a homogeneously partitioned dataset across multiple servers connected through high-throughput and low-latency connections to enable its optimization algorithm to run effectively. In the case of FL, the dataset is distributed unevenly across millions of devices with significantly higher latency, low-speed connections, reduced computing power, and intermittent availability for training. McMahan et al. [75] alleviate these limitations by enabling the training of DNNs using up to 100 times less communication than the typical cloud training procedure. This reduction in communication is possible because a few local training iterations can produce high-quality updates, and these updates can be uploaded more efficiently by compressing them beforehand. To limit the ability of the parameter server to observe individual updates, Bonawitz et al. [12] developed the Secure Aggregation protocol, which uses cryptographic techniques so that the parameter server can decrypt the parameter update only if hundreds of thousands of users have participated. The secure aggregation protocol prohibits the parameter server from inspecting individual user updates before averaging. This averaging protocol is practical and efficient for DNN-sized tasks and takes into account real-world connectivity constraints.

2.2.1. How does FL work? We denote the weight parameters of a DNN by W. FL is typically organized in rounds (time-steps t here).
At time t, a subset of users, n′ ≤ n, out of the n participants that have signed up for collaborating is selected for improving the global model. Each user trains their model and computes the new model:

    W_{t+1}^k = W_t + ∇W_t^k    (1)

where W_t are the DNN weights at time t, and ∇W_t^k is the mini-batch gradient for user k at time t. Participants of the FL learning scheme send the gradient ∇W_t^k to the global parameter server. On receiving the gradients, the server recomputes the model at time t + 1 as follows:

    W_{t+1} = W_t + (α/n) Σ_{k=0}^{n−1} ∇W_t^k    (2)

where n is the number of updates obtained by the parameter server (i.e., the number of participants taking part in the current training round) and α is a weight factor chosen by the global parameter server.
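To make the round structure concrete, the following is a minimal NumPy sketch of equations (1) and (2). The function names and the flat-array representation are ours, not the paper's, and the sign convention follows the reconstruction above (∇W denotes the update produced by local training).

```python
import numpy as np

def local_update(w_global, grad_k):
    # Eq. (1): user k forms its new local model from the shared weights
    return w_global + grad_k

def server_update(w_global, grads, alpha=1.0):
    # Eq. (2): the parameter server averages the n received gradients,
    # scaled by the weight factor alpha, to produce W_{t+1}
    n = len(grads)
    return w_global + (alpha / n) * np.sum(grads, axis=0)

# usage: three users, each contributing a gradient over a 4-weight model
w = np.zeros(4)
grads = [np.random.default_rng(k).normal(0, 0.01, 4) for k in range(3)]
w_next = server_update(w, grads)
```

In digital communications, spread-spectrum techniques [104] are methods by which a signal (e.g., an electrical, electromagnetic, or acoustic signal) with a particular bandwidth is deliberately spread in the frequency domain. These techniques enable the spreading of a narrowband information signal over a wider bandwidth. On receiving the signal, a receiver that knows the spreading mechanism can recover the original bandwidth signal. These techniques were developed in the 1950s for military communications because they resist the enemy's efforts to jam the communication channel and hide the fact that the communication is taking place [107]. Practically, two main techniques are used to spread the bandwidth of a signal: frequency hopping and direct sequence. In frequency hopping, the narrowband signal is transmitted for a few milliseconds or microseconds in a given band that is constantly changed following a pseudorandom sequence of frequency bands that has been agreed upon with the receiver. The receiver, in coordination, tunes its filter to the agreed-on frequency band to recover the message. Direct sequence, the spreading technique we use in FedComm, works by directly coding the data at a higher frequency by using pseudo-randomly generated codes that the receiver knows. In the 1990s, Direct Sequence Spread Spectrum was proposed as a multiple-access technique (i.e., Code Division Multiple Access, or CDMA) in the IS-95 standard for mobile communications in the US, and it was adopted worldwide as the Universal Mobile Telecommunications System (UMTS) standard in the early 2000s. This standard is better known as 3G. In CDMA, if several mobile users want to transmit information to a base station, they all transmit at the same time over the same frequency but with different codes. The base station correlates the signal of each user with its spreading code to detect the transmitted bits. The other users' information contributes to the noise level. CDMA is proven to have a higher capacity than Time Division Multiple Access (TDMA) [107], used in the GSM standard. CDMA has been superseded in LTE/4G and 5G by Orthogonal Frequency Division Multiple Access (OFDMA), because OFDMA is more robust against channel bandwidth limitations. Spreading the spectrum of a signal is fairly straightforward at both the transmitter and the receiver. For example, let us assume that we have a binary sequence 0, 1, 1 that we are transmitting with Phase-Shift Keying (PSK) [91] (i.e., a logical 0 is transmitted as −cos(ωt) for 0 ≤ t < T, a logical 1 as cos(ωt), and ω is the transmitting frequency). If we transmit each bit at a rate of one every millisecond (T = 1 ms), the bandwidth of that signal will be approximately 1 kHz. For simplicity, we assume that ω = 0, but without loss of generality any other value is possible.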
In this case, using PSK, the above-mentioned sequence is translated into −1, 1, 1. If we multiply this sequence with a 5-chip spreading code (e.g., −1, 1, −1, −1, 1), we get the following 15-chip sequence: 1, −1, 1, 1, −1, −1, 1, −1, −1, 1, −1, 1, −1, −1, 1 (the elements of this sequence are referred to as chips in CDMA parlance). The chips in this sequence are transmitted every 0.2 milliseconds, so the time for transmitting a bit stays the same (i.e., 1 ms), and we have increased the bandwidth to 5 kHz. The correlation to recover the original signal is simple. Every 0.2 milliseconds, the received sequence is convolved with the time-reversed spreading code. When the spreading code and the sequence are aligned, in the noiseless case, we get a value of 5 (if a 1 was transmitted) or −5 (if a −1 was transmitted). When they are not aligned, we get a small value of the order of ±1. Also, if we transmit the same total energy, the energy per kHz is divided by 5. If we make the spreading code long enough, the transmitted sequence can be hidden under the noise level, so it is not detectable by the unintended user, but the receiver can recover it once it adds all the contributions from the code. Typically, the spreading codes are in the tens to hundreds of chips, so the signal is only visible when the spreading code is known, and the gain of using CDMA is proportional to the length of the code.
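The worked example above can be reproduced in a few lines. A minimal sketch (variable names are ours):

```python
import numpy as np

bits = np.array([0, 1, 1])
symbols = 2 * bits - 1                  # PSK mapping: 0 -> -1, 1 -> +1
code = np.array([-1, 1, -1, -1, 1])     # the 5-chip spreading code from the text

# spread: each symbol is multiplied by the whole code (15 chips total)
tx = np.concatenate([s * code for s in symbols])

# despread: correlate each aligned 5-chip block with the code, take the sign
rx = np.array([tx[i*5:(i+1)*5] @ code for i in range(len(symbols))])
recovered = ((np.sign(rx) + 1) // 2).astype(int)

print(tx)         # [ 1 -1  1  1 -1 -1  1 -1 -1  1 -1  1 -1 -1  1]
print(rx)         # [-5  5  5]: aligned correlations give +/-5, as in the text
print(recovered)  # [0 1 1]
```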
A covert channel [40] is an indirect communication channel between unauthorized parties that violates a security policy by using shared resources in a way in which these resources were not initially designed to be used, bypassing mechanisms that do not permit direct communication between these unauthorized parties in the process. As such, covert channels emerge as a threat to information-sensitive systems in which leakage to unauthorized parties may be unacceptable (e.g., military systems). Initial research in covert channels focused on single systems (i.e., file-lock, disk-arm, and bus-contention covert channels) [57], [88], [96]. However, shortly after, covert channel techniques encompassing multiple systems, such as a network of devices, were also devised, giving birth to a new kind of threat, that of multi-system covert channels [31]. Multiple techniques have been crafted to set up multi-system covert-communication channels spanning a variety of protocols, including TCP, IP, HTTP, ICMP, AODV, and MAC protocols [3], [7], [38], [39], [46], [64], [65], [70], [83], [95], [100]. The core concept behind these covert-communication approaches is encapsulating the covert channels in legitimate protocols to bypass firewalls and content filters. Multi-system covert channels are an advanced class of security threats to distributed systems. Using this type of covert channel, an adversary can exfiltrate secret information from compromised machines without raising suspicion from firewalls, which typically inspect only the packet payload. Typically, there are two major categories of multi-system covert channels: i) timing-based and ii) storage-based. Timing-based multi-system covert channels [15], [37], [62], [70] can exfiltrate secret data by modulating the inter-packet delays (IPDs) of network traffic, e.g., by using large (small) IPDs to encode ones (zeros). Storage-based multi-system covert channels exploit optional or unused TCP/IP header fields, such as the ToS, Urgent Pointer, and IPID fields [33], essential header fields such as the TCP initial sequence number [24], or even the inherent nondeterminism of network traffic, e.g., embedding data into the receive window size or ACK fields [69]. The above-mentioned "traditional" multi-system covert channels can be hampered or mitigated [9], [15], [24], [37], [89], [110]. Timing-based multi-system covert channels can be detected and mitigated by observing the network traffic distribution, because modulated traffic traces would have different IPD distributions from regular network traffic. Performing statistical analysis on the network traffic and looking for statistical deviations between a given IPD distribution and a known-good distribution of regular network traffic makes it possible to detect an ongoing covert communication channel [89]. After detection, these timing-based channels can be mitigated by buffering or injecting random delays into network traffic in order to disrupt the IPD modulation [9]. Storage-based multi-system covert channels can be detected by inspecting all header fields and looking for header fields that are rarely used or contain suspicious values [24], [110]. However, the giant technological leaps of the last decade have broadened the space of opportunities for adversaries. In this paper, we move away from "traditional" media for multi-system covert channels and shed light on alternatives that make use of new, yet unconventional, platforms for covert communication, such as federated learning schemes.

Our threat model is composed of three actors: Alice, Eve, and Bob. Alice represents the sender, a participant in the FL scheme aiming to secretly communicate information to Eve (the receiver participant), all while sending correctly formatted messages to Bob, which in our case corresponds to the global parameter server. To achieve her goal, Alice needs to exploit a shared resource, setting up a covert communication channel with Eve. Alice succeeds if she can secretly share information with Eve while remaining undetected by Bob. More specifically:

• Alice and Eve are two (or more) participants in an FL scheme interested in covertly communicating with one another.
• Alice's objective consists in conveying (transmitting) a secret message to Eve.
• Alice makes use of the FL platform as a medium for communicating with Eve. This is a useful strategy, particularly in scenarios where "traditional" covert communication channels (Section 2.4) fail or are impossible to deploy due to firewalls or intrusion detection systems.
• Differently from prior work [49], [77], Alice and Eve do not seek to violate the privacy of the other (honest) participants.
• Furthermore, Alice does not control the global parameter server (i.e., Bob), nor does she seek to compromise its operation. The same holds true for Eve.
• Similar to other participants, Alice has a local replica of the DNN model that needs to be trained via FL.
• Alice makes use of FedComm, our proposed technique, to establish a multi-system covert communication channel on top of FL.
• Using FedComm, Alice is able to embed and hide a desired message in her model parameters. The updated parameters are then shared with the other participants via the global parameter server (Bob).
FedComm ensures that the model parameters will not disrupt the ongoing FL training procedure.
• While multiple participants (if not all) receive the updated model, only Eve is able to decode the hidden message. Note that Alice and Eve agree on the encode-decode procedure beforehand.

We bring to the attention of the reader that federated learning relies on secure aggregation [12], where the parameter server is oblivious to the individual updates and does not possess any tracking mechanism. In our threat model, we assume the global parameter server (Bob) is able to inspect and perform statistical analysis on the participants' parameter updates. This enables us to thoroughly evaluate the stealthiness of FedComm's covert communication channel.

This section introduces FedComm, our covert-communication channel technique built on top of the FL scheme. At its core, FedComm employs CDMA to build this covert communication channel. In this view, each weight of the NN is a time instant in which we can encode information, i.e., a chip in CDMA parlance. The channel synchronization is described by an ordering of the weights of the NN that is predefined by the sender and the receiver (in our implementation, the sender and receiver share a seed, which they use to generate the ordering of the weights of the NN necessary for the channel synchronization).

Let us assume that the sender wants to transmit a payload of P bits b = [b_0, ..., b_{P−1}]. The bits are encoded as ±1, and the code for each bit is represented by c_i, a vector of +1 and −1 entries that is of the same length as the vector W of NN weights (for simplicity, we consider the weight parameters of the NN as a vector), namely R. C is an R × P matrix that collects all the codes. We assume that the codes have been randomly generated with equal probabilities for ±1. The sender joins a federated learning scheme where n users have signed up for collaborating. The aggregator (in this paper, we use the terms aggregator and parameter server interchangeably) proposes a set of initial weights W_0, which are distributed to all the users. At every iteration, each user uses their data or a mini-batch of their data to compute the gradient ∇W_t^k for k = 0, ..., n − 1 and t = 0, ..., T − 1. The sender, which we assume without loss of generality to be user 0, encodes the payload on its gradient as follows:

    ∇W̃_t^0 = β ∇W_t^0 + γ C b    (3)

where γ and β are two gain factors that control the power of the gradient update of the sender. One of the ways to detect that a signal is being added using spread-spectrum techniques is by measuring the power of the noise. If that power is larger than expected, the signal can be detected [13]. In this work, we show how to set the values of γ and β to ensure that the message cannot be detected and that the power of the modified gradient is like that of the unmodified gradients of the other users. The aggregator updates the weights of the network using equation (2) and distributes W_t to all of the users. Now, the receiver can recover the payload that was hidden in the first gradient by correlating the weights with the spreading code. For example, for bit i, the receiver can recover the payload as follows (the pseudocode for the sender and the receiver is shown in Appendix B):

    y_i = c_i^T (W_t − W_0)

and b̂_i = sign(y_i). This operation can also be done for any other t too.
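A minimal NumPy sketch of the encoder in equation (3) and the correlation decoder, assuming plain (non-secure) aggregation and a flat weight vector; the paper's actual pseudocode is in Appendix B, and all names here are ours:

```python
import numpy as np

rng = np.random.default_rng(seed=42)   # shared seed: sender and receiver
                                       # regenerate the same spreading codes
R, P = 100_000, 64                     # R weights, P payload bits
C = rng.choice([-1.0, 1.0], size=(R, P))   # spreading codes, one column per bit
b = rng.choice([-1.0, 1.0], size=P)        # payload bits encoded as +/-1

def encode(grad, beta, gamma):
    # Eq. (3): bury the spread payload C @ b inside the sender's gradient
    return beta * grad + gamma * (C @ b)

def decode(w_t, w_0):
    # correlate the accumulated weight change with each spreading code
    y = C.T @ (w_t - w_0)
    return np.sign(y)                  # estimated payload bits b_hat
```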
How can we know that b̂_i will be equal to b_i? Let's do the math. Expanding y_i, and dropping without loss of generality the α/n factor from equation (2), we obtain

    y_i = TγR b_i + Tγ c_i^T C_{¬i} b_{¬i} + c_i^T w = TγR b_i + C_i + W_i    (8)

To derive (8), we have divided the matrix C into the column vector c_i and a matrix C_{¬i} that contains all columns except the i-th column, resulting in an R × (P − 1) matrix. The vector b_{¬i} is the (P − 1)-dimensional vector that is only missing the i-th entry. In (8), we have also defined

    w = Σ_{t'=0}^{t−1} ( β ∇W_{t'}^0 + Σ_{k=1}^{n−1} ∇W_{t'}^k )

The distribution of each component of C_{¬i} b_{¬i} is a symmetric binomial distribution between ±(P − 1), because the entries of both C_{¬i} and b_{¬i} are ±1 with equal probabilities. When we multiply this vector by c_i and add all the components together, we get a binomial distribution with values between ±R(P − 1). Hence, the distribution of C_i for large R can be approximated by a zero-mean Gaussian with variance T²γ²R(P − 1). To compute the distribution of W_i, we assume for now that ∇W_t^k is zero-mean with variance σ² (it can be any distribution, it does not need to be Gaussian). Each component of w adds up T(n − 1 + β) of these values; by the central limit theorem (for large enough T(n − 1 + β)), each one of these variables would be a zero-mean Gaussian with variance Tnσ² (we have simplified (n − 1 + β) to n, as we will fix β ≤ 1 and we assume n is large enough; in scenarios where n is not too large, this is an upper bound on the variance of W_i). When we multiply this vector by c_i and add all the components together, we end up with a zero-mean Gaussian with variance RTnσ², because the components of c_i are ±1 and therefore do not change the distribution of the components of w. The assumption that each ∇W_t^k is zero-mean is not restrictive, as we are adding over all the users and all the time instants, so we can see users pulling and pushing in different directions until the right value has been set, and the mean of the distribution should be negligible after many rounds. Moreover, the constant variance is not a limiting factor, because all users' gradients would be normalized by the aggregator: if a user has a gradient that is significantly larger than the others, it should not be used in the FL procedure, as that user would hijack the whole learning procedure. Finally, given that C, b, and ∇W_t^k are mutually independent, the noise term ε_i = C_i + W_i is zero-mean, Gaussian distributed, and has a variance that is the sum of the variances of C_i and W_i. The distribution of y_i is therefore given by y_i ∼ N(TγR b_i, T²γ²R(P − 1) + RTnσ²), and if we further normalize it by TγR, this leads to:

    y_i / (TγR) ∼ N( b_i, (P − 1)/R + nσ²/(Tγ²R) )

To be able to recover b_i with some certainty, we need the variance of y_i / (TγR) to be less than one. For example, if the variance were one, the probability of making a mistake would be 16%, but this probability reduces, respectively, to 8% and 4% if the variance drops to 1/2 or 1/3. If we use a long enough error-correcting code, we can ensure errorless communication when the variance is about 1, so we will use this value as a reference in our calculations. We describe the Low-Density Parity-Check codes we use in Section 4.1.

To compute the variance of our estimate, we need to set the values of γ and β in Equation (3). If we set β = 0 and γ = σ/√P, our modified gradient would have the same power as our original gradient, but a simple hypothesis test looking for a binomial or a Gaussian distribution would be able to detect that our gradient is not a true gradient, as we show in Section 6.1. We can also set β = 1 and γ = 0.1σ/√P, burying our signal in our true gradient. In this case, the information would be impossible to detect, as we would be 10 dB under the power of the gradient. In Section 6, we also focus on an in-between case, in which β = 1/√2 and γ = σ/√(2P). The power of the gradient and our message would be the same; in this case, the signal might be detected.
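The quoted error probabilities follow directly from the Gaussian tail. A quick numerical check (the helper below is ours, not from the paper):

```python
from math import erf, sqrt

def bit_error_prob(variance):
    # P(N(1, variance) < 0): probability that the sign of the normalized
    # correlator output flips the transmitted bit
    return 0.5 * (1 + erf((0 - 1) / sqrt(2 * variance)))

for v in [1.0, 0.5, 1/3]:
    print(f"variance {v:.2f}: error {bit_error_prob(v):.1%}")
# variance 1.00: error 15.9%
# variance 0.50: error 7.9%
# variance 0.33: error 4.2%
```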
For now, we focus on the analysis with β = 1 and γ = 0.1σ/√P. In this case, the variance of the estimate of b_i becomes:

    (P − 1)/R + 100nP/(TR)

For small T (significantly smaller than 100n), the error in b̂_i is driven by the gradients of the other users. If T were larger than 100n, the noise would be driven by the other bits in the message, which is the standard scenario of CDMA, and eventually the message would be decodable. We should expect that we need at least T > 100nP/(R − P) rounds before the message can be decoded. If the gradients from the other users do not behave as a zero-mean Gaussian with constant variance, we might need more rounds to be able to decode the message (we show this in Section 6). In general, for γ = δσ/√P, the number of iterations that we need before the message can be seen by the receiver is T > nP/(δ²(R − P)), where δ = 0.1 for the stealthy mode and δ = 1 for the non-stealthy mode.

If we need to add the payload faster, we can add the same payload by recruiting more users that transmit the same information with the same code. This information would be added coherently, even if the users are not transmitting the information at the same time, and would lead to an amplification of the message without additional noise. If we have M senders, instead of one, adding the same payload with the same code to their gradients, the distribution of y_i would be equal to N(MTγR b_i, M²T²γ²R(P − 1) + RTnσ²). In this case, the payload will be visible M² times quicker, i.e., T > nP/(M²δ²(R − P)).

In the derivation above, we have assumed that all the users send their gradient in each iteration and that all the gradients are used to update the weights. If only a subset n′ < n of users is included in each iteration, the analysis above holds if we define each iteration as being n/n′ communication rounds. If the aggregator uses a round-robin scheme, the analysis is exactly the same. If the aggregator chooses the participants' gradients at random, it holds in mean and, given that the number of rounds should be large, the deviation would be negligible (we also test this case in Section 6). As a final note, if we do not have access to W_0 when doing the decoding, we would have an additional error source in ε_i, coming from the initialization of the weights, c_i^T W_0. This would become negligible as T and n grow.
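A small calculator for this bound, with illustrative numbers of our choosing (the paper does not fix n, P, or R at this point); as Section 6 shows, real, non-Gaussian gradients need noticeably more rounds than this idealized figure:

```python
def rounds_to_decode(n, P, R, delta, M=1):
    # T > nP / (M^2 * delta^2 * (R - P)): FL rounds before the payload
    # rises above the noise floor (bound derived in Section 4)
    return n * P / (M**2 * delta**2 * (R - P))

# 100 users, a 136-bit payload, a 100k-parameter model, stealthy mode
print(rounds_to_decode(n=100, P=136, R=100_000, delta=0.1))        # ~13.6
print(rounds_to_decode(n=100, P=136, R=100_000, delta=0.1, M=2))   # ~3.4
```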
Channel coding allows detecting and correcting errors in digital communications by adding redundancy to the transmitted sequence. For example, the widely known Hamming (7,4) code adds three redundancy bits to four message bits to be able to correct any received word with one error. In general, Shannon's coding theorem [23] tells us the limit on the number of errors that can be corrected for a given redundancy level, as the number of bits tends to infinity. Low-Density Parity-Check (LDPC) codes [94] are linear codes that allow for linear-time decoding of the received word and quasi-linear encoding, and they approach capacity as the number of bits tends to infinity. Linear codes are defined by a coding matrix G, a k × P matrix that transforms k input bits into a sequence of P bits, i.e.,

    b = (mG) mod 2    (15)

All of the operations are binary operations. In general, linear codes can be described over any Galois field; for simplicity, we only consider binary codes. The bits in b are then transmitted through an additive-noise communication channel:

    r = b + e    (16)

At the receiver, we can check if the received word is valid using the parity-check matrix:

    s = (rH^T) mod 2    (17)

The parity-check matrix H has P − k rows and P columns and spans the null space of G. s is known as the syndrome and describes the error in the received word. The syndrome is independent of the code-word: if s = 0, the received word is a correct code-word. LDPC codes rely on parity-check matrices whose fraction of ones per column vanishes as the number of bits grows. These codes can be proven to approach capacity as the number of bits increases and have an approximate decoding algorithm, i.e., Belief Propagation, that runs in linear time (see [94] for further details). The decoding algorithm can also be applied when the channel in equation (16) is not binary. For example, e can be a Gaussian random variable. The Belief Propagation decoder needs to know the distribution of the errors. If e is Bernoulli distributed, we need to know the probability of taking the value 1 and flipping a bit. If e is Gaussian distributed, as it is in our case, we need to know its variance. For our implementation, we rely on rate-1/2 regular codes (k = P/2) with three ones per row of H. Once the message has been encoded, we append 100 bits to form the transmitted code-word. Those 100 bits will be used to estimate the noise level and decode the received word. These 100 bits are randomly generated and shared between the transmitter and receiver.
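For illustration, a toy systematic (7,4) Hamming-style linear code makes equations (15)-(17) concrete; the paper's actual codes are rate-1/2 regular LDPC codes decoded with Belief Propagation, which this sketch does not implement:

```python
import numpy as np

# G = [I | A] and H = [A^T | I] satisfy G H^T = 0 (mod 2)
G = np.array([[1,0,0,0,1,1,0],
              [0,1,0,0,1,0,1],
              [0,0,1,0,0,1,1],
              [0,0,0,1,1,1,1]])
H = np.array([[1,1,0,1,1,0,0],
              [1,0,1,1,0,1,0],
              [0,1,1,1,0,0,1]])

m = np.array([1, 0, 1, 1])
b = (m @ G) % 2                  # Eq. (15): encode k=4 bits into P=7 bits
print((b @ H.T) % 2)             # Eq. (17): syndrome of a clean word is [0 0 0]

r = b.copy(); r[2] ^= 1          # Eq. (16): channel flips one bit (e at index 2)
print((r @ H.T) % 2)             # nonzero syndrome flags and locates the error
```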
We conduct a thorough and extensive evaluation of our proposed scheme considering: 1) a range of benchmark image [60], [63], text [79], and audio [90] datasets; 2) well-known convolutional neural network (CNN) and recurrent neural network (RNN) architectures [35], [53]; and 3) different payload sizes. This evaluation demonstrates that FedComm is domain- and model-independent and can be generalized to future areas where FL is deployed.

We used the following datasets in our experiments. The MNIST handwritten digits dataset consists of 60,000 training and 10,000 testing grayscale images of dimensions 28x28 pixels, equally divided among 10 classes (0-9) [63]. The CIFAR-10 dataset is another benchmark image dataset consisting of 50,000 training and 10,000 testing samples of 32x32 colour images divided into 10 classes, with roughly 6,000 images per class [60]. The WikiText-2 language modeling dataset, a subset of the larger WikiText dataset, is composed of approximately 2.5 million tokens representing 720 Wikipedia articles, divided into 2,088,628 training tokens, 217,646 validation tokens, and 245,569 testing tokens [79]. The ESC-50 dataset consists of 2,000 labeled environmental recordings equally balanced across 50 classes, with 40 clips per class [90].

We adopted different DNN models depending on the task. For the image classification tasks on MNIST and CIFAR-10, we used two CNN-based architectures: a) a standard CNN model composed of two convolutional layers and two fully connected layers; b) a modified VGG model [99]. To address the text classification tasks on WikiText-2, we used an RNN model composed of two LSTM layers. For the audio classification, we used a CNN model composed of four convolutional layers and one fully connected layer. Summaries of these models are found in Appendix A.

We used three payloads of different sizes for transmission in our covert communication approach for federated learning. The smallest payloads were two text messages of 96 and 136 bits, corresponding to the text phrases "hello world!" and "The answer is 42!". The third payload is a 7904-bit image. For simplicity, we refer to the text messages as the SHORT messages, and to the image as the LONG message.

This section focuses on the evaluation of FedComm. We rigorously assessed the effectiveness of the proposed scheme along three main axes: i) stealthiness, ii) impact on the overall model performance, and iii) message delivery time (the total number of global rounds needed for the receiver to detect the presence of the message). The following sections provide a step-by-step analysis of all of these metrics.

During an FL epoch, where the participants perform a round of training over their respective local datasets, the gradients of the updated weights are aggregated and uploaded to the parameter server, which, in turn, updates the global model. Because FL usually relies on secure aggregation [12], the parameter server is oblivious to the individual updates. While typically the parameter server of an FL scheme does not possess any tracking mechanism, we assume a hypothetical scenario in which the parameter server can actually observe each of the uploaded updates. Additionally, the server is equipped with additional tooling for performing statistical analysis of the provided updates to detect and mitigate eventual anomalies. We position ourselves in such a scenario because, in this setting, we can adequately evaluate the stealthiness of FedComm and demonstrate that the transmitted messages remain undetected.

To demonstrate the stealth of FedComm when using different stealthiness parameters, we analyse the distribution of the gradient updates that result from employing FedComm to transmit a message versus the gradient updates when FedComm is not used. We depict this comparison in Figure 2, in which we compare the distribution of the FedComm gradient updates in two extreme cases, non-stealthy (β = 0, γ = σ/√P) and full-stealthy (β = 1, γ = 0.1σ/√P), with the regular gradient updates. In Figure 2a, the distribution of typical gradient updates after the local iterations (the light color) differs from the distribution of the updates where the message is being transmitted in a non-stealthy manner. When the message is transmitted in non-stealthy mode, the parameter server can find out that something abnormal is happening and might even choose to discard that particular gradient update. Figure 2b displays the distribution of the gradient updates after a typical local update, and the distribution of the same gradient updates with the message transmitted using FedComm in full-stealthy mode. The two distributions are nearly indistinguishable, and in the eyes of the parameter server, nothing abnormal is happening. The impossibility of distinguishing between the distribution of a typical gradient update and that of a gradient update containing a message sent with FedComm in full-stealthy mode aligns with the theoretical results reported in Section 4, which show that using the stealthiness parameters β = 1, γ = 0.1σ/√P allows us to bury the message in the gradients, thus making it undetectable, as it would be 10 dB under the power of the gradient.
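The power argument is easy to check numerically. A sketch with illustrative values for σ, R, and P (ours, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(0)
R, P = 100_000, 136
sigma = 0.01                                  # illustrative gradient std
grad = rng.normal(0, sigma, R)                # a "typical" gradient update
spread = rng.choice([-1.0, 1.0], (R, P)) @ rng.choice([-1.0, 1.0], P)  # C @ b

for name, beta, gamma in [("non-stealthy",  0.0,            sigma / np.sqrt(P)),
                          ("in-between",    1/np.sqrt(2),   sigma / np.sqrt(2*P)),
                          ("full-stealthy", 1.0,            0.1 * sigma / np.sqrt(P))]:
    modified = beta * grad + gamma * spread   # Eq. (3)
    print(f"{name:13s} power ratio: {modified.std() / grad.std():.3f}")
# all three keep the power ratio near 1.0; in full-stealthy mode the message
# sits 10 dB below the gradient power (ratio ~ sqrt(1.01) ~ 1.005), so only
# the shape of the non-stealthy distribution, not its power, is detectable
```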
To provide additional evidence of FedComm's ability to transmit a message covertly, we compared the norms of the gradient updates of all the participants (i.e., senders and regular participants). Similar to the above results, the parameter server tries to detect anything unusual in the parameter updates from a particular participant compared to the parameter updates sent by the rest of the participants by employing a different measure, the norm of the gradient update. In this experiment, we used the Frobenius norm [41]. Figure 3 shows that the norm of the gradient update of the sender (highlighted in dark blue) is similar to the norms of the gradient updates of the other participants.

To measure the impact on the performance of the resulting FL model when using the FedComm covert communication scheme, we ran different experiments on a variety of tasks (MNIST, CIFAR-10, ESC-50, WikiText-2) and a variety of DNN architectures (see Section 5). In this way, we also empirically evaluate the generality of FedComm (i.e., its domain- and DNN-architecture independence). We fixed the number of participants in the FL scheme to 100 and considered the following cases in terms of the percentage of participants randomly selected to update the parameters in each round: 10%, 20%, 50%, and 100%. To show that FedComm does not impact the performance of the FL scheme, we performed baseline runs with the same setup as the one in which we used FedComm to transmit the message, and we display those results in Figure 4. Each plot presents the FL baseline training accuracy against the training accuracy when FedComm is employed. The message is transmitted in each round the sender is selected to participate (i.e., when we use 100%, the sender is always selected and transmits the message in each round). The vertical yellow line in each plot shows the FL round in which the message is correctly received by the receiver. To display the effect on model performance when the sender employs different levels of FedComm stealthiness, Figure 4a and Figure 4b display the model performance when using FedComm in non-stealthy (4a) and full-stealthy (4b) mode to transmit the message. In both cases, we can see that the performance of the learned model when FedComm is used is similar to the performance of the learned model when FedComm is not used.

Another important benefit of the use of spread-spectrum channel coding techniques is that multiple senders can send their respective messages to their respective target receivers. Figure 4c showcases an experiment in which two senders transmit two different text messages (i.e., "hello world!" and "The answer is 42!") to their respective receivers. The performance of the learned model is unaffected in each FL round, and both messages are correctly delivered to their respective receivers. As previously mentioned in Section 4, we tested the case in which a subset of the total participants is selected at random to update the global model in each FL round. We assessed the impact of this approach on FedComm's ability to transmit the message in Figures 4d, 4e, and 4f. These figures show that the baseline and FedComm's training accuracy are still closely comparable even when a limited number of participants is selected for averaging (i.e., 10%, 20%, and 50%). Figures 4g, 4h, and 4i evaluate FedComm's performance when it is applied to different tasks and different DNN architectures. Figure 4g shows that FedComm can transmit the long message in under 400 global rounds while simultaneously training the VGG [99] network on the CIFAR-10 image-recognition task. Figure 4h displays the baseline vs. FedComm performance on the ESC-50 [90] audio classification task.
The performance of the learned model in each round is not affected by the ongoing covert communication powered by FedComm. Figure 4i displays the baseline vs. FedComm performance on a language-modeling task, WikiText-2, using an LSTM-based recurrent NN. Unlike the other plots, the performance assessment on this task compares the perplexity of the learned models at each FL round. Perplexity measures how well a probability model predicts a sample; in our case, the language model is trying to learn a probability distribution over the WikiText-2 [79] dataset. Even in this case, the performance of the FL scheme is not impacted at all by having FedComm transmit a message alongside the learning process. Figures 4j, 4k, and 4l show FedComm's results when using the same settings (task, message, and stealthiness level) while changing the number of simultaneous senders of the same message, and demonstrate how this scenario impacts the message delivery time. We elaborate on this in the next section, where we discuss the message delivery time of FedComm.

Having shown that FedComm can covertly deliver a message without impacting the ability of the FL scheme to learn a high-performing ML model, we focused on measuring the time (in terms of FL rounds) it takes a message to be delivered to its intended receiver. From the various FL configuration experiments demonstrated in Figure 4, message delivery time varies according to the number of senders in the network and their stealthiness levels. Typically, an FL scheme either runs in a continuous learning fashion (i.e., the learning never stops) or stops when the gradient updates can no longer improve the model. To display the potential of FedComm, we assume the latter case, which is also our worst-case scenario, because it requires FedComm to transmit and deliver the message to the receiver before the FL procedure converges (i.e., the training stops). In Figure 4, the vertical line indicates the global round in which the receiver correctly received the message. On every FL run, the model performance continues improving, even after the FL round in which the message is received, so the FL execution does not stop before the message is received. The message delivery time drops significantly as the number M of senders who send the message concurrently increases. In Section 4, we showed that the number of iterations drops with M². We highlight this observation in the experiments displayed in Figures 4j, 4k, and 4l, which show the message delivery time when using 1, 2, and 4 senders with the stealthiness parameters β = 1/√2, γ = σ/√(2P), where in each round 100% of participants are selected by the parameter server for averaging. The vertical line in Figure 4j shows that fewer than 400 global rounds are needed for the receiver to be able to correctly decode the message when one sender is used. According to the calculations, two senders would require roughly 400/2² = 100 FL rounds and four senders roughly 400/4² = 25 FL rounds for the message to be correctly decoded. Figure 4k and Figure 4l show that with 2 and 4 concurrent senders, the receiver can decode the message in under 120 and 30 rounds, respectively. The small mismatch between the theory (the number of iterations is reduced by M²) and practice (a slightly slower rate of decrease, especially from 1 to 2 senders in Figures 4j and 4k) can be due to several factors.
First, the variance of the gradients reduces as the number of iterations increases, so there is more noise present in the first few iterations. Second, the gradients of the different senders are not identical, so they might not all be adding the same amount of information in each iteration. Third, there is natural stochasticity in FL training, as each user has a different gradient in each iteration that depends on all the other gradients in previous iterations. We can also see in Figures 4a and 4b that ten stealthy users take a time that is of the same order of magnitude as one non-stealthy user, as δ² and M² would cancel each other out in the predicted number of iterations, which is also predicted by the theory in Section 4. Finally, to highlight how parts of the message become visible in the global model after each FL round, Figure 5 displays the decrease in the error rate of the message. This error rate is calculated as the portion of the message that is received by the receiver after each FL round. Note that the use of LDPC codes causes the error rate to drop rapidly to zero in the last few iterations of FedComm.

In Section 4, for tractability of the theoretical analysis, we assumed that the gradients from all the users in every iteration are equally distributed, as a zero-mean Gaussian with a fixed variance σ². The gradients are not Gaussian distributed and do not have the same variance for every user at each iteration. Hence, the resulting noise on our covert communication channel is not Gaussian distributed, and this has an impact on the number of iterations needed to decode the message. In Figure 4a, we can see that we need 120 iterations before the message can be detected. If we applied the theory developed in Section 4, we should expect to decode the message after just two iterations. We have done two experiments to understand where this deviation comes from. First, we recorded the power of all the gradients in each iteration and for each user, and repeated the FL learning procedure, substituting the gradients with Gaussian noise of the recorded power (obviously, in this experiment there is no learning happening; we just want to see when the message is detected). In this case, we were able to detect the message within 30 iterations, so the aggregated noise is not Gaussian in this case either. If we use the same Gaussian distribution for all the users and all the iterations, the resulting noise is Gaussian distributed, and we recover the message in the two iterations predicted by the theory. Both the LDPC decoder and the CDMA detector rely on the resulting noise being Gaussian for optimal performance. The degradation observed in our experiments could be mitigated if we had the exact distribution of the noise that our CDMA communication is suffering. But this distribution would depend on the architecture of the DNN, the data that each user has, and each iteration of the FL procedure; it would be impossible to theoretically predict the number of needed iterations. In this work, we have focused on showing that the message can be detected and have left the design of the optimal detector as future work.

In this section, we analyze possible approaches that can be employed as countermeasures to FedComm, and discuss the extent to which these countermeasures can impact the performance of the covert communication channel. Differential privacy [29], [30] uses random noise to ensure that publicly visible information does not change much if one individual record in the dataset changes.
As no individual sample can significantly affect the visible information, attackers cannot confidently infer private information corresponding to any individual sample. When employed in an FL setting, the noise level introduced by DP in the learning scheme has to be lower than the magnitude of the gradients of the participants to avoid impeding the learning process. Since we are using FedComm to transmit P bits in an n-participant FL scenario, decoding one of these bits requires FedComm to account for the noise on the channel that comes from two sources: the noise from the gradients of the other users, and the noise from the codes of the other P − 1 bits of the message (see Section 4). When DP is employed in the learning scheme, FedComm has to account for an additional noise source in the channel. Because the noise coming from DP must be lower in magnitude than the gradients to avoid preventing the learning process, it can affect the message transmission only by slightly slowing down the delivery time. This potential delay will not significantly impact the transmission time, because the noise from DP is very low. Moreover, DP also slows down the FL learning process, thus providing FedComm with more time to deliver the message before the model converges.

Parameter pruning is a technique that is commonly used to reduce the size of a neural network while attempting to retain a performance similar to that of the non-pruned counterpart. Parameter pruning consists in removing unused or least-used neurons of a neural network. Detecting and pruning these neurons requires a dataset that represents the whole population on which the NN was trained. The least-activated neurons are identified and pruned by iteratively querying the model. Pruning is not applicable in federated learning, because neither the parameter server nor the participants possess a dataset representing the whole population. Going against the FL paradigm, we assume that the parameter server has such a dataset. Suppose the parameter server performs the pruning during the learning phase. In that case, the sender will find out when they download the next model update and can retransmit the message using the parameters of this new architecture as target weights. If a sender wants to increase the chances that a message will not be disrupted by pruning, they can analyze the model updates to discover the most-used parameters, which are less likely to be pruned, and use that subset of parameters to transmit the message.

Gradient clipping is a technique to mitigate the exploding gradient problem in DNNs [113]. Typically, gradient clipping introduces a pre-determined threshold and then scales down the gradient norms that exceed the threshold to match it, introducing a bias in the resulting values of the gradient, which helps stabilize the training process. In a federated learning scenario, the aggregator could employ gradient clipping on the participants' gradients. Under this setting, we performed gradient clipping at the aggregator using norm thresholds of [0.5, 0.6, ..., 1.0]. In each global round, the aggregator scales each of the gradients received from the participants to match the pre-set norm. From our experimental evaluation, we observe that this method incurs no penalty on FedComm's ability to transmit the message.
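A sketch of the clipping step such an aggregator might apply (our rendering, not the paper's code). Because the rescaling is uniform across the whole update, the embedded spread-spectrum signal keeps its direction and only loses gain, which is consistent with the observation above that clipping does not prevent the transmission:

```python
import numpy as np

def clip_gradient(grad, max_norm):
    # scale the whole update down so its Frobenius norm matches the threshold;
    # every component (including the gamma * C @ b payload term) is rescaled
    # by the same factor, so the correlation decoder still sees the message
    norm = np.linalg.norm(grad)
    if norm > max_norm:
        grad = grad * (max_norm / norm)
    return grad
```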
Recent years have seen an increasing and constantly evolving pool of attacks against deep-learning models, and FL has also been shown to be susceptible to adversarial attacks [55], [56], [72]. For instance, while FL is designed with privacy in mind [75], [97], attacks such as property inference [4], [77], [114], model inversion [34], and other generative-adversarial-network-based reconstruction attacks [49] have shown that the privacy of the users participating in the FL protocol can be compromised too. One of the first property-inference attacks is [4] from Ateniese et al., which shows how an adversary with white-box access to an ML model can extract valuable information about the training data. Fredrikson et al. [34] extended the work in [4] by proposing a model-inversion attack that exploits the confidence values revealed by ML models. Along this line of work, Song et al. [101] demonstrated that it is possible to design algorithms that can embed information about the training data into the model (i.e., backdooring), and how it is possible to extract the embedded information from the model given only black-box access. Similarly, Carlini et al. [16] showed that deep generative sequence models can unintentionally memorize training inputs, which can then be extracted in a black-box setting. Ganju et al. [36] extended the work by Ateniese et al. [4] by crafting a property-inference attack against fully connected neural networks, exploiting the fact that fully connected neural networks are invariant under permutation of nodes in each layer. On the other hand, Zhang et al. [114] extended the above-mentioned property-inference attacks [4], [36] to the domain of multi-party learning by devising an attack that can extract the distribution of other parties' sensitive attributes in a black-box setting using a small number of inference queries. Melis et al. [77] crafted various membership-inference attacks against the FL protocol, under the assumption that the participants upload their weights to the parameter server after each mini-batch instead of after a local training epoch. Nasr et al. [85] presented a framework to measure the privacy leakage through the parameters of fully trained models, as well as the parameter updates of models during training, both in the standalone and FL settings. Hitaj et al. [49] demonstrated that a malicious participant in a collaborative deep-learning scenario can use generative adversarial networks (GANs) to reconstruct class representatives. On the other hand, Bhagoji et al. [10] presented a model-poisoning attack that can poison the global model while ensuring convergence, assuming that the adversary controls a small number of participants of the learning scheme. FedComm is not an attack on the federated learning protocol; we aim to transmit a hidden message within the model's updated parameters without impeding the learning process (Section 6.2).

8.1.1. Backdooring Federated Learning. Backdoors are a class of attacks [19], [45] against ML algorithms where the adversary manipulates model parameters or training data in order to change the classification label given by the model to specific inputs. Bagdasaryan et al. [5] were the first to show that FL is vulnerable to this class of attacks. Simultaneously, Wang et al. [108] presented a theoretical setup for backdoor injection in FL, demonstrating that a model that is vulnerable to adversarial attacks is, under mild conditions, also vulnerable to backdooring.
Because this class of attacks is particularly disruptive, many mitigation techniques [17], [19], [66], [105] have been proposed over the years. Burkhalter et al. [14] presented a systematic study to assess the robustness of FL, extending FL's secure aggregation technique proposed in [12]. Burkhalter et al. [14] integrated a variety of properties and constraints on model updates using zero-knowledge proofs, which is shown to improve FL's resilience against malicious participants who attempt to backdoor the learned model. With a similar end goal to ours (i.e., covertly transmitting a message in the FL setting), Costa et al. [22] aim at exploiting backdooring of DNNs in FL to encode information that can be retrieved by observing the predictions given by the global model at a particular round. To do so, they define a communication frame f as the number of FL epochs 1, ..., r necessary to transmit a bit. In order to transmit the bit b during the frame f, the sender applies model-poisoning techniques to force the model to switch the classification label of a certain input x. The receiver then monitors the classification label of x at the beginning of the frame, f_1, and at the end, f_r. If the transmitted bit is b = 0, the label assigned to x is unchanged; when b = 1, the label assigned to x is switched. We highlight four major advantages of FedComm when compared to [22]:

1) The covert channel of [22] relies on backdooring FL [5], requiring specific tailoring to each domain. FedComm does not rely on backdooring FL, and more importantly, FedComm is domain-independent. Our proposed strategy requires no additional modifications when deployed on different tasks/model architectures.
2) Bagdasaryan et al. [5] emphasize that the model needs to be close to convergence to achieve successful backdoor injection. FedComm is not bound by such restrictions.
3) In [22], transmitting a payload of n bits requires backdooring the global model n times. Given that the authors of [22] do not report details about the dimension of the frame f, let us assume a hypothetical best-case scenario where [22] can add a backdoor per round. In 370 rounds, [22] can only send 370 bits; in the same number of rounds, FedComm can covertly transmit 7904 bits (see Figure 4g).
4) Work on backdoor detection in FL [17], [66], [105] can detect the covert channel introduced by [22]. FedComm is not a backdooring attack and does not attempt to alter in any way the behaviour of the learned model. FedComm employs spread-spectrum techniques to encode extra information within the model's updated parameters without impairing the FL learning process.

In FedComm's full-stealth mode (Section 4), the gradient updates of the sender do not differ from the updates of the other participants of the FL scheme. As such, backdooring defenses cannot prevent FedComm covert communication.

In the past years, several proposed attacks on privacy and integrity have demonstrated that distributed deep learning presents new challenges that need to be solved to guarantee a satisfactory level of privacy and security. Shokri et al. [97] were among the first to introduce the concept of distributed deep learning with the privacy of training data as one of its main objectives. They attempted to achieve a level of privacy by modifying the participants' behavior, requiring them to upload only a small subset of their trained parameters.
On the other hand, to defend against membership-inference attacks, techniques such as model regularization are promising [34], [98]. Differential Privacy (DP), another fundamental defense against privacy attacks, was introduced by Dwork [29], [30] to guarantee privacy up to a parameter ε: a randomized mechanism M is ε-differentially private if, for any two datasets D and D' differing in a single record and any set of outputs S, Pr[M(D) ∈ S] ≤ e^ε · Pr[M(D') ∈ S]. DP uses random noise to ensure that the publicly visible information does not change much if one individual record in the dataset changes. As no individual sample can significantly affect the output, attackers cannot confidently infer the private information corresponding to any individual sample. Nasr et al. [84] built on the concept of DP by modeling the learning problem as a min-max privacy game and training the model in an adversarial setting, improving membership privacy with a negligible loss in model performance. Other defense strategies have been proposed in [59], [115]. In an attempt to prevent model-poisoning attacks, several robust distributed aggregators have been proposed [11], [18], [87], [93], [111], assuming direct access to training data or participants' updates. However, Fang et al. [32] recently demonstrated that these types of resilient distributed aggregators do little to defend against poisoning in an FL setting. FedComm, in the full-stealth mode (i.e., β = 1, γ = 0.1σ/√P) (Section 4), does not introduce any artifact during the learning process, because the senders do not behave maliciously, for example by supplying inconsistent inputs in an attempt to poison the model or by providing updates that differ from those of other participants (Section 6.1). For these reasons, the aforementioned approaches are not directly applicable to prevent FedComm communication from happening.

In this work we introduced FedComm, a covert communication technique that uses the FL scheme as a communication channel. We employ the Code-Division Multiple Access (CDMA) spread-spectrum technique to transmit a desired message reliably and covertly during the ongoing FL procedure. We hide the message in the gradient of the weight parameters of the neural network architecture being trained and then upload those parameters to the parameter server, where they are averaged. FedComm does not introduce any particular artifact during the learning process, such as supplying inconsistent inputs, attempting to poison the model, or providing updates that differ significantly from those of other participants. We empirically show that our covert communication technique does not hamper the FL scheme by observing the accuracy and the loss of the global model at every round while the communication occurs: the performance of the global model behaves almost identically to when the covert communication is not taking place. We also show that a message transmitted in full-stealth mode cannot be detected by the global parameter server, even if the server could observe individual gradient updates. We believe that FedComm paves the way for new attacks that can further compromise the security, privacy, and utility of FL training procedures. Existing defense strategies do not hinder the effectiveness of FedComm; if anything, they make the communication stealthier. To this end, we stress that it is imperative to investigate the correlation between the payload (message) size and the model capacity; understanding this relationship would allow the countermeasures to be tuned accordingly. A thorough investigation of such relationships and of the applicability of FedComm to FL-scheme variants such as [71], [106], [109] is left as future work.
In this section we present a high-level overview of the message embedding and decoding procedures at the sender and the receiver. In FedComm, the sender behaves as shown in Alg. 1. Initially, it performs regular training on its local training set, like any other legitimate participant in the FL scheme. Afterwards, it embeds the secret payload in its gradient update following Alg. 2. Alg. 2 takes as input the gradient update W obtained after the local training, the payload to be transmitted, and the desired stealthiness level. First, the payload content is encoded using the LDPC encoder (line 2). Then, a preamble, which the receiver will use to estimate the noise of the channel, is prepended to the encoded bit sequence (lines 4-5). Next, for each bit of this composite sequence (preamble + LDPC-encoded payload), a spreading code whose length equals the number of ML-model parameters is generated. Each spreading code is multiplied by its bit value (translated to ±1) and by the gain γ, where γ is computed according to the desired level of stealthiness. The resulting vectors are added to the vector of the selected parameters of W (lines 7-20). In FedComm, the receiver behaves as shown in Alg. 3. Initially, it receives the global model from the global parameter server, like any other legitimate participant in the FL scheme. Afterwards, it extracts the payload from the obtained global model following Alg. 4. Alg. 4 takes as input the model W obtained from the global parameter server. First, the payload content is retrieved by correlating the model parameters with the spreading codes generated from the same seed used by the sender (lines 6-10). Then, the first 100 chips are used to estimate the gain and the noise standard deviation of the transmitted signal (lines 11-13). With these values, the LDPC decoder can recover the payload (line 14).
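To make the spreading and despreading steps concrete, the following minimal NumPy sketch implements them under simplifying assumptions of ours: the LDPC encoder/decoder is omitted (hard bit decisions stand in for it), the preamble is 100 known +1 chips as described above, γ is passed in as a constant, and the signal is spread over all parameters rather than a selected subset. All function names are ours, not the paper's.

```python
import numpy as np

PREAMBLE_LEN = 100  # known chips the receiver uses to estimate gain and noise

def embed_payload(grad, payload_bits, gamma, seed):
    """Spread each bit over the flattened gradient update (sender side).

    grad: 1-D numpy array (flattened gradient update)
    payload_bits: iterable of 0/1 (standing in for the LDPC-encoded payload)
    gamma: per-chip gain controlling stealthiness
    seed: shared secret seeding the spreading codes
    """
    rng = np.random.default_rng(seed)
    out = grad.astype(np.float64).copy()
    # Preamble of known +1 chips, followed by the payload bits mapped to +/-1.
    chips = [1.0] * PREAMBLE_LEN + [2.0 * b - 1.0 for b in payload_bits]
    for chip in chips:
        code = rng.choice((-1.0, 1.0), size=out.size)  # one spreading code per bit
        out += gamma * chip * code
    return out

def extract_payload(global_params, n_bits, seed):
    """Despread: correlate the global model with each spreading code (receiver side).

    Returns the estimated gain (from the known preamble) and hard bit decisions;
    the paper instead feeds the soft correlator outputs to an LDPC decoder.
    """
    rng = np.random.default_rng(seed)          # same seed -> same code sequence
    w = global_params.astype(np.float64).ravel()
    corr = []
    for _ in range(PREAMBLE_LEN + n_bits):
        code = rng.choice((-1.0, 1.0), size=w.size)
        corr.append(code @ w / w.size)         # matched-filter output per chip
    corr = np.asarray(corr)
    gain = corr[:PREAMBLE_LEN].mean()          # preamble chips were all +1
    soft_bits = corr[PREAMBLE_LEN:] / gain     # normalized soft estimates
    return gain, (soft_bits > 0).astype(int)   # hard decision stand-in for LDPC
```

Because federated averaging over P participants attenuates the embedded signal by roughly a factor of 1/P, a small gain (the full-stealth setting γ = 0.1σ/√P) yields noisy correlator outputs, which is precisely why FedComm relies on LDPC coding to correct the resulting bit errors.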
References
Beyond federation: Collaborating in ML with confidentiality and privacy
Turning your weakness into a strength: Watermarking deep neural networks by backdooring
Practical data hiding in TCP/IP
Hacking smart machines with smarter ones: How to extract meaningful data from machine learning classifiers
How to backdoor federated learning
Ask the GRU: Multi-task learning for deep text recommendations
New covert channels in HTTP: Adding unwitting web browsers to anonymity sets
Invertible residual networks
Random delays to limit timing covert channel
Analyzing federated learning through an adversarial lens
Machine learning with adversaries: Byzantine tolerant gradient descent
Practical secure aggregation for privacy-preserving machine learning
Detection of direct sequence spread spectrum transmissions without prior knowledge
RoFL: Attestable robustness for secure federated learning
IP covert timing channels: Design and detection
The secret sharer: Evaluating and testing unintended memorization in neural networks
Detecting backdoor attacks on deep neural networks by activation clustering
DRACO: Byzantine-resilient distributed training via redundant gradients
Targeted backdoor attacks on deep learning systems using data poisoning
Xception: Deep learning with depthwise separable convolutions
ShieldFS: A self-healing, ransomware-aware filesystem
Covert channel attack to federated learning systems
Elements of Information Theory
Active warden for TCP sequence number base covert channel
EnCoD: Distinguishing compressed and encrypted file fragments
The naked sun: Malicious cooperation between benign-looking processes
Evading behavioral classifiers: A comprehensive analysis on evading ransomware detection techniques
BERT: Pre-training of deep bidirectional transformers for language understanding
Differential privacy
Our data, ourselves: Privacy via distributed noise generation
Characterizing network covert storage channels
Local model poisoning attacks to Byzantine-robust federated learning
Eliminating steganography in internet traffic with active wardens
Model inversion attacks that exploit confidence information and basic countermeasures
Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position
Property inference attacks on fully connected neural networks using permutation invariant representations
Model-based covert timing channels: Automated modeling and evasion
Covert messaging through TCP timestamps
Covert channels in LANs
A Guide to Understanding Covert Channel Analysis of Trusted Systems
Deep learning
Deep Learning
Speech recognition with deep recurrent neural networks
BadNets: Evaluating backdooring attacks on deep neural networks
Hiding data in the OSI network model
Deep residual learning for image recognition
Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups
Deep models under the GAN: Information leakage from collaborative deep learning
PassGAN: A deep learning approach for password guessing
Capture the bot: Using adversarial examples to improve CAPTCHA robustness to bot attacks
Evasion attacks against watermarking techniques found in MLaaS systems
Long Short-Term Memory
ICLR 2018 - International Conference on Learning Representations
A taxonomy of attacks on federated learning
Advances and open problems in federated learning
Storage channels in disk arm optimization
A style-based generator architecture for generative adversarial networks
SWIFT: Super-fast and robust privacy-preserving machine learning
Learning multiple layers of features from tiny images
ImageNet classification with deep convolutional neural networks
A note on the confinement problem
MNIST handwritten digit database
PHY covert channels: Can you see the idles?
A network layer covert channel in ad-hoc wireless networks
Fine-pruning: Defending against backdooring attacks on deep neural networks
Trojaning attack on neural networks
Open Data Science to Fight COVID-19: Winning the 500k XPRIZE Pandemic Response Challenge
CLACK: A network covert channel based on partial acknowledgment encoding
TCP covert timing channels: Design and detection
CMFL: Mitigating communication overhead for federated learning
Threats to federated learning
How Valencia crushed Covid with AI
A logical calculus of the ideas immanent in nervous activity
Communication-efficient learning of deep networks from decentralized data
Federated learning: Collaborative machine learning without centralized training data
Exploiting unintended feature leakage in collaborative learning
Generating high fidelity images with subscale pixel networks and multidimensional upscaling
Pointer sentinel mixture models
Adversarial frontier stitching for remote neural network watermarking
Machine Learning
PPFL: Privacy-preserving federated learning with trusted execution environments
Embedding covert channels into TCP/IP
Machine learning with membership privacy using adversarial regularization
Comprehensive privacy analysis of deep learning: Passive and active white-box inference attacks against centralized and federated learning
PassFlow: Guessing passwords with generative flows
Justinian's GAAvernor: Robust distributed learning with gradient aggregation agent
A guide to understanding covert channel analysis of trusted systems
On the secrecy of timing-based active watermarking trace-back techniques
ESC: Dataset for Environmental Sound Classification
Digital Communications
Language models are unsupervised multitask learners
DETOX: A redundancy-based framework for faster and more robust gradient aggregation
Modern Coding Theory
Covert channels in the TCP/IP protocol suite
Program confinement in KVM/370
Privacy-preserving deep learning
Membership inference attacks against machine learning models
Very deep convolutional networks for large-scale image recognition
Covert shells
Machine learning models that remember too much
Can you really backdoor federated learning?
An embarrassingly simple approach for trojan attack in deep neural networks
Principles of Spread-Spectrum Communication Systems
Spectral signatures in backdoor attacks
Federated learning with Bayesian differential privacy
Multiuser Detection
Attack of the tails: Yes, you really can backdoor federated learning
Federated learning with matched averaging
NetWarden: Mitigating network covert channels while preserving performance
Byzantine-robust distributed learning: Towards optimal statistical rates
Protecting intellectual property of deep neural networks with watermarking
Why gradient clipping accelerates training: A theoretical justification for adaptivity
Leakage of dataset properties in multi-party machine learning
Helen: Maliciously secure coopetitive learning for linear models

Here we display the model summaries of the DNN architectures employed in our FedComm evaluation experiments. The VGG architecture that we modified to suit the CIFAR10 dataset can be found in [99].

Algorithm 2: Embed Payload
Input: Model W, Str payload, Str stealthiness_level
Output: Model W
Data: Int ldpc_seed, Int seed
1: ldpc ← init_ldpc(ldpc_seed)
2: c ← ldpc.encode(payload)
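The paper does not specify which LDPC implementation backs init_ldpc in Algorithm 2. As one possible concrete instantiation (an assumption on our part, not the authors' choice), the open-source pyldpc package provides an encoder/decoder pair that could play this role:

```python
# Hedged sketch: pyldpc as a possible stand-in for the paper's unspecified
# LDPC encoder/decoder; code-length and degree parameters are illustrative.
import numpy as np
from pyldpc import make_ldpc, encode, decode, get_message

n, d_v, d_c = 120, 3, 6                         # code length, variable/check degrees
H, G = make_ldpc(n, d_v, d_c, systematic=True, sparse=True)
k = G.shape[1]                                  # message length supported by this code

message = np.random.randint(2, size=k)          # payload bits to protect
snr = 8                                         # design SNR for the soft decoder
codeword = encode(G, message, snr)              # LDPC-encode (pyldpc adds channel noise)
decoded = decode(H, codeword, snr, maxiter=100) # belief-propagation decoding
recovered = get_message(G, decoded)             # strip the redundancy
assert (recovered == message).all()             # holds at this SNR; may fail at low SNR
```

In FedComm the "channel noise" is not simulated: it comes from federated averaging and the other participants' updates, and the receiver's preamble-based gain and noise estimates (Alg. 4) supply the decoder's channel parameters.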