key: cord-1012067-drnhn9f4 authors: Prakash, N.B.; Murugappan, M.; Hemalakshmi, G.R.; Jayalakshmi, M.; Mahmud, Mufti title: Deep transfer learning for COVID-19 detection and infection localization with superpixel based segmentation date: 2021-08-16 journal: Sustain Cities Soc DOI: 10.1016/j.scs.2021.103252 sha: 93288879fe145aa3ae65938c33257e2032c38276 doc_id: 1012067 cord_uid: drnhn9f4

The evolution of the novel coronavirus disease (COVID-19) into a pandemic has inflicted several thousand deaths per day, endangering the lives of millions of people across the globe. In addition to thermal scanning mechanisms, chest imaging examinations provide valuable insights into the detection of this virus and the diagnosis and prognosis of the infections. Though Chest CT and Chest X-ray imaging are both common in the clinical protocols of COVID-19 management, the latter is highly preferred, attributed to its simple image acquisition procedure and the mobility of the imaging mechanism. However, Chest X-ray images are found to be less sensitive than Chest CT images in detecting infections in the early stages. In this paper, we propose a deep learning based framework to enhance the diagnostic value of these images for improved clinical outcomes. It is realized as a variant of the conventional SqueezeNet classifier with segmentation capabilities, which is trained with deep features extracted from the Chest X-ray images of a standard dataset for binary and multi-class classification. The binary classifier achieves an accuracy of 99.53% in the discrimination of COVID-19 and non-COVID-19 images. Similarly, the multi-class classifier performs classification of COVID-19, viral pneumonia and normal cases with an accuracy of 99.79%. This model, called the COVID-19 Super pixel SqueezeNet (COVID-SSNet), performs superpixel segmentation of the activation maps to extract the regions of interest which carry perceptual image features and constructs an overlay of the Chest X-ray images with these regions. The proposed classifier model adds significant value to the Chest X-rays for an integral examination of the image features and the image regions influencing the classifier decisions, to expedite the COVID-19 treatment regimen.

The novel coronavirus disease (COVID-19) has emerged as a pandemic threat and public health concern across the world. Existing institutional arrangements and prevailing healthcare priorities in COVID-19 management are focused on a person-centered, cure-centric system rather than being socially sustainable. Health systems need to be revamped to be socially and economically sustainable and to address the systemic drivers that currently limit the accessibility, equity and affordability of care. The COVID-19 pandemic has re-emphasized the urgent need to redesign health systems to prioritize the broader social determinants of health. This redesign will need to be approached in multiple ways to ensure a long-term healthcare model that acknowledges the current constraints on the existing systems. Recent research on COVID-19 management advocates the need for building a sustainable and healthy environment employing artificial intelligence and touch-less approaches [31]. In line with this, an extensive survey on deep learning approaches for COVID-19 detection and containment in smart cities is presented in [14, 32]. The authors review several deep learning paradigms for medical image analysis in COVID-19 outbreak prediction, infection tracking, diagnosis, treatment, and drug research.
This paper provides deep insights into various deep learning approaches for combatting COVID-19 and advocates the need to design COVID-19 detection systems with optimum accuracy. The mortalities due to the COVID-19 pandemic can be considerably reduced by early detection of the infections, isolation of the subjects and administration of anti-viral drugs. At present, the Reverse Transcription Polymerase Chain Reaction (RT-PCR) test [52] on clinical specimens is the most widely employed screening protocol, but it is time consuming, highly sensitive to infinitesimal DNA contamination and less accurate. Along with examinations of symptoms and pathogenic testing, imaging examinations are found indispensable in the screening, detection, diagnosis and prognosis of COVID-19 [10, 13, 19, 29, 32, 46]. In a study on imaging modalities in the diagnosis of COVID-19, Yang et al. [55] have shown that Computed Tomography (CT) images are very effective in capturing Ground Glass Opacity (GGO), consolidations and patchy areas in the peripherals of the lungs, in both the early and advanced stages of infection. This investigation also reveals that Chest X-Ray (CXR) images are less sensitive to these characteristics in the early stages, whereas progressive opacities and consolidations are captured well in the advanced stages. However, CXR is recommended as an initial screening tool due to the difficulties encountered in establishing CT scanning facilities in low resource settings and in shifting patients to the CT scanning suites. With the infiltration of COVID-19, the time consumed in CT scanning and the susceptibility to infections at the CT scanning sites, there is a growing need for the detection of valuable diagnostic features from CXR images. Significant research has been conducted in this context, producing deep learning models with high sensitivity towards COVID-19 features in CXR images.

In addition to COVID-19 detection, segmentation of infections can provide great insights to accelerate clinical decisions. Generally, UNet [40], an improved version of the Convolutional Neural Network (CNN) architecture, is widely employed in the semantic segmentation of biomedical images. It follows a symmetric encoder-decoder structure with several upsampling and downsampling layers, with skip connections between every level to propagate gradients from the low resolution layers to the higher resolution ones. Several variants of UNet have been introduced in recent years to harness the potential of the symmetric UNet architecture. The Residual Dilated Attention Gate-UNet (RDA-UNet) [57] is built with residual units and attention gates in each layer. In addition, dilated kernels are adopted to improve the network performance by expanding the receptive field. This model, employed in lesion detection from breast ultrasound images, was enhanced with a Generative Adversarial Network (GAN) to eliminate false positives and to segment boundaries precisely. The resulting model, called the Residual Dilated Attention Gate UNet Wasserstein GAN (RDA-UNET-WGAN) [35], trains faster and is highly stable compared to the RDA-UNet. Generally, a GAN comprises a generator and a discriminator which compete with each other. The generator produces new samples with the same probability distribution as the training samples and the discriminator attempts to determine whether the samples are genuine or fake. Once the generator can produce a set of samples that have high similarity to the training samples, it can generate incrementally higher-quality samples.
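For reference, the adversarial objectives underlying these models can be sketched in their textbook form; this is the generic formulation, not necessarily the exact losses used in [35] or [57].

% Standard GAN minimax objective with generator G and discriminator D:
\min_G \max_D \; \mathbb{E}_{x \sim p_{\mathrm{data}}}[\log D(x)] + \mathbb{E}_{z \sim p_z}[\log(1 - D(G(z)))]

% WGAN instead minimizes an estimate of the 1-Wasserstein distance,
% with the critic f constrained to be 1-Lipschitz:
\min_G \max_{\|f\|_L \le 1} \; \mathbb{E}_{x \sim p_{\mathrm{data}}}[f(x)] - \mathbb{E}_{z \sim p_z}[f(G(z))]

The Lipschitz constraint on the WGAN critic is what yields the smoother gradients and improved training stability noted above.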
WGAN is a GAN variant implemented with two separate neural networks and a gradient reversal layer for computing exact gradients of the 1-Wasserstein distance, contributing to improved training stability. The Attention Gate-Dense Network-Improved Dilation Convolution-UNet (ADID-UNET) [37], designed for lung lesion segmentation from chest CT images for COVID-19 detection, is implemented with attention gate, dense network and dilation convolution mechanisms. The attention module focuses on target lesions of arbitrary shapes and sizes for precise segmentation. The dense connections between the dilation convolution layers and the skip connections mitigate vanishing gradients and enhance the localization ability. The Eff-UNet [11] is built with a compound-scaled EfficientNet as the encoder and a UNet decoder for segmentation in unstructured environments. This model incorporates low-level spatial information and high-level features for precise segmentation of objects from road scenes. The Multiscale Statistical U-Net (MSU-Net) [51] for cardiac MRI segmentation employs a Statistical Convolutional Neural Network (SCNN) which models the inputs as multi-scale canonical distributions to speed up the segmentation process, also exploiting spatio-temporal relationships between the samples. Further, the UNet is realized as a parallel architecture to statistically process the inputs. A detailed analysis of the architectures of UNet and its variants reveals that the performance of these networks increases considerably with the addition of attention gate, DenseNet and dilation convolution modules.

Evincing the need for an integral model for COVID-19 detection and infection segmentation, we propose a lightweight deep learning classifier based on SqueezeNet [28] for COVID-19 detection from CXR images. This model is also coupled with a segmentation module to semantically separate the Region of Interest (ROI) using superpixels for a thorough examination of the images. We call our proposed framework the COVID-19 Super pixel SqueezeNet (COVID-SSNet) model. The contributions of this research are as follows.

1. A lightweight model for COVID-19 detection and segmentation which achieves more than 99% accuracy for binary and three-class classification is proposed.
2. It is established that a classification-segmentation pipeline is ideal for COVID-19 detection and infection segmentation.
3. Gradient based class activations coupled with superpixel segmentation provide the best classification and segmentation abilities of the SqueezeNet based model.

Performance evaluation of our model with a standard dataset and interpretations of the visual and quantitative results signify its superiority compared to state-of-the-art classification and segmentation models.

This paper is organized as follows. In Section 2, we present a review of the relevant work in the context of our research. The dataset, the methods employed in realizing the framework and the details of implementation are described in Section 3. In Section 4, we present the architecture of the proposed framework with schematic diagrams. In Section 5, we describe our experimental results, performance analyses and comparisons. The paper is concluded in Section 6.

In this section we give a comprehensive review of the existing deep learning models for COVID-19 detection from CXR images. Radiological studies [41] show that GGOs, peripheral distribution and bilateral involvement are widely observed in CXR images.
A deep CNN model for COVID-19 detection must be capable of learning these features from the CXR images. Narin et al. [33], for instance, employ deep convolutional neural networks for the automatic detection of COVID-19 from X-ray images. In addition to classification, CNN models are also employed in the segmentation of lungs from CXR images in the COVID-19 diagnostic pipeline. Lung segmentation from CXR images is essential in the detection of lung nodules and the quantification and staging of infections. The adversarial U-Net [23] architecture for lung segmentation is found to generalize well to arbitrary CXR images. A deep learning framework for COVID-19 detection presented in [44] follows a fusion approach combining Sobel filtering, a CNN and a support vector machine. This result signifies that better results can be achieved by comparatively shallower networks trained at optimal learning rates. Though CNN models are demonstrated to exhibit high accuracy in the discrimination of COVID-19 from other bacterial and viral pneumonia in CT and CXR images, their behavior is not understandable due to their intrinsic black box nature. According to Shi et al. [45], analyses of CXR images reveal frequent occurrences of GGOs in peripheral, posterior, medial and basal areas, air space consolidation, traction bronchiectasis and septal thickening in COVID-19 patients. Rather than completely relying on quantitative performance metrics such as accuracy and precision, examination of the Region of Interest (ROI) activating the classifier models can improve clinical decisions. In this context, several research works have been performed for the visualization and interpretation of CNN models by localization of the ROIs relevant to the classified input. In [53], the authors have shown that a disease detection and localization framework can be constructed with a multi-class classifier and a gradient based algorithm to detect and localize pneumonia in CXR images. Similarly, discrimination of pneumonia versus normal cases and bacterial versus viral pneumonia from CXR images, performed with an Inception V3 model, was followed by an occlusion test to identify the significant ROIs contributing to the decision of the network in [30]. Likewise, CheXNet [39], a 121-layer dense CNN trained with CXR images, determines the probability of 14 pathologies and localizes pneumonia with activation maps. A two-branch Attention Guided CNN (AG-CNN) [24] based on ResNet fuses global features extracted from the CXR images with local features from attention-guided lesion regions.

Superpixels are groups of semantically similar pixels carrying high-level information which provide a compact representation of images. While conventional deep learning models are trained to learn features from raw images, computer vision models have recently been constructed as CNNs infused with domain knowledge captured using superpixels. A hybrid model employing superpixel pooling in a CNN with transfer learning has been demonstrated for hyperspectral image classification [54]; however, superpixel-infused models for COVID-19 detection are yet to be benchmarked.
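As an illustration of how superpixels compactly summarize an image, the following minimal Python sketch extracts SLIC superpixels from a CXR image with scikit-image. The file name cxr.png and the segment count are placeholders, and this is not the segmentation stage of the proposed model, which uses GMM superpixels as described later.

# Minimal sketch: SLIC superpixel extraction on a chest X-ray with scikit-image.
import numpy as np
from skimage import io, img_as_float
from skimage.segmentation import slic, mark_boundaries

image = img_as_float(io.imread("cxr.png", as_gray=True))   # placeholder file name
rgb = np.stack([image] * 3, axis=-1)                        # SLIC expects multichannel input

# Group pixels into roughly 200 perceptually homogeneous regions (superpixels).
labels = slic(rgb, n_segments=200, compactness=10, start_label=1)

overlay = mark_boundaries(rgb, labels)                      # visualize superpixel boundaries
io.imsave("cxr_superpixels.png", (overlay * 255).astype(np.uint8))

Each returned label groups neighbouring pixels of similar intensity, which is the property later exploited to delineate the ROI from the activation maps.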
In this research, we strive to bridge this gap with a novel classifier for COVID-19 detection from chest X-ray images which can be generalized to other modalities.

In this section, we describe our dataset and the methods employed in the construction of the proposed classification-segmentation framework. The proposed model is realized as a classification-segmentation pipeline. In this research, we have constructed training and testing datasets from the award-winning Kaggle CXR public database [16], comprising 219 COVID-19 positive images, 1345 viral pneumonia images and 1341 normal images. Originally these images are of dimension 1024x1024 in PNG format. The database is divided into two distinct training and testing subsets, with the images resized to 227x227. The training dataset is then augmented with images generated by applying translation, rotation and scaling operations on the training images. The description of the dataset is given in Table 1. The proposed framework is implemented with Matlab 2021b software on an i7-7700K processor with 32GB DDR4 RAM equipped with an NVIDIA GeForce GTX1060 3GB graphics card.

The SqueezeNet backbone of the proposed model is shown in Figure 2, which comprises the convolutional layers, fire modules, pooling layers and a final softmax classification layer.

The Gaussian Mixture Model (GMM) [47] superpixels exhibit better segmentation accuracies than the SLIC algorithm, as each superpixel is modeled as a Gaussian representation, by initially choosing a distribution at random from a group of Gaussian distributions. A Gaussian distribution n is associated with a covariance Σ_n, a mean µ_n and a mixing probability π_n, such that the condition in Eq (1) holds. For an arbitrary data point x, the probability that it belongs to a Gaussian n is given in Eq (2), where the latent variable z_n takes the value 1 when x belongs to the cluster n and 0 otherwise. For a Gaussian n, the mixing coefficient π_n given in Eq (3) is the probability that a data point belongs to n. The set of all latent variables is represented as z = z_1, . . . , z_N, and the probability with which all the data points belong to the Gaussians is given in Eq (4). For a given data point x, the probability that it belongs to a Gaussian n is given in Eq (5). The probability of a latent variable z_n such that a data point x exists in a Gaussian n is given in Eq (6). The parameters of this model, determined by EM, are collectively expressed as θ_n = {µ_n, Σ_n}. Given a 2D image I of dimension M×K, the total number of pixels P = M × K, labeled p_i ∈ {1, 2, · · · , P}, can be assigned to a superpixel n ∈ {1, · · · , N}, which is analogous to clustering P data points into N Gaussians.
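The bodies of Eqs (1)-(7) did not survive the text extraction; in the notation above, the standard GMM formulation they refer to can be sketched as follows. This is a reconstruction of the textbook expressions, not the authors' exact typesetting.

% Mixing coefficients (cf. Eqs (1) and (3)):
\sum_{n=1}^{N} \pi_n = 1, \qquad \pi_n = p(z_n = 1)

% Component likelihood and marginal density (cf. Eqs (2) and (4)):
p(x \mid z_n = 1) = \mathcal{N}(x \mid \mu_n, \Sigma_n), \qquad
p(x) = \sum_{n=1}^{N} \pi_n \, \mathcal{N}(x \mid \mu_n, \Sigma_n)

% Posterior responsibility of component n for a data point x (cf. Eqs (5) and (6)):
p(z_n = 1 \mid x) = \frac{\pi_n \, \mathcal{N}(x \mid \mu_n, \Sigma_n)}{\sum_{k=1}^{N} \pi_k \, \mathcal{N}(x \mid \mu_k, \Sigma_k)}

% Superpixel label of pixel p_i (cf. Eq (7)):
LB_i = \arg\max_{n \in \{1,\dots,N\}} \; p(z_n = 1 \mid x_{p_i})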
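A minimal Python sketch of this idea, clustering pixel feature vectors (position and intensity) into N Gaussian components with EM via scikit-learn, is shown below. It is a simplified analogue for illustration, not the exact GMM superpixel algorithm of the cited work, and cxr.png is a placeholder file name.

# Illustrative sketch: GMM-based superpixel-like labeling of an image.
import numpy as np
from skimage import io, img_as_float
from sklearn.mixture import GaussianMixture

image = img_as_float(io.imread("cxr.png", as_gray=True))   # placeholder file name
rows, cols = np.mgrid[0:image.shape[0], 0:image.shape[1]]

# One feature vector per pixel: spatial position plus (scaled) intensity.
X = np.column_stack([rows.ravel(), cols.ravel(), 50.0 * image.ravel()])

N = 100                                                     # number of superpixels
gmm = GaussianMixture(n_components=N, covariance_type="full",
                      max_iter=50, random_state=0).fit(X)   # parameters fitted by EM

# Each pixel takes the label of the Gaussian with the highest posterior responsibility.
labels = gmm.predict(X).reshape(image.shape)

The predict call implements the label rule of Eq (7): every pixel is assigned to the component with the highest posterior probability.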
For a pixel p_i ∈ {1, 2, · · · , P}, the superpixel label LB_i is computed as in Eq (7). It can be seen that these labels are computed from the posterior probability after evaluation of θ_n for a cluster n, which facilitates more precise grouping of pixels compared to the SLIC algorithm. In this paper, we employ superpixel segmentation to segment the ROI from the heat maps.

The proposed COVID-SSNet model, realized as an integral framework merging the classification and segmentation processes, is illustrated in Figure 3. It is built by extending the standard SqueezeNet classification model with Grad-CAM and superpixel pooling. In this framework, we apply Grad-CAM to the image feature maps from the final convolutional layer CL12 to construct the activation map. This heat map is given as input to the Super Pixel Pooling (SPP) layer for segmentation, and a normalized overlay of the segmented ROI is constructed on the input CXR image.

In this paper, we follow deep transfer learning employing the pre-trained SqueezeNet model, which is trained on the ImageNet dataset. We further fine-tune this network by training it with the training dataset constructed from [46]. The SqueezeNet model is initially trained on the training dataset for binary and three-class classification and tested with the respective test datasets. As shown in Table 1, the dataset is separated into training and testing subsets. For binary classification, the images ascribed to the viral pneumonia and normal cases are merged for training and testing. The hyperparameters of the proposed framework are given in Table 2. The training progress is shown in Figure 6, signifying a high training accuracy. We exercise these trained models with the test dataset and show the confusion matrices. Though deep learning models demonstrate superior data representation and learning abilities across multiple domains, they appear to be a black box, which motivates the visualization of the regions driving their decisions. We present a comparison of our proposed model with state-of-the-art CNN based classifiers modeled for COVID-19 detection from CXR images in Table 3. It is seen that the highest classification accuracy is achieved by our model.
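A minimal PyTorch sketch of this transfer-learning step is given below, assuming a torchvision SqueezeNet backbone; the optimizer, learning rate and batch size are placeholders rather than the values reported in Table 2.

# Sketch: fine-tune an ImageNet-pretrained SqueezeNet for 2 or 3 CXR classes.
import torch
import torch.nn as nn
from torchvision import models

num_classes = 3                                            # COVID-19 / viral pneumonia / normal
model = models.squeezenet1_1(weights="IMAGENET1K_V1")      # older torchvision: pretrained=True

# SqueezeNet has no fully connected head; its classifier is a 1x1 convolution,
# so adapting it to a new label set only means swapping that layer.
model.classifier[1] = nn.Conv2d(512, num_classes, kernel_size=1)
model.num_classes = num_classes

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)

# One fine-tuning step on a dummy batch of 227x227 inputs; real training would
# iterate over the augmented CXR dataset described in Table 1.
images = torch.randn(8, 3, 227, 227)
targets = torch.randint(0, num_classes, (8,))
optimizer.zero_grad()
loss = criterion(model(images), targets)
loss.backward()
optimizer.step()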
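Continuing from the previous sketch (reusing model), the Grad-CAM step can be outlined as follows; hooking the last block of model.features is an assumption standing in for the authors' layer CL12, and the random tensor stands in for a preprocessed CXR image.

# Sketch: Grad-CAM heat map from the final convolutional features.
import torch
import torch.nn.functional as F

feats = {}

def fwd_hook(module, inputs, output):
    feats["value"] = output            # keep the graph so we can differentiate w.r.t. it

handle = model.features[-1].register_forward_hook(fwd_hook)

image = torch.randn(1, 3, 227, 227)    # placeholder preprocessed CXR tensor
scores = model(image)
class_idx = int(scores.argmax(dim=1))

# Gradients of the predicted class score with respect to the feature maps.
grads = torch.autograd.grad(scores[0, class_idx], feats["value"])[0]

weights = grads.mean(dim=(2, 3), keepdim=True)                         # GAP of gradients
cam = F.relu((weights * feats["value"]).sum(dim=1, keepdim=True)).detach()
cam = F.interpolate(cam, size=(227, 227), mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)               # heat map in [0, 1]
handle.remove()

In the proposed pipeline, this normalized heat map is what feeds the superpixel pooling stage.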
In addition to the state-of-the-art models, the proposed model is compared with two SqueezeNet based COVID-19 detection models. For a fair comparison, the models proposed in [12, 49] are evaluated with the dataset described in Table 1. The performance metrics, including balanced scores and training times, are reported in Table 4. Generally, the decision to use a certain batch size is often driven by intuition regarding the available computational resources; the corresponding results are summarized in Table 4. The classification metrics are computed from the true positives (TP), true negatives (TN), false positives (FP) and false negatives (FN), for example

Specificity = TN / (TN + FP)    (10)
Precision = TP / (TP + FP)    (12)

Further, this inference also aligns with the results of Smith et al. [47], which show that the best classification accuracy can be achieved by increasing the batch size without decaying the learning rate. The Dice metric, which measures the overlap between the prediction P and the ground truth G, is given in Equation (15). The SM, which evaluates the similarity between the segmented output and the ground truth mask, is given in Equation (16), where S_o, S_r, α, S_p and G refer to the object-aware similarity, the region-aware similarity, the balance factor between S_o and S_r, the prediction and the ground truth, respectively. We have evaluated SM with α=0.5, the default specified in [20]. The EM, which is a measure of the global and local similarity between binary maps, is given in Equation (17), where w and h are the width and height of the ground truth mask G and ψ is the enhanced alignment matrix.
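The evaluation metrics above reduce to a few lines of code; the helper below is an illustrative Python implementation of the confusion-matrix scores such as Eqs (10) and (12) and of the Dice overlap of Equation (15). The S-measure and E-measure of Equations (16) and (17) are omitted here, as they require the full formulations of the cited works.

# Illustrative evaluation helpers (not the authors' evaluation code).
import numpy as np

def classification_metrics(tp, tn, fp, fn):
    accuracy    = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)               # recall / true positive rate
    specificity = tn / (tn + fp)               # Eq (10)
    precision   = tp / (tp + fp)               # Eq (12)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return accuracy, sensitivity, specificity, precision, f1

def dice(pred_mask, gt_mask):
    # Dice overlap between a predicted binary mask P and a ground-truth mask G.
    pred = np.asarray(pred_mask, dtype=bool)
    gt = np.asarray(gt_mask, dtype=bool)
    intersection = np.logical_and(pred, gt).sum()
    return 2.0 * intersection / (pred.sum() + gt.sum() + 1e-8)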
- Innovative chest X-ray based decision support system
- Deep learning-based framework to enhance these images' diagnostic values
- Use of SqueezeNet for the detection process
- Use of deep transfer learning for the detection of COVID-19

Classification of covid-19 in chest x-ray images using detrac deep convolutional neural network
Slic superpixels compared to state-of-the-art superpixel methods
Covid-caps: A capsule network-based
One shot cluster based approach for the detection of covid-19 from chest x-ray images
Eff-unet: A novel architecture for semantic segmentation in unstructured environment
Superpixel segmentation using gaussian mixture model
Rough sets in COVID-19 to predict symptomatic cases
Deep learning and medical image processing for coronavirus (covid-19) pandemic: A survey
Superpixel-based domain-knowledge infusion in computer vision
Covid-19 radiography database: Covid-19 chest x-ray database
Can ai help in screening viral and covid-19 pneumonia?
Imagenet: A large-scale hierarchical image database
Social-group-optimization assisted kapur's entropy and morphological segmentation for automated detection of covid-19 infection from computed tomography images
Structure-measure: A new way to evaluate foreground maps
Enhanced-alignment measure for binary foreground map evaluation
Covid-resnet: A deep learning framework for screening of covid19 from radiographs
Attention u-net based adversarial architectures for chest x-ray lung segmentation
Diagnose like a radiologist: Attention guided convolutional neural network for thorax disease classification
Covidxnet: A framework of deep learning classifiers to diagnose covid-19 in x-ray images
Matrix capsules with em routing
Corodet: A deep learning based classification for covid-19 detection using chest x-ray images
Squeezenet: Alexnet-level accuracy with 50x fewer parameters and <0.5 MB model size
Healthcare robots to combat COVID-19
Identifying medical diagnoses and treatable diseases by image-based deep learning
Antivirus-built environment: Lessons learned from covid-19 pandemic
Artificial intelligence based covid-19 detection using medical imaging methods: A review
Automatic detection of coronavirus disease (covid-19) using x-ray images and deep convolutional neural networks
Application of deep learning techniques for detection of covid-19 cases using chest x-ray images: A comprehensive study
Rda-unet-wgan: An accurate breast ultrasound lesion segmentation using wasserstein generative adversarial networks
Automated detection of covid-19 cases using deep neural networks with x-ray images
Adid-unet: A segmentation model for covid-19 infection from lung ct scans
Visualization and interpretation of convolutional neural network predictions in detecting pneumonia in pediatric chest radiographs
Chexnet: Radiologist-level pneumonia detection on chest x-rays with deep learning
U-net: Convolutional networks for biomedical image segmentation
Coronavirus disease 2019 (covid-19): A systematic review of imaging findings in 919 patients
Grad-cam: Visual explanations from deep networks via gradient-based localization
Detection of coronavirus disease (covid-19) based on deep features and support vector machine
Fusion of convolution neural network, support vector machine and sobel filter for accurate detection of covid-19 patients using x-ray images
Radiological findings from 81 patients with covid-19 pneumonia in wuhan, china: A descriptive study
Covid-19 infection detection from chest x-ray images using hybrid social group optimization and support vector classifier
Don't decay the learning rate
Computed tomography image processing analysis in covid-19 patient follow-up assessment
Covidiagnosis-net: Deep bayes-squeezenet based diagnosis of the coronavirus disease 2019 (covid-19) from x-ray images
Covid-net: A tailored deep convolutional neural network design for detection of covid-19 cases from chest x-ray images
Msu-net: Multiscale statistical u-net for realtime 3d cardiac mri video segmentation
Detection of SARS-CoV-2 in different types of clinical specimens
Hospital-scale chest x-ray database and benchmarks on weakly-supervised classification and localization of common thorax diseases
Hyperspectral image classification based on superpixel pooling convolutional neural network with transfer learning
The role of imaging in 2019 novel coronavirus pneumonia (covid-19)
Learning deep features for discriminative localization
An rdau-net model for lesion segmentation in breast ultrasound images