key: cord-0953877-j1vp6rii authors: Adak, Chandranath; Ghosh, Debmitra; Chowdhury, Ranjana Roy; Chattopadhyay, Soumi title: COVID-19–affected medical image analysis using DenserNet date: 2021-05-21 journal: Data Science for COVID-19 DOI: 10.1016/b978-0-12-824536-1.00021-6 sha: e1e5793a64011c9cc29214e07af21d46e679e6d3 doc_id: 953877 cord_uid: j1vp6rii The COrona VIrus Disease (COVID-19) outbreak was declared a pandemic by the World Health Organization (WHO) in March 2020. In the current pandemic situation, testing for and detecting this disease has become a challenge in many regions across the globe because of insufficient testing infrastructure. The shortage of COVID-19 test kits, owing to a worldwide supply–demand mismatch, has led to a further crisis and has opened up a new research area: the detection of COVID-19 without a test kit. In this paper, we investigate medical images, mostly chest X-ray images and thorax computed tomography (CT) scans, to identify COVID-19 infection. In countries where the number of medical experts is lower than that recommended by WHO, this computer-aided system can be useful, as it requires minimal human intervention. Consequently, this technology reduces the chances of contagious infection. This study may further help in the early detection of people with symptoms similar to those of coronavirus. Early detection and intervention can play a pivotal role in coronavirus treatment. The primary goal of our work is to detect COVID-19–affected cases. However, this work can be extended to detect pneumonia caused by Severe Acute Respiratory Syndrome, Acute Respiratory Distress Syndrome, Middle East Respiratory Syndrome, and bacteria such as Streptococcus. In this paper, we employ publicly available medical images obtained from various demographics, and propose a rapid, cost-effective test leveraging a deep learning–based framework.
Here, we propose a new architecture based on a densely connected convolutional neural network to analyze COVID-19–affected medical images. We name our proposed architecture DenserNet; it is an improvement over DenseNet. Our proposed DenserNet architecture achieved 96.18% and 87.19% accuracies on two publicly available databases containing chest X-ray images and thorax CT scans, respectively, for the task of separating COVID-19 and non-COVID-19 images, which is quite encouraging. On the other hand, providing testing infrastructure that keeps pace with the massive spread rate of this disease is becoming nearly infeasible. Moreover, because of the infectious nature of the disease, medical experts are also getting infected while treating patients. The major problems with conventional diagnostic strategies are as follows: (i) the diagnosis is a time-consuming process; (ii) special infrastructure is required to store specimens, and a biosafety lab housing a polymerase chain reaction (PCR) machine is very costly; (iii) there is a shortage of test kits relative to the testing demand, and a reverse transcriptase (RT)-PCR kit is not cost-efficient; (iv) a phlebotomist is required for testing purposes and is exposed to infection while collecting invasive swab samples; (v) the testing is prone to human error and bias. Considering the above problems, an alternate coronavirus detection strategy can be very useful alongside the conventional testing mechanism. Other methods of diagnosis include the clinical approach, medical image (computed tomography (CT) or chest X-ray) analysis, pathogenic tests, etc. In this paper, we aim to analyze COVID-19 at an early stage of infection by leveraging chest X-ray and CT scan images. The use of chest X-ray and CT scan images is very common in medical image processing to diagnose various kinds of diseases.
CT [5–7] is an X-ray measurement obtained from diverse angles to generate cross-sectional images of certain regions of a scanned object, which allows the user to inspect the inside of the object without any surgery. Magnetic resonance imaging (MRI) [8–13] is another medical imaging technique used to form pictures of the anatomy using nuclear magnetic resonance. Recently, radiography images have also become popular [14], where the image-capturing systems are equipped with digital sensors that use X-rays, gamma rays, or similar ionizing/nonionizing radiation to reveal the internal view of an object. In the field of medical image analysis, various computer vision techniques (e.g., segmentation [15,16], slicing [17–19], clustering [9,20]) have proven to be very effective and have played a crucial role in the early detection of major diseases of the brain, kidney, breast, prostate, etc. [21,22]. For example, diagnosis of heart diseases [23], tumor detection [24], bone fracture finding [25], bone age prediction [26], etc., are carried out by analyzing medical images. In this paper, we propose a new architecture to analyze COVID-19–affected medical images. Our proposed method (say, DenserNet) uses a densely connected convolutional neural network. The proposed DenserNet is an improvement over DenseNet [27]. For experimental analysis, we employ two public databases containing chest X-ray and CT-scan images. The experimental results are quite encouraging. Our contribution in this paper is twofold, comprising a novel solution architecture and its application to the study of COVID-19. Solution architecture: We propose a new architecture (DenserNet) to tackle the general classification problem. This architecture is an improvement over DenseNet [27]. Application: We propose a framework to analyze medical images, especially X-ray and CT scan images, to expedite the study of COVID-19–affected cases. This paper is organized as follows.
Section 2 discusses the related works on medical image analysis. Then Section 3 formulates the undertaken research problem. The proposed methodology is given in Section 4. The experimental results are presented in Section 5. Finally, Section 6 concludes this paper. Medical image processing has acquired great attention in the field of health care since the day digital images came into existence. Some common medical digital imaging modalities are CT [5–7], MRI [8–13], etc. Along with these digital modalities, the recent addition of an analog imaging modality, i.e., radiography [14] equipped with digital sensors, has attracted significant research attention. Many works have been performed using digital images to address several problems in the medical domain [21,28,29]. Through the alliance of medical imaging and computer vision, many successful works have been proposed in the medical domain, which have played a significant role in the early identification of major diseases related to the brain, chest, breast, kidney, prostate, and many other organs. Taking assistance from computer vision, medical image analysis explores various facets, such as segmentation [15,16,29,30], slicing [17–19], clustering [9,20], acuity [21], etc., for a better view of the subsections with a detailed study. Segmentation [29] of an image into small subsections provides a better view of remote sections. Each subsection contains minute information that is subjected to further processing for information extraction. Often the digital images used in medical science are blurry or have indistinct outlines. The quality of the images is also sometimes not up to the mark, which makes processing difficult. Consequently, the accurate localization of the complex boundaries of various tiny isolated parts cannot be performed properly. Kruggel [21] dealt with the quality of digital images by considering the acuity measure and the statistical properties of images. Zhou et al.
[29] addressed this problem by exploiting the basic information of the images for a better understanding of the outlines and subsections. They took into account the semantic information of the images for accurate boundary localization. For extracting features and other latent information from images, deep learning has played a remarkable role in the field of medical sciences. The convolutional neural network (CNN) has been an important tool for analyzing visual imagery. A decent number of research works [22,29,31] employed deep learning–based approaches and extracted useful information from medical images. Deep learning–based models usually depend on huge training data, but sufficient distinct images for training are not always available. To deal with this issue, Zhang et al. [22] implemented a two-stage task-oriented deep learning method for finding large-scale anatomical landmarks simultaneously in real time with limited training data. For the extraction of fine patterns and features in a medical image, another kind of method involves the slicing [17,18] of images. Slicing often creates fine pieces of an image from various positions so that a diverse view of the image can be obtained for additional processing. Manojlovic et al. [20] dealt with radiology images and dynamically sliced them for further processing. With the outbreak of the COVID-19 pandemic, multiple research works are being carried out to detect possible positive cases and also to find solutions for recovery. Medical images of COVID-19–affected patients have been taken into account to study the patients. One such convenient medical image is the chest X-ray image of a COVID-19–affected patient, which has been widely used for the prediction and classification of positive and negative cases. The combination of medical science with computer vision has helped to an extent in identifying positive cases of COVID-19.
Multiple works [32–36] have been done on COVID-19 by considering the CT scans and chest X-ray images of COVID-19–affected patients. Most of the studies have employed deep learning techniques [31,32,34,37–39] supported by CNNs for the detection of COVID-19 cases from chest X-ray images. To determine COVID-19 positive cases, the task is mostly modeled as a supervised classification [33,40] of medical images. However, deep learning techniques depend on training data, and a sufficient data supply is required to train a model properly. Because sufficient data are often inaccessible, it becomes difficult to train the model. This problem can be handled by the transfer learning technique, which allows using the knowledge gathered from other computer vision tasks. The studies reported in Refs. [36,41] employed transfer learning to address the problem of insufficient data and analyzed COVID-19 positive cases concerning X-ray and CT scan images. However, there is scope for improvement over past works [42] concerning accuracy, which we address in this paper. In this section, we formulate the problem considered in this paper. Our framework is an analysis framework, where we have a medical image database. This database contains multiple labeled chest X-ray and CT scan images of various classes. Such an image is the input to the framework. The problem is formulated as a supervised classification task and includes the following analyses: We first formulate our problem as a binary classification task, where the objective is to identify COVID-19 versus non-COVID-19 medical images. We further concentrate on a more granular classification and formulate a multiclass classification problem. The objective of this problem is to categorize the medical images into classes like normal, bacteria, viral COVID-19, viral non-COVID-19, etc. Primarily, we create a trained model based on the training set.
After proper training, the trained model can be used to predict the class of an unlabeled image. Thus, a chest X-ray/CT scan image can be analyzed to determine whether the patient is COVID-19–affected or not. More details regarding the research tasks undertaken in this paper can be found in Section 5.2. In this section, we discuss our proposed method. This research emphasizes the classification task. Therefore, we propose a novel architecture that can handle the classification problem. In a deep convolutional architecture [43], an image is fed to the system and usually passed through a sequence of layers. The input image is transformed by every layer l, which comprises a nonlinear transformation G_l. This G_l is a composite function of multiple operations, such as batch normalization (BN) [44], an activation function (e.g., ReLU) [45], and convolution [43,45] or pooling [43,45], etc. The output of the lth layer is denoted as x_l. Convolutional neural network: In the traditional CNN [43], during the feed-forward connection, the input of the lth layer is the output of the previous (l−1)th layer, which can be written as: x_l = G_l(x_{l−1}). (11.1) ResNet: The residual network (ResNet) [46] adds a skip connection besides the main feed-forward connection, which utilizes the residues of the previous layer. This is represented as: x_l = G_l(x_{l−1}) + x_{l−1}. (11.2) In ResNet, the skip connection (the output of G_l) and the main identity connection (x_{l−1}) are combined by a summation/linear transformation, which may lead to some information loss [27]. Therefore, instead of summation, concatenation can be used. Dense connection: In a dense convolutional network (DenseNet) [27], besides introducing the concatenation idea, the information flow between layers is improved. In DenseNet, multiple dense blocks are linked sequentially via transition layers comprising convolution and pooling operations.
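The three connection schemes above can be contrasted with a small NumPy sketch. The stand-in transform `G` and all shapes here are hypothetical placeholders; in a real network, G_l would be the learned composite of BN, ReLU, and convolution:

```python
import numpy as np

def G(x, out_channels=8, seed=0):
    """Stand-in for the composite transform G_l (BN + ReLU + conv):
    maps any input feature map to one with `out_channels` channels."""
    rng = np.random.default_rng(seed)
    return rng.standard_normal((out_channels, x.shape[1], x.shape[2]))

x_prev = np.zeros((8, 28, 28))  # x_{l-1}: 8 channels, 28 x 28

# Traditional CNN (Eq. 11.1): x_l = G_l(x_{l-1})
x_cnn = G(x_prev)

# ResNet (Eq. 11.2): x_l = G_l(x_{l-1}) + x_{l-1}; summation keeps
# the channel count but mixes the two signals together
x_res = G(x_prev) + x_prev

# Concatenation (DenseNet idea): both signals survive side by side,
# so the channel count doubles instead
x_cat = np.concatenate([G(x_prev), x_prev], axis=0)

print(x_cnn.shape, x_res.shape, x_cat.shape)
# (8, 28, 28) (8, 28, 28) (16, 28, 28)
```

The shape difference makes the trade-off concrete: summation preserves dimensions at the risk of information loss, whereas concatenation preserves both signals at the cost of growing channel counts.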
Inside a dense block, the connection is dense, where the feature map of the lth layer (x_l) depends on the feature maps of all the preceding layers, i.e., x_0, x_1, …, x_{l−1}. It can be denoted as: x_l = G_l([x_0, x_1, …, x_{l−1}]), (11.3) where [x_0, x_1, …, x_{l−1}] is the concatenation of the feature maps obtained from layers 0, 1, …, l−1, and G_l is a composite function. DenserNet: We adopt the idea of a dense block in our proposed architecture. We pictorially present the internal view of a dense block in Fig. 11.2, which is used in our architecture, where the dense connectivity among layers can be observed. The main connections are shown by solid horizontal lines, whereas the skip connections are shown using dotted lines. The composite function G_l comprises six successive operations, i.e., BN, Rectified Linear Unit (ReLU) activation, and 1 × 1 convolution (conv), followed by BN, ReLU, and 3 × 3 conv. In a dense block, x_0 is the input feature map, and x_l is the output feature map. In Fig. 11.2, l = 4. In DenseNet, the dense connection is only present inside a dense block, i.e., the intra-dense block connection [27]. We propose an architecture where, besides the intra-dense block connections, additional dense connections exist among the dense blocks, i.e., inter-dense block connections. Therefore, our proposed architecture is denser than DenseNet. We coin the name "DenserNet" to refer to our architecture. Our DenserNet architecture contains multiple dense blocks. Here, all the dense blocks are similar, i.e., each has the same number l of layers. In a dense block, the numbers of channels of the input and output feature maps are kept the same. Therefore, for simplicity, all the feature maps inside a dense block contain the same number of channels. For example, in Fig. 11.2, if the input feature map x_0 contains n_c channels, then the output feature map x_4 and the in-between feature maps x_1, x_2, x_3 also contain n_c channels each. In Fig.
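A minimal sketch of the intra-dense-block connectivity of Eq. (11.3), assuming channel-first feature maps. The random projection used here is only a stand-in for the composite function G_l (BN, ReLU, 1 × 1 conv, BN, ReLU, 3 × 3 conv); it captures the channel bookkeeping, not the trained network:

```python
import numpy as np

N_C = 16  # n_c: channels of every feature map inside the block

def G_l(concat_input, rng):
    """Stand-in for the composite function G_l: a 1x1-conv-like
    projection mapping the concatenated input back to N_C channels."""
    c_in = concat_input.shape[0]
    W = rng.standard_normal((N_C, c_in))  # 1x1 conv weights
    return np.einsum('oc,chw->ohw', W, concat_input)

def dense_block(x0, num_layers=4, seed=0):
    rng = np.random.default_rng(seed)
    feats = [x0]  # x_0
    for _ in range(num_layers):
        # Eq. (11.3): x_l = G_l([x_0, x_1, ..., x_{l-1}])
        feats.append(G_l(np.concatenate(feats, axis=0), rng))
    return feats[-1]

out = dense_block(np.ones((N_C, 28, 28)))
print(out.shape)  # (16, 28, 28): same channel count as the input
```

Note how each layer sees the concatenation of all earlier feature maps, while the block's output keeps the input channel count n_c, exactly as described above.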
11.3, we graphically present a generalized version of our DenserNet architecture. The output of the mth dense block is d_m, which is actually the last feature map of the mth dense block. The input of the (m+1)th dense block is a feature map f_m. The f_m is a concatenation of multiple feature maps, calculated as: f_m = [Q_m^1(d_m), Q_m^2(d_{m−1}), …, Q_m^m(d_1)], (11.4) where Q_m^i is a composite function applied after the mth dense block. Q_m^i contains four consecutive operations: BN, ReLU, 1 × 1 convolution (conv), and 2^i × 2^i max pooling (pool), for i = 1, 2, …, m, with m ≥ 1. The input and output feature maps of Q_m^i consist of the same number of channels. The main connection contains the composite functions Q_m^i for i = 1; as a matter of fact, the main connection comprises only 2^1 × 2^1 max-pooling layers. In Fig. 11.3, m = 4. Here also, we show the main connection with solid lines and the skip connections with dotted lines. Input: An image is fed to our DenserNet architecture. The image is then transformed using a composite function containing BN, ReLU, and 1 × 1 conv. Here, during the convolution, we employ k filters to obtain a feature map with k channels. The transformed output is fed to the first dense block. Therefore, the first feature map of the first dense block is a k-channeled feature map. Growth rate: The input and output feature maps of a dense block contain the same number of channels. The composite function Q_m^i also maintains the same number of channels. After the operation of the first dense block, the number of channels of feature map f_1 is k. The channel count grows with the number of dense blocks because of concatenation. As a matter of fact, f_2 has 2k channels obtained after the execution of the second dense block, f_3 contains 4k channels attained after the operation of the third block, and so on. In this manner, after the execution of the mth block, f_m consists of 2^{m−1}k channels.
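The pooling factors in Q_m^i are what make the concatenation of Eq. (11.4) dimensionally valid: d_{m−i+1} sits on a grid 2^i times finer than the target, and the 2^i × 2^i max pooling brings every branch to the same spatial size. A short arithmetic check, assuming a 224 × 224 input and that dense blocks preserve spatial size (as in our implementation details):

```python
# Spatial bookkeeping for Eq. (11.4): f_m concatenates
# Q_m^1(d_m), Q_m^2(d_{m-1}), ..., Q_m^m(d_1), and Q_m^i ends with
# 2^i x 2^i max pooling.
INPUT_SIZE = 224  # input image side

def d_size(j):
    """Spatial side of d_j: each main-path pooling halves the grid."""
    return INPUT_SIZE // 2 ** (j - 1)

def q_output_size(m, i):
    """Spatial side of Q_m^i(d_{m-i+1}) after 2^i x 2^i pooling."""
    return d_size(m - i + 1) // 2 ** i

for m in range(1, 6):
    sizes = [q_output_size(m, i) for i in range(1, m + 1)]
    assert len(set(sizes)) == 1  # every branch lands on the same grid
    print(f"f_{m}: {sizes[0]} x {sizes[0]}")
# f_1: 112 x 112 down to f_5: 7 x 7
```

All m branches land on a 224/2^m grid, so concatenating them along the channel axis is well defined.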
Here, k is a hyper-parameter; the channel count grows with the number of dense blocks. We present an example concerning Fig. 11.3 and Eq. (11.4) as follows: The feature map f_4 is a concatenation of 4k-, 2k-, k-, and k-channeled feature maps obtained from Q_4^1(d_4), Q_4^2(d_3), Q_4^3(d_2), and Q_4^4(d_1), respectively. Therefore, f_4 contains a total of 8k (= 4k + 2k + k + k = 2^{4−1}k) channels. Classification: The f_m is passed through a global average pooling (avg pool) layer that produces 2^{m−1}k channels, each of size 1 × 1. We flatten this feature map and generate a linear representation, i.e., a feature vector of dimension 2^{m−1}k. This flattened layer (FC_1), with 2^{m−1}k nodes, is fully connected to a successive layer (FC_2) that contains h nodes. Then FC_2 is fully connected to a sequential layer FC_3 comprising c nodes, where c is the number of classes. Finally, a softmax layer [45] is added to obtain the classified output. Implementation details: In our DenserNet, an image of size 224 × 224 is fed as input. Here, all the convolutional layers use 'same' convolutions, i.e., the input and output of a convolutional layer have the same spatial dimension. For the study undertaken in this paper, we use five dense blocks in total and four layers in each dense block. In the dense blocks, we use dropout with a rate of 20% at the end of every composite function G_l. This helps in preventing the overfitting problem. The hyper-parameter k is set to 32. Therefore, the feature map f_1 has 32 channels, each of size 112 × 112 (= 224/2 × 224/2), which we represent as f_1: 112 × 112 @ 32. Similarly, the feature maps f_2, f_3, f_4, and f_5 can be represented as f_2: 56 × 56 @ 64, f_3: 28 × 28 @ 128, f_4: 14 × 14 @ 256, and f_5: 7 × 7 @ 512, respectively. Thus, after the fifth dense block, we obtain feature map f_5 containing 512 (= 2^{5−1} · 32) channels, each of size 7 × 7.
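The channel bookkeeping above can be reproduced in a few lines: d_j inherits the channel count of the feature map entering block j, and f_m concatenates one (channel-preserving) Q branch per preceding block. With k = 32 and five blocks:

```python
K = 32  # growth hyper-parameter k

d_channels = []  # channel count of each block output d_j
f_channels = []  # channel count of each concatenated map f_m
f_in = K         # the first dense block receives a k-channeled map
for m in range(1, 6):
    d_channels.append(f_in)  # a dense block preserves channels
    # Eq. (11.4): one branch per preceding block; Q preserves channels
    f_m = sum(d_channels)
    f_channels.append(f_m)
    f_in = f_m               # f_m feeds the (m+1)-th block

print(f_channels)  # [32, 64, 128, 256, 512], i.e., 2^{m-1} k
assert all(f_channels[m] == 2 ** m * K for m in range(5))
```

This reproduces the quoted progression f_1: @32 through f_5: @512 directly from the concatenation rule.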
Now, f_5 is fed to the avg pool layer, where the employed filter is of size 7 × 7. As a result, the FC_1 layer contains 512 nodes. For FC_2, we fix the number of nodes at h = 128. In FC_3, the number of nodes c is decided by the task undertaken; e.g., for the binary classification task, c = 2. In this section, we discuss the experimental study and analyze the efficacy of our system. To perform the experiments, we required a database containing radiological images. For this purpose, we gathered some publicly available databases. The databases employed are discussed below, followed by the performance evaluation of our proposed method. For experimental analysis, we employed two separate databases containing chest X-ray and thorax CT-scan images. The details of these databases are as follows. (i) X-ray database (D_X): This database (say, D_X) contains a large collection of chest X-ray images of humans of various demographics. The total count of X-ray images in D_X is 6116 (= 1576 + 2777 + 270 + 1493). In D_X, the pneumonia-affected image count is 4540 (= 2777 + 270 + 1493), and the normal image count is 1576. The pneumonia images are categorized into two groups, i.e., bacteria and virus, which contain 2777 and 1763 (= 270 + 1493) images, respectively. The virus-affected images are further divided into two categories, i.e., COVID-19 versus non-COVID-19 X-ray images, which consist of 270 and 1493 samples, respectively. In Fig. 11.4, we pictorially represent this categorization. The X-ray images are collected from some publicly available data repositories, mentioned as follows. The normal, bacterial pneumonia, and non-COVID-19 viral pneumonia X-ray images are gathered from Ref. [47]. The COVID-19 viral pneumonia–affected X-ray images are collected from Ref. [48]. We only used frontal chest X-ray images for our experimentation. In Fig. 11.5, we present some examples from D_X.
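The category counts of D_X quoted above are mutually consistent, which a quick arithmetic check confirms:

```python
# Category counts of the X-ray database D_X
normal, bacteria, covid, non_covid = 1576, 2777, 270, 1493

virus = covid + non_covid      # viral pneumonia images
pneumonia = bacteria + virus   # all pneumonia-affected images
total = normal + pneumonia     # all X-ray images in D_X

print(virus, pneumonia, total)  # 1763 4540 6116
```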
The training set of D_X contains 1342, 2535, 199, and 1345 samples of the normal, bacteria, COVID-19, and non-COVID-19 categories, respectively. The details of the training, validation, and test sets of D_X are presented in Table 11.1. (ii) CT-scan database (D_CT): This database (say, D_CT) contains thorax CT-scan images of COVID-19 and non-COVID-19 cases; the count of non-COVID-19 images is 397. The samples of D_CT are obtained from a publicly available collection [49]. In Fig. 11.6, we present a pair of samples from D_CT. The dataset D_CT is divided into training, validation, and test sets as presented in Table 11.2. In this subsection, we present the performance of our system on databases D_X and D_CT. Here, we undertake various tasks to analyze bacterial pneumonia, viral pneumonia, and pandemic COVID-19. The tasks are mostly formulated as classification problems, as below. Task-1: In this task, we perform a binary classification to classify the X-ray images of normal and pneumonia-affected patients. Task-2: Here, we classify the X-ray images of bacterial and viral pneumonia–affected patients. Task-3: In this task, the viral COVID-19–affected patients are separated from the viral non-COVID-19 patients with respect to the X-ray images. Task-4: This task comprises the classification of four classes of X-ray images, i.e., normal, bacteria, viral COVID-19, and viral non-COVID-19. For Tasks 1, 2, 3, and 4, we use the X-ray images of the D_X database. Task-5: Here, we perform a binary classification to detect COVID-19 and non-COVID-19 CT-scan images of the D_CT database. For these five tasks, we train five models by employing the corresponding training sets as mentioned in Tables 11.1 and 11.2. At first, we train our DenserNet model for Task-4, then transfer the weights of the initial two dense blocks to the models for Task-1, Task-2, and Task-3. Here, we adopt the idea of transfer learning. The training details of the models are mentioned as follows.
Training details: To tackle the overfitting problem, we employ data augmentation [50]. All our models were trained using the Adam optimizer [51] with a mini-batch size of 64. Here, we fixed some hyper-parameters as follows: learning rate (α) = 0.01, weight_decay = 10^{−4}, β_1 = 0.9, β_2 = 0.999, ε = 10^{−8}. We trained our models for 500 epochs. We did not use any early stopping [52]. We employed cross-entropy [52] as the loss function. We measured the performance of our system in terms of accuracy, precision, recall, and F1 score. The performance measures for our tasks are shown in Table 11.3. From Table 11.3, we can note that our method performed best for Task-3 by attaining 96.18% accuracy, where the task was to separate the viral COVID-19–affected patients from the viral non-COVID-19 patients with respect to the X-ray images. On database D_X, our system obtained its lowest accuracy, 82.40%, for Task-4, where we classified the X-ray images into four classes, i.e., normal, bacteria, viral COVID-19, and viral non-COVID-19. For Task-1 and Task-2, we obtained 89.26% and 86.85% accuracies, respectively. On database D_X, the highest to lowest performances of the tasks are in the following order: Task-3 > Task-1 > Task-2 > Task-4. On database D_CT, we executed only Task-5, where we obtained 87.19% accuracy for detecting COVID-19 versus non-COVID-19 with respect to CT-scan images. In Table 11.3, we observe a similar trend with respect to the F1 score: the highest F1 score was achieved for Task-3, and the lowest for Task-4. Fig. 11.7 shows a bar chart of our DenserNet performance over the five tasks in terms of accuracy. We compared our proposed DenserNet architecture with some state-of-the-art deep learning–based architectures, such as GoogLeNet [53], VGG-16 [54], ResNet-101 [46], and DenseNet [27], which work well on the ImageNet database [55].
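For reference, the reported metrics follow the standard definitions; a plain-Python sketch on toy labels (not the paper's data) is:

```python
def binary_metrics(y_true, y_pred):
    """Accuracy, precision, recall, and F1 from binary labels
    (1 = COVID-19, 0 = non-COVID-19)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return accuracy, precision, recall, f1

# toy predictions, purely illustrative
acc, prec, rec, f1 = binary_metrics([1, 1, 0, 0, 1], [1, 0, 0, 1, 1])
print(round(acc, 2), round(prec, 2), round(rec, 2), round(f1, 2))
# 0.6 0.67 0.67 0.67
```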
For a fair comparison, all the architectures were trained on the same training data with a similar experimental setup. In Table 11.4, we present this comparative analysis with respect to the accuracy measure. From Table 11.4, we can observe that, overall, our DenserNet performed the best on databases D_X and D_CT with respect to the five undertaken tasks. This can also be easily observed from the bar chart of Fig. 11.8. Overall, for all the tasks, the highest to lowest performances are as follows: DenserNet > DenseNet > ResNet-101 > VGG-16 > GoogLeNet. Our method can be impactful in geographic locations where proper COVID-19 test kits are not available but X-ray/CT-scan facilities exist. In addition, our system requires minimal human intervention, which is quite advantageous for breaking the chain of COVID-19 spread. Moreover, our work can be extended to inspect some other medical images related to tuberculosis, tumors, bone fractures, etc. In the present scenario, the whole world is facing a pandemic situation because of a massive outbreak of beta coronaviruses, specifically SARS-CoV-2 (COVID-19). In this paper, we work on analyzing COVID-19–affected medical images. For this purpose, we propose a densely connected deep CNN, named DenserNet. We employ two publicly available databases, D_X and D_CT, which contain chest X-ray and thorax CT scan images, respectively. For COVID-19 versus non-COVID-19 medical image separation, our DenserNet achieved 96.18% and 87.19% accuracies on databases D_X and D_CT, respectively. In the future, we will endeavor to collaborate with medical establishments to obtain more data, so that our system can learn various facets and produce better results. Currently, our system is mainly trained to analyze COVID-19–affected medical images. However, it can be extended to analyze other medical images concerning tumors, tuberculosis, etc.
References
Clinical features of patients infected with 2019 novel coronavirus in
A distinct name is needed for the new coronavirus
A novel coronavirus from patients with pneumonia in China
Local wavelet pattern: a new feature descriptor for image retrieval in medical CT databases
Multiscale receptive field based on residual network for pancreas segmentation in CT images
Deep multi-scale feature fusion for pancreas segmentation from CT images
Study on MRI medical image segmentation technology based on CNN-CRF model
A hybrid fuzzy clustering approach for the recognition and visualization of MRI images of Parkinson's disease
Fusion of brain PET and MRI images using tissue-aware conditional generative adversarial network with joint loss
Integrating Wikipedia articles and images into an information resource for radiology patients
Image descriptors in radiology images: a systematic review
Classification and retrieval of radiology images in H.264/AVC compressed domain, Signal Image Video Process.
Generate structured radiology report from CT images using image annotation techniques: preliminary results with liver CT
DenseX-Net: an end-to-end model for lymphoma segmentation in whole body PET/CT images
Multi-task refined boundary-supervision U-Net (MRBSU-Net) for gastrointestinal stromal tumor segmentation in endoscopic ultrasound (EUS) images
Diagnosis of occlusal caries with dynamic slicing of 3D optical coherence tomography images
A software tool for 3D visualization and slicing of MR images
JPEG 2000 compression of unfocused light field images based on lenslet array slicing
Using DICOM tags for clustering medical radiology images into visually similar groups
A simple measure for acuity in medical images
Detecting anatomical landmarks from limited medical imaging data using two-stage task-oriented deep neural networks
Prediction of heart disease using machine learning algorithms
Automatic Lung Cancer Prediction from Chest X-Ray Images Using Deep Learning Approach
Arm fracture detection in X-rays based on improved deep convolutional neural network
Bone age assessment with X-ray images based on contourlet motivated deep convolutional networks
Graph-based compensated wavelet lifting for scalable lossless coding of dynamic medical data
High-resolution encoder-decoder networks for low-contrast medical image segmentation
Skin lesion segmentation in dermoscopic images with ensemble deep learning methods
A New Modified Deep Convolutional Neural Network for Detecting COVID-19 from X-Ray Images, 2020, CoRR
Lung infection quantification of COVID-19 in CT images with deep learning
Coronavirus (COVID-19) Classification Using CT Images by Machine Learning Methods
COVID-Net: A Tailored Deep Convolutional Neural Network Design for Detection of COVID-19 Cases from Chest X-Ray Images
Automatic Detection of Coronavirus Disease (COVID-19) Using X-Ray Images and Deep Convolutional Neural Networks
COVID-19: Automatic Detection from X-Ray Images Utilizing Transfer Learning with Convolutional Neural Networks
Towards an Effective and Efficient Deep Learning Model for COVID-19 Patterns Detection in X-Ray Images
Automatic Detection of Coronavirus Disease (COVID-19) in X-Ray and CT Images: A Machine Learning-Based Approach
A Critic Evaluation of Methods for COVID-19 Automatic Detection from X-Ray Images
Weakly Supervised Deep Learning for COVID-19 Infection Detection and Classification from CT Images
Diagnosing COVID-19 Pneumonia from X-Ray and CT Images Using Deep Learning and Transfer Learning Algorithms
Review of artificial intelligence techniques in imaging data acquisition, segmentation and diagnosis for COVID-19
Gradient-based learning applied to document recognition
Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift
The History Began from AlexNet: A Comprehensive Survey on Deep Learning Approaches
Deep residual learning for image recognition
Identifying medical diagnoses and treatable diseases by image-based deep learning
COVID-19 Image Data Collection
A CT Scan Dataset about COVID-19
The Effectiveness of Data Augmentation in Image Classification Using Deep Learning
Adam: A Method for Stochastic Optimization
Deep Learning
Going Deeper with Convolutions
Very Deep Convolutional Networks for Large-Scale Image Recognition
ImageNet large scale visual recognition challenge