key: cord-1004797-4zbrg9pq authors: Rahimzadeh, Mohammad; Attar, Abolfazl; Sakhaei, Seyed Mohammad title: A Fully Automated Deep Learning-based Network For Detecting COVID-19 from a New And Large Lung CT Scan Dataset date: 2021-03-31 journal: Biomed Signal Process Control DOI: 10.1016/j.bspc.2021.102588 sha: d6142af03942d6f7dacf5c90f3b214235da18477 doc_id: 1004797 cord_uid: 4zbrg9pq

This paper proposes a high-speed, accurate, and fully automated method to detect COVID-19 from a patient's chest CT scan images. We introduce a new dataset that contains 48260 CT scan images from 282 normal persons and 15589 images from 95 patients with COVID-19 infections. At the first stage, the system runs our proposed image processing algorithm, which analyzes the view of the lung to discard those CT images in which the inside of the lung is not properly visible. This step reduces the processing time and the number of false detections. At the next stage, we introduce a novel architecture for improving the classification accuracy of convolutional networks on images containing small but important objects. Our architecture applies a new feature pyramid network, designed for classification problems, to the ResNet50V2 model, so the model becomes able to investigate different resolutions of the image without losing the data of small objects. As COVID-19 infections exist at various scales, and many of them are tiny, our method increases the classification performance remarkably. After running these two phases, the system determines the condition of the patient using a selected threshold. We are the first to evaluate a system in these two different ways, on Xception, ResNet50V2, and our model. In the single image classification stage, our model achieved 98.49% accuracy on more than 7996 test images. At the patient condition identification phase, the system correctly identified 234 of 245 patients at high speed.

Due to the availability of medical imaging devices in most treatment centers, researchers analyze CT scans and X-rays to detect COVID-19. In most patients with COVID-19, infections are found in the lungs that can help diagnose the disease. Analysis of CT scans of patients with COVID-19 showed pneumonia caused by the new coronavirus [33]. With radiologists' approval of the ability of CT scans and X-rays to detect COVID-19, various methods have been proposed to use these images. In most patients, X-rays and CT scans of the lungs taken at least four days after the onset of COVID-19 symptoms show infections that confirm the presence of the new coronavirus [4]. Although medical imaging is not recommended for definitive diagnosis, it can be used for early COVID-19 diagnosis due to the limitations of other methods [3]. In [40, 4], some patients with early-onset COVID-19 symptoms were found to have new coronavirus infections on their CT scans while their RT-PCR test results were negative; both tests were repeated several days later, and RT-PCR confirmed the CT scans' diagnostic results. So although medical imaging is not recommended for the definitive diagnosis of COVID-19, it can be used as a primary diagnostic method to quarantine suspected patients and prevent the virus from being transmitted to others in the early stages of the disease. The advantage of using medical imaging is the ability of machine vision to visualize viral infections.
Machine vision comprises many different methods, one of the best of which is deep learning [12]. Machine vision and deep learning have many applications in medicine [31], agriculture [26], economics [11], etc., where they have eliminated human errors and created automation in various fields. The use of machine vision and deep learning is one of the best ways to diagnose tumors and infections caused by various diseases. It has been applied to various medical images, such as segmentation of lesions in the brain and skin [20], applications to breast lesions and pulmonary nodules [6], sperm detection and tracking [28], and state-of-the-art bone suppression in X-ray images [41]. Moreover, diagnosing disease with computer vision and deep learning can be considerably more accurate than radiologists' diagnosis: for example, in [15], the accuracy of the method used is about 90%, while the accuracy of the radiologists' diagnosis is approximately 70%. Due to this effectiveness in medical imaging, especially on CT scan and X-ray images, machine vision and deep learning have been used to diagnose COVID-19.

Convolutional neural networks brought great improvements to deep learning and computer vision tasks. Since the advent of convolutional layers, models such as ResNet [12], DenseNet [14], EfficientNet [34], and Xception [7] have been introduced and have shown reliable results. ResNet models [12] introduced residual layers, which help reduce the corruption of data as it feeds through the layers. Xception is the architecture that introduced separable convolutional layers, each constructed of a depthwise convolutional layer followed by a pointwise convolutional layer. Separable convolutional layers are based on the idea that the kernel of a convolution layer does not have to be applied to every channel of the input data, and so they separate the spatial and channel operations. This reduces the number of weights and thus allowed the authors to develop models with more layers. Another model that inspired our work is the feature pyramid network [18]. The feature pyramid network (FPN) was developed for object detection tasks and helps models detect objects at the various scales that exist in an image. As COVID-19 infections also exist at various scales, especially small ones, we designed a new classification architecture inspired by the FPN to improve classification accuracy on CT scan images.

In this paper, we introduce a fully automated method for detecting COVID-19 cases from the output files (images) of the lung HRCT scan device. This system does not need any medical expert for configuration: it takes all the CT scans of a patient and determines whether that patient is infected with COVID-19. We also introduce and share a new dataset, called COVID-CTset, that contains 15589 COVID-19 images from 95 patients and 48260 normal images from 282 persons. At the first stage of our work, we apply an image processing algorithm to select those CT scan images in which the inside of the lung and the possible infections are observable. In this way, we speed up the process, because the network does not have to analyze all the images, and we improve the accuracy by giving the network only the proper images. After that, we train and test three deep convolutional neural networks for classifying the selected images.
One of them is our proposed enhanced convolutional model, which is designed to improve classification accuracy. At the final stage, we evaluate our fully automated system in two different ways: the first is single image classification, evaluated on more than 7996 images, and the second is the fully automated system, evaluated on almost 245 patients and 41892 images. We also investigate the infected areas of the images classified as COVID-19 by segmenting the infections using a feature visualization algorithm. The general view of our work in this paper is represented in fig. 1.

In [23, 36], existing deep learning networks were used to identify COVID-19 on chest X-ray images with high accuracy. In [27], by concatenating the Xception and ResNet50V2 networks and using chest X-ray images, the authors were able to distinguish normal patients, pneumonia, and COVID-19, with an overall accuracy of 99.5% and 91.4% accuracy in the COVID-19 class, evaluated on 11302 images. In [17], 3322 eligible CT scans were selected from 3506 CT scans of different persons and used to train and evaluate the proposed network, COVNet. In another study, CT scans of 120 people (2482 CT scans) were collected, half of which (60 people) were COVID-19 cases, and classified by different networks, the best accuracy being 97.38% [32]. In [15], CT scans of 287 patients were collected, covering three classes (COVID-19; community-acquired pneumonia (CAP) or other viral lung diseases; and other diseases or healthy), and classified with 90% accuracy using an algorithm called CovidCTNet. In [37], CT scans of 5372 patients were collected in several hospitals in China and used to train and evaluate a deep learning network that classifies the data into three classes. In [35], CT scans were used to segment infections caused by the new coronavirus. The papers [24, 25, 5, 2, 29] also worked on classifying CT scan and X-ray images using machine learning techniques and deep convolutional models.

The rest of the paper is organized as follows: In section 2, we describe the dataset, neural networks, and proposed algorithms. In section 3, the experimental results are presented, and in section 4, the paper is discussed. In section 5, we conclude our paper, and at the end, the links to the shared code and dataset are provided.

COVID-CTset is our introduced dataset. It was gathered at Negin radiology, located in Sari, Iran, from March 5th to April 23rd, 2020. This medical center uses a SOMATOM Scope scanner and syngo CT VC30-easyIQ software for capturing and visualizing the lung HRCT radiology images of the patients. The exported radiology images were in 16-bit grayscale DICOM format with 512*512 pixels resolution. As the patients' information was accessible via the DICOM files, we converted them to TIFF format, which holds the same 16-bit grayscale data but does not include the patients' private information; this format is also easier to use with standard programming libraries. At the link addressed at the end of this paper, the general information (age, sex, time of radiology imaging) for each patient is available. One of our novelties is using the 16-bit data format instead of converting it to 8-bit data, which helps improve the method's results.
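As an illustration of this conversion step, the following is a minimal sketch using the pydicom and tifffile libraries (the choice of libraries is our assumption; the paper does not state which tools were used). It also includes the per-image float scaling described below, which is used for visualization only:

```python
import pydicom
import tifffile

def dicom_to_tiff(dicom_path, tiff_path):
    """Convert one DICOM slice to an anonymized 16-bit grayscale TIFF."""
    ds = pydicom.dcmread(dicom_path)
    pixels = ds.pixel_array.astype("uint16")  # keep the full 16-bit range
    tifffile.imwrite(tiff_path, pixels)       # pixel data only, no patient metadata

def tiff_for_display(tiff_path):
    """Scale a 16-bit TIFF to 32-bit floats in [0, 1] for viewing only."""
    img = tifffile.imread(tiff_path).astype("float32")
    return img / img.max()
```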
Converting the DICOM files to 8-bit data may lose some information, especially when only a few infections exist in the image, which are hard to detect even for clinical experts. The lost information may be the differences between images or between the pixel values within one image. The pixel values of the images range from 0 to almost 5000, and the maximum pixel values of the images are considerably different; so scaling them by one consistent value, or scaling each image by its own maximum pixel value, can cause the mentioned problems and reduce the network accuracy. Therefore, each image of COVID-CTset is a 16-bit grayscale TIFF image.

In some stages of our work, we used the help of clinical experts, under the supervision of the third author (a radiology specialist), to separate those images in which the COVID-19 infections are clear. To make these images visible on standard monitors, we converted them to float by dividing each image's pixel values by the maximum pixel value of that image. This way, the output images had 32-bit float pixel values that could be visualized on standard monitors, and the quality of the images was good enough for analysis. Some of the images of our dataset are presented in fig. 2. COVID-CTset is made of 15589 images that belong to 95 patients infected with COVID-19 and 48260 images of 282 normal people (table 1). Each patient has three folders, each of which includes the CT scans captured by the imaging device with a different slice thickness. The distribution of the patients in COVID-CTset is shown in fig. 3.

The lung HRCT scan device takes a sequence of consecutive images (which can be regarded as a video or consecutive frames) from the chest of the patient being checked for COVID-19 infection. In an image sequence, the infection points may appear in some images and not in others. The clinical expert analyzes these consecutive images and, if infections are found on some of them, marks the patient as infected. Many previous methods selected one image from each patient's lung HRCT images and used it for training and validation. Here, we decided to make the patient lung analysis fully automated. Suppose we have a neural network trained to classify COVID-19 cases on selected data in which the inside of the lung is obviously visible. If we test that network on every image of an image sequence that belongs to a patient, the network may fail, because at the beginning and the end of each CT scan sequence the lung is closed, as depicted in fig. 4. Since the network has not seen such cases during training, it may produce wrong detections. To solve this, we could separate the dataset into three classes: infection-visible, no-infection, and lung-closed. Although this removes the problem, dividing the dataset into three classes has other costs, such as the time spent making new labels and changing the way the network is evaluated; it also increases the processing time because the network must see all the images of a patient's CT scans. Instead, we propose a technique to discard the images in which the inside of the lung is not visible. Doing so also reduces the running time considerably because, instead of seeing all the images, the network only sees the selected ones.

The images of our dataset are 16-bit grayscale images. The maximum pixel value over all the images is almost equal to 5000, but this maximum differs greatly between images. To discard some images from a patient's sequence and select the rest, we count, in each image, the pixels of an indicated region whose values are less than 300; we call these dark pixels. The value 300 was chosen based on our experiments. After counting the dark pixels of every image in the sequence, we divide the difference between the maximum count and the minimum count by 1.5; this number is our threshold. For example, if one CT scan image of a patient has 3030 pixels with values less than 300 in the region and another has 30, the threshold becomes (3030 - 30) / 1.5 = 2000. An image with fewer dark pixels in the region than the threshold is one in which the lung is almost closed, and an image with more dark pixels is one in which the inside of the lung is visible. We calculate the threshold per sequence (the CT scans of one patient) because the imaging scale does not differ within a sequence. We then discard the images whose dark-pixel counts are below the threshold, and the images with more dark pixels than the threshold are given to the network for classification. In fig. 8, the image sequence of one patient is depicted, where it can be observed which images the algorithm discards and which are selected.
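A minimal sketch of this selection algorithm follows. The exact region of interest examined inside each slice is not fully specified here, so the region_mask argument is a placeholder assumption:

```python
import numpy as np

DARK_VALUE = 300  # 16-bit pixel values below this count as "dark"

def select_open_lung_images(images, region_mask):
    """Keep the slices of one patient's CT sequence in which the lung is open.

    images: list of 16-bit arrays from a single sequence.
    region_mask: boolean mask of the examined region (placeholder assumption).
    """
    dark_counts = np.array(
        [np.count_nonzero(img[region_mask] < DARK_VALUE) for img in images])
    # Per-sequence threshold: (max count - min count) / 1.5, e.g.
    # (3030 - 30) / 1.5 = 2000 for the example in the text.
    threshold = (dark_counts.max() - dark_counts.min()) / 1.5
    return [img for img, n in zip(images, dark_counts) if n > threshold]
```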
Machine vision has been a superior method for advancing many fields, such as agriculture [26], biomedical engineering [28, 22], and industry [16]. Implementing machine vision methods with deep neural networks, especially convolutional layers, has resulted in extremely accurate performance. In this research, we used deep convolutional networks to classify the CT scan images selected by the CT scan selection algorithm as normal or COVID-19. We trained, evaluated, and compared three different deep convolutional networks: Xception [7], ResNet50V2 [13], and our proposed model.

Xception introduced new inception modules constructed of depthwise separable convolution layers (a depthwise convolutional layer followed by a pointwise convolutional layer) and achieved one of the best results on the ImageNet dataset [9]. ResNet50V2 is an upgraded version of ResNet50 [12]; the authors changed the connections and skip-connections between blocks and increased the network's performance on the ImageNet dataset.

The feature pyramid network (FPN) was introduced in [18] and was utilized in RetinaNet [19] to enhance object detection. FPN helps a network learn and detect objects at the different scales that exist in an image. Some previous methods worked by giving an image pyramid (different scales of the input image) to the network as input; this indeed improves the feature extraction process, but it also increases the processing time and is not efficient.

In fig. 9, the architecture of the proposed network can be observed. We used ResNet50V2 as the backbone network and compared our model with ResNet50V2 and Xception.
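A minimal Keras sketch of this architecture is shown below. The choice of backbone tap layers, the number of pyramid channels, and the dropout rate are illustrative assumptions; only the overall wiring (a concatenation-based FPN whose five levels each feed a small ReLU classifier, merged for the final softmax) follows the description in the text:

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def resize_like(src, ref):
    # Resize `src` to the spatial size of `ref`; robust to uneven strides.
    return layers.Lambda(
        lambda t: tf.image.resize(t[0], tf.shape(t[1])[1:3]))([src, ref])

def build_fpn_classifier(input_shape=(512, 512, 3), fpn_channels=256):
    # weights=None keeps the sketch self-contained; the paper used
    # ImageNet transfer learning.
    backbone = tf.keras.applications.ResNet50V2(
        include_top=False, weights=None, input_shape=input_shape)
    # Taps at four scales of the backbone (layer names from the Keras
    # ResNet50V2 implementation; the exact stage choice is an assumption).
    c2 = backbone.get_layer("pool1_pool").output
    c3 = backbone.get_layer("conv2_block3_out").output
    c4 = backbone.get_layer("conv3_block4_out").output
    c5 = backbone.output
    c6 = layers.MaxPooling2D()(c5)  # extra, coarser fifth level

    # Top-down pathway; lateral and upsampled maps are concatenated
    # rather than added, as described in the text.
    pyramid = [layers.Conv2D(fpn_channels, 1)(c6)]
    for c in (c5, c4, c3, c2):
        lateral = layers.Conv2D(fpn_channels, 1)(c)
        m = layers.Concatenate()([lateral, resize_like(pyramid[-1], lateral)])
        pyramid.append(layers.Conv2D(fpn_channels, 3, padding="same")(m))

    # One small ReLU classifier per pyramid level (softmax is deliberately
    # postponed to the final layer).
    branches = []
    for f in pyramid:
        x = layers.GlobalAveragePooling2D()(f)
        x = layers.Dropout(0.5)(x)  # dropout rate is an assumption
        branches.append(layers.Dense(2, activation="relu")(x))

    merged = layers.Concatenate()(branches)  # five 2-neuron outputs -> 10
    out = layers.Dense(2, activation="softmax")(merged)
    return Model(backbone.input, out)
```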
Other researchers can also use other models as the backbone. The FPN we used is like the original version of FPN [18], with the difference that we used concatenation layers instead of addition layers inside the feature pyramid, based on our experience. The FPN extracts five final feature maps, each presenting the input image features at a different scale. After that, we applied dropout layers (to avoid overfitting), followed by the first classification layers. Note that we did not use the softmax function for the first classification layers, because we wanted to feed them to the final classification layer, and as the softmax function computes each output neuron relative to the other output neurons, it is not suitable at this point; the ReLU activation function is more appropriate. At the end of the architecture, we concatenated the five classification layers (each consisting of two neurons) into a ten-neuron dense layer and connected this layer to the final classification layer, which applies the softmax function. Through this procedure, the network utilizes different classification results based on features at various scales and thus becomes able to classify the images better. Researchers can use our proposed model for classification in other cases and datasets to improve their results.

Figure 9: Our model, which uses ResNet50V2 as the backbone and applies the feature pyramid network and the designed layers for classification.

Our dataset is constructed of two sections. The first section is the raw data for each person, described in section 2.1. The second section includes the training, validation, and testing data. We converted the images to 32-bit float TIFF files so that we could visualize them on standard monitors. Then we took the help of the clinical experts, under the supervision of the third author (a radiology specialist) at the Negin radiology center, to select the infected patients' images in which the infections were clear. We used these data for training and testing the networks. To report more realistic and accurate results, we separated the dataset into five folds for training, validation, and testing. Almost 20 percent of the patients with COVID-19 were allocated for testing in each fold; the rest were considered for training, and part of the testing data was used for validating the network after each epoch during training. Because the number of normal patients and images was larger than that of the infected ones, we chose a number of normal images approximately equal to the number of COVID-19 images to make the training set balanced; therefore, the number of normal images considered for testing was higher than for training. The details of the training and testing data are reported in table 2.

From table 2, the question may arise as to why the number of normal persons in the training set is less than the number of COVID-19 patients. The reason is that from each image sequence of a patient with COVID-19, we allocated only the images with observable infections for training and testing, so the number of images per COVID-19 patient is less than the number of images per normal person. We selected enough normal persons that the number of normal images became almost equal to the number of COVID-19 images. This number was enough for the network to learn to classify the images correctly, and the achieved results were high.
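As a sketch of one way to reproduce such a patient-level split (so that no person's images leak between the training and testing sides of a fold), scikit-learn's GroupKFold can be used; the helper below and its argument names are our own illustration, not the paper's code:

```python
from sklearn.model_selection import GroupKFold

def patient_level_folds(image_paths, labels, patient_ids, n_folds=5):
    """Yield (train, test) index arrays; each patient lands on only one side."""
    gkf = GroupKFold(n_splits=n_folds)
    # groups=patient_ids keeps all images of one person inside a single fold
    return list(gkf.split(image_paths, labels, groups=patient_ids))
```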
As we had more normal images left, we selected a large number of normal data for testing so that the actual performance of our trained networks would be clearer. We trained Xception [7], ResNet50V2 [13], and our model on our dataset for up to 50 epochs. For training the networks, we used transfer learning from the ImageNet [9] pre-trained weights to make the networks converge faster. We chose the Nadam optimizer and the categorical cross-entropy loss function. We also used data augmentation methods to make learning more efficient and stop the network from overfitting. It is noteworthy that we did not resize the images for training or testing, so as not to lose the small details of the infections. Our training parameters are listed in table 3; as is evident there, we used the same parameters for all the networks.

We report the results in two sections. The image classification results section includes the results of the trained networks on the test set images. The patient condition identification section reports the results of the automated system for identifying each person as normal or COVID-19. We implemented our algorithms and networks on Google Colaboratory notebooks, which allocated to us a Tesla P100 GPU, a 2.00GHz Intel Xeon CPU, and 12GB RAM on Linux. We used the Keras library [8] on the TensorFlow backend [1] for developing and running the deep networks. We trained each network on the training set with the parameters explained in section 2.4 and monitored the validation accuracy after each epoch to find the best-converged version of each network.

We evaluated the trained networks using four different metrics for each of the classes, and the overall accuracy over all the classes, as follows:

Accuracy (for each class) = (TP + TN) / (TP + FP + TN + FN)   (1)

Accuracy (for all the classes) = (Number of correctly classified images) / (Number of all images)   (5)

In these equations, for each class, TP (true positives) is the number of correctly classified images of that class, FP (false positives) is the number of images wrongly classified as that class, FN (false negatives) is the number of images of that class detected as another class, and TN (true negatives) is the number of images that do not belong to that class and have not been classified as that class. The results for each fold are reported in table 9, and the average results over the five folds are shown as confusion matrices in fig. 10. Fig. 11 shows the training and validation accuracy over 20 epochs of the training procedure; our model converges faster and to higher accuracy, which shows how it enhances the base model.

In this section, we present the main results of our work. CT scan data is not like many other kinds of data, such as X-ray images, which can be evaluated from one single image. CT scans are sequences of consecutive images (like videos), so for medical diagnosis, the system or the expert must analyze more than one image. Based on this, developers proposing an automatic diagnosis system must evaluate it differently than single image classification. To the best of our knowledge, we are the first to evaluate a model in this way. When our proposed fully automated system checks a patient for COVID-19 infection, it takes all the images of the patient's CT scans as input and processes them with the proposed CT scan selection algorithm to select those in which the lung is visible. The chosen images are then fed to the deep neural network to be classified as COVID-19 or normal. To indicate the condition of a patient, we must set a threshold: if the number of a patient's CT scan images identified as COVID-19 is more than the threshold, that patient is considered infected; otherwise, the condition is normal. This threshold value depends on the precision of the model. For trained models with high accuracy, the threshold can be set to zero, meaning that if at least one CT scan image of a patient (among the CT scans filtered by the selection algorithm) is detected as COVID-19, that patient is considered infected.
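A minimal sketch of this patient-level decision rule (the names are ours, for illustration):

```python
import numpy as np

def patient_condition(covid_flags, threshold_ratio=0.0):
    """Decide one patient's condition from the per-image network outputs.

    covid_flags: boolean array, True where a selected CT image was
    classified as COVID-19.  threshold_ratio=0.0 reproduces "threshold 1"
    (one positive image flags the patient); 0.1 reproduces "threshold 2"
    (at least one-tenth of the selected images must be positive).
    """
    flags = np.asarray(covid_flags)
    required = max(1.0, threshold_ratio * flags.size)
    return "COVID-19" if flags.sum() >= required else "normal"
```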
The number of data used for evaluating our system in this section is listed in table 4. Table 5 shows an interesting result. In this table, threshold 1 is zero and threshold 2 is one-tenth of the filtered CT scan images. That is, after filtering the CT scan images with the selection algorithm, under threshold 1 a patient is reported as infected if at least one CT scan image is identified as COVID-19; under threshold 2, the system reports a patient as infected with COVID-19 if at least one-tenth of the filtered CT scan images are detected as COVID-19. Under these circumstances, table 5 shows that our model performs very well at threshold 1, which demonstrates its high performance, while the other networks do not perform well at threshold 1, indicating that their accuracy is lower than our model's. This remarkable result is caused by the feature pyramid network, which gives a high ability to detect the infections correctly and not to mark false points as infections. Users can select this threshold based on their model's accuracy. In table 6, we used the second threshold (equal to one-tenth) for reporting the full results, but we recommend the first threshold for accurate models like ours. The results of patient condition identification for each of the trained networks in each fold are available in table 6. The speed of the fully automated system is reported in table 7, and the training and inference time of each model is available in table 8.

In this section, we use the Grad-CAM algorithm [30] to visualize the extracted features of the network, in order to determine the areas of infection and investigate the network's correct performance. By looking at fig. 12 and comparing the normal and COVID-19 images, it is visible that the network classifies the images based on the infected areas. In the COVID-19 images, the highlighted features are around the infection areas, while in the normal images, as the network does not see any infections, the highlighted features are at the center, showing that no infections have been found. Therefore, the results can be trusted for medical diagnosis, and using the Grad-CAM algorithm can help the medical expert distinguish the CT scan images better and find the infections.
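For reference, a minimal Grad-CAM sketch in TensorFlow/Keras is shown below; the conv_layer_name argument (which late convolutional layer to inspect) is left as a user choice and is not taken from the paper:

```python
import numpy as np
import tensorflow as tf

def grad_cam(model, image, conv_layer_name, class_index=1):
    """Return a [0, 1] heatmap of where `model` looks for `class_index`."""
    grad_model = tf.keras.models.Model(
        model.inputs,
        [model.get_layer(conv_layer_name).output, model.output])
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[np.newaxis, ...])
        score = preds[:, class_index]            # e.g. the COVID-19 class
    grads = tape.gradient(score, conv_out)       # sensitivity of the score
    weights = tf.reduce_mean(grads, axis=(0, 1, 2))  # one weight per channel
    heatmap = tf.nn.relu(tf.reduce_sum(conv_out[0] * weights, axis=-1))
    return (heatmap / (tf.reduce_max(heatmap) + 1e-8)).numpy()
```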
By referring to table 5, it can be seen that our model performs patient condition identification more precisely than the other networks, especially when the COVID-19 threshold is very low. This means our model has learned the COVID-19 infections very well, so its false COVID-19 detections are much fewer than those of the plain models. Fig. 12 also presents some of the classified images processed by the Grad-CAM algorithm to visualize the sensitive extracted features; based on this figure, the system classifies the images by looking at the correct points, and the results are trustworthy. From tables 7 and 8, it can be understood that our model runs almost as fast as the other models, and the processing speed is good. As one CT scan image sequence has fewer than 100 images in most cases (when the imaging thickness is between 2 and 6 mm), this system can process it in roughly 4 to 6 seconds. It can also be seen from table 7 that some CT scan sequences with similar sizes differ in speed more than expected; the reason is that the CT scan selection algorithm may select a different number of proper CT scan images from each sequence, so the processing speed changes more than expected.

Another point worth noting is the reason the COVID-19 class precision is not as high as the accuracy or sensitivity: the test dataset is unbalanced. We had around 450 COVID-19 images and 7800 normal images for testing the network performance. On average, our model wrongly classified 102 of the 7852 normal images as COVID-19, which is a good value; but because the number of COVID-19 images in the test set is much lower than the number of normal images, the COVID-19 precision is around 81 percent. So this precision value does not mean the network performs poorly. What makes this work reliable is that it is designed and tested in real circumstances: the input data is treated as an image sequence or video (not just a single image), the system is evaluated on a large dataset, and it shows high accuracy, low false positives, and good inference speed. We hope that our shared dataset and code can help other researchers improve AI models and use them for advanced medical diagnosis.

In this paper, we have proposed a fully automated system for COVID-19 detection from lung HRCT scans. We also introduced a new dataset containing 15589 images from 95 patients with COVID-19 and 48260 images of 282 normal persons. First, we proposed an image processing algorithm to filter the proper images of the patients' CT scans, those that show the inside of the lung well; this algorithm helps increase the network's accuracy and speed. At the next stage, we introduced a novel deep convolutional neural network for improving classification; it can be used in many classification problems to improve accuracy, especially for images containing important objects at small scales. We trained three different deep convolutional networks for classifying the CT scan images as COVID-19 or normal. Our model, which utilizes ResNet50V2, a modified feature pyramid network, and the designed architecture, achieved the best results. After training, we used the trained networks to run the fully automated COVID-19 identifier system. We evaluated our system in two different ways: one on more than 7996 images, and the other on almost 245 patients and 41892 images with different thicknesses. For single image classification (the first way), our model showed 98.49% overall accuracy. At the patient condition identification stage (the second way), our model obtained the best results and correctly identified 234 of the 245 patients.
We also used the Grad-CAM algorithm to highlight the infection areas of the CT scan images and investigate the classification correctness. Based on the obtained results, it can be understood that the proposed methods can improve COVID-19 detection and run fast enough for implementation in medical centers. We have made our data available for public use at this address: (https://github.com/mr7495/COVID-CTset). The dataset is available in two parts: one is the raw data, presented in three folders for each patient; the other is the training, validation, and testing data of each fold. We hope that this dataset will be utilized for improving COVID-19 monitoring and detection in future research. All the code used for data analysis, training, validation, and testing, as well as the trained networks, is shared at (https://github.com/mr7495/COVID-CT-Code).

References
[1] TensorFlow: Large-scale machine learning on heterogeneous systems.
[2] Recognition of corona virus disease (COVID-19) using deep learning network.
[3] ACR recommendations for the use of chest radiography and computed tomography (CT) for suspected COVID-19 infection | American College of Radiology.
[4] Correlation of chest CT and RT-PCR testing in coronavirus disease 2019 (COVID-19) in China: a report of 1014 cases.
[5] Coronavirus (COVID-19) classification using CT images by machine learning methods.
[6] Computer-aided diagnosis with deep learning architecture: applications to breast lesions in US images and pulmonary nodules in CT scans.
[7] Xception: Deep learning with depthwise separable convolutions.
[8] Keras.
[9] ImageNet: A large-scale hierarchical image database.
[10] Reverse transcription PCR: Principle, procedure, application, advantages and disadvantages.
[11] Deep learning in intermediate microeconomics: Using scaffolding assignments to teach theory and promote transfer.
[12] Deep residual learning for image recognition.
[13] Identity mappings in deep residual networks.
[14] Densely connected convolutional networks.
[15] An open-source deep learning approach to identify COVID-19 using CT image.
[16] Deep learning for smart industry: Efficient manufacture inspection system with fog computing.
[17] Artificial intelligence distinguishes COVID-19 from community acquired pneumonia on chest CT.
[18] Feature pyramid networks for object detection.
[19] Focal loss for dense object detection.
[20] A survey on deep learning in medical image analysis.
[21] Huntington's disease - understanding the stages of symptoms! - by Ms. Sadhana Ghaisas | Lybrate.
[22] Anatomically consistent CNN-based segmentation of organs-at-risk in cranial radiotherapy.
[23] Automatic detection of coronavirus disease (COVID-19) using X-ray images and deep convolutional neural networks.
[24] Coronavirus (COVID-19) classification using deep features fusion and ranking technique.
[25] Classification of coronavirus (COVID-19) from X-ray and CT images using shrunken features.
[26] Introduction of a new dataset and method for detecting and counting the pistachios based on deep learning.
[27] A modified deep convolutional neural network for detecting COVID-19 and pneumonia from chest X-ray images based on the concatenation of Xception and ResNet50V2.
[28] Sperm detection and tracking in phase-contrast microscopy image sequences using deep learning and modified CSR-DCF.
[29] EMCNet: Automated COVID-19 diagnosis from X-ray images using convolutional neural network and ensemble of machine learning classifiers.
[30] Grad-CAM: Visual explanations from deep networks via gradient-based localization.
[31] Deep learning in medical image analysis.
[32] SARS-CoV-2 CT-scan dataset: A large dataset of real patients CT scans for SARS-CoV-2 identification. medRxiv.
[33] Emerging 2019 novel coronavirus (2019-nCoV) pneumonia.
[34] EfficientNet: Rethinking model scaling for convolutional neural networks.
[35] Deep learning models for COVID-19 infected area segmentation in CT images. medRxiv.
[36] COVID-Net: A tailored deep convolutional neural network design for detection of COVID-19 cases from chest X-ray images.
[37] A fully automatic deep learning system for COVID-19 diagnostic and prognostic analysis.
[38] WHO. Q&A on coronaviruses.
[39] COVID-19 testing - Wikipedia.
[40] Chest CT for typical 2019-nCoV pneumonia: relationship to negative RT-PCR testing.
[41] Cascade of multi-scale convolutional neural networks for bone suppression of chest radiographs in gradient domain.

We would like to thank the Negin radiology experts who helped us in providing the dataset. We also thank Google for providing free and powerful GPUs on Colab servers and free space on Google Drive.