key: cord-0925279-5caa0arb authors: Rahimzadeh, M.; Attar, A.; Sakhaei, S. M. title: A Fully Automated Deep Learning-based Network For Detecting COVID-19 from a New And Large Lung CT Scan Dataset date: 2020-06-12 journal: nan DOI: 10.1101/2020.06.08.20121541 sha: c1c2b245c03a9bb2f22846fd2072bcc03b088075 doc_id: 925279 cord_uid: 5caa0arb

COVID-19 is a severe global problem that has crippled many industries and killed many people around the world. One of the primary ways to decrease the casualties is to identify infected persons at the proper time. AI can play a significant role here by monitoring and detecting infected persons at an early stage, and it can thus help many organizations. In this paper, we propose a fully automated method for detecting COVID-19 from a patient's CT scans without needing a clinical technician. We introduce a new dataset that contains 48260 CT scan images from 282 normal persons and 15589 images from 95 patients with COVID-19 infection. Our proposed network takes all the CT scan images in a patient's sequence as input and determines whether the patient is infected with COVID-19. In the first stage, the network runs an image processing algorithm to discard the CT images in which the inside of the lung is not properly visible. This reduces the number of images that must be classified as normal or COVID-19, and therefore the processing time; it also makes the deep network in the next stage analyze only the proper images, which reduces false detections. In the next stage, we propose a modified version of ResNet50V2, enhanced by a feature pyramid network, for classifying the selected CT images as COVID-19 or normal. If a sufficient number of a patient's chosen CT scan images are identified as COVID-19, the network considers that patient infected with the disease. The ResNet50V2 with feature pyramid network achieved 98.49% accuracy on more than 7996 validation images and correctly identified, on average, 237 of 245 patients.

humans [27]. The severe outbreak of the new coronavirus spread rapidly throughout China and then to other countries. The virus disrupted many political, economic, and sporting events and affected the lives of many people worldwide. The most important feature of the new coronavirus is its fast and wide spreading capability. The virus is mainly transmitted directly from people with the disease to others; it is also transmitted indirectly through the surfaces and the air of the environments that infected people have come into contact with [30]. As a result, correctly identifying people with the disease and quarantining them plays a significant role in preventing its spread. The new coronavirus causes viral pneumonia in the lungs, which results in severe acute respiratory syndrome, and it causes a variety of changes in the sufferer. The most common symptoms of the new coronavirus are fever, dry cough, and tiredness [30], and the symptoms vary from person to person [18]. Other symptoms, such as loss of the sense of smell and taste, headache, and sore throat, may occur in some patients, while severe symptoms that indicate further progression of COVID-19 include shortness of breath, chest pain, and loss of the ability to move or speak [30]. There are several methods for the definitive diagnosis of COVID-19, including reverse transcription polymerase chain reaction (RT-PCR), isothermal nucleic acid amplification tests, antibody tests, serology tests, and medical imaging [31].
RT-PCR is the primary method for diagnosing COVID-19 and many other viral diseases. However, the method is restricted for some assays, as higher expertise and experimentation are required to develop new assays [8]. Besides, the lack of diagnostic kits in most contaminated areas around the world is leading researchers to come up with new and easier ways to diagnose the disease. Because medical imaging devices are available in most treatment centers, researchers analyze CT scans and X-rays to detect COVID-19. In most patients with COVID-19, infections appear in the lungs and can help diagnose the disease; analysis of CT scans of patients with COVID-19 showed the pneumonia caused by the new coronavirus [27]. With radiologists' approval of the ability of CT scans and X-rays to detect COVID-19, various methods have been proposed to use these images. In most patients, X-rays and CT scans of the lungs taken at least four days after the onset of COVID-19 symptoms show infections that confirm the presence of the new coronavirus in the body [3]. Although medical imaging is not recommended for definitive diagnosis, it can be used for early COVID-19 diagnosis due to the limitations of the other methods [2]. In [32, 3], some patients with early-onset COVID-19 symptoms were found to have new coronavirus infections on their CT scans while their RT-PCR test results were negative; both tests were repeated several days later, and RT-PCR confirmed the CT scans' diagnostic results. So although medical imaging is not recommended for the definitive diagnosis of COVID-19, it can be used as a primary diagnostic method to quarantine the suspicious person and prevent the virus from being transmitted to others in the early stages of the disease.

The advantage of using medical imaging is the ability to visualize viral infections with machine vision. Machine vision comprises many different methods, one of the best of which is deep learning [10]. Machine vision and deep learning have many applications in medicine [25], agriculture [21], economics [9], etc., which have eliminated human errors and created automation in various fields. They are among the best ways to diagnose tumors and infections caused by various diseases and have been applied to many medical imaging tasks, such as segmentation of lesions in the brain and skin [17], computer-aided diagnosis of breast lesions and pulmonary nodules [4], sperm detection and tracking [23], and state-of-the-art bone suppression in X-ray images [33]. In this work, after selecting the proper CT scan images, we will train and test three deep convolutional neural networks for classifying them; one of the networks is our proposed enhanced version of ResNet50V2 with a feature pyramid network. At the final stage, after the deep network is ready, we evaluate our fully automated system on more than 230 patients and 7996 images. The general view of our work in this paper is represented in fig. 1.

In [20, 24], existing deep learning networks were used to identify COVID-19 in chest X-ray images, and the most accurate network was reported. In [22], by concatenating the Xception and ResNet50V2 networks and using chest X-ray images, the authors diagnosed normal patients, pneumonia, and COVID-19 with an overall accuracy of 99.5%, and 91.4% accuracy in the COVID-19 class, evaluated on 11302 images.
In [14], 3322 eligible CT scans were selected from 3506 CT scans of different persons and used to train and evaluate the proposed network, COVNet. In another study, CT scans of 120 people (2482 CT scan images), half of whom (60 people) had COVID-19, were collected and classified by different networks, with the best accuracy equal to 97.38% [26]. In [12], CT scans of 287 patients were collected, covering three classes: COVID-19; community-acquired pneumonia (CAP) or other viral lung diseases; and other diseases or healthy. The proposed algorithm, called CovidCTNet, classified these data with 90% accuracy. In [29], CT scans of 5372 patients were collected in several hospitals in China and used to train and evaluate the presented deep learning network for classifying the data into three classes. In [28], CT scans were used to segment the infections caused by the new coronavirus.

The rest of the paper is organized as follows: in section 2, we describe the dataset, the neural networks, and the proposed algorithm; in section 3, the experimental results are presented; in section 4, the paper is discussed; in section 5, we conclude the paper; and at the end, the links to the shared code and dataset are provided.

COVID-CTset is our introduced dataset. It was gathered from the Negin medical center, located in Sari, Iran. This medical center uses a SOMATOM Scope scanner and syngo CT VC30-easyIQ software for capturing and visualizing the lung HRCT radiology images of its patients. The exported radiology images were in 16-bit grayscale DICOM format with 512×512 pixel resolution. As the patients' information was accessible via the DICOM files, we converted them to TIFF format, which holds the same 16-bit grayscale data but does not contain the patients' private information. One of our novelties is using the 16-bit data format instead of converting it to 8-bit data, which helps improve the method's results. Converting the DICOM files to 8-bit data may cause a loss of information, especially when the image contains only a few infection regions that are hard to detect even for clinical experts; the lost data may be the differences between images or between the pixel values of the same image. The pixel values of the images range from 0 to almost 5000, and the maximum pixel value differs considerably between images, so scaling all images by one constant value, or scaling each image by its own maximum pixel value, can cause the mentioned problems and reduce the network's accuracy. Hence, each image of COVID-CTset is a 16-bit grayscale TIFF image. In some stages of our work, we used the help of clinical experts, under the supervision of the third author (a radiology specialist), to separate the images in which the COVID-19 infections are clearly visible. To make these images visible on regular monitors, we converted them to float by dividing each image's pixel values by the maximum pixel value of that image. This way, the output images had 32-bit floating-point pixel values that could be visualized by regular monitors, and the quality of the images was good enough for analysis.
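As a concrete illustration of this preprocessing, the sketch below converts one 16-bit DICOM slice to an anonymized 16-bit TIFF and separately normalizes a copy for display. This is a minimal sketch under our own assumptions, not the authors' released conversion script; the function names and paths are illustrative.

```python
# Minimal sketch of the described preprocessing (illustrative, not the
# authors' released code): keep the full 16-bit depth for training data,
# and normalize only a display copy by the per-image maximum.
import numpy as np
import pydicom
import tifffile

def dicom_to_tiff16(dicom_path, tiff_path):
    """Save the raw 16-bit pixel data as TIFF; dropping the DICOM
    metadata also drops the patient's private information."""
    ds = pydicom.dcmread(dicom_path)
    tifffile.imwrite(tiff_path, ds.pixel_array.astype(np.uint16))

def to_displayable(image):
    """Divide by the per-image maximum so regular monitors can show the
    image; used for visualization only, not for training."""
    return image.astype(np.float32) / image.max()
```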
Some of the images of our dataset are presented in fig. 2.

Table 1: COVID-CTset data distribution

The lung HRCT scan device takes a sequence of consecutive images (we can call it a video or consecutive frames) from the chest of the patient being checked for COVID-19 infection. In an image sequence, the infection points may appear in some images and not in others. The clinical expert analyzes these consecutive images and, on finding the infections in some of them, indicates the patient as infected. Many previous methods selected one image from each patient's lung HRCT sequence and then used it for training and validation. Here, we decided to make the patient lung analysis fully automated. Suppose we have a neural network trained to classify COVID-19 cases on selected data in which the inside of the lung is clearly visible. If we test that network on every image of a sequence that belongs to a patient, the network may fail, because at the beginning and the end of each CT scan sequence the lung is closed, as depicted in fig. 4. Since the network has not seen such cases during training, it may produce wrong detections and thus not work well. To solve this, we could separate the dataset into three classes: infection-visible, no-infection, and lung-closed. Although this removes the problem, dividing the dataset into three classes has other costs, such as spending time making new labels and changing the network validation procedure; it also increases the processing time, because the network would have to see all the images of a patient's CT scans. Instead, we propose a different technique to discard the images in which the inside of the lung is not visible. Doing this also reduces the running time considerably: in the previous approach, the network had to see all the images, whereas now it sees only some selected images. Fig. 7 shows the selected region in some different images.

The images of our dataset are 16-bit grayscale images. The maximum pixel value over all the images is almost 5000, and this maximum differs greatly between images. In the next step, to discard some images of a patient's sequence and select the rest, we count the pixels of each image in the indicated region whose value is less than 300; we call these dark pixels. The value 300 was chosen based on our experiments. For all the images in the sequence, we count the number of dark pixels in the region; then we divide the difference between the maximum and the minimum counted numbers by 1.5, and this value is our threshold. For example, if one image of a patient's CT scan sequence has 3030 pixels with values below 300 in the region, and another has 30 such pixels, the threshold becomes (3030 - 30) / 1.5 = 2000. An image with fewer dark pixels in the region than the threshold is one in which the lung is almost closed, while an image with more dark pixels is one in which the inside of the lung is visible. We calculate the threshold per sequence, so that the images of one patient's CT scans are analyzed together, because the imaging scale does not differ within a sequence. We then discard the images whose dark-pixel count is below the threshold; the images with more dark pixels than the computed threshold are selected to be given to the network for classification, as sketched below. In fig. 8, the image sequence of one patient is depicted, and you can observe which images the algorithm discards and which are selected.
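The selection stage can be summarized in a few lines. In the sketch below, the dark-pixel value (300) and the 1.5 divisor come directly from the text, while the region coordinates are a hypothetical placeholder, since the exact crop is shown in fig. 7 rather than stated numerically.

```python
# A minimal sketch of the CT image selection stage described above.
import numpy as np

DARK_VALUE = 300                              # pixels below this are "dark"
REGION = (slice(120, 370), slice(120, 370))   # hypothetical lung region

def select_open_lung_images(sequence):
    """From one patient's CT sequence, keep only the images in which the
    inside of the lung is visible (enough dark pixels in the region)."""
    counts = [int((img[REGION] < DARK_VALUE).sum()) for img in sequence]
    threshold = (max(counts) - min(counts)) / 1.5
    return [img for img, c in zip(sequence, counts) if c > threshold]
```

With the example above, counts of 3030 and 30 give a threshold of 2000, so only the images with more than 2000 dark pixels in the region are kept.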
Machine vision has been a superior method for advancing many fields, like agriculture [21], biomedical engineering [23, 19], industry [13], and others. Implementing machine vision methods with deep neural networks, especially with convolutional layers, has resulted in extremely accurate performance. In this research, at the next stage of our work, we used deep convolutional networks to classify the images selected at the first stage as normal or COVID-19. We utilized Xception [5], ResNet50V2 [11], and a modified version of ResNet50V2 for running the classification. Xception introduced new Inception modules constructed of depth-wise separable convolutions (a depth-wise convolutional layer followed by a point-wise convolutional layer), and it achieved one of the best results on the ImageNet dataset [7]. ResNet50V2 is an upgraded version of ResNet50 [10]: its authors changed the connections and skip-connections between blocks and increased the network's performance on ImageNet.

The feature pyramid network (FPN) was introduced in [15] and utilized in RetinaNet [16] for enhancing object detection. FPN helps the network better learn and detect the multi-scale objects that may exist in an image. Some previous methods worked by giving an image pyramid (that includes different scales of the input image) to the network as input; doing this indeed improves feature extraction, but it also increases the processing time and is not efficient. FPN solves this problem by generating a bottom-up and a top-down feature hierarchy, with lateral connections from the features the network generates at different scales. This helps the network produce more semantic features for objects at different scales. As described, FPN helps when there are objects of different scales in the image. Although here we investigate image classification, the network must learn about the infection points and classify the image based on them, so using FPN can help us classify the images better in our case. In fig. 9 you can see the architecture of the proposed network. Based on the authors' experience, we used concatenation layers instead of the addition layers of the default feature pyramid network [15]. At the end of the network, we concatenated the five classification results of the feature pyramid outputs (each output presents a classification based on the features of one scale) and gave them to the classifier, so that the network can use all of them for better classification.
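The following is a minimal Keras sketch of this design, not the authors' released implementation (which is available in their repository): the channel widths, the backbone tap points, and the extra pooled level are our assumptions.

```python
# A minimal Keras sketch of the described classifier: ResNet50V2 backbone,
# an FPN-style top-down path that concatenates (rather than adds) lateral
# features, and five per-scale classification results concatenated at the
# end. Layer choices are assumptions, not the authors' exact model.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_resnet50v2_fpn(input_shape=(512, 512, 3), num_classes=2):
    backbone = tf.keras.applications.ResNet50V2(
        include_top=False, weights="imagenet", input_shape=input_shape)
    # One feature map per spatial scale (assumed stage-end layer names).
    taps = ["conv2_block3_out", "conv3_block4_out",
            "conv4_block6_out", "conv5_block3_out"]
    feats = [backbone.get_layer(name).output for name in taps]

    # Top-down pathway with lateral connections; concatenation replaces
    # the element-wise addition of the default FPN.
    top = layers.Conv2D(256, 1)(feats[-1])
    pyramid = [top]
    for feat in reversed(feats[:-1]):
        lateral = layers.Conv2D(256, 1)(feat)
        top = layers.UpSampling2D(2)(top)
        top = layers.Conv2D(256, 3, padding="same")(
            layers.Concatenate()([top, lateral]))
        pyramid.append(top)
    # An extra, coarser level brings the number of outputs to five.
    pyramid.append(layers.MaxPooling2D(2)(pyramid[0]))

    # One classification result per scale, concatenated for the classifier.
    per_scale = [layers.Dense(num_classes)(layers.GlobalAveragePooling2D()(p))
                 for p in pyramid]
    out = layers.Dense(num_classes, activation="softmax")(
        layers.Concatenate()(per_scale))
    return models.Model(backbone.input, out)
```

Concatenation keeps the top-down and lateral features side by side and lets the following convolution learn how to merge them, whereas the default addition fuses them element-wise.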
Our dataset is constructed of two sections. The first section is the raw data of each person, described in section 2.1. The second section includes the training and validation data. We converted these images to 32-bit float TIFF format so that we could visualize them on regular monitors. Then we took the help of the clinical experts, under the supervision of the third author (a radiology specialist) at the Negin medical center, to select the infected patients' images in which the infections were clearly visible. We used these data for training and validating the networks. To report more realistic and accurate results, we separated the dataset into five folds for training and validation: in each fold, almost 20 percent of the patients with COVID-19 were allocated for validation, and the rest were used for training. Because the number of normal patients and images was larger than that of the infected ones, we chose a number of normal training images almost equal to the number of COVID-19 images to make the training set balanced; therefore, the number of normal images considered for validation was higher than for training. The details of the training and validation data are reported in table 2.

Figure 8: The highlighted images are the ones that the algorithm discards. It is observable that the images that clearly show the inside of the lung are selected to be classified at the next stage.

From the information in table 2, the question may arise as to why the number of normal persons in the training set is less than the number of COVID-19 patients. The reason is that, from each image sequence of a patient with COVID-19, we allocated only the images with observable infections for training and validation, so the number of images per COVID-19 patient is less than the number of images per normal person. We selected enough normal patients that the number of normal images is almost equal to the number of images in the COVID-19 class. This number was enough for the network to learn to classify the images correctly, and the achieved results were high. As we had more normal images left, we selected a large amount of normal data for validation so that the actual performance of our trained networks would be clearer.

We trained Xception [5], ResNet50V2 [11], and the modified ResNet50V2 (with FPN) on our dataset for 50 epochs. For training the networks, we used transfer learning from the ImageNet [7] pre-trained weights to make the networks converge faster. We chose the Nadam optimizer and the categorical cross-entropy loss function. We also used data augmentation methods to make learning more efficient and to stop the networks from overfitting. It is noteworthy that we did not resize the images for training or validation, so as not to lose the small details of the infections. Our training parameters are listed in table 3; as is evident there, we used the same parameters for all the networks. A sketch of this setup follows.
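In the sketch below, the optimizer, loss, epoch count, and the decision not to resize come from the text; the batch size, augmentation ranges, and the variable names train_images and train_labels are illustrative assumptions, since the exact values live in table 3.

```python
# A hedged sketch of the described training setup, assuming the model
# sketch above. Augmentation ranges and batch size are assumptions.
import tensorflow as tf
from tensorflow.keras.preprocessing.image import ImageDataGenerator

model = build_resnet50v2_fpn()  # ImageNet weights = transfer learning
model.compile(optimizer=tf.keras.optimizers.Nadam(),
              loss="categorical_crossentropy",
              metrics=["accuracy"])

augmenter = ImageDataGenerator(rotation_range=10,   # assumed ranges
                               zoom_range=0.05,
                               horizontal_flip=True)

# train_images: float32 array of shape (N, 512, 512, 3);
# train_labels: one-hot array of shape (N, 2). Hypothetical names.
model.fit(augmenter.flow(train_images, train_labels, batch_size=8),
          epochs=50)
```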
In this section, we report the results in two sections. The image classification results section includes the results of the trained networks on the validation set images. The patient identification section reports the results of the automated system for identifying each person as normal or COVID-19. We implemented our algorithms and networks on Google Colaboratory notebooks, which allocated a Tesla P100 GPU, a 2.00GHz Intel Xeon CPU, and 12GB of RAM on Linux to us. We used the Keras library [6] on the TensorFlow backend [1] for developing and running the deep networks.

Table 2: Training and validation details of COVID-CTset

| Fold | Training COVID-19 patients | Training COVID-19 images | Training normal patients | Training normal images | Validation COVID-19 patients | Validation COVID-19 images | Validation normal patients | Validation normal images |
| 1 | 77 | 1820 | 45 | 1916 | 18 | 462 | 237 | 7860 |
| 2 | 72 | 1817 | 37 | 1898 | 23 | 465 | 245 | 7878 |
| 3 | 77 | 1836 | 53 | 1893 | 18 | 446 | 229 | 7883 |
| 4 | 81 | 1823 | 76 | 1920 | 14 | 459 | 206 | 7856 |
| 5 | 73 | 1832 | 71 | 1921 | 22 | 450 | 211 | 7785 |

We trained each network on the training set with the parameters explained in section 2.4. While training, we monitored the validation accuracy after each epoch to find the best-converged version of each network. We evaluated the trained networks using four different metrics for each class, as well as the overall accuracy over all classes:

$$\text{Accuracy (for each class)} = \frac{TP + TN}{TP + FP + TN + FN} \quad (1)$$
$$\text{Precision} = \frac{TP}{TP + FP} \quad (2)$$
$$\text{Sensitivity} = \frac{TP}{TP + FN} \quad (3)$$
$$\text{Specificity} = \frac{TN}{TN + FP} \quad (4)$$
$$\text{Overall accuracy} = \frac{\text{Number of correctly classified images}}{\text{Number of all images}} \quad (5)$$

In these equations, for each class, TP (true positives) is the number of images of that class classified correctly, FP (false positives) is the number of images wrongly classified as that class, FN (false negatives) is the number of images of that class detected as another class, and TN (true negatives) is the number of images that do not belong to that class and have not been classified as it. The results for each fold are reported in table 5. We also show the average results over the five folds as confusion matrices in fig. 10.

In this section, we present the main results of our work. When our proposed fully automated system checks a patient for COVID-19 infection, it takes all the images of the patient's CT scans as input. It then processes them with the proposed CT scan selection algorithm to select the CT scans in which the inside of the lung is visible. The chosen images are then fed to the deep neural network to be classified as COVID-19 or normal. Based on our experiments, the infections are usually visible in at least 20 percent of an infected patient's selected CT scan images (those in which the inside of the lung is visible). As there might be errors in the trained networks, we set the threshold to 30 percent, meaning that if at least 30 percent of a person's selected CT scan images are identified as COVID-19, that person is considered infected; otherwise, the system indicates that person as normal. The sketch below shows this patient-level rule.
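In this sketch, the 30 percent threshold comes from the text; the grayscale-to-3-channel replication and the "class index 1 means COVID-19" convention are illustrative assumptions carried over from the earlier sketches.

```python
# Sketch of the patient-level decision rule described above.
import numpy as np

def classify_patient(sequence, model, threshold=0.30):
    """Label a patient COVID-19 if at least `threshold` of the selected
    CT images are classified as COVID-19, otherwise normal."""
    selected = select_open_lung_images(sequence)      # first-stage filter
    batch = np.stack(selected).astype(np.float32)[..., None]
    batch = batch.repeat(3, axis=-1)                  # 1 -> 3 channels
    preds = model.predict(batch).argmax(axis=1)       # assume 1 = COVID-19
    covid_fraction = float((preds == 1).mean())
    return "COVID-19" if covid_fraction >= threshold else "normal"
```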
The results of patient identification for each of the trained networks in each fold are available in table 4. Based on the results in table 5 and table 4, we find that the combination of ResNet50V2 with the feature pyramid network yields better overall accuracy. In the single-image validation phase (table 5), the average results over the five folds show that ResNet50V2 with FPN achieved 98.49% overall accuracy and 94.96% sensitivity for the COVID-19 class, while Xception achieved 96.55% overall accuracy and 98.02% COVID-19 sensitivity. So Xception performed better in detecting COVID-19 images, but ResNet50V2 with FPN showed better results overall. In the fully automated patient classification, the average results over the five folds in table 4 show that ResNet50V2 with FPN achieved the best results and correctly classified approximately 237 of 245 persons, which is an acceptable value. The other networks showed good results too, meaning that the proposed methods can form a precise, fully automated system for detecting infected persons. We hope that our shared dataset and code can help other researchers improve these techniques and use them for advanced medical diagnosis.

Table 4: Patient identification results in each fold

| Fold | Network | Correctly identified patients | Wrongly identified patients | COVID-19 correct | COVID-19 wrong | Normal correct | Normal wrong |
| 1 | ResNet50V2 with FPN | 248 | 7 | 16 | 2 | 232 | 5 |
| 1 | Xception | 247 | 8 | 17 | 1 | 230 | 7 |
| 1 | ResNet50V2 | 248 | 7 | 17 | 1 | 231 | 6 |
| 2 | ResNet50V2 with FPN | 259 | 9 | 21 | 2 | 238 | 7 |
| 2 | Xception | 253 | 15 | 23 | 0 | 230 | 15 |
| 2 | ResNet50V2 | 257 | 11 | 22 | 1 | 235 | 10 |
| 3 | ResNet50V2 with FPN | 239 | 8 | 17 | 1 | 222 | 7 |
| 3 | Xception | 237 | 10 | 18 | 0 | 219 | 10 |
| 3 | ResNet50V2 | 235 | 12 | 17 | 1 | 218 | 11 |

In this paper, we have proposed a fully automated system for COVID-19 detection from lung HRCT scans. We also introduced a new dataset containing 48260 images of normal persons and 15589 images belonging to patients with COVID-19. At the first stage, we proposed an image processing algorithm to filter the proper images of the patients' CT scans, those that show the inside of the lung correctly. This algorithm helps increase the network's accuracy, because the deep network analyzes only the appropriate images, and it makes processing faster, since the network sees only some of the CT scan images. In this research, to make the classification and network feature extraction more accurate, we used the original files produced by the CT scan device, which are 16-bit grayscale images, for training and validation. At the next stage, we trained three different deep convolutional networks for classifying the CT scan images into COVID-19 or normal; one of them was our enhanced version of ResNet50V2 with a feature pyramid network, which achieved the best overall accuracy. After training, we used the trained networks to run the fully automated COVID-19 identifier system. We tested this system on more than 230 patients and 7996 images.
For single-image classification, ResNet50V2 with FPN and Xception showed 94.96% and 98.02% sensitivity for the COVID-19 class, and 98.49% and 96.55% overall accuracy, respectively. At the final and main evaluation phase of the proposed automated system, ResNet50V2 with FPN obtained the best results and correctly identified approximately 237 of 245 patients on average over the five folds. Based on the obtained results, the proposed methods can improve the accuracy of COVID-19 detection. We hope that our methods and dataset can help researchers improve COVID-19 monitoring and detection. We have made our data publicly available at this address: https://github.com/mr7495/COVID-CTset. The dataset is available in two parts: one is the raw data, presented in three folders for each patient; the other is the training and validation data of each fold. We hope that this dataset will be utilized for improving COVID-19 monitoring and detection in future research.

References

[1] TensorFlow: Large-scale machine learning on heterogeneous systems.
[2] ACR recommendations for the use of chest radiography and computed tomography (CT) for suspected COVID-19 infection. American College of Radiology.
[3] Correlation of chest CT and RT-PCR testing in coronavirus disease 2019 (COVID-19) in China: a report of 1014 cases.
[4] Computer-aided diagnosis with deep learning architecture: applications to breast lesions in US images and pulmonary nodules in CT scans.
[5] Xception: Deep learning with depthwise separable convolutions.
[6] Keras. https://keras.io.
[7] ImageNet: A large-scale hierarchical image database.
[8] Reverse transcription PCR: principle, procedure, application, advantages and disadvantages.
[9] Deep learning in intermediate microeconomics: Using scaffolding assignments to teach theory and promote transfer.
[10] Deep residual learning for image recognition.
[11] Identity mappings in deep residual networks.
[12] An open-source deep learning approach to identify COVID-19 using CT image.
[13] Deep learning for smart industry: Efficient manufacture inspection system with fog computing.
[14] Artificial intelligence distinguishes COVID-19 from community acquired pneumonia on chest CT.
[15] Feature pyramid networks for object detection.
[16] Focal loss for dense object detection.
[17] A survey on deep learning in medical image analysis.
[18] Huntington's disease: understanding the stages of symptoms. Lybrate.
[19] Anatomically consistent CNN-based segmentation of organs-at-risk in cranial radiotherapy.
[20] Automatic detection of coronavirus disease (COVID-19) using X-ray images and deep convolutional neural networks.
[21] Introduction of a new dataset and method for detecting and counting the pistachios based on deep learning.
[22] A modified deep convolutional neural network for detecting COVID-19 and pneumonia from chest X-ray images based on the concatenation of Xception and ResNet50V2.
[23] Sperm detection and tracking in phase-contrast microscopy image sequences using deep learning and modified CSR-DCF.
[24] Detection of coronavirus disease (COVID-19) based on deep features.
[25] Deep learning in medical image analysis.
[26] SARS-CoV-2 CT-scan dataset: A large dataset of real patients CT scans for SARS-CoV-2 identification. medRxiv.
[27] Emerging 2019 novel coronavirus (2019-nCoV) pneumonia.
[28] Deep learning models for COVID-19 infected area segmentation in CT images. medRxiv.
[29] A fully automatic deep learning system for COVID-19 diagnostic and prognostic analysis.
[30] WHO. Q&A on coronaviruses.
[31] COVID-19 testing. Wikipedia.
[32] Chest CT for typical 2019-nCoV pneumonia: relationship to negative RT-PCR testing.
[33] Cascade of multi-scale convolutional neural networks for bone suppression of chest radiographs in gradient domain.

Acknowledgments

We would like to thank the Negin medical center experts who helped us in providing the dataset. We also appreciate Google for providing free and powerful GPUs on the Colab servers and free space on Google Drive. We declare that this paper is original and has been read and approved by all named authors, and that there are no other persons who satisfied the criteria for authorship but are not listed. We further confirm that all of us have approved the order of authors listed in the paper. All the patients' shared data have been approved by the Negin Radiology Medical Center, located in Sari, Iran, under the supervision of its director (Dr. Sakhaei, a radiology specialist) and Dr. Mahdi Hassanzadeh. It must be mentioned that, to protect patients' privacy, all the DICOM files have been converted to TIFF format to remove the patients' information.