key: cord-0810507-h2j5b22x authors: Ahmed, S.; Hossain, T.; Hoque, O. B.; Sarker, S.; Rahman, S.; Shah, F. M. title: Automated COVID-19 Detection from Chest X-Ray Images: A High Resolution Network (HRNet) Approach date: 2020-09-01 journal: nan DOI: 10.1101/2020.08.26.20182311 sha: d73713b4ccc57559d1e9249d365a6640d9b61613 doc_id: 810507 cord_uid: h2j5b22x

The pandemic originated by the novel coronavirus 2019 (COVID-19) continues its devastating effect on the health, well-being, and economy of the global population. A critical step to restrain this pandemic is the early detection of COVID-19 in the human body, to constrain exposure and control the spread of the virus. Chest X-rays are one of the non-invasive tools for detecting this disease, as the manual PCR diagnosis process is quite tedious and time-consuming. In this work, we propose an automated COVID-19 classifier, utilizing available COVID and non-COVID X-ray datasets, with a High Resolution Network (HRNet) for feature extraction embedded with UNet for segmentation purposes. To evaluate the proposed dataset, several baseline experiments have been performed employing numerous deep learning architectures. With extensive experiments, we achieved 99.26% accuracy, 98.53% sensitivity, and 98.82% specificity with HRNet, which surpasses the performance of the existing models. Our proposed methodology ensures unbiased, high accuracy, which increases the probability of incorporating X-ray images into the diagnosis of the disease.

forward testing approach by which the examination process can scale with the speed of transmission, to promptly identify and isolate infected persons. Containing the proliferation is therefore the most decisive task, as this malignant virus is continuously ravaging the world. The convoluted PCR test is the only WHO-approved process for detecting the novel COVID-19 disease, and in many countries people cannot afford its cost.
As a result, people are dying without proper treatment because of this fatal virus. Moreover, the virus affects major organs including the lungs, heart, and brain; a study published in Nature reports that it directly induces lung injury and deteriorates the respiratory system [5]. Also, there are considerable features and observations by which infected regions can be distinguished in a lung image. So, it would be beneficial for society, and a milestone development, to establish a detection model that classifies COVID-19 disease based on chest X-ray (CXR) or CT scan images. Researchers around the world are continuously trying to build time-efficient and cost-effective models to identify COVID-19, adopting CXR and CT scan images for the classification of infected lungs. Disparate types of AI-based architectures have been developed to recognize infected lung images efficiently [6][7][8][9]. Among the AI-based methods, machine learning and deep learning architectures stand out in most COVID-19 classification tasks [10][11][12]. But one of the biggest hindrances researchers face is a deficiency of data. To train a model efficiently, a reasonable number of subject images is required, yet an insufficient number of COVID-19-affected lung images is available for research, so image processing or machine learning models can hardly segregate COVID and non-COVID images. To cope with this dataset complication, researchers lean towards deep learning architectures because of their augmentation and transfer learning approaches [13][14].
Various versions of CNN models, such as InceptionNet, XceptionNet, ResNet, and VGGNet, are the prominent architectures employed in this research area. Studying the existing classification models, we adopted the High-Resolution Network (HRNet) for feature extraction. In HRNet, convolution streams at different resolutions are linked in parallel. Besides, HRNet contains plentiful interactions across low and high resolutions that bolster its internal representation ability. High-resolution feature representations are maintained throughout training, which prevents the loss of small-target information in the feature maps. For vision problems such as small-target segmentation and classification, HRNet gives more accurate and definite results because of this parallel design. In summary, our contributions in this research are as follows:

- Firstly, we propose a COVID-19 classification method based on a high-resolution network for feature extraction, which provides competitive results compared with the existing architectures.
- Secondly, we integrate UNet for lung segmentation along with HRNet to classify the COVID region precisely and accurately. This addition improves the results significantly and ensures the model attends to the infected lung region rather than redundant, non-relevant areas.
- Finally, we conduct a performance comparison with existing advanced models by implementing those models and evaluating their performance against our proposed work. From the experimental results, we affirm that the proposed model surpasses the existing models in accuracy, sensitivity, specificity, and other evaluation metrics.

The rest of this paper is organized as follows. Section II provides a study of related work in this area of research. Section III discusses our proposed model, including a detailed description of our dataset and classification network.
Section IV presents the experimental analysis in detail, followed by the performance evaluation. Finally, Section V concludes this paper with significant future works.

Over the years, numerous works have been established to detect COVID-19 disease from distinct perspectives. Researchers around the world have tried to come up with models that can classify this disease efficiently within a short amount of time. In this section, the existing works on COVID-19 classification are thoroughly described with appropriate characterization and depiction. Apostolopoulos et al. [14] proposed an architecture based on transfer learning for feature extraction. First, the authors employed a pre-trained CNN model (used only as a feature extractor) to extract features of a different nature, operating transfer learning for the feature extraction. The extracted features were then fed into a separate network for classification purposes. Though the authors accomplished exceptional results, they did not focus on handling negative transfer. Borghesi et al. [15] introduced a chest X-ray (CXR) scoring system named the Brixia score to determine the outcome (recovery or death). Dividing the lungs into six zones with the aid of the frontal chest projection, the authors assigned one of four scores per zone (score 0: no abnormalities, to score 3: alveolar predominance or critical condition). The six zone scores were then aggregated to obtain a final score ranging from 0 to 18.
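The zone-based aggregation described above reduces to a bounded sum; a minimal sketch (the function name and the input validation are ours, not from [15]):

```python
def brixia_score(zone_scores):
    """Aggregate per-zone Brixia scores into an overall severity score.

    zone_scores: six integers in 0..3, one per lung zone
    (0 = no abnormalities, 3 = alveolar predominance / critical).
    Returns the total score, in the range 0..18.
    """
    if len(zone_scores) != 6:
        raise ValueError("Brixia scoring uses exactly six lung zones")
    if any(not 0 <= s <= 3 for s in zone_scores):
        raise ValueError("each zone score must be in 0..3")
    return sum(zone_scores)

# Example: moderate involvement concentrated in the lower zones.
print(brixia_score([0, 0, 1, 1, 3, 2]))  # 7
```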
For validation, the weighted kappa (k_w), confidence intervals (CI), and P-values were calculated. Although the scoring system is a unique way to identify the disease, the experiment should be applied to a considerably larger set of CXR images. Leveraging the multi-resolution capabilities of the inception module, Das et al. [16] built a truncated Inception Net architecture with an adaptive learning rate protocol. In this model, kernels with disparate receptive fields were executed in parallel for feature extraction; the extracted features were then merged depth-wise to obtain the final output. Because of the diminutive dataset used, the Inception Net is truncated at a particular point of the model. An accuracy of 99.92% was achieved classifying COVID-19 cases combined with pneumonia cases. A COVID-19 detection model considering multi-class and hierarchical classification was developed by Pereira et al. [17]. Working on the natural data imbalance, the authors applied a resampling algorithm for rebalancing the classes, and an accuracy of 98.08% for binary classes was attained. A highly diverse, long-range selectively connected method was proposed by Wang et al. [19]; its machine-driven design strategy is leveraged by generative synthesis [20]. A PEPX module (conv1×1 + conv1×1 + DWconv3×3 + conv1×1 + conv1×1) was assembled with general convolutional, flatten, and fully connected layers.
Finally, softmax was used for classification purposes. Though an accuracy of 93.3% was achieved operating this network, the long-range connections in the densely connected DNN produce memory overhead, and the architecture is computationally expensive because of them. Moreover, the heterogeneous incorporation of convolutional layers with different kernels and grouping configurations heavily affects the interconnection and operation of the architecture. Narin et al. [21] worked with pre-trained models such as ResNet50, InceptionV3, and Inception-ResNetV2. Because of the diminutive amount of data, transfer learning was incorporated to reduce the training time and compensate for the deficiency in the dataset. First, the input images were fed into the pre-trained models integrated with transfer learning. Second, in the training phase, global average pooling and a fully connected layer with ReLU were employed. Finally, the authors concluded with a fully connected layer with softmax for the final classification. The model achieved 97%, 98%, and 87% accuracy operating the InceptionV3, ResNet50, and Inception-ResNetV2 architectures, respectively. Although transfer learning was incorporated to address the data deficiency, the model overfits the data. From the studied research works, we have summarized the methods, datasets, and performance in Table 1. Though these deep learning methods worked well on this classification, there is a high chance of bias and oversampling in the learning process given an insufficient number of images. Also, the feature information needs to be incorporated in each and every layer of the high-to-low and low-to-high upsampling processes.
To address these issues, we propose a detection architecture based on UNet (for segmentation purposes) and HRNet (for feature extraction purposes). In this section, we briefly discuss our approaches to data preprocessing and the proposed model. Several existing works address related multi-class settings (COVID vs. pneumonia vs. normal lung conditions) [34]. Studying these existing works and considering the drawbacks described in the literature review, we introduce HRNet [35] for feature extraction, embedded with UNet [36] for segmentation purposes. The segmentation step is introduced first because publicly available COVID chest X-ray images contain several redundant marks, cropped lung regions, regions shifted in different directions, etc. Then, for feature extraction, HRNet is introduced to the field of COVID-19 detection from CXR images; HRNet is capable of preserving small features in the images. After feature extraction, a classification head made of a fully connected neural network is used to classify COVID vs. non-COVID images. Furthermore, a standard dataset for COVID detection from chest X-ray images is developed from several public data sources. These sources are updated every day, giving researchers opportunities to focus more on COVID detection from chest X-ray images. A depiction of the proposed architecture is illustrated in Fig. 1. In the following, we thoroughly describe the related steps of the proposed method.
The dataset for the classification task is built by assembling images from several recognized sources. First, the COVID-19 chest X-ray (CXR) images used in our experiment were collected from a public GitHub repository [22]. As of July 03, 2020, this repository contained 759 images; Fig. 2 depicts its class distribution. In this public repository, 521 images are labeled COVID-19 and 12 images are labeled Acute Respiratory Distress Syndrome (ARDS) COVID-19; a total of 533 images were collected from this source for our primary COVID-19 dataset. Though this has been the most prominent repository since the beginning of research on COVID-19 classification, there are some shortcomings in the images. In particular, the collection contains images drawn from publicly available resources such as websites and PDF files, so the images are of variable size and quality. Also, there are a few side-view images, whereas the majority belong to frontal views, and some images have color issues, contain markers, etc. Fig. 3 depicts four examples of COVID-19 images illustrating these issues: side view (Fig. 3a), washed out (Fig. 3b), color issue (Fig. 3c), and markers (Fig. 3d). For non-COVID/normal images, we collected images from the National Institutes of Health (NIH) Chest X-Ray dataset [25], which contains 108,948 frontal-view X-ray images of 32,717 unique patients with 14 condition labels, including normal condition and pneumonia. Unlike the COVID dataset, these images are not of variable dimensions: all are resized to 1024 × 1024 in portrait orientation.
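Normalizing the variable-sized images to a fixed 1024 × 1024 grid can be sketched with a nearest-neighbour resize (a simplification on our part; the text does not specify the interpolation actually used):

```python
import numpy as np

def resize_nearest(img, out_h=1024, out_w=1024):
    """Nearest-neighbour resize of a 2-D grayscale image to a fixed grid."""
    h, w = img.shape
    rows = np.arange(out_h) * h // out_h  # source row for each output row
    cols = np.arange(out_w) * w // out_w  # source column for each output column
    return img[np.ix_(rows, cols)]

out = resize_nearest(np.random.rand(800, 1100))
print(out.shape)  # (1024, 1024)
```

In practice a library resampler with smoother interpolation (e.g. bilinear) would typically be preferred; the point here is only that every image ends up on the same grid before training.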
As most images in the COVID dataset belong to adult patients, we applied an age threshold of 18 years or older to the normal-condition images to keep the X-ray conditions as similar as possible. We also explored CheXpert [37], which contains 224,316 chest radiographs annotated with 14 observation labels. As described earlier, the COVID images include lateral X-rays, images taken from PDF files, marked images (fig. 3d and 3e), etc. First, we removed these images from the COVID dataset, as there is a possibility that they could bias the classifier. Second, in the NIH and CheXpert datasets, some images contain medical equipment (fig. 3f and 3g), which creates redundancies and abnormalities while training the model; these types of images were excluded from the main data repository. Still, most of the images contain several marks around the lung area. To avoid these marks, we segmented the lung area from the images. In the next section, we describe the process of lung segmentation and preprocessing the images, and finally the creation of a practical COVID dataset for our experiment. In fig. 4, a comparison between single random images collected from the COVID, NIH, and CheXpert datasets is illustrated.
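The adult-only, frontal-view filter on the normal images can be sketched as follows. The record fields below are illustrative stand-ins, not the actual column names of the NIH or CheXpert metadata files:

```python
# Hypothetical metadata records mimicking the dataset CSV fields (names assumed).
records = [
    {"image": "a.png", "finding": "No Finding", "age": 45, "view": "PA"},
    {"image": "b.png", "finding": "No Finding", "age": 12, "view": "PA"},
    {"image": "c.png", "finding": "Pneumonia",  "age": 60, "view": "PA"},
    {"image": "d.png", "finding": "No Finding", "age": 33, "view": "LL"},
]

def select_normals(records, min_age=18):
    """Keep only adult, frontal-view, normal-condition images."""
    return [r["image"] for r in records
            if r["finding"] == "No Finding"
            and r["age"] >= min_age
            and r["view"] in ("PA", "AP")]

print(select_normals(records))  # ['a.png']
```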
Furthermore, we aggregated the normal images from the NIH and CheXpert datasets, as they look similar. In Table 2, we summarize the properties of the final dataset. While the contracting path of UNet is established by the basic convolutional process, the expanding path is constituted of transposed 2D convolutional layers. For the segmentation task, we collected a dataset from Jaeger et al. [38], which contains a total of 800 frontal X-ray images from the Montgomery and Shenzhen collections. After applying an area threshold to the resulting segmentations, we finally obtained our COVID dataset, consisting of 410 COVID images and 500 non-COVID images from the NIH and CheXpert datasets. The working procedure of this segmentation method is summarized in Algorithm 1.
Algorithm 1: Lung segmentation
Input: CXR, (maskL, maskR)
Output: segmented lung dataset
1: mask ← maskL ∪ maskR
2: dataset ← (CXR, mask)
3: for (train_data, test_data) ← split(dataset, kfold = 10) do …

In both cases, we applied the same set of augmentations, such as scaling, padding, cropping, rotation, gamma correction, slight Gaussian blur, random noise, and salt-and-pepper noise, each applied based on a probabilistic value. Fig. 7 demonstrates the output of image augmentation, and Table 3 lists the augmentation parameters. These parameters were used for both the segmentation dataset and the COVID dataset, and all were selected empirically.

For feature extraction, we use the High Resolution Network (HRNet) [35], a state-of-the-art neural network architecture; in most recent HRNet-adopting architectures, it serves as the backbone of the proposed models. Of the two common approaches, top-down and bottom-up, HRNet follows the top-down approach: it detects the subject first, establishes a bounding box around the subject, and then estimates the significant features. Moreover, HRNet relies on continuous multi-scale fusions instead of a single high-to-low upsampling process. In the following, a brief description of the HRNet architecture is given. The fundamental working procedure is to compute lower-resolution and higher-resolution sub-networks in parallel; these sub-networks are then coalesced by fuse layers that assemble and interchange information between them. The network consists of four stages, each built from repeated, modularized multi-resolution blocks [35].
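As a toy illustration of this parallel design (not the authors' implementation: convolutions and learned weights are omitted, and resampling is simplified to average pooling and nearest-neighbour upsampling), one fuse step between a high- and a low-resolution stream can be sketched as:

```python
import numpy as np

def avg_pool2x(x):
    """Downsample a 2-D feature map by 2x via average pooling."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def upsample2x(x):
    """Upsample a 2-D feature map by 2x via nearest-neighbour repetition."""
    return x.repeat(2, axis=0).repeat(2, axis=1)

def fuse(high, low):
    """One multi-scale fusion: each stream receives the other stream's
    features, resampled to its own resolution, and adds them."""
    return high + upsample2x(low), low + avg_pool2x(high)

high, low = np.random.rand(8, 8), np.random.rand(4, 4)
new_high, new_low = fuse(high, low)
print(new_high.shape, new_low.shape)  # (8, 8) (4, 4)
```

In the actual HRNet, downsampling is done with strided convolutions and upsampling with 1×1 convolutions plus interpolation, so the exchange is learned rather than fixed.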
Each block consists of group convolutions supporting the multi-resolution property. A multi-resolution convolution can be constructed with the aid of regular convolution, wherein the input and output channels are connected in a fully-connected manner [39]. The sub-networks follow these properties to aggregate their multi-resolution representations.

To make accurate detections, the classifiers need to focus properly on the lung regions. Our previous experiments were run without segmentation; Fig. 9 shows that, despite high accuracy on the training and validation sets, the focus regions of the classifiers often deviate outside the lungs, which may lead to false predictions. To address this issue, we segmented our dataset and kept only the lung portion of the X-ray images. To segment our dataset of 2,472 images, the UNet model was trained on the Montgomery and Shenzhen dataset. This dataset has 800 images, and UNet was trained using 10-fold cross-validation. In this dataset, each CXR image has two corresponding masks, one for the right lung and one for the left lung. First, the two lung masks were combined into one corresponding mask for each CXR image.
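Combining the per-lung masks is a simple element-wise union; a minimal sketch:

```python
import numpy as np

def combine_masks(mask_left, mask_right):
    """Union of the left- and right-lung binary masks into one lung mask."""
    return np.logical_or(mask_left, mask_right).astype(np.uint8)

# Toy 4x4 example: the left mask covers the left half, the right mask the right half.
left = np.zeros((4, 4), dtype=np.uint8);  left[:, :2] = 1
right = np.zeros((4, 4), dtype=np.uint8); right[:, 2:] = 1
full = combine_masks(left, right)
print(full.sum())  # 16: every pixel is covered by one of the two lungs
```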
These images are then employed to train the UNet model. For the loss function, a Dice coefficient loss is used to obtain crisp borders. This loss function was introduced by Milletari et al. [40] in their research on 3D volumetric image segmentation. Dice loss originates from the Sørensen-Dice coefficient, a statistic developed in the 1940s. The Dice coefficient D can be written as

D = (2 Σ_i P_i G_i) / (Σ_i P_i² + Σ_i G_i²)

where P_i and G_i represent the pixels of the prediction and the ground truth, respectively. For binary masks, the value of P_i or G_i is either 0 or 1. That means that if the two sets of pixels overlap perfectly, D reaches its maximum value of 1; otherwise it decreases. By using Dice loss, the predicted mask and the ground-truth mask are trained to overlap gradually. In contrast, traditional cross-entropy loss averages per-pixel losses that are computed independently, without knowing whether the adjacent pixels belong to the target region, so it is not sufficient for image-level prediction. Dice loss provided better results in our lung segmentation using UNet. The model was trained for 25 epochs per fold with a learning rate of 0.005 and a custom Dice coefficient loss function, with grayscale input images of size 512 × 512 pixels. Table 4 shows the input, output, and layer configuration used for UNet. As mentioned above, UNet is trained with 10 folds, 25 epochs per fold, and the Dice coefficient loss. The learning rate was selected by trial and error, and images were augmented according to the configuration described in the augmentation section. Fig. 10 shows the average training accuracy and training loss.
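In code, the squared-denominator Dice of Milletari et al. [40] and the corresponding loss can be sketched as follows (a standalone NumPy version, not the exact training implementation):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Sørensen-Dice coefficient in the squared-denominator form of
    Milletari et al. (V-Net); pred and target are arrays in [0, 1]."""
    intersection = (pred * target).sum()
    return (2.0 * intersection) / ((pred ** 2).sum() + (target ** 2).sum() + eps)

def dice_loss(pred, target):
    """Loss to minimise: 1 - D, so a perfect overlap gives loss 0."""
    return 1.0 - dice_coefficient(pred, target)

mask = np.array([[0.0, 1.0], [1.0, 1.0]])
print(round(dice_loss(mask, mask), 4))  # 0.0 for perfect overlap
```

Because D is computed over the whole mask at once, the gradient couples all pixels, which is what gives Dice loss its advantage over per-pixel cross-entropy on thin structures and borders.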
(Table 5). The performance analysis of the model for each fold is shown in Table 6. Using the K-fold algorithm, the dataset is first divided into 10 folds, where 1 fold is kept for testing and the other 9 folds are used for training. With True Positive, True Negative, False Positive, and False Negative denoted by TP, TN, FP, and FN respectively, the three adopted evaluation metrics are as follows:

Accuracy = (TP + TN) / (TP + TN + FP + FN)
Sensitivity = TP / (TP + FN)
Specificity = TN / (TN + FP)

In Table 6, the averages of the testing accuracy, sensitivity, and specificity over the 10 folds are characterized: we accomplished 99.26% testing accuracy, 98.53% testing sensitivity, and 98.52% testing specificity. Furthermore, we depict the confusion matrices of the worst and best cases in fig. 13. In the worst confusion matrix (fig. 13a), 39 and 50 testing images are correctly classified as COVID and non-COVID respectively, whereas only two images are falsely classified. In fig. 13b, all testing images are accurately classified as COVID and non-COVID, representing the best confusion matrix.
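The three metrics can be computed directly from the confusion-matrix counts. In the example below, the split of the two errors in the worst fold into one false positive and one false negative is assumed for illustration only:

```python
def metrics(tp, tn, fp, fn):
    """Accuracy, sensitivity (recall of COVID), and specificity from counts."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return accuracy, sensitivity, specificity

# Worst fold reported above: 39 + 50 correct, 2 misclassified
# (FP/FN split assumed, for illustration).
acc, sen, spe = metrics(tp=39, tn=50, fp=1, fn=1)
print(f"{acc:.4f} {sen:.4f} {spe:.4f}")  # 0.9780 0.9750 0.9804
```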
We have also successfully eradicated the issue of falsely focused region detection through segmentation; Fig. 12 shows the heatmaps of correctly classified regions. Moreover, we have compared the performance of the proposed model with the existing adopted models. Three prominent existing architectures, ResNet152, DenseNet121, and EfficientNetB4, were implemented for comparison. Our proposed model achieved the best result on every evaluation metric compared with these trained architectures. In Table 7, we summarize the existing models' performance in comparison with our proposed model, and in fig. 14 the best confusion matrix of each existing trained model is illustrated.

In this study, we use a segmented X-ray dataset, as extensive experiments show that, with a non-segmented dataset, classifiers may focus outside the lung region, which can lead to false classification results. Meanwhile, we also evaluate some state-of-the-art recognition methods on our dataset. The results demonstrate that HRNet performs best among them, with 99.26% accuracy, 98.53% sensitivity, and 98.82% specificity. The proposed model is fully automated, without any need for manual feature extraction.
Moreover, to ensure a production-ready solution, we broadly investigated the results and focus regions of the classifiers, and our experimental results show the robustness of the proposed model in focusing on the right region and classifying accordingly. To conclude, this model can help radiologists make clinical decisions, owing to its unbiased high accuracy and correctly identified focus regions. We hope that our proposed methodology is a step towards lessening false positive detections from X-ray images. However, in terms of data, we are still at the primary level of the experiment. As the number of patients around the world increases and the symptoms and formation of the virus change day by day, we intend, with the continuous collection of data, to extend the experiment further and enhance the usability of the model.
References

- Severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) and coronavirus disease-2019 (COVID-19): the epidemic and the challenges
- Early transmission dynamics in Wuhan, China, of novel coronavirus-infected pneumonia
- Clinical features of patients infected with 2019 novel coronavirus in Wuhan, China
- Epidemiologic and clinical characteristics of novel coronavirus infections involving 13 patients outside Wuhan, China
- A crucial role of angiotensin converting enzyme 2 (ACE2) in SARS coronavirus-induced lung injury
- Review of artificial intelligence techniques in imaging data acquisition, segmentation and diagnosis for COVID-19
- Detection of human coronavirus NL63 in young children with bronchiolitis
- Systematic review of artificial intelligence techniques benchmarking: taxonomy analysis, challenges, future solutions and methodological aspects
- Coronavirus (COVID-19): a review of clinical features, diagnosis, and treatment
- Artificial intelligence and machine learning to fight COVID-19
- Machine learning using intrinsic genomic signatures for rapid classification of novel pathogens: COVID-19 case study
- Serial quantitative chest CT assessment of COVID-19: deep-learning approach
- Deep-COVID: predicting COVID-19 from chest X-ray images using deep transfer learning
- COVID-19: automatic detection from X-ray images utilizing transfer learning with convolutional neural networks
- COVID-19 outbreak in Italy: experimental chest X-ray scoring system for quantifying and monitoring disease progression
- Truncated Inception Net: COVID-19 outbreak screening using chest X-rays
- COVID-19 identification in chest X-ray images on flat and hierarchical classification scenarios
- Automated detection of COVID-19 cases using deep neural networks with X-ray images
- COVID-Net: a tailored deep convolutional neural network design for detection of COVID-19 cases from chest X-ray images
- FermiNets: learning generative machines to generate efficient neural networks via generative synthesis
- Automatic detection of coronavirus disease (COVID-19) using X-ray images and deep convolutional neural networks
- COVID-19 image data collection
- U.S. National Library of Medicine: tuberculosis chest X-ray image data sets
- ChestX-ray8: hospital-scale chest X-ray database and benchmarks on weakly-supervised classification and localization of common thorax diseases
- COVID-Net: a tailored deep convolutional neural network design for detection of COVID-19 cases from chest radiography images
- Can AI help in screening viral and COVID-19 pneumonia?
- COVIDX-Net: a framework of deep learning classifiers to diagnose COVID-19 in X-ray images
- COVID-19 screening on chest X-ray images using deep learning based anomaly detection
- Presumed asymptomatic carrier transmission of COVID-19
- Classification of COVID-19 in chest X-ray images using DeTraC deep convolutional neural network
- Diagnosing COVID-19 pneumonia from X-ray and CT images using deep learning and transfer learning algorithms
- COVID-CAPS: a capsule network-based framework for identification of COVID-19 cases from X-ray images
- X-ray image based COVID-19 detection using pre-trained deep learning models
- High-resolution representations for labeling pixels and regions
- U-Net: convolutional networks for biomedical image segmentation
- CheXpert: a large chest radiograph dataset with uncertainty labels and expert comparison
- Two public chest X-ray datasets for computer-aided screening of pulmonary diseases
- Interleaved group convolutions
- V-Net: fully convolutional neural networks for volumetric medical image segmentation