key: cord-127759-wpqdtdjs
authors: Qi, Xiao; Brown, Lloyd; Foran, David J.; Hacihaliloglu, Ilker
title: Chest X-ray Image Phase Features for Improved Diagnosis of COVID-19 Using Convolutional Neural Network
date: 2020-11-06
journal: nan
DOI: nan
sha: nan
doc_id: 127759
cord_uid: wpqdtdjs

The recent outbreak of the novel coronavirus disease 2019 (COVID-19) pandemic has seriously endangered human health and life. Due to the limited availability of test kits, the need for auxiliary diagnostic approaches has increased. Recent research has shown that radiographic images of COVID-19 patients, such as CT and X-ray scans, contain salient information about the COVID-19 virus and could be used as an alternative diagnostic method. Chest X-ray (CXR), owing to its faster imaging time, wide availability, low cost, and portability, has gained much attention and has become very promising. Computational methods with high accuracy and robustness are required for rapid triaging of patients and for aiding radiologists in the interpretation of the collected data. In this study, we design a novel multi-feature convolutional neural network (CNN) architecture for improved multi-class classification of COVID-19 from CXR images. CXR images are enhanced using a local phase-based image enhancement method. The enhanced images, together with the original CXR data, are used as input to our proposed CNN architecture. Using ablation studies, we show the effectiveness of the enhanced images in improving the diagnostic accuracy. We provide quantitative evaluation on two datasets and qualitative results for visual inspection. Quantitative evaluation is performed on data consisting of 8,851 normal (healthy), 6,045 pneumonia, and 3,323 COVID-19 CXR scans. On Dataset-1, our model achieves 95.57% average accuracy for three-class classification and 99% precision, recall, and F1-scores for COVID-19 cases. On Dataset-2, we obtain 94.44% average accuracy and 95% precision, recall, and F1-scores for the detection of COVID-19. Conclusions: Our proposed multi-feature guided CNN achieves improved results compared to a single-feature CNN, proving the importance of the local phase-based CXR image enhancement.

Coronavirus disease 2019 (COVID-19) is an infectious disease caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), a newly discovered coronavirus [1, 2]. In March 2020, the World Health Organization (WHO) declared the COVID-19 outbreak a pandemic. To date, more than 9.23 million cases have been reported across 188 countries and territories, resulting in more than 476,000 deaths [3]. Early and accurate screening of the infected population and isolation from the public is an effective way to prevent and halt the spread of the virus. Currently, the gold standard method for diagnosing COVID-19 is real-time reverse transcription polymerase chain reaction (RT-PCR) [4]. The disadvantages of RT-PCR include its complexity and problems associated with its sensitivity, reproducibility, and specificity [5]. Moreover, the limited availability of test kits makes it challenging to provide sufficient diagnosis for every suspected patient in hyper-endemic regions or countries. Therefore, a faster, more reliable, and automatic screening technique is urgently required. In clinical practice, easily accessible imaging, such as chest X-ray (CXR), provides important assistance to clinicians in decision making.
Compared to computed tomography (CT), the main advantages of CXR are that it enables fast screening of patients, is portable, and is easy to set up (it can be set up in isolation rooms). However, the sensitivity and specificity (radiographic assessment accuracy) of CXR for diagnosing COVID-19 are low compared to CT. This is especially problematic for identifying early-stage COVID-19 patients with mild symptoms, and it causes larger intra- and inter-observer variability when radiologists read the collected data, since the qualitative indicators can be subtle. Therefore, there is an increased demand for computer-aided diagnostic methods to aid the radiologist during decision making for improved management of the COVID-19 disease. In view of these advantages, and motivated by the need for accurate and automatic interpretation of CXR images, a number of studies based on deep convolutional neural networks (CNNs) have shown quite promising results. Ozturk et al. [6] proposed a CNN architecture, termed DarkCovidNet, and achieved 87.02% three-class classification accuracy. The method was evaluated on 127 COVID-19, 500 healthy, and 500 pneumonia CXR scans; the COVID-19 data was obtained from 125 patients. Wang et al. [7] built a public dataset named COVIDx, comprising a total of 13,975 CXR images from 13,870 patient cases, and developed COVID-Net, a deep learning model. Their dataset had 358 COVID-19 images obtained from 266 patients, and their model achieved 93.3% overall accuracy in classifying normal, pneumonia, and COVID-19 scans. In [8], a ResNet-50 architecture was utilized to achieve a 96.23% overall accuracy in classifying four classes, where pneumonia was split into bacterial pneumonia and viral pneumonia. However, only eight COVID-19 CXR images were used for testing. In [9], 76.37% overall accuracy was reported on a dataset including 1,583 normal, 4,290 pneumonia, and 76 COVID-19 scans; the COVID-19 data was collected from 45 patients. In order to improve the performance of the proposed method, data augmentation was performed on the COVID-19 dataset, bringing the total COVID-19 data size to 1,536. With data augmentation, they improved the overall accuracy to 97.2%. In [10], Contrast Limited Adaptive Histogram Equalization (CLAHE) was used to enhance the CXR data, and the authors proposed a depth-wise separable convolutional neural network (DSCNN) architecture. Evaluation was performed on 668 normal, 619 pneumonia, and 536 COVID-19 CXR scans; the average reported multi-class accuracy was 96.43%. The number of patients in the COVID-19 dataset was not available. In [11], a stacked CNN architecture achieved an average accuracy of 92.74%. The evaluation dataset had 270 COVID-19 scans from 170 patients, 1,139 normal scans from 1,015 patients, and 1,355 pneumonia scans from 583 patients. In [12], the reported multi-class average classification accuracy was 94.2%. The evaluation dataset included 5,000 normal, 4,600 pneumonia, and 738 COVID-19 CXR scans. The data was collected from various sources, and patient information was not specified. In [13], transfer learning was investigated for training the CNN architecture. The evaluation dataset included 224 COVID-19, 504 normal, and 700 pneumonia images. A 93.48% average accuracy was reported for three-class classification; the average accuracy increased to 94.72% when viral pneumonia was included in the evaluation. In [14], the performance of three different, previously proposed CNN architectures was evaluated for multi-class classification.
With 2,265 COVID-19 images, the study used the largest COVID-19 dataset reported so far. The average area under the curve (AUC) for classification of COVID-19 from regular pneumonia was 0.73 [14]. Although numerous studies have shown the capability of CNNs to effectively identify COVID-19 from CXR images, none of these studies investigated local phase CXR image features as a multi-feature input to a CNN architecture for improved diagnosis of the COVID-19 disease. Furthermore, except for [14, 7], most of the previous work was evaluated on a limited number of COVID-19 CXR scans. In this work, we show how local phase CXR feature-based image enhancement improves the accuracy of CNN architectures for COVID-19 diagnosis. Specifically, we extract three different CXR local phase image features, which are combined into a multi-feature image. We design a new CNN architecture for processing the multi-feature CXR data. We evaluate our proposed methods on large-scale CXR images obtained from healthy subjects as well as subjects diagnosed with community-acquired pneumonia and COVID-19. Quantitative results show the usefulness of local phase image features for improved diagnosis of the COVID-19 disease from CXR scans.

Our proposed method is designed for processing CXR images and consists of two main stages, as illustrated in Figure 1: 1) we enhance the CXR images (CXR(x, y)) using a local phase-based image processing method in order to obtain a multi-feature CXR image (MF(x, y)), and 2) we classify CXR(x, y) by designing a deep learning approach where the multi-feature CXR images (MF(x, y)), together with the original CXR data (CXR(x, y)), are used to improve the classification performance. Next, we describe how these two major processes are achieved.

In order to enhance the collected CXR images, denoted as CXR(x, y), we use local phase-based image analysis [15]. Three different CXR(x, y) image phase features are extracted: 1) the local weighted mean phase angle (LwPA(x, y)), 2) the LwPA(x, y)-weighted local phase energy (LPE(x, y)), and 3) the enhanced local energy attenuation image (ELEA(x, y)). The LPE(x, y) and LwPA(x, y) image features are extracted using monogenic signal theory, where the monogenic signal image (CXR_M(x, y)) is obtained by combining the band-pass filtered CXR(x, y) image, denoted as CXR_B(x, y), with the Riesz filtered components as:

CXR_M(x, y) = [CXR_B(x, y), CXR_B(x, y) * h_1(x, y), CXR_B(x, y) * h_2(x, y)].

Here h_1 and h_2 represent the vector-valued odd filter (Riesz filter) [16], and * denotes convolution. α-scale space derivative quadrature filters (ASSD) are used for band-pass filtering due to their superior edge detection [17]. Denoting the two Riesz-filtered components by CXR_H1(x, y) = CXR_B(x, y) * h_1(x, y) and CXR_H2(x, y) = CXR_B(x, y) * h_2(x, y), the LwPA(x, y) image is calculated using:

LwPA(x, y) = arctan( Σ_sc CXR_B(x, y) / sqrt( (Σ_sc CXR_H1(x, y))^2 + (Σ_sc CXR_H2(x, y))^2 ) ).

We do not employ noise compensation during the calculation of the LwPA(x, y) image in order to preserve the important structural details of CXR(x, y). The LPE(x, y) image is obtained by accumulating the phase sum of the response vectors over many scales using:

LPE(x, y) = Σ_sc [ |CXR_B(x, y)| − sqrt( CXR_H1(x, y)^2 + CXR_H2(x, y)^2 ) ].

In the above equations, sc represents the number of scales. The LPE(x, y) image extracts the underlying tissue characteristics by accumulating the local energy of the image along several filter responses. The LPE(x, y) image is then used to extract the third local phase image, ELEA(x, y), as described below.
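To make the feature extraction concrete, the following Python sketch computes LwPA(x, y) and LPE(x, y) along the lines of the equations above. It is a minimal sketch under stated assumptions, not the authors' implementation: an isotropic log-Gabor band-pass filter stands in for the ASSD filters of [17], and the wavelengths and sigma_on_f values are illustrative placeholders.

```python
import numpy as np
from numpy.fft import fft2, ifft2, fftfreq

def riesz_filters(shape):
    """Frequency-domain Riesz transform pair (h1, h2)."""
    rows, cols = shape
    u = fftfreq(cols).reshape(1, -1)   # horizontal frequency grid
    v = fftfreq(rows).reshape(-1, 1)   # vertical frequency grid
    radius = np.sqrt(u ** 2 + v ** 2)
    radius[0, 0] = 1.0                 # avoid division by zero at DC
    return -1j * u / radius, -1j * v / radius

def log_gabor(shape, wavelength, sigma_on_f=0.55):
    """Isotropic log-Gabor band-pass filter (a stand-in for the ASSD filters)."""
    rows, cols = shape
    u = fftfreq(cols).reshape(1, -1)
    v = fftfreq(rows).reshape(-1, 1)
    radius = np.sqrt(u ** 2 + v ** 2)
    radius[0, 0] = 1.0
    g = np.exp(-np.log(radius * wavelength) ** 2 / (2 * np.log(sigma_on_f) ** 2))
    g[0, 0] = 0.0                      # suppress the DC component
    return g

def local_phase_features(cxr, wavelengths=(32, 64)):
    """Return (LwPA, LPE) computed over len(wavelengths) scales (sc)."""
    spectrum = fft2(cxr.astype(np.float64))
    h1, h2 = riesz_filters(cxr.shape)
    even_sum = np.zeros(cxr.shape)
    odd1_sum = np.zeros(cxr.shape)
    odd2_sum = np.zeros(cxr.shape)
    lpe = np.zeros(cxr.shape)
    for wl in wavelengths:
        band = spectrum * log_gabor(cxr.shape, wl)
        even = np.real(ifft2(band))        # band-passed image CXR_B
        odd1 = np.real(ifft2(band * h1))   # Riesz component CXR_H1
        odd2 = np.real(ifft2(band * h2))   # Riesz component CXR_H2
        even_sum += even
        odd1_sum += odd1
        odd2_sum += odd2
        # Phase sum: even (symmetric) response minus the odd response magnitude.
        lpe += np.abs(even) - np.sqrt(odd1 ** 2 + odd2 ** 2)
    lwpa = np.arctan2(even_sum, np.sqrt(odd1_sum ** 2 + odd2_sum ** 2))
    return lwpa, lpe

# Usage: lwpa, lpe = local_phase_features(cxr_image)  # cxr_image: 2-D numpy array
```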
ELEA(x, y) is obtained by using the LPE(x, y) image feature as input to an L1-norm-based contextual regularization method. The image model, denoted as the CXR image transmission map (CXR_A(x, y)), enhances the visibility of lung tissue features inside a local region and assures that the mean intensity of the local region is less than the echogenicity of the lung tissue. The scattering and attenuation effects in the tissue are combined as:

LPE(x, y) = ELEA(x, y) × CXR_A(x, y) + ρ × (1 − CXR_A(x, y)).

Here ρ is a constant value representative of the echogenicity in the tissue. In order to calculate ELEA(x, y), CXR_A(x, y) is estimated first by minimizing the following objective function [15]:

(λ/2) × ||CXR_A(x, y) − LPE(x, y)||_2^2 + Σ_{j∈χ} ||W_j ∘ (D_j * CXR_A(x, y))||_1.

In the above equation, ∘ represents element-wise multiplication, χ is an index set, and * is the convolution operator. D_j is calculated using a bank of high-order differential filters [18]. The filter bank enhances the CXR tissue features inside a local region while attenuating the image noise. W_j is a weighting matrix calculated using:

W_j(x, y) = exp( −|D_j(x, y) * LPE(x, y)|^2 ).

In the objective function, the first part measures the dependence of CXR_A(x, y) on LPE(x, y), and the second part models the contextual constraints of CXR_A(x, y) [15]. These two terms are balanced using a regularization parameter λ [15]. After estimating CXR_A(x, y), the ELEA(x, y) image is recovered by inverting the image model above:

ELEA(x, y) = (LPE(x, y) − ρ) / max(CXR_A(x, y), ε)^η + ρ,

where η controls the strength of the attenuation correction and ε is a small constant used to avoid division by zero [15]. The combination of these three types of local phase images as a three-channel input creates a new multi-feature image, denoted as MF(x, y). Qualitative results corresponding to the enhanced local phase images are displayed in Figure 2. Investigating Figure 2, we can observe that the enhanced local phase images extract new lung features that are not visible in the original CXR(x, y) images. Since local phase image processing is intensity invariant, the enhancement results are not affected by intensity variations due to patient characteristics or X-ray machine acquisition settings. The multi-feature image MF(x, y) and the original CXR(x, y) image are used as input to our proposed deep learning architecture, which is explained in the next section.

Our proposed multi-feature CNN architecture consists of two identical convolutional network streams for processing the CXR(x, y) images and the corresponding MF(x, y) images, respectively. Strategies for the optimal fusion of features from multi-modal images are an active area of research. Generally, data is fused earlier when the image features are correlated, and later when they are less correlated [19]. Depending on the dataset, different types of fusion strategies outperform the others [20]. In [21], our group investigated early-, mid-, and late-level fusion operations in the context of bone segmentation from ultrasound data; the late-fusion operation outperformed the other fusion operations. In [22], a late-fusion network used for segmenting brain tumors from MRI data also outperformed other fusion operations. In this work, we design mid-fusion and late-fusion architectures (Fig. 3). As part of this work, we also investigated several fusion operations: sum fusion, max fusion, averaging fusion, concatenation fusion, and convolution fusion. Based on the performance of the fusion operations and fusion architectures in a preliminary experiment, we use the concatenation fusion operation for both of our architectures.
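As an illustration of the late-fusion design with concatenation fusion, the sketch below builds two identical ResNet50 streams whose final feature vectors are concatenated before a single classifier. This is a minimal sketch assuming torchvision's ResNet50 as the encoder; the class name and single-layer classifier head are our own choices for illustration, not necessarily the paper's exact head.

```python
import torch
import torch.nn as nn
from torchvision import models

class LateFusionResNet50(nn.Module):
    """Two identical ResNet50 streams; features are concatenated before the classifier."""
    def __init__(self, num_classes=3):
        super().__init__()
        self.cxr_stream = models.resnet50(pretrained=True)  # stream for CXR(x, y)
        self.mf_stream = models.resnet50(pretrained=True)   # stream for MF(x, y)
        feat_dim = self.cxr_stream.fc.in_features           # 2048 features per stream
        self.cxr_stream.fc = nn.Identity()                  # strip the original classifiers
        self.mf_stream.fc = nn.Identity()
        self.classifier = nn.Linear(2 * feat_dim, num_classes)

    def forward(self, cxr, mf):
        # Both inputs are 3-channel; the grayscale CXR can be replicated
        # across channels (an assumption for this sketch).
        f_cxr = self.cxr_stream(cxr)
        f_mf = self.mf_stream(mf)
        fused = torch.cat([f_cxr, f_mf], dim=1)   # concatenation fusion
        return self.classifier(fused)

# model = LateFusionResNet50(num_classes=3)
# logits = model(cxr_batch, mf_batch)   # both tensors of shape (N, 3, H, W)
```

A mid-fusion variant would instead concatenate intermediate feature maps inside the two streams and share the remaining layers; the late-fusion variant shown here keeps the streams separate until the final feature vectors.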
We use the following network architectures as the encoder network: pretrained AlexNet [23], ResNet50 [24], SonoNet64 [25], XNet (Xception) [26], InceptionV4 (Inception-ResNet-V2) [27], and EfficientNetB4 [28]. Pretrained AlexNet [23] and ResNet50 [24] have been incorporated into various medical image analysis tasks [29]. SonoNet64 achieved excellent performance in both classification and localization tasks [25]. XNet (Xception) [26], InceptionV4 (Inception-ResNet-V2) [27], and EfficientNetB4 [28] were chosen due to their outstanding performance on recent medical data classification tasks as well as on the classification of COVID-19 from chest CT data [30, 31].

We use the following datasets to evaluate the performance of the proposed fusion network models: BIMCV [32], COVIDx [7], and COVID-CXNet [12]. COVID-19 CXR scans from the BIMCV [32] and COVIDx [7] datasets were combined to generate the 'Evaluation Dataset' (Table 1). For the Normal and Pneumonia classes, we randomly selected a subset of 2,567 images (from 2,567 subjects) for the Evaluation Dataset (Table 1). In total, 2,567 images from each class (Normal, Pneumonia, COVID-19) were used during 5-fold cross-validation. Table 2 shows the data split for the COVID-19 data only; a similar split was also performed for the Normal and Pneumonia datasets. In order to provide additional testing for our proposed networks, we designed a new test dataset, which we call 'Test Dataset-2' (Table 3). The Normal and Pneumonia images that were not included in the 'Evaluation Dataset' were part of 'Test Dataset-2'. Furthermore, we included all the COVID-19 scans from COVID-CXNet [12]. In order to show the improvements achieved using our proposed multi-feature CNN architecture, we also trained the same CNN architectures using only MF(x, y) or CXR(x, y) images; we refer to these architectures as mono-feature CNNs. Quantitative performance was evaluated by calculating the average accuracy, precision, recall, and F1-scores for each class [9, 7].

The experiments were implemented in Python using the PyTorch framework. All models were trained using the stochastic gradient descent (SGD) optimizer and the cross-entropy loss function, with a learning rate of 0.001 for the first epoch, a learning rate decay of 0.1 every 15 epochs, and mini-batches of size 16.
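These optimizer settings map directly onto a standard PyTorch training loop; a minimal sketch is shown below. The momentum value, epoch count, and train_loader (assumed to yield paired CXR(x, y) and MF(x, y) batches with labels) are illustrative assumptions, not values from the paper.

```python
import torch.nn as nn
import torch.optim as optim

model = LateFusionResNet50(num_classes=3)   # two-stream model from the earlier sketch
criterion = nn.CrossEntropyLoss()           # cross-entropy loss, as in the paper
# Momentum is not stated in the paper; 0.9 is a common default and an assumption here.
optimizer = optim.SGD(model.parameters(), lr=0.001, momentum=0.9)
# Learning rate 0.001 for the first epoch, decayed by a factor of 0.1 every 15 epochs.
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=15, gamma=0.1)

num_epochs = 60                             # illustrative; not reported in the paper
for epoch in range(num_epochs):
    for cxr_batch, mf_batch, labels in train_loader:  # mini-batches of size 16
        optimizer.zero_grad()
        loss = criterion(model(cxr_batch, mf_batch), labels)
        loss.backward()
        optimizer.step()
    scheduler.step()                        # apply the step-wise learning rate decay
```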
For the local phase image enhancement, we used sc = 2, and the rest of the ASSD filter parameters were kept the same as reported in [15]. For calculating the ELEA(x, y) images, we used λ = 2, ε = 0.0001, and η = 0.85; ρ, the constant related to tissue echogenicity, was chosen as the mean intensity value of LPE(x, y). These values were determined empirically and kept constant during the qualitative and quantitative analysis.

Qualitative analysis: Gradient-weighted Class Activation Mapping (Grad-CAM) [33] visualizations of normal, pneumonia, and COVID-19 cases, obtained by the late-fusion ResNet50 architecture, are presented as qualitative results in Figure 4. Investigating Figure 4, we can see the discriminative regions of interest localized in the normal, pneumonia, and COVID-19 data.

Quantitative analysis of the Evaluation Dataset: Table 4 shows the average accuracy of the 5-fold cross-validation on the 'Evaluation Dataset' for the mono-feature CNN architectures as well as the proposed multi-feature CNN architectures. A Box and Whisker plot is presented in Figure 5. In most of the investigated network designs, the MF(x, y)-based mono-feature CNN architectures outperform the CXR(x, y)-based mono-feature CNN architectures. The best average accuracy is obtained when using our proposed multi-feature ResNet50 [24] architecture. All multi-feature CNNs with the mid- and late-fusion operations achieved a statistically significant difference in classification accuracy compared with the mono-feature CNNs with original CXR(x, y) images as input (p < 0.05 using a paired t-test at the 5% significance level). Except for SonoNet64 [25], XNet (Xception) [26], and InceptionV4 (Inception-ResNet-V2) [27], all multi-feature CNNs with the mid-fusion operation show a statistically significant difference in classification accuracy compared with the mono-feature CNNs with MF(x, y) images as input (p < 0.05). We did not find any statistically significant difference in the average accuracy results between the mid-fusion and late-fusion networks (p > 0.05). Figure 6 presents the confusion matrix results, together with the average precision, recall, and F1-scores, for all multi-feature late-fusion CNN architectures, obtained from the 5-fold cross-validation on the 'Evaluation Dataset'. One important aspect observed from the presented results is that almost all the investigated multi-feature networks achieved very high precision, recall, and F1-scores for the COVID-19 data, indicating that very few cases from the other classes were misclassified as COVID-19.

Quantitative analysis of Test Dataset-2: Multi-feature ResNet50 provides the highest overall accuracy (Table 5), which is consistent with the quantitative result achieved on the 'Evaluation Dataset'. Figure 7 shows a Box and Whisker plot for each network. All multi-feature CNNs with the late-fusion operation achieved a statistically significant difference in classification accuracy compared with the mono-feature CNNs with original CXR(x, y) images as input (p < 0.05 using a paired t-test at the 5% significance level). Except for XNet (Xception) [26], all multi-feature CNNs with the mid-fusion operation achieved a statistically significant difference compared with the mono-feature CNNs with original CXR(x, y) images as input (p < 0.05), and likewise a statistically significant difference compared with the mono-feature CNNs with MF(x, y) images as input (p < 0.05). Similar to the 'Evaluation Dataset' results, there was no statistically significant difference in the average accuracy results between the mid-fusion and late-fusion networks (p > 0.05), except for the ResNet50 [24] and XNet (Xception) [26] architectures. Confusion matrix results, together with the average precision, recall, and F1-score values, for all multi-feature late-fusion CNN architectures are presented in Figure 8. Similar to the results presented for the 'Evaluation Dataset', high precision, recall, and F1-score values are obtained for the COVID-19 data.
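Since the same folds are used by every model, the accuracies above are compared with a paired t-test on the per-fold results. A minimal scipy sketch, with made-up accuracy values for illustration only:

```python
from scipy import stats

# Per-fold accuracies from 5-fold cross-validation.
# These numbers are made up for illustration; they are not the paper's results.
acc_multi_feature = [0.956, 0.951, 0.958, 0.954, 0.957]
acc_mono_feature = [0.932, 0.928, 0.935, 0.930, 0.934]

# Paired t-test: the folds are matched, so we test the per-fold differences.
t_stat, p_value = stats.ttest_rel(acc_multi_feature, acc_mono_feature)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}, significant: {p_value < 0.05}")
```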
The development of new computer-aided diagnostic methods for robust and accurate diagnosis of the COVID-19 disease from CXR scans is important for improved management of this pandemic. In order to provide a solution to this need, in this work, we presented a multi-feature deep learning model for the classification of CXR images into three classes: COVID-19, pneumonia, and normal (healthy) subjects. Our work was motivated by the need for an enhanced representation of CXR images to achieve improved diagnostic accuracy. To this end, we proposed a local phase-based CXR image enhancement method. We have shown that by using the enhanced CXR data, denoted as MF(x, y), in conjunction with the original CXR data, the diagnostic accuracy of CNN architectures can be improved. Our proposed multi-feature CNN architectures were trained on a large dataset in terms of the number of COVID-19 CXR scans and achieved improved classification accuracy across all classes. One very encouraging result is that the proposed models show high precision, recall, and F1-scores on the COVID-19 class for both testing datasets. In addition, except for AlexNet [23], all multi-feature CNNs with the late-fusion operation have fewer parameters than the corresponding multi-feature CNNs with the mid-fusion operation (Figure 9: model size vs. overall accuracy). Since the image classifier of AlexNet [23] consists of three fully connected (fc) layers, which store the majority of the parameters, AlexNet [23] with the late-fusion operation almost doubles the number of parameters compared with the mid-fusion operation; the rest of the networks have only one or no fc layer in their image classifiers. Finally, compared to previously reported results, our work achieves the highest three-class classification accuracy on a significantly larger COVID-19 dataset (Table 6). This will ensure few false-positive COVID-19 cases detected from CXR images and will help alleviate the burden on the healthcare system by reducing the number of CT scans performed. While the obtained results are very promising, more evaluation studies are required, specifically for diagnosing early-stage COVID-19 from CXR images. Our future work will involve the collection of CXR scans from early-stage or asymptomatic COVID-19 patients. We will also investigate the design of a CXR-based patient triaging system.

References
[1] A review of coronavirus disease-2019 (COVID-19).
[2] Coronavirus disease 2019.
[3] An interactive web-based dashboard to track COVID-19 in real time.
[4] Detection of SARS-CoV-2 in different types of clinical specimens.
[5] Development of reverse transcription (RT)-PCR and real-time RT-PCR assays for rapid detection and quantification of viable yeasts and molds contaminating yogurts and pasteurized food products.
[6] Automated detection of COVID-19 cases using deep neural networks with X-ray images.
[7] COVID-Net: A tailored deep convolutional neural network design for detection of COVID-19 cases from chest X-ray images.
[8] COVID-ResNet: A deep learning framework for screening of COVID19 from radiographs.
[9] COVIDiagnosis-Net: Deep Bayes-SqueezeNet based diagnostic of the coronavirus disease 2019 (COVID-19) from X-ray images.
[10] CovidLite: A depth-wise separable deep neural network with white balance and CLAHE for detection of COVID-19.
[11] Stacked convolutional neural network for diagnosis of COVID-19 disease from X-ray images.
[12] COVID-CXNet: Detecting COVID-19 in frontal chest X-ray images using deep learning.
[13] COVID-19: Automatic detection from X-ray images utilizing transfer learning with convolutional neural networks.
[14] UMLS-ChestNet: A deep convolutional neural network for radiological findings, differential diagnoses and localizations of COVID-19 in chest X-rays.
[15] Localization of bone surfaces from ultrasound data using local phase information and signal transmission maps.
[16] The monogenic signal.
[17] α scale spaces filters for phase based edge detection in ultrasound images.
[18] Efficient image dehazing with boundary constraint and contextual regularization.
[19] Multimodal deep learning. In: ICML.
[20] A review: Deep learning for medical image segmentation using multi-modality fusion.
[21] Automatic segmentation of bone surfaces from ultrasound using a filter-layer-guided CNN.
[22] Multi-modal convolutional neural networks for brain tumor segmentation.
[23] ImageNet classification with deep convolutional neural networks.
[24] Deep residual learning for image recognition.
[25] SonoNet: Real-time detection and localisation of fetal standard scan planes in freehand ultrasound.
[26] Xception: Deep learning with depthwise separable convolutions.
[27] Inception-v4, Inception-ResNet and the impact of residual connections on learning.
[28] EfficientNet: Rethinking model scaling for convolutional neural networks.
[29] A survey on deep learning in medical image analysis.
[30] Identifying melanoma images using EfficientNet ensemble: Winning solution to the SIIM-ISIC melanoma classification challenge.
[31] Automatic detection of coronavirus disease (COVID-19) in X-ray and CT images: A machine learning-based approach.
[32] BIMCV COVID-19+: A large annotated dataset of RX and CT images from COVID-19 patients.
[33] Grad-CAM: Visual explanations from deep networks via gradient-based localization.

Acknowledgements: The authors are thankful to all the research groups and national agencies worldwide who provided the open-source X-ray images.
Funding: Nothing to declare.
Conflict of interest: The authors declare that they have no conflict of interest.