key: cord-0931651-9l76ud4f authors: Hu, Qinhua; Gois, Francisco Nauber B.; Costa, Rafael; Zhang, Lijuan; Yin, Ling; Magai, Naercio; de Albuquerque, Victor Hugo C. title: Explainable artificial intelligence-based edge fuzzy images for COVID-19 detection and identification date: 2022-05-13 journal: Appl Soft Comput DOI: 10.1016/j.asoc.2022.108966 sha: f15de3302e0a4c45efbfe64d50e8ec069347b207 doc_id: 931651 cord_uid: 9l76ud4f The COVID-19 pandemic continues to wreak havoc on the world's population's health and well-being. Successful screening of infected patients is a critical step in the fight against it, with radiology examination using chest radiography being one of the most important screening methods. For the definitive diagnosis of COVID-19 disease, reverse-transcriptase polymerase chain reaction remains the gold standard. Currently available lab tests may not be able to detect all infected individuals; new screening methods are required. We propose a Multi-Input Transfer Learning COVID-Net fuzzy convolutional neural network to detect COVID-19 instances from chest X-rays, motivated by the latter and by the open-source efforts in this research area. Furthermore, we use an explainability method to investigate several Convolutional Network COVID-Net forecasts, in an effort not only to gain deeper insight into critical factors associated with COVID-19 instances, but also to aid clinicians in improving screening. We show that using transfer learning and pre-trained models, we can detect it with a high degree of accuracy. Using X-ray images, we chose four neural networks to predict the probability of infection. Finally, in order to achieve better results, we considered various methods to verify the techniques proposed here. As a result, we were able to create a model with an AUC of 1.0 and accuracy, precision, and recall of 0.97. The model was quantized for use in Internet of Things devices and maintained an accuracy of 0.95.
The Coronavirus disease (COVID-19) is a viral disease caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). The outbreak has had a detrimental impact on both the economy and public health. Many nations lack the medical tools necessary for COVID-19 detection and are seeking a low-cost, fast tool to detect and diagnose the virus efficiently. Even though a chest X-ray (CXR) scan is a useful candidate, the images created by the scans must be analyzed, and large numbers of evaluations need to be processed. A CXR of individuals is a vital step in the struggle against COVID-19. This disease causes pulmonary opacities and bilateral parenchymal ground glass, sometimes with a peripheral lung distribution and a rounded morphology. Several Deep Learning (DL) techniques have shown promising accuracy in detecting COVID-19 patients from CXRs [44] [1] . Because most hospitals have X-ray machines, it is the radiologists' first choice. Automatic diagnosis of COVID-19 from chest images is particularly desirable because radiologists are scarce and heavily burdened in pandemic conditions. Despite the fact that most machine learning models include a margin of error, automation can be critical for screening patients, who can then be assessed with more precise tests. Image segmentation is an essential procedure for most medical image analysis tasks. Accurate segmentations provide clinicians and patients with essential information for 2-D and 3-D visualization, surgical planning, and early disease detection [28] . Segmentation delineates regions of interest (ROIs), e.g., lungs, lobes, bronchopulmonary segments, and infected areas or lesions, in CXR or computed tomography (CT) images. Segmented regions can be further used to extract features for description and other applications [40] .
Automated computer-aided diagnostic (CADx) tools powered by artificial intelligence (AI) techniques to detect and distinguish COVID-19 related abnormalities would be tremendously valuable, given the significant number of patients. These tools are particularly vital in places with inadequate CT accessibility or radiological expertise, and CXRs enable fast, high-throughput triage in mass casualty situations. These instruments combine radiological image processing components with computer vision to identify common disease indications and localize problematic ROIs. Recent advances in machine learning (ML), especially DL approaches using convolutional neural networks (CNNs), have demonstrated promising performance in identifying, classifying, and measuring disease patterns in medical images from CT scans and CXRs [37] [7] [29] [34] [11] [9] [10] [27] [38] . In the past decades, fuzzy logic has played a vital role in many research areas [9] . Fuzzy logic is an offshoot of fuzzy set theory, which reproduces human reasoning and thinking to boost a procedure's efficacy when managing uncertain or vague data [38] . With little loss in model accuracy, post-training quantization is a conversion technique that can reduce model size while improving CPU and hardware accelerator latency. An already-trained TensorFlow floating-point model can be quantized by converting it to TensorFlow Lite format with the TensorFlow Lite Converter. The goal of this research is to use ML to solve the problem of identifying COVID-19 from X-rays. VGG16, ResNet152V2, InceptionV3, and EfficientNetB3 were chosen as the neural networks to predict disease probability. Finally, in order to obtain better results, we use several techniques proposed here, such as fuzzy filters and multi-input networks. Fuzzy rough set-based approaches find a reduct directly on the initial data according to a fuzzy equivalence relation.
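Purely to illustrate what post-training quantization does (this is a conceptual NumPy sketch of 8-bit affine quantization, not the TensorFlow Lite Converter's actual implementation), a float weight tensor can be mapped to int8 with a scale and zero point and then approximately recovered:

```python
import numpy as np

def quantize_int8(w):
    """Affine-quantize a float tensor to int8 with a scale and zero point."""
    lo, hi = float(w.min()), float(w.max())
    scale = (hi - lo) / 255.0 if hi > lo else 1.0
    zero_point = np.round(-128 - lo / scale)
    q = np.clip(np.round(w / scale + zero_point), -128, 127).astype(np.int8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Map int8 values back to approximate floats."""
    return (q.astype(np.float32) - zero_point) * scale

weights = np.random.randn(64, 64).astype(np.float32)
q, s, z = quantize_int8(weights)
restored = dequantize(q, s, z)
# quantization error is bounded by one quantization step
assert np.max(np.abs(weights - restored)) <= s
```

The int8 tensor needs a quarter of the memory of the float32 original, which is the source of the size and latency gains the paper exploits for IoT deployment.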
The difference between items is preserved by a fuzzy relation. Classification precision can be improved using a fuzzy rough set approach. As a result, we were able to produce models with an Area Under Curve (AUC) of 1.0 and several variations with very high performance on evaluation metrics such as precision, accuracy, and recall. The model quantized with the TensorFlow Lite Converter retains an accuracy of 0.95. This study's main novelty is the use of a multi-input network combining segmented and non-segmented images in a neural network composed of two pre-trained networks. In a nutshell, the primary contributions of this paper are: • use a multi-input approach for CXR COVID-19 classification; • apply a trapezoidal membership function to generate fuzzy edge images of CXR with COVID-19; • obtain classification models with an AUC of 0.99 and a recall of 100% for CXR COVID-19 detection. In epidemic regions, COVID-19 presumed patients are in immediate need of identification and suitable therapy. Nevertheless, medical images, mainly chest CT, contain hundreds of slices that take experts a very long time to diagnose. Additionally, COVID-19, being a new virus, has symptoms comparable to several other kinds of pneumonia, which requires radiologists to accumulate extensive experience to attain a more accurate diagnosis. Therefore, AI-assisted diagnosis utilizing medical images is highly desirable [40] . Several studies aim to separate COVID-19 patients from non-COVID-19 subjects. Researchers have distinguished COVID-19 pneumonia manifestations from other viral pneumonia on chest CT scans with higher specificity. It was noted that COVID-19 pneumonia was peripherally distributed together with ground glass opacities (GGO) and vascular thickening [33] . Abdel-Basset et al.
[44] propose a hybrid COVID-19 detection model based on an improved marine predators algorithm (IMPA) for X-ray image segmentation. The ranking-based diversity reduction (RDR) strategy enhances the IMPA to achieve better solutions in fewer iterations. The experimental results reveal that the hybrid model outperforms all other algorithms across a range of metrics. Abdul Waheed et al. [44] present a process to generate synthetic chest X-ray (CXR) images by developing an Auxiliary Classifier Generative Adversarial Network (ACGAN) [33] . The segmentation approaches in COVID-19 applications can be mostly grouped into two classes, i.e., lung-region-oriented approaches and lung-lesion-oriented approaches. The former, lung-region-oriented approaches, aim to separate lung areas, i.e., the entire lung and lung lobes, from other areas in CT or X-ray images, which is considered a prerequisite step in COVID-19 analysis [40] . Jin et al. [15] present a two-stage pipeline for screening COVID-19 in CT images, where the entire lung area is first detected through an efficient segmentation network based on UNet+. Wang proposes a novel COVID-19 Pneumonia Lesion segmentation network (COPLE-Net) to better deal with lesions of various scales and appearances [45] . Chouhan et al. [7] extract features from images using several pre-trained neural network models. The study uses five distinct models, examines their performance, and combines their outputs, which beat the individual models, reaching state-of-the-art performance in pneumonia identification. The study reached an accuracy of 96.4% with a recall of 99.62% on unseen data from the Guangzhou Women and Children's Medical Center dataset. Zheng et al. [49] developed weakly-supervised deep learning-based software utilizing 3D CT volumes to identify COVID-19.
The lung region was segmented using a pre-trained UNet; then, the segmented 3D lung region was fed into a 3D deep neural network to predict the probability of COVID-19 infection. The present study takes a different approach from those above, using a multi-input architecture and segmented images in conjunction with non-segmented images. To extract information from fabric images, Lin et al. propose a multi-input neural network. The segmented small-scale image and the related features collected using standard methods are the inputs. Experiments suggest that including these manually extracted features in a neural network can increase its performance to a degree [22] . For identifying autism, Epalle et al. propose a multi-input deep neural network model. The architecture of the model is built to accommodate neuroimaging data that has been preprocessed using three different reference atlases. For each training example, the proposed deep neural network receives data from three alternative parcellation algorithms at the same time and learns discriminative features from the three input sets automatically. As a result of this process, learned features become more general and less reliant on a single brain parcellation approach. The study used a collection of 1,038 real participants and an augmented set of 10,038 samples to validate the model utilizing cross-validation methods. On genuine data, the study achieved a classification accuracy of 78.07 percent, and on augmented data, the model reached a classification accuracy of 79.13 percent, which is about 9% higher than previously reported results [12] . Blind/referenceless image spatial quality evaluator (BRISQUE) [8] [35] is a reference-less quality assessment technique. The BRISQUE algorithm estimates the quality score of an image with high computational efficiency.
The algorithm computes pointwise statistics of locally normalized luminance signals and measures image naturalness based on measured deviations from a natural image model. The algorithm also models the occurrence of pairwise statistics of neighboring normalized luminance signals, which provide distortion orientation information. Although features are calculated at multiple scales, the model remains computationally fast and time-efficient [8] [35] . The BRISQUE model operates in the spatial domain. First, a locally normalized luminance, also known as the Mean Subtracted Contrast Normalized (MSCN) coefficient [8] , is calculated as

$$\hat{I}(m,n) = \frac{I(m,n) - \mu(m,n)}{\sigma(m,n) + C}$$

where $I(m,n)$ is the intensity image, $\mu(m,n)$ is the local mean, $\sigma(m,n)$ is the local standard deviation used for normalization, and $C$ is a constant that avoids a zero denominator (variance); $m \in \{1, \dots, M\}$ and $n \in \{1, \dots, N\}$ are spatial indices, with $M$ and $N$ the image height and width, respectively. The local mean $\mu(m,n)$ and local standard deviation $\sigma(m,n)$ are calculated using the following equations:

$$\mu(m,n) = \sum_{k=-K}^{K} \sum_{l=-L}^{L} w_{k,l}\, I(m+k,\, n+l)$$

$$\sigma(m,n) = \sqrt{\sum_{k=-K}^{K} \sum_{l=-L}^{L} w_{k,l}\, \big(I(m+k,\, n+l) - \mu(m,n)\big)^2}$$

where $w = \{w_{k,l} \mid k = -K, \dots, K,\; l = -L, \dots, L\}$ is a 2-D circularly symmetric Gaussian weighting window. K-means clustering is a common segmentation technique in pixel-based methods. Clustering pixel-based approaches have low complexity in comparison to region-based approaches. K-means clustering is adequate for image segmentation because the number of clusters is usually known for images of particular areas of the body. K-means is a clustering algorithm that partitions data. Clustering is the procedure of grouping data points with similar feature vectors into several clusters. Let the feature vectors obtained from the $l$ data points to be clustered be $X = \{x_i \mid i = 1, 2, \dots, l\}$. The generalized algorithm initializes $k$ cluster centroids $C = \{c_j \mid j = 1, 2, \dots, k\}$ by randomly choosing $k$ feature vectors from $X$. Next, the feature vectors are grouped into $k$ clusters using a chosen distance measure, such as the Euclidean distance [46] . Edge detection is the strategy used most often for segmenting images based on fluctuations in intensity.
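A minimal NumPy sketch of the MSCN computation above (using a uniform local window for simplicity, where BRISQUE proper uses a Gaussian window; the function names are ours):

```python
import numpy as np

def local_stats(img, K=1):
    """Local mean and std over a (2K+1)x(2K+1) window.
    Uniform weights are used here for simplicity; BRISQUE uses a
    circularly symmetric Gaussian window w_{k,l}."""
    H, W = img.shape
    pad = np.pad(img, K, mode='edge')
    n = (2 * K + 1) ** 2
    mu = np.zeros((H, W), dtype=float)
    for dk in range(-K, K + 1):
        for dl in range(-K, K + 1):
            mu += pad[K + dk:K + dk + H, K + dl:K + dl + W]
    mu /= n
    var = np.zeros((H, W), dtype=float)
    for dk in range(-K, K + 1):
        for dl in range(-K, K + 1):
            var += (pad[K + dk:K + dk + H, K + dl:K + dl + W] - mu) ** 2
    return mu, np.sqrt(var / n)

def mscn(img, C=1.0):
    """Mean Subtracted Contrast Normalized coefficients: (I - mu)/(sigma + C)."""
    mu, sigma = local_stats(img)
    return (img - mu) / (sigma + C)

img = np.arange(25, dtype=float).reshape(5, 5)
coeffs = mscn(img)
assert coeffs.shape == img.shape
```

BRISQUE then fits the distribution of these coefficients (and of products of neighboring coefficients) to derive its quality features.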
Edge detection is a prerequisite for image segmentation because it usually allows the image to be represented in black and white. Edge detection identifies the size and shape of an object. A better edge detection method is very likely to be a valuable tool for several applications. A digital image is a discrete description of reality, composed of the colors of pixels and the positions of objects. Any processing of the image has to account for its discretization issues. For instance, it is sometimes not possible to discern which pixel belongs to which object; even a human has some difficulty locating the edges in an image. Conventional segmentation techniques such as watershed, region growing, and thresholding are suitable for segmenting regions with clear boundaries. However, for cases with ill-defined boundaries and inhomogeneity, these methods cannot help segment the areas. Therefore, fuzzy logic appears as a suitable choice for representing these edges [24] [21] [14] [25] [2] . Fuzzy systems are an alternative to classic Boolean logic, which has only two states: false or true. In Boolean logic, membership is signaled by either 0 (absolutely false) or 1 (absolutely true); in fuzzy logic, membership takes values across the whole range in between. Fuzzy systems handle uncertainty in the information and are well suited to image processing problems [39] . In a fuzzy inference system (FIS) [21] , each fuzzy number declares a fuzzy set and maps a predetermined range of crisp values to grades of membership. The fuzzy sets of the input membership functions transfer crisp inputs into fuzzy inputs. The set is defined as $X = \{x_1, x_2, \dots, x_n\}$, where $x_i$ is an element of the set $X$. A membership value expresses the grade of membership of each element $x_i$ in a fuzzy set $A$, which yields $A = \{\mu_1(x_1), \mu_2(x_2), \dots, \mu_n(x_n)\}$. A membership function (MF) is a curve that defines how every pixel of the input is mapped to a membership value between 0 and 1.
The MF curve is a function of a vector $x$ and is determined by four scalar parameters $a$, $b$, $c$, and $d$ [17] [18] . The use of DL and CNN methodologies in computer vision software has grown quickly. DL draws its power from optimizing multiple layers of neurons connected as a system of linear and nonlinear operators. A convolutional neural network is a type of feed-forward neural network broadly employed for image-based classification, object detection, and recognition. The fundamental principle is convolution, which generates filtered feature maps stacked over each other [26] . A CNN is a DL structure that computes the convolution between the weights and an image input. It selects attributes from the input data, as opposed to conventional ML methods. During the learning process, the optimal values of the convolution coefficients are discovered using a pre-defined cost function, based on which the features are automatically determined. Convolution is an operation that takes a small matrix of numbers (known as a kernel or filter), passes it over an image, and transforms the image based on the filter values. Subsequent feature map values are calculated according to the following formula [48] :

$$G[m,n] = (f * h)[m,n] = \sum_{j} \sum_{k} h[j,k]\, f[m-j,\, n-k]$$

where $f$ is the input image and $h$ is the kernel. The convolutional layer produces a convolved feature map as the resulting signal after applying the dot product between a small region of the input and the filter weights to which it is connected. Then, the pooling layer performs a downsampling operation. In a convolutional neural network, the size of the pooling layer output can be calculated using the following formula [26] :

$$O = \frac{W - F + 2P}{S} + 1$$

where $W$ is the input size, $F$ is the convolutional kernel size, $P$ is the padding value, and $S$ is the stride. Transfer learning has gained considerable importance since it can work with little or no information in the training phase. That is, well-established knowledge is adapted by transferring learning from one domain to another.
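The output-size formula above can be checked with a few lines of Python (the example hyperparameters are VGG16-style values, used here purely for illustration):

```python
def conv_output_size(W, F, P, S):
    """Spatial output size of a conv/pooling layer: (W - F + 2P)/S + 1."""
    out = (W - F + 2 * P) / S + 1
    assert out.is_integer(), "hyperparameters must tile the input exactly"
    return int(out)

# a 3x3 convolution with padding 1 and stride 1 preserves the spatial size:
assert conv_output_size(224, 3, 1, 1) == 224
# a 2x2 max-pooling with stride 2 halves it:
assert conv_output_size(224, 2, 0, 2) == 112
```

Chaining these calls layer by layer reproduces how a 224x224 input shrinks through a deep network.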
Transfer learning is well-suited to scenarios where a model performs poorly due to obsolete or scant data [23] [50] . The form of transfer learning used in DL is known as inductive transfer. This is where the range of feasible models, i.e., the model bias, is narrowed in a practical way using a model fit on a different but related task. Since AlexNet won the ImageNet competition, CNNs have been utilized for a broad selection of DL applications. From 2012 to the present, researchers have been applying CNNs to many different tasks [31] . The VGG16 network is composed of 13 convolutional layers with 3 x 3 filters and three fully connected layers, with pooling layers attached after each stage. A max-pooling layer follows some of the convolutional layers. The stride is set to 1 pixel. The five max-pooling layers use a fixed stride of 2 and a 2 x 2-pixel filter. Padding of 1 pixel is applied for the 3 x 3 convolutional layers, and all the layers of the network use ReLU as the activation function [32] . Deep Residual Network (ResNet) is an Artificial Neural Network (ANN) architecture that overcomes the reduced accuracy observed when a plain ANN is made deeper than a shallower ANN. The purpose of the Deep Residual Network is to build deeper ANNs with higher accuracy; the idea is to build an ANN that can propagate weight updates to shallower layers, i.e., mitigate gradient degradation [4] . ResNet improved DL by introducing the notion of restructuring layers so that residual functions are learned relative to the layer inputs, instead of learning functions that have no reference to the layer inputs. This restructuring solved the vanishing gradient problem in CNNs and allowed the training of considerably deeper neural networks. ResNet-152 includes 152 layers, 8x the depth of VGGNet, yet has lower complexity.
An ensemble of ResNet-152 models attained a 3.57% top-5 error on the ImageNet test dataset and won first place in the ILSVRC 2015 classification challenge. Google's Inception V3 is a variant of the Inception family of DL architectures, trained on the original ImageNet dataset of more than 1 million images across 1000 classes [20] . EfficientNet is a DL family of models with fewer parameters than state-of-the-art versions. The model improves performance through a smart mixture of depth, width, and resolution, scaled using a compound coefficient. The advantage of EfficientNets compared to other CNNs is the decrease in FLOPS and parameters while increasing accuracy. The classification accuracy of EfficientNet can also be better than that of models with similar complexity [19] [31] [41] . Convolutional layers can act as object detectors without an object's annotated bounding box being given in the training sample; a CNN loses this ability when connected to a fully connected layer. Unlike a traditional CNN, whose goal when looking at an image is to identify the image class, a class activation map produces a heatmap showing the regions of the image most significant for the classification. Class activation mapping is a method that generates, for a specified category, the discriminative regions linked to that object class [6] [3] . Deep learning has a long track record of success, but the use of heavy algorithms on large graphical processing units is not ideal. In response to this disparity, a new class of deep learning methods known as quantization has emerged. Quantization is used to reduce the size of the neural network while maintaining high accuracy. This is particularly important for on-device applications, where memory and computation capacity are constrained.
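As a rough sketch of how a class activation map is formed (a weighted sum of the final convolutional feature maps, with weights taken from the target class's output unit; the array shapes below are illustrative assumptions, not the paper's actual network dimensions):

```python
import numpy as np

def class_activation_map(feature_maps, class_weights):
    """CAM: weighted sum over channels of the last conv layer's feature maps.
    feature_maps: (H, W, C) activations from the final convolutional layer
    class_weights: (C,) weights connecting the global-average-pooled channels
                   to the target class logit."""
    cam = np.tensordot(feature_maps, class_weights, axes=([2], [0]))
    cam = np.maximum(cam, 0)      # keep only positively contributing regions
    if cam.max() > 0:
        cam /= cam.max()          # normalize to [0, 1] for heatmap display
    return cam

fmaps = np.random.rand(7, 7, 512)   # e.g. a 7x7x512 final feature volume
w = np.random.rand(512)             # weights for the "COVID-19" output unit
heatmap = class_activation_map(fmaps, w)
assert heatmap.shape == (7, 7)
```

The heatmap is then upsampled to the input resolution and overlaid on the CXR, which is how the discriminative lung regions in the paper's figures are visualized.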
The process of approximating a neural network that uses floating-point numbers by a neural network with low bit-width numbers is known as quantization in deep learning. The memory requirements and computational costs of using neural networks are drastically reduced as a result. The proposed method consists of a multi-input network with two input images: a non-segmented image and a second image segmented using a fuzzy trapezoidal membership function or K-means cluster segmentation. The proposal consists of four stages. The first reads the set of images from the two datasets and shuffles the data. The second phase applies the fuzzy filter and the K-means segmenter separately to compare which achieves the best accuracy. The fuzzy filter applies a trapezoidal fuzzy number defined as:

$$\mu(x;\, a, b, c, d) = \max\left(\min\left(\frac{x-a}{b-a},\; 1,\; \frac{d-x}{d-c}\right),\; 0\right)$$

We use the BRISQUE score to select the best parameters $(a, b, c, d)$ for the fuzzy filter. Initially, images treated with the fuzzy filter and with the clustering segmenter are used to train four networks using transfer learning, namely VGG16, InceptionV3, ResNet152V2, and EfficientNetB3. This test aims to compare the performance of the fuzzy filter against K-means clustering. We evaluate the tests using the AUC, accuracy, precision, and recall metrics. The third phase consists of applying tests with a multi-input network built from two pre-trained networks. We altered the networks by substituting the last layer with a fully connected layer with 20 nodes feeding another fully connected sigmoid layer with a single node. The criteria are applied to 12 combinations of VGG16, InceptionV3, ResNet152V2, and EfficientNetB3. These tests were executed over ten epochs. The combination with the best AUC score was chosen to be tuned and evaluated. The fourth phase of the process consists of tuning the best model chosen.
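The trapezoidal membership function used by the fuzzy filter can be sketched in plain NumPy (equivalent in spirit to skfuzzy's trapmf, which the paper actually uses; the parameter values below are the ones the paper reports tuning with the BRISQUE score):

```python
import numpy as np

def trapmf(x, a, b, c, d):
    """Trapezoidal membership function:
    0 below a, rising on (a, b), 1 on [b, c], falling on (c, d), 0 above d."""
    x = np.asarray(x, dtype=float)
    y = np.zeros_like(x)
    if b > a:
        y = np.where((x > a) & (x < b), (x - a) / (b - a), y)
    y = np.where((x >= b) & (x <= c), 1.0, y)
    if d > c:
        y = np.where((x > c) & (x < d), (d - x) / (d - c), y)
    return y

# pixel intensities mapped through the paper's tuned parameters
pixels = np.array([0.0, 0.3, 1.0, 150.0, 250.0])
mu = trapmf(pixels, a=0.2, b=0.4, c=200, d=200)
# -> [0.0, 0.5, 1.0, 1.0, 0.0]
```

Applying this function per pixel yields the "fuzzy edge image" fed to one branch of the multi-input network.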
We train the best model for 100 epochs, and the ROC curve, f1-score, and recall metrics by epoch are presented. The last step consists of using class activation maps to compare explainable ML practice with and without the fuzzy filter. We use Adaptive Moment Estimation (Adam) as the optimizer, binary cross-entropy as the loss function, and the sigmoid as the networks' activation function. The initial learning rate was 0.001. We chose the simple learning-rate schedule of decreasing the learning rate by a constant factor when a performance metric plateaus on the validation/test set (commonly called ReduceLROnPlateau). We configure ReduceLROnPlateau to monitor the validation loss with a factor of 0.2 and a patience of 2. The best combination was obtained after 100 epochs. We use two datasets to train the proposed COVID-Net. The first dataset is available at https://github.com/ieee8023/covid-chestxray-dataset and was approved by the University of Montreal's Ethics Committee (Fig. 4) . The dataset is a collection of CXRs of healthy versus pneumonia (coronavirus)-affected patients, along with a few other categories such as SARS (Severe Acute Respiratory Syndrome), Streptococcus, and ARDS (Acute Respiratory Distress Syndrome). The second dataset is available on the Kaggle platform at https://www.kaggle.com/nabeelsajid917/covid-19-x-ray-10000-images and was used to test the model. We use the AUC, accuracy, precision, and recall to compare the models. The trapezoidal rule is used to compute the AUC. The resulting area is equal to the Mann-Whitney U statistic divided by $N_1 N_2$, where $N_1$ and $N_2$ are the number of instances in classes $C_1$ and $C_2$, respectively. The AUC can be described as the probability of correctly identifying the $C_1$ case when confronted with a randomly selected case from each class. Let $I(x,y): \mathbb{R}^2 \to \mathbb{R}$ be a medical image and $S(I(x,y)): \mathbb{R}^2 \to \Omega$, $\Omega = \{0, 1\}$, a binary decision on image $I(x,y)$.
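The Mann-Whitney interpretation of the AUC above (the U statistic divided by N1·N2) can be computed directly; a naive O(N1·N2) sketch:

```python
def auc_mann_whitney(pos_scores, neg_scores):
    """AUC as the Mann-Whitney U statistic divided by N1*N2: the probability
    that a randomly chosen positive case outranks a randomly chosen negative
    one (ties count half)."""
    u = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                u += 1.0
            elif p == n:
                u += 0.5
    return u / (len(pos_scores) * len(neg_scores))

# a perfect separator gives AUC = 1.0, as reported for the paper's best model
assert auc_mann_whitney([0.9, 0.8], [0.2, 0.1]) == 1.0
assert auc_mann_whitney([0.6, 0.2], [0.4, 0.4]) == 0.5
```

In practice the same value is obtained by integrating the ROC curve with the trapezoidal rule, which is what the metric libraries do.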
According to [30] , with the gold standard denoted $G$ and the result denoted $R$, each pixel can be classified as a true positive (TP), false positive (FP), true negative (TN), or false negative (FN). The precision is given by:

$$\text{Precision} = \frac{TP}{TP + FP}$$

The recall is given by:

$$\text{Recall} = \frac{TP}{TP + FN}$$

where TP is the number of true positives and FN the number of false negatives. Recall is the capability of the classifier to find all the positive samples; the best value is 1, and the worst value is 0. Accuracy is a metric for measuring the model's performance in categorizing both positive and negative classes, evaluating all correctly classified data against all data. It is given by:

$$\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}$$

The number of features in each feature map in a CNN is at most a constant times the number of input pixels $n$ (usually 1). Because each output is merely the sum-product of $k$ pixels in the image and $k$ weights in the filter, and $k$ does not vary with $n$, convolving a fixed-size filter across an image with $n$ pixels requires $O(n)$ time. The process of building the single-input model is described in the algorithm in Listing 1. Three parameters are required by the method: the pre-trained network to be extended (the model uses transfer learning, adding layers to a pre-trained network), the weights of the added adjacent layers, and the dropout value used for network generalization. BatchNormalization is used in this method. Batch normalization is a transformation that keeps the mean output close to 0 and the standard deviation of the output close to 1. The algorithm in Listing 2 describes the process of creating the multi-input model. The approach requires the same parameters used to generate the single-input model, except that the network parameter is an array listing the networks used to build the model.
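These metrics follow directly from the confusion-matrix counts; a small sketch (the example numbers are illustrative, chosen to reproduce the 0.97 figures reported in the abstract):

```python
def classification_metrics(tp, fp, tn, fn):
    """Precision, recall, and accuracy from confusion-matrix counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return precision, recall, accuracy

# e.g. 97 of 100 positives found, with 3 false alarms among 100 negatives
p, r, a = classification_metrics(tp=97, fp=3, tn=97, fn=3)
assert round(p, 2) == 0.97 and round(r, 2) == 0.97 and round(a, 2) == 0.97
```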
concatenate([model1.output, model2.output]) merges the outputs of the two pre-trained networks, while Model(inputs=[model1.input, model2.input], outputs=x) builds the combined model over the two inputs. The pseudo-code for experiment execution is shown in Listing 3. There are two network lists, and the algorithm combines the networks and the layer weights. For each combination, the model is trained with original and fuzzy images using the Adam optimizer and the binary cross-entropy loss function. Listing 4 shows the main imports. Listings 5 and 6 describe the methods for creating the single-input and multi-input models, respectively. Listing 7 shows the code for training and evaluating several neural network configurations. The code for executing the predictions is shown in Listing 8. The procedure for applying the fuzzy transformation, provided by the Python library skfuzzy, to each image is shown in Listing 9. The fuzzy method's parameters were chosen using the BRISQUE score technique. This study's primary goal is to present a monitoring model and reduce human errors in COVID-19 diagnosis. The proposed model's performance was measured using the Area Under the receiver operating characteristic curve (AUC). The code used in these experiments and the datasets are available at https://www.kaggle.com/naubergois/covid-xray-classification-with-fuzzy-1-0-recall and https://www.kaggle.com/naubergois/fork-of-covidxray-classification-with-fuzzy-1-0-r. The model requires O(n) time. We use BRISQUE values to tune the fuzzy filter parameters, obtaining a = 0.2, b = 0.4, c = 200, and d = 200. Fig. 3(c) shows samples of the X-rays with their respective BRISQUE scores. Fig. 3(a) presents a non-SARS sample with the best fuzzy parameters. Fig. 3(b) presents a non-SARS sample with the best fuzzy parameters. The first results concern the comparison between the use of the fuzzy filter and segmentation with K-means.
Table 1 shows the results obtained with K-means; the table reports the AUC, accuracy, precision, and recall of each model. EfficientNetB3 obtained the best precision and InceptionV3 the best recall. Tables 2 and 3 show the results for all pre-trained networks used with and without the fuzzy filter. We can see that the fuzzy filter considerably improved the results of the ResNet152V2 model and the AUC of all models. On the other hand, the fuzzy filter reduced the recall of VGG16 and EfficientNetB3. The use of the fuzzy filter surpassed the use of K-means clustering in all experiments. Fig 5(a) presents the loss by epoch of the best single-input models. Fig 5(b) presents the AUC by epoch of the best single-input models. Tables 5 and 6 present the results obtained with the multi-input technique with and without the fuzzy filter. We obtained a higher AUC in all cases with the fuzzy filter. Therefore, it is easy to conclude that the fuzzy filter contributes to distinguishing COVID-positive from COVID-negative cases. It is also verified that, while EfficientNetB3 shows a decrease in accuracy with a single input, there is no such difference with the multi-input technique. Table 7 shows the comparison of the results between multi-input and single-input networks. Except for the EfficientNetB3 model, the multi-input models obtained better AUC and accuracy values. The combination of the VGG16 and ResNet152V2 networks achieved the best result. This combination was trained for 100 epochs. The VGG16 network received the images with the fuzzy filter and ResNet152V2 received the original images. Fig. 6(a) shows the model's confusion matrix, and it is important to note that no COVID-19 case was misclassified. Figs 7(a) and 7(b) present the class activation maps with and without the fuzzy filter. We can observe that the map regions are well delimited with fuzzy-filtered images.
Several approaches for detecting COVID-19 are trained to detect pneumonia. Pneumonia is a potentially life-threatening illness caused by several pathogens. In common practice, most research proposes to classify the presence of pneumonia associated with COVID-19. COVID-19 and pneumonia are both respiratory disorders that share many of the same symptoms; however, they are even more intimately connected. As a result of the viral infection that causes COVID-19 or the flu, some patients acquire pneumonia. Often, pneumonia develops in both lungs in COVID-19 patients, putting the patient at serious risk of respiratory problems. Pneumonia can also be caused by bacteria, fungi, and other microorganisms in patients without COVID-19 or the flu. However, COVID-19 pneumonia is a distinct infection with unusual characteristics [13] . Some studies attempt to distinguish between common pneumonia and COVID-19 pneumonia [47] [43] [16] . Some datasets with X-ray images of cases (pneumonia or COVID-19) and controls have been made accessible in order to develop machine-learning-based algorithms to aid in illness diagnosis. These datasets, on the other hand, are primarily assembled from different sources derived from pre-COVID-19 and COVID-19 datasets. Some studies have discovered significant bias in some of the publicly available datasets used to train and test models [42] . This study does not intend to validate the differences or distinguish between usual pneumonia and pneumonia caused by COVID-19 at this time, but that is a goal for future research. This research presents an approach where transfer learning in conjunction with fuzzy filters allows the classification of CXRs. This study attains a higher AUC value than the studies presented in the literature review. In this paper, we show that by using transfer learning and leveraging pre-trained models, we can achieve very high accuracy in detecting COVID-19.
Also, together with the fuzzy filter, this study shows that it is possible to achieve a recall of 1.0 with more than one pre-trained model. The best model was a combination of VGG16 and ResNet152V2. Finally, using quantization, we achieved an accuracy of 0.95. Although we achieved good COVID-19 detection accuracy, sensitivity, and specificity, this does not imply that the solution is ready for production, especially given the small number of images currently available for COVID-19 cases. The aim of this analysis is to provide radiologists, data scientists, and the research community with a multi-input CNN model that may be used to diagnose COVID-19 early, in the hope that it will be built upon to accelerate research in this area.

References
A Hybrid COVID-19 Detection Model Using an Improved Marine Predators Algorithm and a Ranking-Based Diversity Reduction Strategy
Segmentation of blood clot MRI images using intuitionistic fuzzy set theory. IECBES 2018 Proceedings
Class Activation Mapping-Based Car Saliency Region and Detection for In-Vehicle Surveillance. IES 2019 - International Electronics Symposium
Melanoma Cancer Classification Using ResNet with Data Augmentation
Bias analysis on public x-ray image datasets of pneumonia and covid-19 patients
On the use of class activation map for land cover mapping
A novel transfer learning based approach for pneumonia detection in chest X-ray images
Modified-BRISQUE as no reference image quality assessment for structural MR images
Albuquerque V (2019) A Novel Approach for Optimum-Path Forest Classification Using Fuzzy Logic
Smart supervision of cardiomyopathy based on fuzzy harris hawks optimizer and wearable sensing data optimization: A new model
An open IoHT-based deep learning framework for online medical image recognition
Multi-atlas classification of autism spectrum disorder with hinge loss trained deep architectures: ABIDE I results
Covid-19 pneumonia: ARDS or not?
Using Fuzzy Inference system for detection the edges of Musculoskeletal Ultrasound Images
AI-assisted CT imaging analysis for COVID-19 screening: Building and deploying a medical AI system in four weeks
Hybrid ensemble model for differential diagnosis between covid-19 and common viral pneumonia by chest x-ray radiograph
Removal of Rician noise in MRI images using bilateral filter by fuzzy trapezoidal membership function
Intuitionistic Fuzzy C-Means Clustering Using Rough Set for MRI Segmentation
Deep learning and transfer learning applied to Sentinel-1 DInSAR and Sentinel-2 optical satellite imagery for change detection. 2020 International SAUPEC/RobMech/PRASA Conference
ISEC: An Optimized Deep Learning Model for Image Classification on Edge Computing
Edge detection using trapezoidal membership function based on fuzzy's Mamdani inference system
Fabric defect detection based on multi-input neural network
Generation of fuzzy edge images using trapezoidal membership functions
Human image complexity analysis using a fuzzy inference system
CNN based traffic sign classification using Adam optimizer
Deep learning for multigrade brain tumor classification in smart healthcare systems: A prospective survey
Medical Image Segmentation Using K-Means Clustering and Improved Watershed Algorithm
Reboucas Filho PP (2020) Automatic detection of covid-19 infection using chest x-ray images through transfer learning
Statistical validation metric for accuracy assessment in medical image segmentation
Automatic Detection and Monitoring of Diabetic Retinopathy using Efficient Convolutional Neural Networks and Contrast Limited Adaptive Histogram Equalization
Crack detection of concrete pavement with cross-entropy loss function and improved VGG16 network model
Iteratively Pruned Deep Learning Ensembles for COVID-19 Detection in Chest X-rays
Health of things algorithms for malignancy level classification of lung nodules
Determination of reconstruction parameters in Compressed Sensing MRI using BRISQUE score
Public covid-19 x-ray datasets and their impact on model bias - a systematic review of a significant problem
Online heart monitoring systems on the internet of health things environments: A survey, a reference model and an outlook
A New Design of Mamdani Complex Fuzzy Inference System for Multi-attribute Decision Making Problems
Fuzzy based Pooling in Convolutional Neural Network for Image Classification
Review of Artificial Intelligence Techniques in Imaging Data Acquisition, Segmentation and Diagnosis for COVID-19
Optimized Light-Weight Convolutional Neural Networks for Histopathologic Cancer Detection
Deep learning covid-19 detection bias: accuracy through artificial intelligence
A new approach for classifying coronavirus covid-19 based on its manifestation on chest x-rays using texture features and neural networks
CovidGAN: Data Augmentation Using Auxiliary Classifier GAN for Improved Covid-19 Detection
A Noise-robust Framework for Automatic Segmentation of COVID-19 Pneumonia Lesions from CT Images
Brain tumor detection using color-based K-means clustering segmentation
Automatic distinction between covid-19 and common pneumonia using multi-scale convolutional neural network on chest ct scans
Detecting Masses in Mammograms using Convolutional Neural Networks and Transfer Learning
Deep Learning-based Detection for COVID-19 from Chest CT using Weak Label
Fuzzy Transfer Learning Using an Infinite Gaussian Mixture Model and Active Learning