key: cord-0910227-xje3t590
authors: Mubarak, Auwalu Saleh; Serte, Sertan; Al-Turjman, Fadi; Ameen, Zubaida Sa'id; Ozsoz, Mehmet
title: Local binary pattern and deep learning feature extraction fusion for COVID-19 detection on computed tomography images
date: 2021-09-29
journal: Expert Syst
DOI: 10.1111/exsy.12842
sha: d8d3604d3edb3f26e7a7b4bc9321b3f6831d90c3
doc_id: 910227
cord_uid: xje3t590

The coronavirus disease (COVID-19), which emerged in December 2019, was declared a pandemic by the World Health Organization (WHO). It is important to identify suspected patients as early as possible in order to control the spread of the virus, improve the efficacy of medical treatment and, as a result, lower the mortality rate. The adopted method of detecting COVID-19 is the reverse-transcription polymerase chain reaction (RT-PCR), but the process is hampered by a scarcity of RT-PCR kits as well as by its complexity. Medical imaging combined with machine learning and deep learning has proved to be one of the most efficient methods of detecting respiratory diseases; however, to train classical machine learning models, features need to be extracted manually, and in deep learning, performance is affected by the network architecture and by limited data. In this study, handcrafted local binary pattern (LBP) features and features extracted automatically by seven deep learning models were used to train support vector machine (SVM) and K-nearest neighbour (KNN) classifiers. To improve classifier performance, a concatenated LBP and deep learning feature was proposed for training the KNN and SVM; based on the performance criteria, the VGG-19 + LBP model achieved the highest accuracy of 99.4%. The SVM and KNN classifiers trained on the hybrid feature outperform the state-of-the-art model, which shows that the proposed feature can improve the performance of classifiers in detecting COVID-19.

The standard diagnostic test, RT-PCR, is performed on nasopharyngeal swabs (Chen et al., 2009). However, the high false-negative rate (Chan et al., 2020; Chu et al., 2020), the duration of the test and the lack of RT-PCR test kits during the early stages of the epidemic can restrict the early diagnosis of infected patients. Computed tomography (CT) and chest X-ray (CXR) imaging capture the pulmonary manifestations of COVID-19 infection well in comparison with the swab examination. Given the current situation in the world, a fast and reliable method of detecting COVID-19 is needed; X-ray and CT scan images have been used to profile COVID-19 (Kroft et al., 2019; Liu et al., 2020; Strunk et al., 2014). CT and CXR show the spatial position of the suspected pathology as well as the extent of the damage. The signature CXR phenotype is the bilateral spread of peripheral hazy lung opacities, including air-space consolidation (Wong et al., 2020). The benefit of imaging is that it has high sensitivity and a short response time and can visualize the extent of the infection in the lung. The downside of imaging is its low specificity, which makes it difficult to differentiate between different forms of lung infection, particularly when the infection is severe. Computer-aided diagnostic (CAD) systems can help radiologists improve their accuracy. Researchers currently use handcrafted or learned features centred on lung texture, structure and morphological characteristics for identification. However, it is always important, and difficult, to select the right classifier that can optimally handle the properties of the lung spaces.
The traditional image recognition methods are the support vector machine (SVM), K-nearest neighbours (KNN), artificial neural networks (ANNs), decision trees (DTs) and Bayesian networks (BNs). These machine learning methods (Fehr et al., 2015; Orrù et al., 2012) need hand-crafted features such as morphological descriptors, texture, SIFT, entropy, pixel density, elliptic shape, geometry and elliptic Fourier descriptors (EFDs), together with off-the-shelf classifiers. Such machine learning (ML) approaches are also known as non-deep-learning methods. They have many applications, such as the diagnosis of neurodegenerative diseases, cancer and psychological disorders (Cruz & Wishart, 2006; Doyle et al., 2007; Oakden-Rayner et al., 2017; Orrù et al., 2012; Parmar et al., 2017). However, their major drawback is that they rely on a manual feature extraction stage, which makes it challenging to find the most important features required to produce the best outcome. Artificial intelligence (AI) can be applied to overcome these difficulties. AI technology is becoming increasingly popular in the field of medical imaging, driven by technological advances and the growth of deep learning (Gao et al., 2017; Sa et al., 2021; Wang et al., 2017; Zhang et al., 2016). Convolutional neural networks (CNNs) have attained state-of-the-art performance in the field of medical imaging in previous studies (Waheed et al., 2020; Wang, Tang, et al., 2019). This level of reliability is achieved by training and fine-tuning the network's millions of parameters on labelled data. Because of the large number of parameters, a CNN can easily overfit small datasets, so generalization performance depends strongly on the size of the labelled dataset. Limited datasets are therefore among the most challenging problems in the medical imaging domain (Greenspan, 2016; Roth et al., 2015; Tajbakhsh et al., 2016). Medical image acquisition is a very expensive and tedious process that requires the participation of radiologists and researchers (Greenspan, 2016). Furthermore, due to the recent severity of the COVID-19 disease, sufficient chest CT scan data are difficult to obtain. Unlike Waheed et al. (2020), in which COVID-19 detection was performed on synthetic images, we propose offline data augmentation, whereby several augmentations employed in many studies, such as random reflection, random rotation, random rescaling and random translation, were applied to each of the three datasets used in this study. Feature extraction is a critical component of a detection system's performance (Niu & Suen, 2012). CNN features are learned automatically. One of the advantages of CNNs is that the learned features can be invariant to transformations such as translation, scaling and rotation. This invariance is one of the most distinctive advantages of CNNs, particularly in image recognition problems such as object detection, since it allows the network to abstract an object's identity and recognize the object even when the image's pixel values vary greatly. Feature extraction increases the accuracy of the learned models by deriving informative features from the input data; within the general framework, it reduces data dimensionality by removing redundant information and also speeds up model training and inference.
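The offline augmentation described above can be sketched with a few standard image transformations. The following is a minimal, illustrative example assuming a Python environment with Pillow and torchvision; the parameter values, file layout and helper name are assumptions, not settings reported in the paper.

```python
# Minimal offline augmentation sketch; transform parameters are illustrative assumptions.
from pathlib import Path
from PIL import Image
from torchvision import transforms

augment = transforms.Compose([
    transforms.Resize((224, 224)),                 # match the 224 x 224 network input size
    transforms.RandomHorizontalFlip(p=0.5),        # random reflection
    transforms.RandomRotation(degrees=15),         # random rotation
    transforms.RandomAffine(degrees=0,
                            translate=(0.1, 0.1),  # random translation along X and Y
                            scale=(0.9, 1.1)),     # random rescaling
])

def augment_folder(src_dir: str, dst_dir: str, copies: int = 3) -> None:
    """Write `copies` augmented versions of every image in src_dir to dst_dir."""
    out = Path(dst_dir)
    out.mkdir(parents=True, exist_ok=True)
    for img_path in Path(src_dir).glob("*.png"):
        image = Image.open(img_path).convert("RGB")
        for k in range(copies):
            augment(image).save(out / f"{img_path.stem}_aug{k}.png")
```

Generating and saving the augmented copies once, before training, mirrors the offline strategy used here, as opposed to applying the transforms on the fly during training.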
Feature extraction methods generate new features by applying variations and transformations to the original features. Colour, shape, texture and pixel values are the types of characteristics that can be obtained from medical images, although diagnostic images such as CT scans contain no colour information, which is well understood in this field. By linking most of the real objects around us to the internet, IoT technology helps us to integrate the physical and virtual environments. It gives everyday objects computational power and network access, allowing them to produce and disseminate data. These objects can include home appliances, wearable devices, medical equipment and vehicles. The internet of things (IoT) pushes us closer to AI, smart access and automation with less human intervention (Alsukayti, 2020). Remote diagnostics, such as radiological services and online image processing, may benefit from image databases with advanced analysis, which allow physicians to diagnose critical illnesses without having to travel to remote locations. Improvements in biomedical and health-care environments become apparent when such databases are combined with ML algorithms that can analyse data and develop sophisticated simulations. Every day, millions of images are created, allowing different types of AI to open up new frontiers in big data analytics. Machine learning algorithms mainly extract structured information from raw datasets, converting it into predictions that support immediate action (Xing et al., 2016). The IoT has spurred the development of a wide range of smart IoT applications across several industries, and successful IoT implementations and experiments are needed to advance the various technical aspects of these solutions. Low-cost and modular approaches such as mathematical modelling and simulation are commonly used for this purpose, but such techniques are limited in their ability to realistically capture physical characteristics and network conditions. To address this issue, IoT testbed systems have been developed that allow realistic testing of various IoT solutions in a controlled environment; such testbeds provide multidimensional, general-purpose support for various IoT properties such as sensing, connectivity, portals, energy storage, data processing and security (Mu et al., 2019; Reshi et al., 2019; Schuß et al., n.d.). IoT and big data analytics, in general, are two main innovations that can change the biomedical and health-care industries and improve people's lives (Banerjee et al., 2020). The deadly COVID-19 disease has affected the world by crippling operations, and various approaches have been taken in the battle against its spread. Prediction analysis methods have been used to forecast the spread of the disease, taking into account the numbers of infected, susceptible and recovered patients; this kind of prediction is a classic technique (Srivastava et al., 2020). To achieve real-time prediction, AI models combined with IoT have been used to help health professionals treat and monitor COVID-19 by examining parameters such as temperature, blood pressure and heart rate. Considering the high number of cases, the protection of data transmission and the energy efficiency of the low-power devices used to gather information are very important, as suggested by Al-turjman and Deebak (2020).
To reduce the economic effect of COVID-19, Rahman (2020) proposed a data-driven AI model to forecast lock-down and non-lock-down geographical boundaries and thereby minimize the economic effects of the pandemic; the form of lock-down adopted by several countries was a full lock-down, which is not beneficial to the economy. Transfer learning has been employed by training models and comparing their outputs, and more data have been created using image augmentation and a conditional generative adversarial network (CGAN). Transfer learning has been reported to mitigate the problem of small datasets, and both forms of data generation improved model performance based on the ResNet-50 performance criterion: with data augmentation, ResNet-50 achieved the highest accuracy of 82.1%, a sensitivity of 0.77, a specificity of 0.876 and a precision of 0.849. The grey-level size zone matrix together with an SVM was used to distinguish COVID-19 from non-COVID-19 CT scan images, with 2-, 5- and 10-fold cross-validation used to tune hyperparameters and obtain the maximum accuracy (Barstugan et al., 2020). An explainable deep learning classification approach was proposed by Soares et al. (2020) to distinguish COVID-19 patients from healthy individuals using CT scan images; the proposed model surpassed seven distinct AI models and reached an accuracy of 97.3%, a sensitivity of 0.955, an F1-score of 0.973, a precision of 0.991 and an AUC of 0.973. Different AI models have been used to distinguish COVID-19 patients from healthy individuals on X-ray and CT scan images, with accuracies of 81.5%-95.2% on CT scans and 95.4%-100% on X-rays (Ahsan et al., 2020; Xie, 2020). In this study, KNN and SVM classifiers were trained using handcrafted LBP features and features extracted by seven pre-trained deep learning (i.e., CNN) models for COVID-19 detection. The contributions of this study are:

1. We proposed a hybrid LBP and CNN feature to train KNN and SVM classifiers.
2. We compared the performance of KNN and SVM classifiers on handcrafted LBP and automatic CNN features.
3. The performance of the KNN and SVM in detecting COVID-19 was improved by concatenating the LBP and CNN features.
4. Three datasets were merged to generalize the performance of the model and improve its training.
5. The proposed model outperformed the state-of-the-art model.

This section describes the characteristics of the datasets used, the proposed feature extraction techniques and the machine learning models for COVID-19 detection. In this study, three datasets were merged to efficiently classify COVID-19, common pneumonia and healthy individuals, as presented in Table 1 (Figure 1). SVM classification (Wang et al., 2013) is a supervised binary classification mechanism in which the algorithm constructs a hyperplane that maximizes the margin between the two input classes. For instance, for linearly separable data with two distinct classes, there may be numerous hyperplanes that separate the classes; the SVM selects the optimal hyperplane with the largest margin among all available hyperplanes, where the margin is the distance between the hyperplane and the support vectors. Given a set of training data $\{(x_i, y_i)\}_{i=1}^{N}$ (where $y_i$ is the actual value, $x_i$ represents the input vector and $N$ is the number of data points), the SVM function is

$$f(x) = w^{T}\phi(x) + b \qquad (1)$$

where $\phi(x)$ is the non-linear mapping of the input vector $x$ into the feature space. Then, the SVM equation is given as (Wang et al., 2013)

$$f(x) = \sum_{i=1}^{N} (\alpha_i - \alpha_i^{*})\, K(x_i, x) + b \qquad (2)$$

where $K(x_i, x)$ is the kernel function in the feature space after performing the non-linear mapping, $\alpha_i$ and $\alpha_i^{*}$ are Lagrange multipliers and $b$ is the bias term. The most commonly used kernel function is the Gaussian radial basis function (RBF), because it performs better than the linear and polynomial kernels: it is not only capable of mapping the training data non-linearly into an infinite-dimensional space but is also easier to implement (Wang et al., 2013). It is given as

$$K(x_i, x_j) = \exp\left(-\gamma \lVert x_i - x_j \rVert^{2}\right) \qquad (3)$$

where $\gamma$ is the kernel parameter.
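For illustration only (the paper does not provide an implementation), an RBF-kernel SVM of this form can be instantiated with scikit-learn; the hyperparameter values below are assumptions rather than settings reported in the study.

```python
# Illustrative RBF-kernel SVM setup; hyperparameters are assumptions, not from the paper.
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# gamma corresponds to the RBF kernel parameter in Equation (3); C controls the soft margin.
svm_clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", gamma="scale", C=1.0))
```

Scaling the features before fitting is common practice with RBF kernels, since the kernel parameter acts on squared Euclidean distances between feature vectors.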
KNN is a non-parametric classification method (Altman, 1992). When classifying with KNN, the class of the entity to be labelled is decided by its neighbours: it is assigned to the most common class among its closest neighbours. A three-class KNN is used in this study. The local binary pattern (LBP) is an efficient non-parametric operator for describing local image features. Given the centre pixel $(x_c, y_c)$, the ordered binary set that defines the LBP is obtained by comparing the grey value of the centre pixel $(x_c, y_c)$ with those of its eight neighbours. The LBP code is thus expressed as the decimal version of an eight-bit binary number:

$$\mathrm{LBP}(x_c, y_c) = \sum_{n=0}^{7} s(i_n - i_c)\, 2^{n}, \qquad s(x) = \begin{cases} 1, & x \geq 0 \\ 0, & x < 0 \end{cases} \qquad (4)$$

where $i_c$ represents the grey value of the centre pixel $(x_c, y_c)$ and $i_n$ the grey values of its eight neighbours. The LBP code is invariant under any monotonic transformation of the grey levels: the local binary code remains unchanged after such a transformation. A typical CNN is a type of deep model in which convolutional filters and pooling operations are applied alternately to the local neighbourhoods of each pixel in the raw input, generating complex high-level features (Li et al., 2017). CNNs have been applied primarily to 2D images and have achieved strong performance in image classification. Transfer learning is a research topic within machine learning. It focuses on preserving information learned when solving one problem and adapting it to a different but related problem (Apostolopoulos & Mpesiana, 2020; Taresh, 2020; Hussein et al., 2019; Learning, 2020; Mahmud et al., 2020). When a pre-trained network is trained on another problem, some aspects of the pre-trained model may be modified, such as which layers to freeze, which layers to insert and which hyperparameter values to change. The residual network (ResNet) (He et al., 2016) is a deep learning architecture used to classify images. The core principle behind ResNet is to deal with the vanishing gradients that degrade network performance as convolution and pooling layers accumulate in a deep architecture. A residual block is a shortcut that provides an identity connection, and adding such skip connections essentially eliminates the high training error seen in other deep networks, which do not include an identity link. The input layer accepts images of size 224 × 224. GoogleNet is a 22-layer network composed of an input layer, convolution layers, max-pooling and a softmax classifier; the key features that make GoogleNet different are the 1 × 1 convolutions, the network-in-network structure and global average pooling. GoogleNet won the 2014 ILSVRC competition with a low error rate relative to VGG (Szegedy et al., 2015). ShuffleNet (Zhang et al., n.d.) is a 50-layer deep learning model that uses the group convolution introduced in AlexNet on its first convolution layer. Although group convolution greatly decreases computation, one downside is that each output channel is driven by only a small fraction of the input channels. To address this issue, ShuffleNet shuffles the channels across groups, an operation that is itself differentiable.
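To make the two feature types concrete, the following sketch shows one way the LBP histogram and the deep CNN feature of a CT image might be computed, assuming scikit-image, PyTorch and torchvision are available. The neighbourhood size, radius, choice of VGG-19 layer, absence of input normalization and helper names are illustrative assumptions, not settings reported in the paper.

```python
# Illustrative LBP and deep-feature extraction (parameters are assumptions, not from the paper).
import numpy as np
import torch
from PIL import Image
from skimage.feature import local_binary_pattern
from torchvision import models, transforms

def lbp_histogram(gray_image: np.ndarray, points: int = 8, radius: int = 1) -> np.ndarray:
    """Compute an LBP code map (Equation 4) and summarize it as a normalized 256-bin histogram."""
    codes = local_binary_pattern(gray_image, P=points, R=radius, method="default")
    hist, _ = np.histogram(codes, bins=256, range=(0, 256), density=True)
    return hist

# Deep features: a pre-trained CNN with its final classification layer removed acts as a feature extractor.
backbone = models.vgg19(weights="IMAGENET1K_V1")
backbone.classifier = backbone.classifier[:-1]   # drop the last fully connected (1000-way) layer
backbone.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def deep_features(image: Image.Image) -> np.ndarray:
    """Return the 4096-dimensional VGG-19 penultimate-layer feature of one image."""
    with torch.no_grad():
        x = preprocess(image.convert("RGB")).unsqueeze(0)   # shape (1, 3, 224, 224)
        return backbone(x).squeeze(0).numpy()
```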
3 | TRAINING

In this study, three datasets were merged to detect COVID-19; the datasets contain three classes: COVID-19, common pneumonia and healthy individuals. Before training, the data were pre-processed by resizing the images to 224 by 224, and several data augmentations, such as random reflection, random rotation, random rescaling, random translation along the X-axis and random translation along the Y-axis, were performed to increase the number of images, improve training and reduce overfitting (Ghassemi et al., 2020). After augmentation, two types of features were extracted: handcrafted LBP features and automatic deep features from seven pre-trained models (MobileNetv2, GoogleNet, ResNet-50, ResNet-101, ShuffleNet, VGG-16 and VGG-19). Eighty percent of the data was used for training and 20% for testing. The training in this study was carried out in stages using two classifiers, KNN and SVM; in the first stage, textural features were extracted using LBP. Figure 2 shows the classification process for COVID-19 detection. Within the scope of the study, three classes of CT scan images, COVID-19, common pneumonia and healthy individuals, were classified using two classifiers, KNN and SVM. Before training the classifiers, handcrafted LBP features and deep high-level features of seven deep learning models were extracted. In Table 2, the performance of the SVM and KNN classifiers is compared to determine the best performing model with LBP features as inputs; the SVM achieves an accuracy of 97.5%, a sensitivity of 98.7%, a specificity of 96.1%, an F1 score of 97.5%, a precision of 96.4%, a Youden index of 94.8% and an AUC of 97.4%. As shown in Figure 3, the SVM outperformed the KNN in terms of accuracy, sensitivity, specificity, F1 score, precision, Youden index and AUC. This shows that the SVM can efficiently detect COVID-19, common pneumonia and healthy-individual CT scan images based on LBP features extracted from the CT scans. In Tables 3 and 4, the SVM and KNN, respectively, were used to classify the features extracted by the seven deep learning models. To take advantage of both the handcrafted LBP features and the automatic deep learning features, the LBP and CNN features were extracted and then concatenated, and the KNN and SVM classifiers were trained using these concatenated features. The results of the SVM and KNN classifiers are presented in Tables 5 and 6, respectively. In Table 5, the SVM achieved the highest accuracy of 98.7% with the ResNet-50 + LBP and MobileNetv2 + LBP features; in terms of sensitivity, VGG-16 + LBP and MobileNetv2 + LBP each achieved 100%, and ResNet-50 + LBP achieved the highest specificity of 100%. Table 6 presents the corresponding KNN results. It is important to improve COVID-19 detection, particularly on CT scan images, which provide slice-level information about the chest. X-ray images were used in many medical imaging experiments, but CT images were used in only a handful of studies. Hence, we were motivated to carry out this study by several factors, including the shortage of CT scan image data and the need to distinguish between common pneumonia, COVID-19 and healthy individuals' CT scan images.
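Putting the pieces together, the fusion step described above amounts to concatenating each image's LBP feature vector with its deep feature vector and training the two classifiers on an 80/20 split. The sketch below illustrates this with scikit-learn; it reuses the hypothetical lbp_histogram and deep_features helpers from the earlier sketch, and the classifier hyperparameters are assumptions rather than the authors' settings.

```python
# Illustrative LBP + CNN feature fusion and classifier training (a sketch, not the authors' code).
# lbp_histogram and deep_features are the hypothetical helpers sketched earlier.
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

def fused_feature(image, gray_image) -> np.ndarray:
    """Concatenate the LBP histogram and the deep CNN feature of one CT image."""
    return np.concatenate([lbp_histogram(gray_image), deep_features(image)])

def train_and_evaluate(X: np.ndarray, y: np.ndarray) -> None:
    """X holds one fused feature vector per image; y holds the three-class labels."""
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, stratify=y, random_state=0)   # 80/20 split as in the paper
    classifiers = {
        "SVM": SVC(kernel="rbf", gamma="scale"),            # assumed hyperparameters
        "KNN": KNeighborsClassifier(n_neighbors=5),         # assumed number of neighbours
    }
    for name, clf in classifiers.items():
        clf.fit(X_train, y_train)
        print(name, "accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```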
The performance criteria of a model describe how well the model performs in solving a problem. The performance of the trained models was compared using the frequently applied performance evaluation criteria, namely validation accuracy (ACC), sensitivity (SN), specificity (SP), F1 score, precision (PR), Youden index and AUC.

4.1.1 | Accuracy

Accuracy is a measure that gives an insight into how well the model learns and produces reliable results. It is the proportion of predictions made correctly by the model, that is, the ratio of correctly predicted samples to the number of input samples, where the number of correctly predicted samples is the sum of the true positives and true negatives:

$$\mathrm{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}$$

4.1.2 | Sensitivity

Sensitivity is defined as the ability of a model to correctly identify patients with the disease, as presented in Equation (5):

$$\mathrm{Sensitivity} = \frac{TP}{TP + FN} \qquad (5)$$

4.1.3 | Specificity

Specificity is a measure of how many negatives the trained model managed to capture out of the entire set of negative samples by labelling them as negative. The relation for calculating specificity is presented in Equation (6):

$$\mathrm{Specificity} = \frac{TN}{TN + FP} \qquad (6)$$

4.1.4 | F1-Score

The F1-score is a measure of the balance between the precision and recall of a model and is used in the statistical analysis of test accuracy. The F1-score of a model lies between 0 and 1: it is said to be very good if its value is near 1 and very bad if it is near 0. It is calculated by applying Equation (7):

$$\mathrm{F1\ score} = \frac{2 \times \mathrm{Precision} \times \mathrm{Sensitivity}}{\mathrm{Precision} + \mathrm{Sensitivity}} \qquad (7)$$

4.1.5 | Youden index

The Youden index is the cut-point that optimizes a biomarker's distinguishing ability when equal weight is given to sensitivity and specificity. It also gives a summary of the receiver operating characteristic curve, as presented in Equation (8):

$$\mathrm{Youden\ index} = (\mathrm{Sensitivity} + \mathrm{Specificity}) - 1 \qquad (8)$$

4.1.6 | Precision

Precision is a measure of how precise or accurate the model is in terms of positive classifications. In other words, it measures the number of true positives out of all the predicted positives. The relation for precision is presented in Equation (9):

$$\mathrm{Precision} = \frac{TP}{TP + FP} \qquad (9)$$

where TP, true positive; TN, true negative; FP, false positive; FN, false negative.

In this study, two classifiers, SVM and KNN, were employed to classify COVID-19, common pneumonia and healthy individuals' CT scan images. Before training the classifiers, handcrafted LBP features and automatic deep learning features from seven pre-trained networks were extracted, and the classifiers were trained on these extracted features. To improve the performance of the classifiers, a new feature was proposed by concatenating the LBP and CNN features to train the classifiers; this proposed feature shows an improvement in classifier performance compared with training on the LBP or CNN features alone. Due to the limited number of CT scan images available at this stage of the pandemic, only a small number of CT scan images were used for training the classifiers, which reflects the scarcity of data in the research community. In the future, the authors will explore more pre-processing techniques, and further classifiers such as decision trees and ensemble classifiers will be investigated to improve the detection performance. The authors will also try to incorporate the proposed model with IoT for easy detection of COVID-19 around the globe.
REFERENCES
COVID-19 symptoms detection based on nasnetmobile with explainable AI using various imaging modalities
Privacy-aware energy-efficient framework using the internet of medical things for COVID-19
A multidimensional internet of things testbed system: Development and evaluation
An introduction to kernel and nearest-neighbor nonparametric regression
Multi-task deep learning based CT imaging analysis for COVID-19 pneumonia: Classification and segmentation
Covid-19: Automatic detection from X-ray images utilizing transfer learning with convolutional neural networks
Emerging trends in IoT and big data analytics for biomedical and health care technologies. In Handbook of data science approaches for biomedical engineering
Coronavirus (COVID-19) classification using CT images by machine learning methods
Can chest CT features distinguish patients with negative from those with positive initial RT-PCR results for coronavirus disease (COVID-19)?
A familial cluster of pneumonia associated with the 2019 novel coronavirus indicating person-to-person transmission: A study of a family cluster
Momentum contrastive Learning for few-shot COVID-19 diagnosis from chest CT images
Molecular diagnosis of a novel coronavirus (2019-NCoV) causing an outbreak of pneumonia
Applications of machine Learning in cancer prediction and prognosis
Automated grading of prostate cancer using architectural and textural image features
Automatic classification of prostate cancer Gleason scores from multiparametric magnetic resonance images
Classification of CT brain images based on deep learning networks
Deep neural network with generative adversarial networks pre-training for brain tumor classification based on MR images
Guest editorial deep learning in medical imaging: Overview and future promise of an exciting new technique
Deep residual learning for image recognition
Lung and pancreatic tumor characterization in the deep learning era: Novel supervised and unsupervised learning approaches
Added value of ultra-low-dose computed tomography, dose equivalent to chest x-ray radiography, for diagnosing chest pathology
Spectral-Spatial Classification of Hyperspectral Imagery with 3D Convolutional Neural Network. Remote Sensing
Clinical and CT imaging features of the COVID-19 pneumonia: Focus on pregnant women and children
A deep transfer learning model with classical data augmentation and CGAN to detect COVID-19 from chest CT radiography digital images
Within the Lack of Chest COVID-19 X-ray Dataset: A Novel Detection Model Based on
Genomic characterisation and epidemiology of 2019 novel coronavirus: Implications for virus origins and receptor binding
CovXNet: A multi-dilation convolutional neural network for automatic COVID-19 and other pneumonia detection from chest X-ray images with transferable multi-receptive feature optimization
OpenTestBed: Poor man's IoT testbed
A novel hybrid CNN-SVM classifier for recognizing handwritten digits
Precision radiology: Predicting longevity using feature engineering and deep learning methods in a radiomics framework
Using support vector machine to identify imaging biomarkers of neurological and psychiatric disease: A critical review
Critical increase in Na-doping facilitates acceptor band movements that yields $180 MeV shallow hole conduction in ZnO bulk crystals
Data-driven dynamic clustering framework for mitigating the adverse economic impact of Covid-19 lockdown practices
Development and web performance evaluation of internet of things testbed
Improving computer-aided detection using convolutional neural networks and random view aggregation
C-SVR Crispr: prediction of CRISPR/Cas12 GuideRNA activity using deep learning models
A competition to push the dependability of low
SARS-CoV-2 CT-scan dataset: A large dataset of real patients CT scans for SARS-CoV-2 identification
A systematic approach for COVID-19 predictions and parameter estimation. Personal and Ubiquitous Computing
Imaging profile of the COVID-19 infection: Radiologic findings and literature review
IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
Convolutional neural networks for medical image analysis: Full training or fine tuning?
Transfer learning to detect COVID-19 automatically from X-Ray images using convolutional neural networks
COVID-19 and common pneumonia chest CT dataset. Mendeley Data
CovidGAN: Data augmentation using auxiliary classifier GAN for improved Covid-19 detection
Clinical characteristics of 138 hospitalized patients with 2019 novel coronavirus-infected pneumonia in Wuhan
Mammograms with deep learning
Cerebral micro-bleeding identification based on a nine-layer convolutional neural network with stochastic pooling
Cerebral micro-bleeding detection based on densely connected neural network
Improved annual rainfall-runoff forecasting using PSO-SVM model based on EEMD
Frequency and distribution of chest radiographic findings in COVID-19 positive patients
Insight into 2019 novel coronavirus-An updated interim review and lessons from SARS-CoV and MERS-CoV
Chest CT for typical Covid-19 pneumonia
Strategies and principles of distributed machine learning on big data
COVID-CT-dataset: A CT image dataset about COVID-19
Chest CT manifestations of new coronavirus disease 2019 (COVID-19): A pictorial review
Deep learning based classification of breast tumors with shear-wave elastography
ShuffleNet: An extremely efficient convolutional neural network for mobile devices
A novel coronavirus from patients with pneumonia in China
Very deep convolutional networks for large-scale image recognition
Obtained an MTech in Electrical and Electronics Engineering with specialization in Instrumentation and Control from Sharda University.

Sertan Serte is an associate professor in the Department of Electrical and Electronic Engineering at Near East University. His research focus is on computer vision and machine learning.

He is a leading authority in the areas of smart/cognitive, wireless and mobile networks' architectures, protocols, deployments and performance evaluation.

Obtained an MSc degree in Bioengineering from Cyprus International University in 2016 and is currently academic staff in the Biochemistry Department.

His main research interests are biosensors, CRISPR and their application to artificial intelligence. Email: mehmet.ozsoz@neu.edu.tr

This research is dedicated to those affected by the COVID-19 pandemic and to those who are helping to fight this virus in whatever way they can. We would also like to thank the doctors, nurses and all healthcare providers who are putting their lives at risk in the fight against the coronavirus outbreak.

The authors declare no conflict of interest.

Auwalu Saleh Mubarak, Sertan Serte, Fadi Al-Turjman, Zubaida Sa'id Ameen and Mehmet Ozsoz contributed to the design and implementation of the research, to the analysis of the results and to the writing of the manuscript.

The data that support the findings of this study are available from the corresponding author upon reasonable request.

Fadi Al-Turjman https://orcid.org/0000-0001-6375-4123