key: cord-0915509-dskleq8v
authors: Khade, Smita; Gite, Shilpa; Thepade, Sudeep D.; Pradhan, Biswajeet; Alamri, Abdullah
title: Detection of Iris Presentation Attacks Using Feature Fusion of Thepade's Sorted Block Truncation Coding with Gray-Level Co-Occurrence Matrix Features
date: 2021-11-08
journal: Sensors (Basel)
DOI: 10.3390/s21217408
sha: f18c66f09f183371b9b09e26a36676a7acfc1823
doc_id: 915509
cord_uid: dskleq8v

Iris biometric detection provides contactless authentication, preventing the spread of COVID-19-like contagious diseases. However, these systems are prone to spoofing attacks attempted with the help of contact lenses, replayed video, and print attacks, making them vulnerable and unsafe. This paper proposes an iris liveness detection (ILD) method to mitigate spoofing attacks, taking global-level features of Thepade's sorted block truncation coding (TSBTC) and local-level features of the gray-level co-occurrence matrix (GLCM) of the iris image. Thepade's SBTC extracts global color texture content as features, and GLCM extracts local fine-texture details. The fusion of global and local content presentation may help distinguish between live and non-live iris samples. The fusion of Thepade's SBTC with GLCM features is considered in experimental validations of the proposed method. The features are used to train nine assorted machine learning classifiers, including naïve Bayes (NB), decision tree (J48), support vector machine (SVM), random forest (RF), multilayer perceptron (MLP), and ensembles (SVM + RF + NB, SVM + RF + RT, RF + SVM + MLP, J48 + RF + MLP), for ILD. Accuracy, precision, recall, and F-measure are used to evaluate the performance of the projected ILD variants. The experimentation was carried out on four standard benchmark datasets, and our proposed model showed improved results with the feature fusion approach. The proposed fusion approach gave 99.68% accuracy using the RF + J48 + MLP ensemble of classifiers, immediately followed by the RF algorithm, which gave 95.57%. The better capability of iris liveness detection will improve human–computer interaction and security in the cyber-physical space by improving person validation.

Automatic access to a system by a genuine person has become very simple in the information era. For automated system access, validation of user identity is crucial. Biometric authentication systems are computer-based systems that use biometric traits to verify a user's identity. Biometric authentication systems have more advantages than conventional password-based authentication mechanisms [1]. The biometric system eliminates the need to remember a password or PIN or keep a card in possession.

Table 1. Iris presentation attacks.
- Print attack: The impostor offers a printed image of a validated iris to the biometric sensor [4].
- Contact lens attack: The impostor wears contact lenses on which the pattern of a genuine iris is printed [5].
- Video (replay) attack: The impostor plays a video of a registered user in front of the biometric system [6].
- Cadaver iris attack: The impostor uses the eye of a dead person in front of the biometric system [7].
- Synthetic iris attack: The impostor embeds the iris region into authentic images to make the synthesized images more realistic [8].

Analyzing threats and vulnerabilities is crucial for securing the biometric system. The challenging threat of spoofing a biometric authentication system is mitigated with liveness detection of the acquired biometric traits before authentication [9].
The critical contributions of the research work presented here are as follows:
• Development of Thepade's sorted block truncation coding (TSBTC) and gray-level co-occurrence matrix (GLCM) iris image features for the first time in iris liveness detection (ILD).
• Implementation of the fusion of the best TSBTC N-ary global features with GLCM local features of an iris image, for the first time in ILD.
• Performance analysis of ML classifiers and their ensembles to finalize the best classifier for ILD.
• Validation of the performance of the proposed ILD method across various existing benchmark datasets and techniques.

The paper is organized as follows: Section 2 briefly reviews the related literature, Section 3 presents an overview of existing methods, and Section 4 presents the proposed ILD method. The experiment details, observed results, and inferences drawn from the results are discussed in Section 5. The concluding remarks and future research directions are discussed in Section 6.

Many attempts have been made to detect the liveness of sensed biometric traits before they are authenticated. A few prominent approaches are discussed in this section. Kaur et al. [10] used a rotation-invariant feature set consisting of Zernike moments and polar harmonic transforms that extract local intensity variations to detect iris spoofing attacks. Spoofing attacks on various sensors also have a considerable effect on the overall efficiency of a system. Their system can detect only print and contact lens attacks. They used the Hough transform and GLCM to extract features from iris images, and the extracted features were passed to discriminant analysis (DA), used as a classification tool for differentiating live images from spoofed ones. Agarwal et al. [11] used fingerprint and iris identity for liveness detection. The standard Haralick statistical features based on the GLCM and the neighborhood gray-tone difference matrix (NGTDM) generate a feature vector from the fingerprint, and texture features from the iris are used to boost the performance of the system. They used a standard dataset to test whether the performance of their model is better than that of the existing model; in the existing system, GLCM has a huge feature vector size. In a recent paper, Jusman et al. (2020) [12] compared their approach with other existing approaches and showed that it performs better, with 100% accuracy. The limitation of this study is that the authors followed a traditional pipeline of segmentation, normalization, and feature extraction, which is complex and time-consuming. Subsequently, Khuzani et al. [13] extracted shape, density, FFT, GLCM, GLDM, and wavelet features from iris images. In total, 2000 iris images from the CASIA-Iris-Interval dataset were used for implementation, and the highest accuracy of 99.64% was achieved using a multilayer neural network. Agarwal et al. (2020) used a feature descriptor, the local binary hexagonal extreme pattern, for fake iris detection [14]. The proposed descriptor exploits the relationship between the center pixel and its hexa neighbors; the hexagonal shape using the six-neighbor approach is preferable to the rectangular structure due to its higher symmetry. This approach's limitation is that it covers only print and contact lens attacks and is highly complex [14]. Thavalengal et al. (2016) [15] developed a smartphone system that captures RGB and NIR images of the eye and the iris. Pupil localization techniques with distance metrics are used for detection.
For feature vector generation, 4096 elements are considered, which is extensive. Even though the authors claimed a reasonable liveness detection rate, they worked with a real-time database. TSBTC has been used many times in the literature in other domains, but none of those studies has addressed iris liveness detection using TSBTC. Some of the studies from other domains are discussed here. One study used TSBTC to retrieve images from datasets using a key-point extraction method [16]. Chaudhari et al. (2021) used a fusion of TSBTC and Sauvola thresholding features [17]; with the help of multiple classifiers, including SVM, Kstar, J48, RF, RT, and ensembles, the authors achieved good accuracy. To enhance image classification, Thepade et al. (2018) used TSBTC with feature-level fusion of Niblack thresholding features and SVM, RF, Bayes net, and ensembles of classifiers [18]. In their work, Fathy and Ali (2018) did not consider the segmentation and normalization phases typically used in fake iris detection systems [8]; wavelet packets (WPs) are used to decompose the original image. They claimed 100% accuracy, but the method does not work with all types of attacks and covers only a limited set of spoofing attacks. Hu et al. (2016) performed ILD using regional features [19]. Regional features are designed based on the relationship of the features with the neighboring regions; in their experiments, 144 relational measures based on regional features were used. Czajka (2015) designed a liveness detection system using pupil dynamics [20]. In this system, the pupil reaction to sudden changes in light intensity is measured: if the eye reacts to the light intensity changes, the eye is live; otherwise, it is fake. Linear and non-linear support vector machines are used to classify natural reactions and spontaneous oscillations in this work. The limitation of the system is that it measures diverse functions, which takes time. The data used in this analysis do not include any measurements from older people, so there is some inaccuracy in the observations [20]. Naqvi et al. (2020) developed a system to accurately segment ocular regions such as the iris and sclera [21]. This system is based on a convolutional neural network (CNN) model with a lite-residual encoder-decoder network. The average segmentation error is used to evaluate the segmentation results, and publicly available databases are considered for evaluating the system. Kimura et al. (2020) designed a liveness detection system using a CNN, which improves the model's accuracy by tuning hyperparameters [22]. To measure the performance of the system, the attack presentation classification error rate (APCER) and bona fide presentation classification error rate (BPCER) are used. The hyperparameters considered in this paper are the maximum number of epochs, batch size, learning rate, and weight decay. This system works only for print and contact lens attacks. Lin and Su (2019) developed a face anti-spoofing and liveness detection system using a CNN [23]. The images are resized to 256 × 256, and the RGB and HSV color spaces are used. The authors claim better liveness prediction [23]. Long and Zeng (2019) performed ILD with the help of the BNCNN architecture with 18 layers. The batch normalization technique is used in BNCNN to avoid overfitting and vanishing gradients during training [24]. Dronky et al. (2019) [25] observed from the literature that many researchers do not address all iris attacks.
So, from the existing literature, it is observed that researchers have worked on only a few iris attacks, and that large feature vector sizes are typically considered. Table 2 summarizes the literature review in ascending order of the year of publication.

The iris recognition system is susceptible to many security challenges, and these vulnerabilities make a system less reliable for highly secured applications [3]. This paper attempts ILD using the feature-level fusion of GLCM and TSBTC features of iris images to detect whether an iris is live or fake. The proposed approach avoids any preprocessing, such as segmentation, normalization, and localization, conventionally used by the methods proposed in the literature, making the proposed approach swifter and relatively easier [15]. The only preprocessing done in the proposed approach is resizing the iris image to a square size. Figure 1 shows the block diagram of the ILD system. The proposed system is divided into four phases: iris image resizing, feature formation, classification, and ILD.

Iris preprocessing plays a vital role in ILD. In the proposed algorithm, two iris preprocessing steps are followed. Images are acquired from four different standard datasets, and each dataset stores images of a different size. During preprocessing, the original images are resized to 128 × 128 pixels, which maintains integrity throughout the experiment. The datasets are captured using different sensors: some sensors (e.g., LG, Cogent, Vista) capture images in the RGB format, and some (e.g., LG, Dalsa) capture them in the grayscale format. To maintain uniformity, all images are converted into the grayscale format.

In the proposed method, feature fusion is attempted with the help of GLCM and Thepade's SBTC applied to the iris images. The statistical distribution information of the gray-level values of an image is generated by GLCM [27,28]. GLCM is applied to the resized iris image. Figure 2 shows the feature formation using GLCM. Four features are computed using GLCM: energy, entropy, contrast, and correlation, given by Equations (1)-(4).
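In their standard Haralick form (assumed here, since the exact notation of Equations (1)-(4) is not reproduced in this text), with p(i, j) denoting the normalized GLCM entry and \mu_i, \mu_j, \sigma_i, \sigma_j the corresponding means and standard deviations, these descriptors are:

Energy = \sum_{i,j} p(i,j)^2    (1)
Entropy = -\sum_{i,j} p(i,j) \log p(i,j)    (2)
Contrast = \sum_{i,j} (i-j)^2 \, p(i,j)    (3)
Correlation = \sum_{i,j} \frac{(i-\mu_i)(j-\mu_j)\, p(i,j)}{\sigma_i \sigma_j}    (4)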
Energy: Local gray-level consistency is represented by energy, as expressed in Equation (1); it is high when neighboring pixels are similar.
Entropy: The image entropy, Equation (2), describes an image's randomness; the greater the entropy, the more difficult it is to arrive at any conclusion from the data.
Contrast: As expressed in Equation (3), contrast evaluates the intensity difference between a reference pixel and its neighbor. A low value represents poor contrast in the GLCM.
Correlation: Equation (4) represents the linear dependency of gray-level values in the co-occurrence matrix.

These four features are computed for each image. The 10-fold cross-validation technique is used for a correct estimation of accuracy.

Let the iris image be I(r,c) of size r × c pixels in grayscale. The TSBTC [29,30] N-ary feature vector may be considered as [T1, T2, ..., Tn], where Ti indicates the i-th cluster centroid of the grayscale image computed using TSBTC N-ary. In TSBTC 2-ary, for an iris image I(r,c) of size r × c pixels, the grayscale image is converted into a one-dimensional array and sorted. From this one-dimensional sorted array, the TSBTC 2-ary feature vector is computed as [T1, T2], as shown in Equations (5) and (6). Figure 3 shows how the features are extracted using TSBTC.

In the proposed ILD, TSBTC is experimented with in all 10 variations, TSBTC 2-ary, 3-ary, 4-ary, 5-ary, 6-ary, 7-ary, 8-ary, 9-ary, 10-ary, and 11-ary, applied to the resized iris image. The extracted features are passed to classifiers and ensembles of classifiers to train them. The best-performing TSBTC N-ary global features and the local-level GLCM features are concatenated to obtain the feature-level fusion for ILD. Here, both fusions are considered: TSBTC 10-ary + GLCM and TSBTC 11-ary + GLCM. Let the grayscale iris image be I(r,c) of size r × c pixels; the fusion of the TSBTC N-ary and GLCM feature vectors can then be represented as [T1, T2, ..., Tn, G1, G2, G3, G4].
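As an illustration, a minimal Python sketch of this feature formation and fusion is given below. It is not the authors' MATLAB implementation; the use of scikit-image, the GLCM distance/angle settings (distance 1, angle 0), and the resize step are assumptions made only for the sake of a runnable example.

```python
# Minimal sketch: TSBTC N-ary global features + GLCM local features for one
# grayscale iris image, concatenated into a fused feature vector.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from skimage.transform import resize

def tsbtc_features(gray: np.ndarray, n: int) -> np.ndarray:
    """TSBTC N-ary: sort all pixel values, split the sorted array into n
    equal parts, and return the mean (centroid) of each part as [T1..Tn]."""
    sorted_vals = np.sort(gray.ravel())
    chunks = np.array_split(sorted_vals, n)
    return np.array([chunk.mean() for chunk in chunks])

def glcm_features(gray: np.ndarray) -> np.ndarray:
    """Energy (angular second moment), entropy, contrast, and correlation
    from a 256-level GLCM (distance 1, angle 0 assumed)."""
    img = gray.astype(np.uint8)
    glcm = graycomatrix(img, distances=[1], angles=[0], levels=256, normed=True)
    p = glcm[:, :, 0, 0]                       # normalized co-occurrence matrix
    energy = np.sum(p ** 2)
    entropy = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    contrast = graycoprops(glcm, "contrast")[0, 0]
    correlation = graycoprops(glcm, "correlation")[0, 0]
    return np.array([energy, entropy, contrast, correlation])

def fused_features(gray: np.ndarray, n: int = 10) -> np.ndarray:
    """Resize to 128 x 128 and concatenate TSBTC N-ary with the 4 GLCM features."""
    gray = resize(gray, (128, 128), preserve_range=True, anti_aliasing=True)
    return np.concatenate([tsbtc_features(gray, n), glcm_features(gray)])

# Example: a fused TSBTC 10-ary + GLCM vector has 10 + 4 = 14 elements.
# vec = fused_features(iris_gray_image, n=10)
```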
The proposed approach of ILD uses different ML classifiers with ensemble combinations. The tenfold cross-validation approach is used for training these classifiers for ILD. Tenfold cross-validation is one of the best approaches for training ML classifiers: it gives every sample in the dataset a chance to be used as training or test data, resulting in a trained classifier that is less biased. The ML classifiers employed here are support vector machine (SVM), naïve Bayes (NB), random forest (RF), random tree (RT), and J48, along with ensembles of a few of these ML classifiers. Majority voting logic is used for creating the ensembles of ML classifiers.

SVM: Its main aim is to find, in an N-dimensional space (N being the number of features), a hyperplane that distinctly classifies the data points. The objective of an SVM is to find the plane with the maximum margin between the data points of the classes [31].
J48: A decision tree-based classification algorithm, similar to the generic decision tree classifier [31].
Random forest: It takes the mean prediction of the individual trees formed from an ensemble of various decision trees. This helps overcome the decision tree classifier's drawback of overfitting the training data [31].
Random tree: A parameter-based supervised learning algorithm with continuous data splitting. The random tree algorithm is similar to the decision tree algorithm but is built by selecting random features [32].
Naïve Bayes: This algorithm is based on Bayes' theorem and is a family of classification algorithms sharing a common principle. For each data point, it predicts the probabilities of belonging to each class [32].
Ensemble method: It is often better to use multiple models simultaneously on a single set for classification rather than just a single model; this is called ensemble learning [17]. A model is trained using different classifiers, and the final output is an ensemble of those classifiers. Majority voting logic has been used for the ensembles of ML classifiers in the proposed method.
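A minimal sketch of such a majority-voting ensemble is shown below, using scikit-learn as an assumed stand-in for the Weka classifiers actually used in this work (a CART decision tree approximates J48):

```python
# Minimal sketch: majority-voting ensemble of RF, a J48-like decision tree, and
# MLP, evaluated with 10-fold cross-validation on a fused feature matrix X.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

def build_rf_j48_mlp_ensemble() -> VotingClassifier:
    """Hard (majority) voting over the three base classifiers."""
    return VotingClassifier(
        estimators=[
            ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
            ("j48", DecisionTreeClassifier(random_state=0)),  # J48 stand-in
            ("mlp", MLPClassifier(max_iter=1000, random_state=0)),
        ],
        voting="hard",
    )

# Example usage with a fused feature matrix X (n_samples x 14 for
# TSBTC 10-ary + GLCM) and labels y (1 = live, 0 = spoofed):
# scores = cross_val_score(build_rf_j48_mlp_ensemble(), X, y, cv=10)
# print("10-fold accuracy: %.2f%%" % (100 * scores.mean()))
```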
The experiments were performed using an Intel(R) Core(TM) i3-6006U CPU @ 2.0 GHz, 12 GB RAM, and a 64-bit operating system, with MATLAB R2015a as the programming platform. The experimentation code is available on request. The datasets used for the experimental exploration of the proposed ILD approach are Clarkson LivDet2013, Clarkson LivDet2015, IIITD Contact Lens, and IIITD Combined Spoofing. Detailed descriptions of the four standard, publicly available datasets are as follows.
• Clarkson LivDet2013: The Clarkson LivDet2013 dataset has around 1536 iris images [33]. This dataset is separated into training and testing sets. The Dalsa sensor is used to acquire the images. During this experiment, the training set images are used. Table 3 shows details of the dataset, the sensors used to acquire the images, and the number of images used in this experiment. Figure 5 shows sample images from the dataset.
• Clarkson LivDet2015: Images in this dataset are captured using Dalsa and LG sensors [34]. Images are divided into three categories: live, pattern, and printed. In total, 25 subjects are used for the live images, and 15 subjects each are used for the pattern and printed images. The whole dataset is partitioned into training and testing sets.
• IIITD Combined Spoofing Database: Images in this dataset are captured using two iris sensors, Cogent and Vista [35]. The images are divided into three categories: normal, print-scan attack, and print-capture attack.
• IIITD Contact Lens: Images in this dataset are captured using two iris sensors, the Cogent dual iris sensor and the Vista FA2E single iris sensor [36,37]. The images are divided into three categories: normal, transparent, and colored. In total, 101 subjects are used. Both the left and right iris images of each subject are captured; therefore, there are 202 iris classes.

To compare the performance of all the investigated variations of the proposed ILD method, accuracy, recall, F-measure, and precision are used as performance metrics. Let TP, TN, FP, and FN, respectively, be the true positives, true negatives, false positives, and false negatives of the ILD. TP indicates the data samples that are predicted as live iris and are actually live samples. TN gives the data samples detected as spoofed iris that are actually spoofed iris samples. FP indicates the samples identified as live that are actually fake. FN gives the data samples detected as spoofed that are actually live iris samples. The confusion matrix is shown in Figure 6. Equations (7)-(13) were used to calculate the accuracy, precision, recall, F-measure, attack presentation classification error rate (APCER) [38], normal presentation classification error rate (NPCER), and average classification error rate (ACER), respectively.
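With the live iris taken as the positive class, the standard definitions consistent with the convention above (assumed here, since Equations (7)-(13) are not reproduced in this text) are:

Accuracy = (TP + TN) / (TP + TN + FP + FN)    (7)
Precision = TP / (TP + FP)    (8)
Recall = TP / (TP + FN)    (9)
F-measure = 2 * Precision * Recall / (Precision + Recall)    (10)
APCER = FP / (FP + TN)    (11)
NPCER = FN / (FN + TP)    (12)
ACER = (APCER + NPCER) / 2    (13)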
This section is organized into three subsections. Section 5.1 presents the results and graphs of the TSBTC approach. Section 5.2 presents the results of the GLCM technique. The fusion of TSBTC and GLCM is discussed in Section 5.3. The proposed ILD approach is experimented with on four benchmark datasets. Accuracy, recall, precision, and F-measure are used as performance metrics to evaluate the variants of the proposed ILD approach.

With 128 × 128 iris images, TSBTC is experimented with in all 10 varieties: TSBTC 2-ary, 3-ary, 4-ary, 5-ary, 6-ary, 7-ary, 8-ary, 9-ary, 10-ary, and 11-ary. The extracted features are passed to classifiers and ensembles of classifiers to train them. The performance comparison of the TSBTC N-ary global features considered for specific ML classifiers in the proposed ILD approach tested on the Clarkson 2013 dataset is shown in Figure 7. It can be observed that 10-ary TSBTC outperforms the other N-ary TSBTC approaches for all classifiers for the Clarkson 2013 dataset. From Table 4, it is observed that the highest ILD accuracy comes to around 94.16% with 6-ary TSBTC using the RF classifier, immediately followed by an ensemble of RF + SVM + RT classifiers. The underlined values indicate the highest obtained recognition rates.
The performance comparison of the TSBTC N-ary global features considered for specific ML classifiers in the proposed ILD approach tested on the Clarkson 2015 dataset is shown in Figure 8. As per the comparison, 11-ary TSBTC outperforms the other N-ary TSBTC approaches for the Clarkson 2015 dataset for all classifiers. From Table 5, it is observed that the highest observed ILD accuracy comes to around 95.64% with 10-ary TSBTC using the RF classifier, immediately followed by an ensemble of RF + SVM + RT classifiers.

The performance comparison of the TSBTC N-ary global features considered for specific ML classifiers in the proposed ILD approach tested on the IIITD Contact dataset is shown in Figure 9. It can be observed that 11-ary TSBTC outperforms the other N-ary TSBTC approaches for all classifiers for the IIITD Contact dataset. From Table 6, it is observed that the highest observed ILD accuracy comes to around 76.73% with 11-ary TSBTC using the random forest classifier, immediately followed by an ensemble of RF + SVM + RT classifiers.

Table 6. Performance evaluation using accuracy for variants of the proposed ILD approach with N-ary TSBTC and the ML classifiers used for the IIITD Contact dataset.

Figure 10 shows the performance comparison of the TSBTC N-ary global features considered for specific ML classifiers in the proposed ILD approach tested on the IIITD Combined Spoofing dataset. It can be observed that 10-ary TSBTC outperforms the other N-ary TSBTC approaches for all classifiers for the IIITD Combined Spoofing dataset. From Table 7, it is observed that the highest observed ILD accuracy comes to around 99.57% with 7-ary TSBTC using an ensemble of J48 + RF + MLP classifiers, immediately followed by the RF classifier.

Table 7. Performance evaluation using accuracy for variants of the proposed ILD approach with N-ary TSBTC and the ML classifiers used for the IIITD Combined Spoofing dataset.

In the proposed ILD, features are extracted using GLCM by using the equations explained in Section 3.2.1. The extracted features are passed to classifiers and ensembles of classifiers to train them. Figure 11 shows the performance comparison of the GLCM local features considered for specific ML classifiers in the proposed ILD approach tested across all datasets.
Here, it can be observed that random forest and the RF + SVM + MLP ensemble give the best performance across all datasets.

Figure 11. Performance evaluation of GLCM local features for the specific ML classifiers in the proposed approach of ILD across all datasets using percentage accuracy.

The performance evaluation of the GLCM local features across all datasets for specific ML classifiers in the proposed ILD approach using percentage accuracy is shown in Figure 12. The graph shows that IIITD Combined Spoofing gives good performance across all classifiers and ensembles of classifiers.

The best-performing TSBTC N-ary global features and the GLCM local-level features are concatenated to obtain the feature-level fusion for ILD. Here, both fusions are considered: TSBTC 10-ary + GLCM and TSBTC 11-ary + GLCM. Figures 13-16 show the performance comparison of TSBTC, GLCM, and the fusion of the TSBTC N-ary global features and GLCM local features for specific ML classifiers in the proposed ILD approach tested on the Clarkson 2013, Clarkson 2015, IIITD Contact, and IIITD Combined Spoofing datasets, respectively. It is observed that the fusion of TSBTC and GLCM gives the best performance across all datasets.

From Table 8, it is observed that the highest ILD accuracy comes to around 93.78% with the fusion of TSBTC's best-performing features with GLCM features using the RF classifier for the Clarkson 2013 dataset. The highest accuracy comes to around 95.57% with the fusion of TSBTC's best-performing features with GLCM features using the RF classifier for the Clarkson 2015 dataset.
For the IIITD Contact dataset, the highest accuracy comes to around 78.88% with the fusion of TSBTC's best-performing features with GLCM features using the RF classifier. The highest observed ILD accuracy comes to around 99.68% with the fusion of TSBTC's best-performing features with GLCM features using RF and an ensemble of J48 + RF + MLP classifiers for the IIITD Combined Spoofing dataset.

Figure 17 shows the performance comparison of the fusion of the TSBTC N-ary global features and the GLCM local features considered for specific ML classifiers in the proposed ILD approach tested on all datasets. Here, the fusion of TSBTC and GLCM gives the best performance for all datasets used during the experiments. The highest accuracy achieved is 99.68% for the IIITD Combined Spoofing dataset, which shows that it outperforms the others.

It is observed from Table 9 that the J48 + RF + MLP ensemble of classifiers gives the highest accuracy (99.68%) and the lowest ACER (0.48%) using the fusion of TSBTC and GLCM local features in the proposed ILD approach on the IIITD Combined Spoofing dataset. Bold values indicate the highest obtained recognition rates.

Based on the current experiments, GLCM and TSBTC are widely utilized image feature extraction methods. Thepade's sorted block truncation coding (TSBTC) has been employed in various image classification applications; for the very first time, TSBTC has been designed to assess iris presentation attacks. The feature vector generated by the methods described in Section 3.2 is supplied as input to the machine learning classifiers and ensembles of classifiers described in Section 3.3 using the Weka tool. Groupings of TSBTC and GLCM features are used to achieve feature-level fusion. For testing purposes, four standard datasets are used: Clarkson 2013, Clarkson 2015, IIITD Contact, and IIITD Combined Spoofing; additionally, databases such as Clarkson 2017 and CASIA can be examined in the future.
GLCM, a local feature extraction approach, has delivered excellent average classification accuracy in the investigation of iris presentation attack detection, as stated in Section 5.2. As explained in Section 5.1, TSBTC has shown better accuracy than GLCM. The fusion of TSBTC with GLCM has provided the best iris presentation attack detection accuracy: 93.78% for the Clarkson 2013 dataset, 95.57% for the Clarkson 2015 dataset, 78.88% for the IIITD Contact dataset, and 99.68% for the IIITD Combined Spoofing dataset. The performance of different machine learning classifiers, such as naïve Bayes, SVM, random forest, J48, and multilayer perceptron, and of the SVM + RF + NB, SVM + RF + RT, RF + SVM + MLP, and J48 + RF + MLP ensembles, is compared for the classification accuracy of live and spoofed iris detection. The J48 + RF + MLP ensemble of classifiers has given the maximum accuracy of 99.68%. Though TSBTC has shown promise in the classification of colored images for various applications, such as land usage identification and gender classification, it has also shown promising results for the detection of iris presentation attacks. The experimental results showed that the proposed approach efficiently identifies iris spoofing attacks acquired using various sensors. The feature-level fusion of local GLCM and global TSBTC features can distinguish between live and faked artifacts and offers improved outcomes compared to the latest state-of-the-art approaches. The findings show that our proposed approach decreases classification error and improves accuracy compared with the previous approaches used to detect presentation attacks in an iris detection system; this is tabulated in Table 10. The proposed approach is compared to recent research done in this area, and it outperforms the other methods. However, it works only with images of 128 × 128 pixels, and only 10 variations of TSBTC are used during the implementation.
This paper proposed a novel ILD method to prevent iris spoofing through textured contact lens and print attacks. The proposed approach identified both kinds of print attacks (capture and scan) and detected iris spoofing attempted using different sensors. Many existing ILD methods rely on preprocessing such as iris segmentation, normalization, and localization, which adds computational cost. In this research, TSBTC and GLCM features are extracted directly from the iris images to overcome this drawback.
Feature-level fusion is carried out using global TSBTC and local GLCM features. Various ML algorithms and their ensemble combinations are trained using these fused features of iris images. The experimental validation of the proposed liveness detection approach is done on four benchmark datasets. The performance comparison of the variants of the proposed approach is made using ISO/IEC biometric performance evaluation metrics, including APCER, NPCER, ACER, accuracy, precision, recall, and F-measure. For the Clarkson 2013 dataset, fake images are identified with 93.78% accuracy, whereas for Clarkson 2015 the accuracy achieved is 95.57% with the RF model. The accuracy obtained for IIITD Contact is 78.88%, and for IIITD Combined Spoofing it is 99.68%. Compared with the Iris Liveness Detection Competition (LivDet-Iris) 2020 results, the proposed approach obtained the lowest ACER of 0.48%. The experimental results showed that the proposed approach efficiently identifies iris spoofing attacks acquired using various sensors. In future work, this framework may be extended with the best-performing features. Currently, the presented work explored Thepade's SBTC as a global representation of iris content; a local content representation with Thepade's SBTC would be an exciting exploration in the future. Moreover, the proposed fusion framework may be applied to the liveness detection of other biometric traits.

Acknowledgments: Thanks to the Symbiosis Institute of Technology, Symbiosis International (Deemed University), and the Symbiosis Centre for Applied Artificial Intelligence for supporting this research. We thank all individuals for their expertise and assistance throughout all aspects of our study and for their help in writing the manuscript.

Conflicts of Interest: The authors declare no conflict of interest.

Data Availability: The data supporting this study's findings are available on request from the corresponding author. The data are not publicly available due to the privacy concerns of research participants.

References:
1. Advanced iris recognition using fusion techniques
2. Face recognition at a distance for a stand-alone access control system
3. Bibliometric Survey on Biometric Iris Liveness Detection
4. A secure image encryption algorithm based on fractional transforms and scrambling in combination with multimodal biometric keys
5. An approach for iris contact lens detection and classification using ensemble of customized DenseNet and SVM
6. Iris Liveness Detection: A Survey
7. Human iris recognition in post-mortem subjects: Study and database
8. Entropy with Local Binary Patterns for Efficient Iris Liveness Detection
9. Novel Fingerprint Liveness Detection with Fractional Energy of Cosine Transformed Fingerprint Images and Machine Learning Classifiers
10. Cross-sensor iris spoofing detection using orthogonal features
11. A multimodal liveness detection using statistical texture features and spatial analysis
12. Performances of proposed normalization algorithm for iris recognition
13. An approach to human iris recognition using quantitative analysis of image features and machine learning
14. Local binary hexagonal extrema pattern (LBHXEP): A new feature descriptor for fake iris detection
15. Iris liveness detection for next generation smartphones
16. Feature fusion approach for image retrieval with ordered color means based description of keypoints extracted using local detectors
17. Usage Identification with Fusion of Thepade SBTC and Sauvola Thresholding Features of Aerial Images Using Ensemble of Machine Learning Algorithms
18. Enhanced Image Classification with Feature Level Fusion of Niblack Thresholding and Thepade's Sorted N-Ary Block Truncation Coding using Ensemble of Machine Learning Algorithms
19. Iris liveness detection using regional features
20. Pupil dynamics for iris liveness detection
21. Ocular-net: Lite-residual encoder decoder network for accurate ocular regions segmentation in various sensor images
22. CNN hyperparameter tuning applied to iris liveness detection. arXiv 2020
23. Convolutional neural networks for face anti-spoofing and liveness detection
24. Detecting iris liveness with batch normalized convolutional neural network
25. A Review on Iris Liveness Detection Techniques
26. A Texture Feature Based Approach for Person Verification Using Footprint Bio-Metric
27. Ameliorating the Accuracy Dimensional Reduction of Multi-modal Biometrics by Deep Learning
28. Feature-Level vs. Score-Level Fusion in the Human Identification System
29. Effect of image binarization thresholds on breast cancer identification in mammography images using OTSU, Niblack, Burnsen, Thepade's SBTC
30. Image Retrieval using Weighted Fusion of GLCM and TSBTC Features
31. Fingerprint Liveness Detection Using Directional Ridge Frequency with Machine Learning Classifiers
32. Fingerprint liveness detection with machine learning classifiers using feature level fusion of spatial and transform domain features
33. LivDet-iris 2013 - Iris Liveness Detection Competition
34. LivDet-Iris 2015 - Iris Liveness Detection Competition
35. Detecting medley of iris spoofing attacks using DESIST
36. Unraveling the Effect of Textured Contact Lenses on Iris Recognition
37. Revisiting iris recognition with color cosmetic contact lenses
38. Iris presentation attack detection: Where are we now?
39. Iris Liveness Detection Competition (LivDet-Iris) - The 2020 Edition
40. Biometric System for Secure User Identification Based on Deep Learning
41. An Iris Recognition System Using Deep Convolutional Neural Network
42. A Deep Learning Iris Recognition Method Based on Capsule Network Architecture
43. Cross-spectral iris recognition using CNN and supervised discrete hashing
44. A Multiclassification Method for Iris Data Based on the Hadamard Error Correction Output Code and a Convolutional Network
45. Presentation Attack Detection Using Wavelet Transform and Deep Residual Neural Net