Abstract
The field of neural networks has seen significant advances in recent years with the development of deep and convolutional neural networks. Although most current works address real-valued models, recent studies reveal that neural networks with hypercomplex-valued parameters can better capture, generalize, and represent the complexity of multidimensional data. This paper explores the application of quaternion-valued convolutional neural networks to a pattern recognition task from medicine, namely, the diagnosis of acute lymphoblastic leukemia. Precisely, we compare the performance of real-valued and quaternion-valued convolutional neural networks at classifying lymphoblasts in peripheral blood smear microscopic images. The quaternion-valued convolutional neural network achieved performance better than or similar to its corresponding real-valued network while using only 34% of its parameters. This result confirms that quaternion algebra allows capturing and extracting information from a color image with fewer parameters.
This work was supported in part by Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) - Finance Code 001.
1 Introduction
In recent years, machine learning has influenced how we solve a variety of real-world problems. Indeed, artificial neural networks (NN) outperformed many state-of-the-art approaches in several applications with the development of deep neural networks (DNN) and convolutional neural networks (CNN) architectures.
Most neural network architectures are real-valued neural networks (RVNN). In such architectures, the input data is arranged into real-valued vectors, matrices, or tensors to be processed by the neural network. In some sense, this approach assumes that all the input data components have equal importance and, thus, they are all evaluated in the same way. However, some data sets contain multidimensional information that requires a specific approach to treat it as a single entity. For example, in image processing, a pixel’s color is obtained by combining the red, green, and blue components. The position of these three coordinates in the color space represents a plethora of colors, such as pink or brown, and the color information is lost if the components are treated separately [38]. In some practical image recognition tasks, the neural network needs to capture the complexity of the color space to generalize well and represent the multidimensional nature of colors [20, 27]. Indeed, Parcollet et al. showed that RVNNs might fail to capture the color information [39]. Also, Matsui et al. remarked that RVNNs are not able to preserve the 3D shape of an input object when it is transformed in 3D space [30]. Motivated by these remarks, neural networks based on hypercomplex numbers, such as complex numbers and quaternions, have been proposed and extensively investigated in recent years.
1.1 Complex and Quaternion-Valued Neural Networks
A complex-valued neural network (CVNN) is based on the algebra of complex numbers, which allows preserving and treating the relationship between magnitude and phase information during learning [38]. Furthermore, the algebraic structure of complex numbers gives CVNNs better generalization capability [17], besides making them easier to train [35]. As long as the processed information is correlated two-dimensional data, CVNNs mostly outperform or at least match their real-valued counterparts [3, 4, 16, 29, 48].
The encouraging performance of CVNNs inspired the development of quaternion-valued neural networks (QVNNs). QVNNs are based on quaternion algebra and can represent a color efficiently as a single, unified structure [6].
As far as we know, the first QVNN was introduced by Arena et al. [6], who developed a specific backpropagation algorithm able to learn the local relations that exist between quaternions. Furthermore, like real-valued neural networks, single hidden layer QVNNs are universal approximators [6]. An extensive list of applications and investigations with different QVNN architectures can be found in references [7, 11, 36,37,38, 40, 46]. A detailed up-to-date review of quaternion-valued neural networks, including some of their successful applications, can be found in [38].
In contrast to RVNNs, which represent color channels as independent variables, QVNNs can benefit from representing colors as single quaternions. For example, Greenblatt et al. applied a QVNN model to prostate cancer Gleason grading [13]. Gaudet and Maida investigated the use of quaternion-valued convolutional neural networks (QVCNN) for image processing [11]. Pavllo et al. modeled human motion using QVNNs [41]. Zhu et al. proposed a QVCNN for color image classification and denoising tasks [51]. The localization of color image splicing by a fully quaternion-valued convolutional network was explored by Chen et al. [9]. A deformable quaternion Gabor convolutional neural network for color facial expression recognition was proposed by Jin et al. [22]. Takahashi et al. merged histograms of oriented gradients (HOG) for human detection with a QVNN to determine human facial expression [47]. Quaternion multi-layer perceptrons have been successfully applied to polarimetric synthetic aperture radar (PolSAR) land classification [24, 43].
1.2 Contributions and the Organization of the Paper
In line with the development of hypercomplex-valued neural networks, we present a quaternion-valued convolutional neural network (QVCNN) designed to classify isolated white cells as lymphoblasts. Precisely, the QVCNN receives a white cell image like the one shown in Fig. 1 and classifies it as a lymphoblast or not. The classification of lymphoblasts is essential for diagnosing acute lymphoblastic leukemia, a kind of blood cancer. The performance of the QVCNN is compared with a real-valued convolutional neural network with a similar architecture.
Fig. 1. Candidate cell to be a lymphoblast, from the ALL-IDB dataset [28].
The paper is structured as follows: Sect. 2 presents the medical problem of acute lymphoblastic leukemia and provides a literature review on the computer-aided diagnosis of leukemia. Section 3 addresses real-valued and quaternion-valued convolutional neural networks. The experimental results are detailed in Sect. 4. Section 5 presents the concluding remarks and future works.
2 Acute Lymphoblastic Leukemia (ALL)
According to the National Cancer Institute of the United States, acute lymphoblastic leukemia (ALL) is a type of leukemia, a blood cancer, that appears and multiplies rapidly [32]. ALL is characterized by the presence of many lymphoblasts in the blood and also in the bone marrow. In this context, a lymphoblast is an immature cell that develops into a mature lymphocyte [33].
Several methods for the diagnosis of ALL can be found in the literature [34], including the peripheral blood smear technique [23]. This technique allows a specialist to examine a blood sample taken from the patient under a microscope. The specialist (hematologist) counts the number of lymphoblasts observed under the microscope and, based on that, makes a diagnosis [45]. Figure 2 shows a picture of a blood smear as a hematologist sees it for analysis. It is worth mentioning that the white cells appear stained with a bluish-purple coloration, which serves as a guide to find lymphoblasts.
Fig. 2. Blood smear image from the ALL-IDB dataset [28].
The manual counting of lymphoblasts under the microscope is a somewhat dull task that takes much time from a professional who could be more productive in other matters. In effect, the time spent analyzing the microscope image has an economic cost because a specialist has significant value in the labor market. In addition, the analysis can be affected by human factors such as tiredness and stress. The operator’s experience also plays an important role; therefore, a subjectivity component affects the results of the lymphoblast count. For these reasons, computational models to perform automatic lymphoblast counting in a blood smear image have been proposed in the literature [42].
Many methods divide the problem of automatic lymphoblast counting into two stages. The first stage, usually called the identification phase, aims to find white cells that are candidates to be lymphoblasts. Labeling a candidate cell as a lymphoblast or a healthy cell is performed in the second stage, referred to as the classification phase. In this paper, we use real-valued and quaternion-valued convolutional neural networks to classify white cells, that is, in the second stage of the blood smear image analysis. In the following sections, we review real-valued and quaternion-valued neural networks. First, however, we provide a literature review on automatic leukemia diagnosis methods.
2.1 Computer-Aided Diagnosis of Leukemia: Literature Review
Current literature has shown a large number of studies on computer-aided leukemia diagnosis with different approaches, including support vector machines (SVM), k-nearest neighbor (k-NN), principal component analysis (PCA), naive Bayes classifier, and random forest [8].
In [26], the authors used 60 sample images to develop a model to detect ALL using k-NN and naive Bayes classifiers, achieving 92.8% accuracy. A method to extract features of microscopic images using the discrete orthogonal Stockwell transform (DOST) and linear discriminant analysis (LDA) has been proposed in [31]. The paper [50] applies three pre-trained CNN architectures to extract features for image classification. In [2], a CNN reached 88.25% accuracy in classifying ALL versus healthy cells; to distinguish between the four subtypes of leukemia, the same CNN achieves 81.74% accuracy. Using the ALL-IDB dataset, [1] presents a k-medoids algorithm that classifies white blood cells with 98.60% accuracy. Furthermore, a method based on generative adversarial optimization (GAO) [49], a neural network with statistical features [5], and a deep CNN with a chronological sine-cosine algorithm (SCA) [21] have been proposed for ALL detection with 93.84%, 97.07%, and 98.70% accuracy, respectively.
A table summarizing the results from 16 papers on automated detection of leukemia and its subtypes can be found in [8]. This reference also presents a framework for automated leukemia diagnosis based on the ResNet-34 [15] and the DenseNet-121 [19]. The accuracy reported was 99.56% for the ResNet-34 and 99.91% for the DenseNet-121 [8].
3 Convolutional Neural Networks
In many machine learning applications, identifying appropriate representations of a large amount of data is usually challenging. A successful model must efficiently encode local relations within the input features as well as their structural relations. Moreover, an adequate data representation offers a positive side effect by reducing the number of parameters needed to learn the input features well, naturally mitigating overfitting [38].
Convolutional neural networks (CNN) are feed-forward neural networks with a robust feature representation method widely applied in machine learning. For example, the ResNet set a milestone in 2015 by outperforming humans in the ImageNet competition [10, 15]. The successful AlexNet [25] also inspired the development of many novel CNNs including the VGG [44] and the DenseNet [19]. In addition, deep neural networks have been successfully used, for example, for segmentation tasks as well as for the automatic classification of objects in images [14, 18].
One crucial aspect of deep networks is the convolution layer, which extracts features from high-dimensional data through a set of convolution kernels [51]. Although convolutions perform well in many practical situations, they have some drawbacks in color image processing tasks. First, a convolution layer sums up the outputs corresponding to different channels and ignores their complicated interrelationships; as a consequence, it may lose important information from a color image. Second, simply summing up the outputs gives too many degrees of freedom, so the network has a high risk of overfitting even under heavy regularization [51]. Accordingly, García-Retuerta et al. argue that quaternion-valued neural networks may have a significant advantage in color image processing tasks because of the quaternions’ four-dimensional algebraic structure [10]. The following section reviews the basic concepts of quaternion-valued convolutional neural networks.
3.1 Quaternion-Valued Convolutional Neural Networks
Quaternions are a four-dimensional extension of complex numbers. Developed by Hamilton in 1843, the set of all quaternions is defined by
\( \mathbb{H} = \{ q = q_0 + q_1 \boldsymbol{i} + q_2 \boldsymbol{j} + q_3 \boldsymbol{k} \; : \; q_0, q_1, q_2, q_3 \in \mathbb{R} \}, \)
where \(q_0\) is the real part of a quaternion, \(q_1\), \(q_2\), and \(q_3\) denote the imaginary components, while \(\boldsymbol{i}\), \(\boldsymbol{j}\), and \(\boldsymbol{k}\) are the hypercomplex units. The product of the hypercomplex units is governed by the following identities, known as the Hamilton rules:
\( \boldsymbol{i}^2 = \boldsymbol{j}^2 = \boldsymbol{k}^2 = \boldsymbol{i}\boldsymbol{j}\boldsymbol{k} = -1. \)
Alternatively, a quaternion can be written as
\( q = z_0 + z_1 \boldsymbol{j}, \)
where \(z_0 = q_0 + q_1\boldsymbol{i}\) and \(z_1 = q_2+q_3\boldsymbol{i}\) are complex numbers.
The addition of quaternions is performed by adding the corresponding real and imaginary components. Precisely, given \(p = {p}_0 + {p}_1 \boldsymbol{i}+ {p}_2 \boldsymbol{j}+ {p}_3 \boldsymbol{k}\) and \(q ={q}_0 + {q}_1 \boldsymbol{i}+ {q}_2 \boldsymbol{j}+ {q}_3 \boldsymbol{k}\), their sum is
\( p + q = (p_0+q_0) + (p_1+q_1)\boldsymbol{i} + (p_2+q_2)\boldsymbol{j} + (p_3+q_3)\boldsymbol{k}. \)
The key operation in quaternion algebra is the Hamilton product between two quaternions \(p = {p}_0 + {p}_1 \boldsymbol{i}+ {p}_2 \boldsymbol{j}+ {p}_3 \boldsymbol{k}\) and \(q = {q}_0 + {q}_1 \boldsymbol{i}+ {q}_2 \boldsymbol{j}+ {q}_3 \boldsymbol{k}\), denoted by \(p \otimes q\) and defined by
\( p \otimes q = (p_0 q_0 - p_1 q_1 - p_2 q_2 - p_3 q_3) + (p_0 q_1 + p_1 q_0 + p_2 q_3 - p_3 q_2)\boldsymbol{i} + (p_0 q_2 - p_1 q_3 + p_2 q_0 + p_3 q_1)\boldsymbol{j} + (p_0 q_3 + p_1 q_2 - p_2 q_1 + p_3 q_0)\boldsymbol{k}. \)
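In plain Python, the Hamilton product and the non-commutativity implied by the Hamilton rules can be sketched as follows (a minimal illustration, not the authors' code; quaternions are represented as 4-tuples of real components):

```python
# Sketch: Hamilton product of two quaternions, each given as a
# 4-tuple (q0, q1, q2, q3) of real components.
def hamilton_product(p, q):
    p0, p1, p2, p3 = p
    q0, q1, q2, q3 = q
    return (p0*q0 - p1*q1 - p2*q2 - p3*q3,   # real part
            p0*q1 + p1*q0 + p2*q3 - p3*q2,   # i component
            p0*q2 - p1*q3 + p2*q0 + p3*q1,   # j component
            p0*q3 + p1*q2 - p2*q1 + p3*q0)   # k component

i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
assert hamilton_product(i, i) == (-1, 0, 0, 0)  # i^2 = -1
assert hamilton_product(i, j) == k              # ij = k
assert hamilton_product(j, i) == (0, 0, 0, -1)  # ji = -k: non-commutative
```

The last two assertions recover the Hamilton rules and show that the product is non-commutative, which is why the order of factors matters in quaternion-valued layers.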
Quaternions and quaternion algebra allow building processing entities composed of four elements that share information via the Hamilton product.
According to Gaudet and Maida [11], a quaternion-valued convolutional layer is obtained by convolving a quaternion-valued filter matrix \(\boldsymbol{W} = {\boldsymbol{W}}_0 + {\boldsymbol{W}}_1 \boldsymbol{i}+ {\boldsymbol{W}}_2 \boldsymbol{j}+ {\boldsymbol{W}}_3 \boldsymbol{k}\) with a quaternion-valued vector \(\boldsymbol{h} = {\boldsymbol{h}}_0 + {\boldsymbol{h}}_1 \boldsymbol{i}+ {\boldsymbol{h}}_2 \boldsymbol{j}+ {\boldsymbol{h}}_3 \boldsymbol{k}\). Here, \(\boldsymbol{W}_0\), \(\boldsymbol{W}_1\), \(\boldsymbol{W}_2\), and \(\boldsymbol{W}_3\) are real-valued matrices while \(\boldsymbol{h}_0\), \(\boldsymbol{h}_1\), \(\boldsymbol{h}_2\), and \(\boldsymbol{h}_3\) are real-valued vectors. Details on the implementation of quaternion-valued convolutional layers can be found in [11].
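The key implementation idea, underlying approaches such as [11], is that a quaternion weight acting on a quaternion input via the Hamilton product is equivalent to a real 4×4 block-structured operation on the stacked components, so quaternion layers can be realized with ordinary real-valued arithmetic. A minimal sketch (values illustrative, not the authors' code):

```python
# Sketch: the Hamilton product w ⊗ h as a real 4x4 block-structured
# matrix applied to the stacked real components of h.
def hamilton_matrix(w):
    w0, w1, w2, w3 = w
    return [[w0, -w1, -w2, -w3],
            [w1,  w0, -w3,  w2],
            [w2,  w3,  w0, -w1],
            [w3, -w2,  w1,  w0]]

def mat_vec(m, v):
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

# Explicit component formula of the Hamilton product, for comparison.
def hamilton_explicit(p, q):
    p0, p1, p2, p3 = p
    q0, q1, q2, q3 = q
    return [p0*q0 - p1*q1 - p2*q2 - p3*q3,
            p0*q1 + p1*q0 + p2*q3 - p3*q2,
            p0*q2 - p1*q3 + p2*q0 + p3*q1,
            p0*q3 + p1*q2 - p2*q1 + p3*q0]

w, h = (0.5, -1.0, 2.0, 0.25), (1.0, 3.0, -2.0, 4.0)
assert mat_vec(hamilton_matrix(w), list(h)) == hamilton_explicit(w, h)
```

In a convolutional layer, each scalar entry of the block matrix becomes a real-valued kernel, so the four kernels \(\boldsymbol{W}_0,\dots,\boldsymbol{W}_3\) are shared across the four output components rather than learned independently.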
4 Computational Experiments
Let us compare the performance of real-valued and quaternion-valued convolutional neural networks at classifying a white cell image as a lymphoblast. Both networks have been implemented in Python using the Keras and TensorFlow libraries.
The real-valued model is a sequential feed-forward network composed of three convolutional layers, three max-pooling layers, and a dense layer. Precisely, the first convolutional layer has 32 filters with a (3, 3) kernel and a ReLU activation function. The convolutional layer is followed by a max-pooling layer with a (2, 2) kernel. The second and third two-dimensional convolutional layers have 64 and 128 filters, respectively, also with ReLU activation functions, and each is likewise followed by a max-pooling layer with a (2, 2) kernel. Figure 3 shows the architecture of the real-valued convolutional neural network. The total number of trainable parameters of the real-valued convolutional neural network is 106,049.
The quaternion-valued convolutional neural network has been designed similarly. Precisely, to obtain the same number of real-valued feature maps per layer, the number of filters of each layer of the real-valued network was divided by four to determine the number of quaternion-valued filters, each of which comprises four real channels. Thus, the quaternion-valued convolutional neural network has the same structure as the real-valued network depicted in Fig. 3, but with a quarter of the number of filters per layer. The number of trainable parameters of the quaternion-valued CNN model is 36,353. Table 1 summarizes the number of trainable parameters of both neural networks per layer.
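As a sanity check on these totals, the parameter counts can be reproduced with simple arithmetic, assuming 3×3 "valid" convolutions on 100×100 inputs, 2×2 max-pooling, biases in every layer, and a single-neuron dense output layer (assumptions inferred from the text, not taken from the authors' code):

```python
# Sketch: reproduce the reported parameter counts of both models.
def conv2d_params(k, c_in, c_out):
    # Real-valued convolution: one k x k kernel per (input, output)
    # channel pair, plus one bias per output channel.
    return k * k * c_in * c_out + c_out

def qconv2d_params(k, q_in, q_out):
    # Quaternion convolution: each quaternion filter stores four real
    # kernels (W0..W3) shared through the Hamilton product, plus a
    # quaternion bias (four reals) per filter.
    return 4 * (k * k * q_in * q_out) + 4 * q_out

# Spatial sizes: 100 -> 98 -> 49 -> 47 -> 23 -> 21 -> 10, so the flattened
# feature map has 10 * 10 * 128 entries feeding one dense output neuron.
dense = 10 * 10 * 128 * 1 + 1

# Real-valued CNN: 3 input channels, then 32/64/128 filters.
real_params = (conv2d_params(3, 3, 32) + conv2d_params(3, 32, 64)
               + conv2d_params(3, 64, 128) + dense)

# Quaternion CNN: 1 input quaternion (a pure-quaternion color pixel),
# then 8/16/32 quaternion filters, i.e., a quarter of the real model's
# filters, each producing 4 real channels.
quat_params = (qconv2d_params(3, 1, 8) + qconv2d_params(3, 8, 16)
               + qconv2d_params(3, 16, 32) + dense)

print(real_params, quat_params)  # 106049 36353
```

Under these assumptions, the totals match the reported 106,049 and 36,353 parameters, and the ratio 36,353/106,049 ≈ 0.34 is the 34% figure quoted in the abstract.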
The dense layer of both the real-valued and quaternion-valued networks has a single output neuron without an activation function. This single neuron is used to classify the input image as a lymphoblast or not. Moreover, the parameters of all layers were initialized according to Glorot and Bengio [12]. The optimizer used was Adam, an algorithm based on stochastic gradient descent with adaptive estimation of first-order and second-order moments.
To evaluate the performance of the RVCNN and QVCNN classifiers, we used ALL-IDB, the Acute Lymphoblastic Leukemia Image Database for Image Processing, provided by the Università degli Studi di Milano [28]. This image database contains 260 images of white blood cells with \(257 \times 257\) pixels, labeled by experts and evenly distributed between lymphoblasts and healthy cells. Figure 1 shows an example of a color image used in the computational experiment.
We resized the \(257 \times 257\) white blood cell images to \(100 \times 100\) pixels. Also, the set of 260 color images was randomly divided into training and test images with different ratios. Data augmentation was applied to the training set to improve the accuracy of the convolutional neural networks. Precisely, all the images used for training were submitted to a pre-processing data-generation step, which obtains new images through horizontal and vertical flips.
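The flip-based augmentation described above can be sketched in a few lines, assuming images stored as row-major nested lists (a minimal illustration, not the authors' pipeline):

```python
# Sketch: each training image yields a horizontally and a vertically
# flipped variant, tripling the training set.
def flip_augment(img):
    """img: an image as a nested list of rows (height x width)."""
    hflip = [row[::-1] for row in img]  # reverse columns: horizontal flip
    vflip = img[::-1]                   # reverse rows: vertical flip
    return [img, hflip, vflip]

tiny = [[1, 2],
        [3, 4]]
original, horizontal, vertical = flip_augment(tiny)
assert horizontal == [[2, 1], [4, 3]]
assert vertical == [[3, 4], [1, 2]]
```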
In our experiments, images were converted to the RGB (red, green, and blue) and HSV (hue, saturation, and value) color spaces and used as inputs to the neural networks. Accordingly, we performed the four experiments detailed in Table 2. The first experiment considers a real-valued CNN whose input is obtained by concatenating the three RGB channels in a single tensor with values in the unit interval [0, 1]. The second experiment also considers a real-valued CNN, but the input is obtained by concatenating the three HSV channels. Here, hue is arranged in a radial slice \(H \in [0,2\pi )\) while saturation and value belong to the unit interval, i.e., \(S, \ V \in [0,1]\).
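For instance, using Python's standard colorsys module, HSV channels with the hue rescaled to \([0, 2\pi)\) could be prepared per pixel as follows (a sketch of the stated ranges, not the authors' pipeline):

```python
import colorsys
import math

# Sketch: colorsys returns hue in [0, 1), which we rescale to the radial
# range [0, 2*pi); saturation and value already lie in [0, 1].
def rgb_to_hsv_channels(r, g, b):
    h, s, v = colorsys.rgb_to_hsv(r, g, b)  # expects r, g, b in [0, 1]
    return 2 * math.pi * h, s, v

h, s, v = rgb_to_hsv_channels(0.0, 1.0, 0.0)  # pure green
assert math.isclose(h, 2 * math.pi / 3)       # hue at 120 degrees
assert s == 1.0 and v == 1.0
```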
The last two experiments were performed using quaternion-valued CNNs. Specifically, in the third experiment, the RGB image is encoded in a quaternion structure with null real part and each channel as one imaginary component of the quaternion, that is,
\( q = 0 + R\,\boldsymbol{i} + G\,\boldsymbol{j} + B\,\boldsymbol{k}. \)
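A minimal sketch of this pure-quaternion encoding for a single pixel follows; the normalization of 8-bit channels to \([0,1]\) is an assumption consistent with the RGB experiment above, not a detail stated in the text:

```python
# Sketch: a pixel (r, g, b) becomes the pure quaternion
# q = 0 + (r/255) i + (g/255) j + (b/255) k.
def rgb_to_quaternion(r, g, b, scale=255.0):
    return (0.0, r / scale, g / scale, b / scale)  # (real, i, j, k)

assert rgb_to_quaternion(255, 0, 0) == (0.0, 1.0, 0.0, 0.0)  # red -> i
assert rgb_to_quaternion(0, 0, 255) == (0.0, 0.0, 0.0, 1.0)  # blue -> k
```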
Finally, in the fourth experiment, a color is encoded in a quaternion through the following expression using the HSV representation:
The dataset has been divided into training and test sets with 5 different training/test ratios, and the networks were trained for 100 epochs. One hundred simulations were performed for each training/test ratio, and the average and standard deviation of the accuracy were calculated. Figure 4 presents the average accuracy of both real-valued and quaternion-valued convolutional neural networks for the different percentages used for testing in the four experiments. This figure also presents the interval between the \(25\%\) and \(75\%\) quantiles of the accuracy as a shaded area.
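The aggregation described above can be sketched with Python's statistics module; the accuracy values below are illustrative only, not results from the paper:

```python
import statistics

# Sketch: the per-run accuracies for one training/test ratio are
# summarized by mean, standard deviation, and the 25%/75% quantiles
# (the shaded band in Fig. 4). Illustrative values only.
accuracies = [0.95, 0.97, 0.96, 0.98, 0.94]

mean = statistics.mean(accuracies)
std = statistics.stdev(accuracies)
q25, _median, q75 = statistics.quantiles(accuracies, n=4)

assert q25 <= mean <= q75  # the mean falls inside the shaded band here
```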
Note from Fig. 4 that the quaternion-valued convolutional neural network with images in the HSV color space (QVCNN-HSV) obtained the best performance, reaching 98.2% accuracy in the test phase with a 10% training/test ratio.
The real- and quaternion-valued networks with RGB-encoded images exhibited statistically equivalent performance, with accuracies in the ranges \([93.6\%, \ 97.1\%]\) and \([94.4\%, \ 97.3\%]\), respectively, depending on the training/test ratio. The real-valued neural network with HSV-encoded images yielded the worst performance, reaching an average accuracy of 95.3% in the best case.
In conclusion, the QVCNN-HSV exhibits a better generalization capability than the QVCNN-RGB, RVCNN-RGB, and RVCNN-HSV models. Moreover, the performance of the quaternion-valued convolutional neural network with images encoded using the HSV color space and (7) compares well with the results reported in the literature (see Sect. 2.1). However, the quaternion-valued convolutional neural network is much simpler than many of the architectures considered previously.
5 Concluding Remarks and Future Works
Acute lymphoblastic leukemia is characterized by many lymphoblasts in the blood and the bone marrow. The disease can be diagnosed by counting the number of lymphoblasts in a blood smear microscope image. This paper investigated the application of convolutional neural networks to classifying a white cell as a lymphoblast or not. Precisely, we compared the performance of real-valued and quaternion-valued models. The QVCNN with input images encoded using the HSV color space showed the best results in our experiments. Also, the performance of the QVCNN is comparable with deeper neural networks from the literature, including the ResNet and the DenseNet [8]. This computational experiment suggests that quaternion-valued neural networks exhibit better generalization capability than real-valued convolutional neural networks, possibly because they treat colors as single quaternion entities. Furthermore, it is noticeable that the quaternion-valued convolutional neural network has about 34% of the parameters of the corresponding real-valued model.
As future work, we plan to develop neural networks that segment and classify white blood cells in a blood smear microscope image. Further research can also address the application of QVCNNs to the classification of other types of leukemia.
References
Acharya, V., Kumar, P.: Detection of acute lymphoblastic leukemia using image segmentation and data mining algorithms. Med. Biol. Eng. Comput. 57(8), 1783–1811 (2019). https://doi.org/10.1007/s11517-019-01984-1
Ahmed, N., Yigit, A., Isik, Z., Alpkocak, A.: Identification of leukemia subtypes from microscopic images using convolutional neural network. Diagnostics (Basel) 9(3) (2019). https://doi.org/10.3390/diagnostics9030104
Aizenberg, I., Alexander, S., Jackson, J.: Recognition of blurred images using multilayer neural network based on multi-valued neurons. In: 2011 41st IEEE International Symposium on Multiple-Valued Logic, pp. 282–287 (2011)
Aizenberg, I., Gonzalez, A.: Image recognition using MLMVN and frequency domain features. In: 2018 International Joint Conference on Neural Networks (IJCNN), pp. 1–8 (2018). https://doi.org/10.1109/IJCNN.2018.8489301
Aljaboriy, S., Sjarif, N., Chuprat, S., Abduallah, W.: Acute lymphoblastic leukemia segmentation using local pixel information. Pattern Recogn. Lett. 125, 85–90 (2019). https://doi.org/10.1016/j.patrec.2019.03.024
Arena, P., Fortuna, L., Muscato, G., Xibilia, M.G.: Multilayer perceptrons to approximate quaternion valued functions. Neural Netw. 10(2), 335–342 (1997). https://doi.org/10.1016/S0893-6080(96)00048-2
Bayro-Corrochano, E., Lechuga-Gutiérrez, L., Garza-Burgos, M.: Geometric techniques for robotics and hmi: interpolation and haptics in conformal geometric algebra and control using quaternion spike neural networks. Robot. Auton. Syst. 104, 72–84 (2018)
Bibi, N., Sikandar, M., Din, I.U., Almogren, A., Ali, S.: IoMT-based automated detection and classification of leukemia using deep learning. J. Healthc. Eng. (2020). https://doi.org/10.1155/2020/6648574
Chen, B., Gao, Y., Xu, L., Hong, X., Zheng, Y., Shi, Y.Q.: Color image splicing localization algorithm by quaternion fully convolutional networks and superpixel-enhanced pairwise conditional random field. Math. Biosci. Eng. 6(16), 6907–6922 (2019). https://doi.org/10.3934/mbe.2019346
García-Retuerta, D., Casado-Vara, R., Martin-del Rey, A., De la Prieta, F., Prieto, J., Corchado, J.M.: Quaternion neural networks: state-of-the-art and research challenges. In: Analide, C., Novais, P., Camacho, D., Yin, H. (eds.) IDEAL 2020. LNCS, vol. 12490, pp. 456–467. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-62365-4_43
Gaudet, C.J., Maida, A.: Deep quaternion networks. In: International Joint Conference on Neural Networks (IJCNN), pp. 1–8 (2018)
Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. In: Teh, Y.W., Titterington, M. (eds.) Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics. Proceedings of Machine Learning Research, vol. 9, pp. 249–256. PMLR, Chia Laguna Resort, Sardinia, Italy (13–15 May 2010). http://proceedings.mlr.press/v9/glorot10a.html
Greenblatt, A., Mosquera-Lopez, C., Agaian, S.: Quaternion neural networks applied to prostate cancer Gleason grading. In: 2013 IEEE International Conference on Systems, Man, and Cybernetics, pp. 1144–1149 (2013). https://doi.org/10.1109/SMC.2013.199
He, K., Gkioxari, G., Dollár, P., Girshick, R.: Mask R-CNN (2018)
He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770–778 (2016). https://doi.org/10.1109/CVPR.2016.90
Hirose, A.: Complex-Valued Neural Networks. Studies in Computational Intelligence, 2nd edn. Springer, Heidelberg (2012). https://doi.org/10.1007/978-3-642-27632-3
Hirose, A., Yoshida, S.: Generalization characteristics of complex-valued feedforward neural networks in relation to signal coherence. IEEE Trans. Neural Netw. Learn. Syst. 23(4), 541–551 (2012). https://doi.org/10.1109/TNNLS.2012.2183613
Hu, J., Shen, L., Sun, G.: Squeeze-and-excitation networks. In: 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition, pp. 7132–7141 (2018). https://doi.org/10.1109/CVPR.2018.00745
Huang, G., Liu, Z., Van Der Maaten, L., Weinberger, K.Q.: Densely connected convolutional networks. In: 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 2261–2269 (2017). https://doi.org/10.1109/CVPR.2017.243
Isokawa, T., Matsui, N., Nishimura, H.: Quaternionic neural networks: fundamental properties and applications. In: Complex-Valued Neural Networks: Utilizing High-Dimensional Parameters, pp. 411–439 (2009)
Jha, K.K., Sekhar Dutta, H.: Mutual information based hybrid model and deep learning for acute lymphocytic Leukaemia detection in single cell blood smear images. Comput. Methods Program. Biomed. 179, 104987 (2019). https://doi.org/10.1016/j.cmpb.2019.104987
Jin, L., Zhou, Y., Liu, H., Song, E.: Deformable quaternion Gabor convolutional neural network for color facial expression recognition. In: 2020 IEEE International Conference on Image Processing (ICIP), pp. 1696–1700 (2020). https://doi.org/10.1109/ICIP40778.2020.9191349
Kasvi: Hematologia: Como é realizada a técnica de esfregaço de sangue? https://kasvi.com.br/esfregaco-de-sangue-hematologia/ (2021). Accessed 18 Feb 2021
Kinugawa, K., Shang, F., Usami, N., Hirose, A.: Isotropization of quaternion-neural-network-based PolSAR adaptive land classification in Poincare-sphere parameter space. IEEE Geosci. Remote Sens. Lett. 15(8), 1234–1238 (2018). https://doi.org/10.1109/LGRS.2018.2831215
Krizhevsky, A., Sutskever, I., Hinton, G.E.: ImageNet classification with deep convolutional neural networks. Commun. ACM 60(6), 84–90 (2017). https://doi.org/10.1145/3065386
Kumar, S., Mishra, S., Asthana, P., Pragya: Automated detection of acute leukemia using k-means clustering algorithm. In: Bhatia, S.K., Mishra, K.K., Tiwari, S., Singh, V.K. (eds.) Advances in Computer and Computational Sciences, pp. 655–670. Springer, Singapore (2018). https://doi.org/10.1007/978-981-10-3773-3_64
Kusamichi, H., Isokawa, T., Matsui, N., Ogawa, Y., Maeda, K.: A new scheme for color night vision by quaternion neural network. In: Proceedings of the 2nd International Conference on Autonomous Robots and Agents (ICARA 2004), pp. 101–106 (2004)
Labati, R.D., Piuri, V., Scotti, F.: ALL-IDB: the acute lymphoblastic leukemia image database for image processing. In: 2011 18th IEEE International Conference on Image Processing (ICIP) (2011)
Mandic, D.P., Goh, V.S.L.: Complex Valued Nonlinear Adaptive Filters: Noncircularity, Widely Linear and Neural Models, vol. 59. Wiley, New York (2009)
Matsui, N., Isokawa, T., Kusamichi, H., Peper, F., Nishimura, H.: Quaternion neural network with geometrical operators. J. Intell. Fuzzy Syst. 15(3), 149–164 (2004)
Mishra, S., Majhi, B., Sa, P.K.: Texture feature based classification on microscopic blood smear for acute lymphoblastic leukemia detection. Biomed. Sig. Process. Control 47, 303–311 (2019). https://doi.org/10.1016/j.bspc.2018.08.012
NCI: Acute lymphoblastic leukemia. https://www.cancer.gov/publications/dictionaries/cancer-terms/def/acute-lymphoblastic-leukemia (2021). Accessed 18 Feb 2021
NCI: Lymphoblast. https://www.cancer.gov/publications/dictionaries/cancer-terms/def/lymphoblast (2021). Accessed 18 Feb 2021
NHS: Acute lymphoblastic leukemia diagnosis. https://www.nhs.uk/conditions/acute-lymphoblastic-leukaemia/diagnosis/ (2021). Accessed 18 Feb 2021
Nitta, T.: On the critical points of the complex-valued neural network. In: Proceedings of the ICONIP 2002 9th International Conference on Neural Information Processing: Computational Intelligence for the E-Age, pp. 411–439. Singapore (2002)
Ogawa, T.: Neural network inversion for multilayer quaternion neural networks. Comput. Technol. Appl. 7, 73–82 (2016)
Onyekpe, U., Palade, V., Kanarachos, S., Christopoulos, S.R.: A quaternion gated recurrent unit neural network for sensor fusion. Information (2021). https://doi.org/10.3390/info12030117
Parcollet, T., Morchid, M., Linarès, G.: A survey of quaternion neural networks. Artif. Intell. Rev. 53(4), 2957–2982 (2020). https://doi.org/10.1007/s10462-019-09752-1
Parcollet, T., Morchid, M., Linarès, G.: Quaternion convolutional neural networks for heterogeneous image processing. arXiv preprint arXiv:1811.02656 (2018)
Parcollet, T., et al.: Quaternion convolutional neural networks for end-to-end automatic speech recognition. In: Proceedings of the Interspeech 2018, pp. 22–26 (2018). https://doi.org/10.21437/Interspeech.2018-1898
Pavllo, D., Feichtenhofer, C., Auli, M., Grangier, D.: Modeling human motion with quaternion-based neural networks. Int. J. Comput. Vis. 128(4), 855–872 (2019). https://doi.org/10.1007/s11263-019-01245-6
Shafique, S., Tehsin, S.: Computer-aided diagnosis of acute lymphoblastic Leukaemia. Comput. Math. Methods Med. 2018, 6125289 (2018). https://doi.org/10.1155/2018/6125289
Shang, F., Hirose, A.: Quaternion neural-network-based PolSAR land classification in Poincare-sphere-parameter space. IEEE Trans. Geosci. Remote Sens. 52, 5693–5703 (2014)
Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition (2015)
Terwilliger, T., Abdul-Hay, M.: Acute lymphoblastic leukemia: a comprehensive review and 2017 update. Blood Cancer J. (2017). https://doi.org/10.1038/bcj.2017.53
Takahashi, K., Isaka, A., Fudaba, T., Hashimoto, M.: Remarks on quaternion neural network-based controller trained by feedback error learning. In: IEEE/SICE International Symposium on System Integration, pp. 875–880 (2017)
Takahashi, K., Takahashi, S., Cui, Y., Hashimoto, M.: Remarks on computational facial expression recognition from HOG features using quaternion multi-layer neural network. In: Mladenov, V., Jayne, C., Iliadis, L. (eds.) EANN 2014. CCIS, vol. 459, pp. 15–24. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-11071-4_2
Trabelsi, C., et al.: Deep complex networks (May 2017)
Tuba, M., Tuba, E.: Generative adversarial optimization (GAO) for acute lymphocytic leukemia detection. Stud. Inf. Control 28, 245–254 (2019). https://doi.org/10.24846/v28i3y201901
Vogado, L.H., Veras, R.M., Araujo, F.H., Silva, R.R., Aires, K.R.: Leukemia diagnosis in blood slides using transfer learning in CNNs and SVM for classification. Eng. Appl. Artif. Intell. 72, 415–422 (2018). https://doi.org/10.1016/j.engappai.2018.04.024
Zhu, X., Xu, Y., Xu, H., Chen, C.: Quaternion convolutional neural networks. In: Ferrari, V., Hebert, M., Sminchisescu, C., Weiss, Y. (eds.) ECCV 2018. LNCS, vol. 11212, pp. 645–661. Springer, Cham (2018). https://doi.org/10.1007/978-3-030-01237-3_39
© 2021 Springer Nature Switzerland AG

Granero, M.A., Hernández, C.X., Valle, M.E. (2021). Quaternion-Valued Convolutional Neural Network Applied for Acute Lymphoblastic Leukemia Diagnosis. In: Britto, A., Valdivia Delgado, K. (eds.) Intelligent Systems. BRACIS 2021. Lecture Notes in Computer Science, vol. 13074. Springer, Cham. https://doi.org/10.1007/978-3-030-91699-2_20