Abstract
COVID-19, caused by SARS-CoV-2, resulted in 774 million cases and 7 million deaths by March 2024. This study proposes an approach to detect pulmonary lesions in computed tomography scans, integrating classification, preprocessing, and segmentation. Initially, a model based on LeNet-5 classifies the relevant slices of the scans, eliminating the irrelevant ones. Subsequently, the selected images undergo contrast adjustments, binarization, and normalization. Afterwards, segmentation is performed using a U-Net-based architecture, allowing for detailed segmentation. The methodology achieved 78.40% Dice, 64.80% IoU, 78% Sensitivity, 100% Specificity, 89.60% AUC, and 81% Precision, using only 9 million parameters. These results offer a practical and efficient solution, supporting specialists in patient treatment.
1 Introduction
COVID-19, caused by the novel coronavirus (SARS-CoV-2), is a highly infectious disease with varying symptoms that can resemble other respiratory pathologies such as viral or bacterial pneumonia [19]. While some patients experience mild clinical conditions, others suffer from severe complications such as pneumonia, respiratory failure, and organ failure [21]. As of March 2024, the global count of COVID-19 cases and deaths had reached 774 million and over 7 million, respectively [25]. In this context, early diagnosis is crucial to increase patient survival rates and prevent the spread of the disease.
The standard diagnosis for COVID-19 involves RT-PCR with pharyngeal swabs [23], a method widely adopted worldwide [6, 11]. Meanwhile, advancements in vaccination have significantly mitigated the impact of COVID-19, producing a substantial reduction in cases and saving millions of lives globally [24]. As vaccination campaigns and effective treatments continue to evolve, proactive identification and management of associated complications, along with ongoing patient support during recovery, remain essential [7].
Other diagnostic and monitoring methods such as X-ray and computed tomography (CT) have been extensively studied for assessing and tracking patient conditions [15]. X-ray is widely available and convenient for assessing areas like the chest, but its sensitivity and specificity are lower than those of CT for COVID-19 [22]. CT, on the other hand, is an effective and reliable method for detecting lung lesions caused by COVID-19 as well as other viral and bacterial pneumonias due to its superior detail [2]. However, analyzing CT images is labor-intensive and demands significant manual effort, making the process exhaustive for specialists.
This work aims to develop an automated approach for segmenting lung lesions in CT images, focusing on patient assessment and recovery. The goal is to reduce the manual effort for specialists and optimize analysis time by automatically identifying lesions, facilitating the management of infection complications. The proposed methodology includes robust preprocessing that standardizes images from different CT devices and uses data augmentation techniques to expand the sample set. The U-Net neural network architecture [18] is employed to segment lesion regions.
The generated segmentations are used for detailed visualizations, assisting specialists in monitoring and treating pulmonary complications in patients. Additionally, we implement a method for classifying and separating CT images that show visible lung regions from those that do not, ensuring a more accurate analysis focused on areas of interest. The remainder of the work is organized as follows: Sect. 2 describes related works, Sect. 3 details the materials and methods used, Sect. 4 presents the results obtained during the experiments, Sect. 5 discusses the findings, and Sect. 6 concludes the study and suggests future directions.
2 Related Work
Since the modernization of technology in the healthcare sector and the increased availability of diagnostic imaging exams, Computer-Aided Diagnosis (CAD) systems have seen significant growth, detecting and diagnosing various diseases with results comparable to those of specialists [5, 13, 30].
Numerous studies employ computed tomography (CT) images to detect respiratory diseases such as COVID-19 with robust metrics. Notable works by Castiglione et al. (2021) [4], Ardakani et al. (2020) [1], and Zhou et al. (2021) [31] report sensitivity and specificity rates exceeding 90%, utilizing deep learning methods for classification and image preprocessing.
However, specialists must thoroughly assess the severity of lung lesions to monitor patient recovery, making segmentation methods indispensable. Zhao et al. [29] propose a modified U-Net, termed D2A U-Net, for lesion segmentation, achieving 72.9% Dice and 70.7% Recall. Canu et al. [32] proposed a U-Net-based network with a Tversky loss function to handle the segmentation of small lesions, achieving 83.1% Dice and a Hausdorff distance of 18.8.
DUDA-Net, proposed by Xie et al. (2021) [26], introduces a dilated convolutional attention (DCA) mechanism, allowing the network to focus on subtle areas of the lesions. The method achieved 87% Dice, 90.8% sensitivity, 99.5% specificity, and 96.5% AUC. LCOV-Net, combined with the 3D U-Net, was presented by Zhao et al. (2021) [28], achieving 78.6% Dice. In the work of Zhang et al. (2021) [27], the QC-HC U-Net is proposed; based on the 3D U-Net, this architecture combines residual and dense connections into a new connection scheme applied to both the encoder and the decoder. The method achieved 85.3% Dice, 83.6% sensitivity, and 99.9% specificity. Hasanzadeh et al. (2020) [8] compare four segmentation methods: U-Net, Attention U-Net, R2U-Net, and Attention R2U-Net. The R2U-Net outperformed the others, achieving 79% Dice. CogSeg, proposed by Sang et al. (2021) [20], uses image super-resolution as an auxiliary task, achieving 89.7% IoU, 83% Dice, 86.9% sensitivity, and 98% specificity.
Previous works indicate the adoption of U-Net variants with significantly complex architectures, which implies a high computational cost. This characteristic represents a substantial barrier for implementation in environments with limited computational resources. In response to this limitation, our approach utilizes a simplified version of U-Net, deliberately designed to minimize computational expense without significantly compromising performance. This strategy not only makes the technology more accessible but also maintains efficacy comparable to state-of-the-art models, demonstrating that it is possible to achieve an optimal balance between architectural simplicity and analytical capability in resource-constrained scenarios.
The difficulty in acquiring CT images for research is also a limitation, with many authors using private or public databases with few samples. Moreover, there is a lack of preprocessing techniques to enhance network performance and treat images from different CT devices. In this work, we develop a methodology for preprocessing CT images, using data augmentation techniques to overcome the limitation of small databases. We also include a method for classifying and separating CT images with visible lung regions, ensuring a more accurate and focused analysis in areas of interest.
3 Materials and Methods
The proposed methodology is divided into the following steps: i) Image acquisition; ii) Detection of lung regions; iii) Image preprocessing; iv) Lesion segmentation. Figure 1 displays the flowchart with all the steps of the proposed method.
3.1 Image Acquisition
A dominant challenge in classifying or segmenting computed tomography (CT) images in the COVID-19 context lies in the scarcity of comprehensive public databases. In this work, we utilize two of the few available databases: COVID-19-CT-SEG (COVIDSeg) [12] and MosMed [16].
COVIDSeg is publicly accessible on the Zenodo platform [10]. This database includes a total of 20 chest CT scans with dimensions of \(512\,\times \,512\), \(630\,\times \,401\), and \(630\,\times \,630\), featuring lesion markings for diagnosing COVID-19. Each sample in the database contains infection masks created by two radiologists and verified by a third. The scans vary in the number of images due to the imaging device used; the database encompasses 3520 images. Figure 2, part a), shows samples from COVIDSeg, where it is evident that the images have different characteristics.
The second database, MosMed, comprises 50 chest CT scans with dimensions of \(512\,\times \,512\). Each sample contains a positive lesion marking for COVID-19, totaling 2049 images. In Fig. 2, part b) presents samples from MosMed. As can be observed, the images from the scans have similar characteristics and differ from those in COVIDSeg, also featuring different dimensions.
From these two databases, it is also possible to note the difference in styles between the segmentation masks of these databases, where the markings from COVIDSeg were manually made by radiologists and are more consistent. In contrast, the markings from MosMed are not well-defined. This underscores the need for a preprocessing methodology capable of handling images under these conditions. Moreover, this reflects the real-world scenario where images are produced by various specialists and devices.
3.2 Detection of the Presence of Lung Regions
The detection stage in the proposed method, which involves analyzing slices from computed tomography (CT) scans, aims to distinguish slices containing visible lung areas from those that do not. This distinction is crucial to avoid false positives in images without visible lung areas and to reduce the number of slices processed by the segmentation model. Images classified as devoid of lungs automatically receive masks without lesions, while those showing visible lungs proceed to the segmentation process.
To develop the classification model, we chose LeNet-5, which, according to a literature review by Marques et al. (2022) [14], stands out due to its low computational cost and effectiveness in terms of metrics and parameter quantity compared to more complex and computationally demanding models. The model architecture was adjusted to include two blocks of convolutional and fully connected layers. The first block consists of three convolutional layers with four feature filters each and a MaxPooling layer. The second block has four convolutional layers with eight feature filters and MaxPooling. A flatten layer is used to convert the feature matrices into a vector for the dense layers.
To prevent overfitting, a dropout layer with a value of 0.3 is used after the flatten layer. The dense layers comprise two layers with 4 and 8 neurons, respectively, and the final dense layer has 2 neurons for the model output, using the softmax activation function. The ReLU activation function is applied to all other layers. The model is compiled using the Adam optimizer with a learning rate of 0.00001 and the binary cross-entropy loss function, suitable for binary classification tasks. The training batch size is set to 8, and the number of epochs to 100. Figure 3 provides an illustration of the proposed architecture.
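As a rough check of the classifier's compactness, the layer dimensions above can be turned into a parameter count. The sketch below is a simplified estimate that assumes \(3\,\times \,3\) convolution kernels and single-channel \(512\,\times \,512\) input (neither is stated explicitly in the text); it only illustrates why this LeNet-5 variant stays lightweight.

```python
def conv2d_params(in_ch: int, out_ch: int, k: int = 3) -> int:
    """Weights plus biases of a k x k convolutional layer."""
    return (k * k * in_ch + 1) * out_ch

def dense_params(in_units: int, out_units: int) -> int:
    """Weights plus biases of a fully connected layer."""
    return (in_units + 1) * out_units

# Block 1: three convolutions with 4 filters each (input assumed single-channel).
block1 = conv2d_params(1, 4) + 2 * conv2d_params(4, 4)
# Block 2: four convolutions with 8 filters each.
block2 = conv2d_params(4, 8) + 3 * conv2d_params(8, 8)
# Two 2x2 max-poolings reduce a 512x512 input to 128x128 before flattening.
flat = 128 * 128 * 8
# Dense head: 4 and 8 neurons, then a 2-neuron softmax output.
head = dense_params(flat, 4) + dense_params(4, 8) + dense_params(8, 2)

total = block1 + block2 + head
print(total)  # well under a million parameters under these assumptions
```

Almost all of the weight budget sits in the first dense layer; the convolutional blocks themselves contribute only a few thousand parameters.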
During training, we use several callback strategies, such as Early Stopping, to monitor validation loss and terminate training if there is no improvement for 15 epochs. Learning rate reduction is applied to overcome plateaus, adjusting the rate to 0.000001 if the validation loss remains constant for 7 epochs. Additionally, a model checkpoint is used to save the best model based on the lowest validation loss.
This approach ensures that only relevant slices containing lung areas are processed by the segmentation model, guaranteeing more accurate and efficient analysis.
3.3 Image Pre-processing
Image preprocessing is a crucial step in our method, applied after lung detection to optimize the effectiveness of the process. This methodology is employed to remove noise, enhance visual quality, eliminate regions irrelevant to lesion identification, and standardize images, making them invariant to the acquisition method. The rationale for conducting preprocessing after lung detection is to maximize computational efficiency and ensure that only the isolated regions of interest are refined. This approach not only enhances the robustness of the proposal but also minimizes the unnecessary processing of non-pulmonary areas, which could distort the results of segmentation and classification.
As illustrated in Fig. 2, the databases include chest CT scans that vary significantly from one another, reflecting the different acquisition methods used in each case. Due to these variations, the images undergo a detailed preprocessing step shown in Fig. 4, essential for standardizing the images before subsequent analysis.
The first step of preprocessing is to check the average pixel value, which helps identify images with high and low contrast. Through empirical testing, we defined high-contrast images as those with a mean pixel value greater than 165, and low-contrast images as those with a mean below 165. The images are then subjected to histogram equalization contrast stretching (HECS) [9]: the 2nd and 98th percentiles are applied to high-contrast images, and the 25th and 98th percentiles to low-contrast images. We observed that higher percentile values might discard small lesions present in the lungs, while lower values may not enhance contrast sufficiently. The aim of this step is to standardize the images.
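The routing by mean intensity and the percentile-based stretch can be sketched in plain Python. Note this is a simplified stand-in for the HECS technique of [9]: a linear stretch between nearest-rank percentiles, shown only to make the 165 threshold and the percentile pairs concrete.

```python
def percentile(values, q):
    """Nearest-rank percentile of a flat list of pixel values."""
    s = sorted(values)
    idx = round(q / 100 * (len(s) - 1))
    return s[idx]

def stretch(pixels, low_q, high_q):
    """Linearly rescale [p_low, p_high] to [0, 255], clipping outside."""
    lo, hi = percentile(pixels, low_q), percentile(pixels, high_q)
    span = max(hi - lo, 1)
    return [min(255, max(0, round((p - lo) * 255 / span))) for p in pixels]

def enhance(pixels):
    """Route by mean intensity: above 165 is treated as high contrast."""
    mean = sum(pixels) / len(pixels)
    if mean > 165:
        return stretch(pixels, 2, 98)   # high-contrast percentile pair
    return stretch(pixels, 25, 98)      # low-contrast percentile pair
```

Everything below the lower percentile clips to 0 and everything above the upper percentile clips to 255, which is why overly aggressive percentiles risk erasing small, faint lesions.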
The next step is to apply the Otsu algorithm [17] to binarize the images. As a result, the rib cage and the pulmonary parenchyma are highlighted, and all regions outside the rib cage can be removed. Using only the region of interest, we perform histogram equalization to normalize all samples. Finally, the last step is resizing to \(512\,\times \,512\) pixels, since deep learning algorithms require all inputs to be of the same size.
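Otsu's method [17] selects the threshold that maximizes the between-class variance of the intensity histogram. A compact pure-Python version for 8-bit images, paired with the binarization step, could look like:

```python
def otsu_threshold(pixels):
    """Return the 0-255 threshold maximizing between-class variance."""
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    best_t, best_var = 0, -1.0
    for t in range(256):
        w0 = sum(hist[: t + 1])          # background weight
        w1 = total - w0                  # foreground weight
        if w0 == 0 or w1 == 0:
            continue
        mu0 = sum(i * hist[i] for i in range(t + 1)) / w0
        mu1 = sum(i * hist[i] for i in range(t + 1, 256)) / w1
        var_between = w0 * w1 * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_t, best_var = t, var_between
    return best_t

def binarize(pixels, t):
    """Pixels above the threshold become foreground (rib cage, parenchyma)."""
    return [1 if p > t else 0 for p in pixels]
```

On a CT slice this yields a mask separating the bright rib cage and parenchyma from the dark background, after which everything outside the rib cage can be discarded.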
After preprocessing, we have standardized images that are noise-free and devoid of features outside the lung region. Thus, we reduce the possibility of bias in the models. The images after preprocessing highlight only the region of interest.
3.4 Data Augmentation
It is common for CT scans to have a higher number of images without COVID-19 lesions compared to those with lesions. As a result, most scans present an imbalance between classes, which can affect model learning and reduce system performance. To address this issue, we apply data augmentation techniques using the Albumentations library [3]. This approach allows us to generate images and masks to balance the images present in the scans, enhancing the model’s generalization capability.
To generate the images, we apply the following operations: horizontal flipping, vertical flipping, and transposition. We also create images with random contrast and brightness adjustments, with a 30% probability, and images with Gaussian noise, with a 50% probability.
Operations such as horizontal flipping, vertical flipping, and transposition are used to generate more samples for training the model and balancing the classes. Additionally, some scans contain noisy images, and the model may not be able to generalize to these images. Therefore, the Gaussian noise operation is used to address this issue, creating noisy images in other scans to contribute to the overall model generalization.
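The geometric operations above are performed with Albumentations [3] in practice; the plain-Python sketch below only illustrates their effect and why each transform must be applied to the image and its mask together, so the lesion annotations stay aligned with the pixels.

```python
def hflip(img):
    """Horizontal flip: reverse each row."""
    return [row[::-1] for row in img]

def vflip(img):
    """Vertical flip: reverse the row order."""
    return img[::-1]

def transpose(img):
    """Swap rows and columns."""
    return [list(col) for col in zip(*img)]

def augment(image, mask):
    """Apply each geometric operation to image and mask as a pair."""
    return [(op(image), op(mask)) for op in (hflip, vflip, transpose)]
```

Contrast, brightness, and Gaussian-noise perturbations, by contrast, change only the image and leave the mask untouched.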
These data augmentation techniques are crucial for improving the robustness of the model, ensuring it can handle the diversity of images found in real-world scans. Implementing these operations helps create a more balanced and varied dataset, which is vital for the effective training of deep learning models.
3.5 Segmentation
U-Net [18] is a Convolutional Neural Network (CNN) architecture for image segmentation. Upon reviewing the literature, a trend toward the use of U-Net-based architectures is evident. These studies consistently report robust results in the task of segmenting COVID-19 lesions.
The U-Net architecture comprises contraction and expansion pathways. The contraction path is similar to a typical CNN pathway, involving repeated application of two \(3\,\times \,3\) convolutions, each followed by a ReLU activation function, and a \(2\,\times \,2\) max-pooling operation with a stride of two for downsampling. With each downsampling step, the number of feature filters is doubled.
In the expansive path, each step involves upsampling of the feature map followed by a \(2\,\times \,2\) up-convolution that halves the number of feature filters. This is concatenated with the corresponding features from the contraction path and followed by two \(3\,\times \,3\) convolutions, each with a ReLU activation. In the final layer, a \(1\,\times \,1\) convolution maps each feature vector to the desired number of classes.
In this work, the implemented U-Net uses input images with dimensions of \(512\,\times \,512\) pixels and starts with 32 filters. Due to computational environment limitations, we used a batch size of 6 during training. The loss function employed was the Dice loss. For the learning process, we utilized the Adam optimizer with a learning rate of 0.0003, as determined from preliminary experiments and literature reports. The Dice and IoU metrics are used to monitor the model’s performance on the validation set during training.
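On binary masks, the Dice loss and the two monitoring metrics reduce to simple overlap ratios. A plain-Python sketch follows; the small smoothing term is a common convention to avoid division by zero on empty masks, assumed rather than stated in the text.

```python
def dice(pred, target, smooth=1e-6):
    """Dice coefficient between two flat binary masks."""
    inter = sum(p * t for p, t in zip(pred, target))
    return (2 * inter + smooth) / (sum(pred) + sum(target) + smooth)

def iou(pred, target, smooth=1e-6):
    """Intersection over union between two flat binary masks."""
    inter = sum(p * t for p, t in zip(pred, target))
    union = sum(pred) + sum(target) - inter
    return (inter + smooth) / (union + smooth)

def dice_loss(pred, target):
    """The quantity minimized during training: 1 - Dice."""
    return 1.0 - dice(pred, target)
```

Because Dice weights the intersection twice, it is always at least as large as IoU on the same prediction, which is visible in the paired metric values reported later (e.g. 78.40% Dice versus 64.80% IoU).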
Several callback strategies are applied to execute actions during model training. The first strategy is Early Stopping, where training is monitored, and if the validation loss does not decrease for ten epochs, training is halted. Learning rate reduction on plateau is also implemented to draw the model out of a plateau: if the validation loss plateaus for five epochs, the learning rate is reduced to 0.00003. Finally, the model is saved based on the lowest validation loss. The number of epochs set for training the model is 100.
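The interplay of the three callbacks can be sketched as a training-loop skeleton. Replaying a fixed list of validation losses stands in for actual training here, and reading "reduced to 0.00003" as setting the learning rate to that value is an assumption of this sketch.

```python
def run_training(val_losses, patience_stop=10, patience_lr=5,
                 lr=0.0003, reduced_lr=0.00003):
    """Replay validation losses through Early Stopping and
    reduce-on-plateau logic; returns (epochs_run, final_lr, best_loss)."""
    best, since_best = float("inf"), 0
    for epoch, loss in enumerate(val_losses, start=1):
        if loss < best:
            best, since_best = loss, 0   # checkpoint: save best model here
        else:
            since_best += 1
        if since_best >= patience_lr:
            lr = reduced_lr              # pull the model off the plateau
        if since_best >= patience_stop:
            return epoch, lr, best       # early stop
    return len(val_losses), lr, best
```

With these patience values, a plateau first triggers the learning-rate drop (five stalled epochs) and only later, if the loss still does not improve, halts training (ten stalled epochs).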
3.6 Experiments
To evaluate the efficacy of the proposed methodology, we conducted a comprehensive experiment aimed at testing the preprocessing, data augmentation, classification, and segmentation of pulmonary lesions in computed tomography (CT) images. Employing cross-validation, we sought to improve performance metrics, ensuring the robustness and generalization of the model.
Initially, we divided the scans from the COVIDSeg and MosMed databases into subsets. For the COVIDSeg database, which contains 20 exams, 60% (12 exams) are used for training, 20% (4 exams) for validation, and 20% (4 exams) for testing. This division is repeated five times to ensure that all images are included in the test set at some point.
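One way to realize this rotation, splitting at the exam level so no patient's slices leak across subsets, is sketched below; the rotation-by-four scheme is an illustrative assumption consistent with the 60/20/20 split described above.

```python
def make_folds(exam_ids, n_folds=5):
    """Rotate the exam list so each fold uses 60%/20%/20% of the exams
    for train/validation/test, with every exam tested exactly once."""
    n = len(exam_ids)
    test_size = n // n_folds          # 4 exams per test set for 20 exams
    folds = []
    for k in range(n_folds):
        rotated = exam_ids[k * test_size:] + exam_ids[: k * test_size]
        folds.append({
            "train": rotated[: n - 2 * test_size],
            "val": rotated[n - 2 * test_size : n - test_size],
            "test": rotated[n - test_size:],
        })
    return folds
```

Splitting by exam rather than by slice is the important design choice: adjacent slices of the same scan are highly correlated, so a slice-level split would inflate test metrics.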
In the first stage, we trained a classification model based on LeNet-5 using the MosMed dataset subset, along with data augmentation techniques. This model is responsible for classifying the scan slices, identifying those that contain visible pulmonary regions. The COVIDSeg database, which has detailed lung markings, is used to evaluate the classification model. Thus, we ensure that only images with visible pulmonary regions are passed to the segmentation model.
In the second stage, we trained the segmentation model using exclusively the COVIDSeg database, which contains precise markings of lesions caused by COVID-19. The training is conducted with cross-validation, following the division of 60% (12 exams) for training and 20% (4 exams) for validation, as previously described.
In the third stage, we evaluate the model on the test set during each iteration of the cross-validation. The complete flow of the experiment simulates a realistic environment and is composed of the following steps:
1. Initial Examination Processing: The original test set exam is processed with all available slices.
2. Slice Classification: The classification model separates the slices with and without lungs. Slices without lungs automatically receive masks without lesions.
3. Preprocessing: Slices classified as containing lungs undergo the preprocessing process.
4. Lesion Segmentation: Lesions are segmented using the trained model.
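The steps above can be sketched as a single pipeline. Here `classify_slice`, `preprocess`, and `segment` are hypothetical stand-ins for the trained LeNet-5 classifier, the preprocessing chain, and the trained U-Net; only the gating logic is taken from the text.

```python
def run_pipeline(slices, classify_slice, preprocess, segment, shape=(512, 512)):
    """Process every slice of an exam: slices without visible lungs get an
    empty lesion mask; the rest are preprocessed and segmented."""
    masks = []
    for s in slices:
        if not classify_slice(s):        # no visible lung region
            masks.append([[0] * shape[1] for _ in range(shape[0])])
        else:
            masks.append(segment(preprocess(s)))
    return masks
```

Because every slice receives a mask, either empty or predicted, the output can be stacked back into a volume aligned with the original exam.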
This experiment integrates all stages of preprocessing, data augmentation, classification, and segmentation, aiming for a comprehensive and accurate analysis of the model’s efficacy in a realistic environment. By simulating a complete clinical workflow, we ensure that the model is rigorously tested for its generalization capacity and robustness under practical conditions.
4 Results
The classification task was performed using a model trained to detect the slices of the exams that contain lungs. After training, the model was evaluated using the COVIDSeg dataset. The results of this evaluation are presented in Table 1.
The results in Table 1 indicate that the lung detection model is effective, showing high accuracy, sensitivity, and AUC. The Kappa index further reinforces substantial agreement between the model’s predictions and the markings by specialists. While precision could be improved, the overall results demonstrate that the model is robust for detecting lungs in the slices of CT exams.
Table 2 presents the results for the task of segmenting COVID-19 lesions in CT images.
The original images exhibited high precision and specificity (82.00% and 100%, respectively), but the Dice coefficient (70.80%) and IoU (55.59%) suggest room for improvement. The sensitivity of 63.80% indicates that the model misses some lesions in the images. With the application of preprocessing, all metrics except precision improved, with Dice increasing to 74.20%, IoU rising to 60.80%, and sensitivity increasing to 71.20%, indicating an enhanced ability of the model to detect lesions.
The combination of data augmentation with preprocessing resulted in the best metrics, with Dice reaching 78.40%, IoU achieving 64.80%, and sensitivity reaching 78.00%. The AUC also showed a high value of 88.60%, indicating robust model performance in differentiating between lesions and healthy tissue. These results demonstrate that preprocessing and data augmentation are effective in improving the detection of pulmonary lesions in CT images.
Table 3 presents the results of the complete methodology, which integrates the classification model for slice separation, the preprocessing method, and the segmentation model. We observed an increase in precision and an increase in the AUC. Figure 5 highlights the main contribution of this stage, which is the reduction of false positives related to noise found in images that do not display visible lung regions. Additionally, Fig. 6 illustrates the performance of segmentation with the complete methodology, showing that the segmentations are more detailed and closer to those of experts, compared to segmentations performed on original images without preprocessing.
The figure compares pulmonary lesion segmentations. Panel (a) shows results from the model without preprocessing, and panel (b) from the model with full methodology. Predictions are in green and expert annotations in red. The complete methodology yields more detailed, expert-like segmentations, achieving higher IoU values. (Color figure online)
The results presented indicate that the proposed methodology, integrating classification, preprocessing, and data augmentation, is effective for the detection and segmentation of pulmonary lesions in computed tomography images. The lung detection model demonstrated high accuracy, sensitivity, and AUC, ensuring reliable identification of slices relevant for segmentation. Improvements in Dice, IoU, and sensitivity metrics, especially with the combination of data augmentation and preprocessing, highlight the effectiveness of these techniques in enhancing the performance of the segmentation model. The inclusion of the classification model contributed to the reduction of false positives, further refining the accuracy of the segmentations. In summary, the integrated approach provides a more precise and efficient analysis of pulmonary lesions, optimizing support for the diagnosis and treatment of patients with COVID-19.
5 Discussions
The proposed method for detecting and segmenting pulmonary lesions in computed tomography images presents promising results, even with the limitations of the databases and the presence of small lesions that challenge the performance of U-Net models and their variations. In all experiments conducted, we observed that, without the need for a modified and complex U-Net, we achieved results comparable to the state of the art. Therefore, the proposed method offers a viable alternative to complex models, particularly useful in real-world scenarios with hardware limitations.
The results obtained were compared with other methods in the literature. Table 4 presents a comparison of the results of the proposed method with the state of the art, which uses training, validation, and test split in the exam slices. However, this comparison is unfeasible for our method due to differences in evaluation procedures. Our method uses cross-validation to ensure a more robust and generalizable evaluation, whereas the methods listed in Table 4 do not use this technique. This methodological difference prevents a fair and direct comparison of the results.
Table 5 offers a more appropriate comparison, where the analyzed methods use the division of training, validation, and testing on the exams, similar to our proposed method. The results show that the proposed method presents similar metrics, standing out especially in the IoU metric and in terms of specificity (100%) and AUC (89.60%). These results reinforce the effectiveness of our method in segmenting pulmonary lesions, comparable to the state of the art, despite using a significantly lower number of parameters.
In addition to presenting similar results, the proposed method also uses significantly fewer parameters compared to other methods in the literature, as shown in Table 6. This is particularly important in scenarios with hardware limitations, where lighter and more efficient models are preferable. Our method, with 9 million parameters, offers a viable and effective alternative to more complex models that require higher computational capacity.
6 Conclusions
The proposed method for detecting and segmenting pulmonary lesions in computed tomography (CT) images has demonstrated promising results despite the limitations of databases, image acquisition methods, and the detection of small lesions. In all conducted experiments, the approach using the proposed U-Net, combined with preprocessing and data augmentation techniques, achieved results comparable to the state of the art. The obtained metrics, especially in terms of IoU, specificity, and AUC, highlight the effectiveness of our method.
The preprocessing methodology for CT images proved effective in standardizing exams acquired by different methods, significantly improving performance in segmentation tasks. Combining this with data augmentation techniques reduces the chance of overfitting, balances classes, and generates new samples for training, resulting in superior performance on the test set. Additionally, comparative analysis with other methods in the literature revealed that, despite using a smaller number of parameters, our method demonstrated competitive performance. This characteristic is particularly relevant for applications in scenarios with hardware limitations, where lighter and more efficient models are necessary. Using only 9 million parameters, compared to tens of millions in other studies, underscores the feasibility and efficiency of our approach.
To further enhance the proposed method, we suggest the following future directions: expand the dataset by incorporating more images from different sources and clinical conditions to improve the model's robustness and generalization; investigate and incorporate new data augmentation techniques that can help handle the variability of CT images and improve the detection of smaller lesions; and utilize models pre-trained on large medical datasets to enhance performance on smaller, specific datasets such as those used in this study.
By following these directions, we expect not only to improve the accuracy and efficiency of the proposed method but also to facilitate its adoption in clinical settings, contributing to faster diagnosis and better patient outcomes.
References
Ardakani, A.A., et al.: Application of deep learning technique to manage Covid-19 in routine clinical practice using CT images: results of 10 convolutional neural networks. Comput. Biol. Med. 121, 103795 (2020)
Axiaq, A., et al.: The role of computed tomography scan in the diagnosis of covid-19 pneumonia. Curr. Opin. Pulm. Med. 27(3), 163–168 (2021)
Buslaev, A., Iglovikov, V.I., et al.: Albumentations: fast and flexible image augmentations. Information 11(2) (2020)
Castiglione, A., et al.: Covid-19: Automatic detection of the novel coronavirus disease from ct images using an optimized convolutional neural network. IEEE Trans. Industr. Inf. 17(9), 6480–6488 (2021)
Chhikara, P., et al.: Deep convolutional neural network with transfer learning for detecting pneumonia on chest x-rays. In: Jain, L.C. (ed.) Advances in Bioinformatics, Multimedia, and Electronics Circuits and Signals, pp. 155–168. Springer, Singapore (2020)
Corman, V.M., Drosten, C.: Authors’ response: Sars-cov-2 detection by real-time rt-pcr. Eurosurveillance 25(21) (2020)
George, P.M., et al.: Respiratory follow-up of patients with covid-19 pneumonia. Thorax 75(11), 1009–1016 (2020)
Hasanzadeh, N., et al.: Segmentation of Covid-19 infections on CT: comparison of four unet-based networks. In: 2020 27th National and 5th International Iranian Conference on Biomedical Engineering (ICBME), pp. 222–225 (2020)
Jagatheeswari, P., et al.: Contrast stretching recursively separated histogram equalization for brightness preservation and contrast enhancement. In: 2009 International Conference on Advances in Computing, Control, and Telecommunication Technologies, pp. 111–115 (2009)
Jun, M., et al.: Covid-19 CT lung and infection segmentation dataset. Zenodo (2020)
LeBlanc, J.J., et al.: Real-time PCR-based SARS-COV-2 detection in Canadian laboratories. J. Clin. Virol. 128, 104433 (2020)
Ma, J., et al.: Toward data-efficient learning: a benchmark for Covid-19 CT lung and infection segmentation. Med. Phys. 48(3), 1197–1210 (2021)
Marcomini, K.D., et al.: Evaluation of a computer-aided diagnosis system in the classification of lesions in breast strain elastography imaging. Bioengineering (Basel) 5(3) (2018)
Marques, J.V., et al.: Detection of covid-19 in computed tomography images using deep learning: a literature review. Revista de Sistemas e Computação-RSC 12(1) (2022)
Martínez Chamorro, E., et al.: Radiologic diagnosis of patients with covid-19. Radiologia 63(1), 56–73 (2021)
Morozov, S., et al.: Mosmeddata: chest CT scans with covid-19 related findings dataset. medRxiv (2020)
Otsu, N., et al.: A threshold selection method from gray-level histograms. IEEE Trans. Syst. Man Cybern. 9(1), 62–66 (1979)
Ronneberger, O., et al.: U-net: convolutional networks for biomedical image segmentation. In: Navab, N. (ed.) Medical Image Computing and Computer-Assisted Intervention—MICCAI 2015, pp. 234–241. Springer (2015)
Samir, A., et al.: Covid-19 versus h1n1: challenges in radiological diagnosis–comparative study on 130 patients using chest HRCT. Egypt. J. Radiol. Nucl. Med. 52(1), 77 (2021)
Sang, Y., et al.: Super-resolution and infection edge detection co-guided learning for covid-19 CT segmentation. In: ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 1665–1669 (2021)
Siddiqi, H.K., Mehra, M.R.: Covid-19 illness in native and immunosuppressed states: a clinical-therapeutic staging proposal. J. Heart Lung Transplant. 39(5), 405–407 (2020)
Stephanie, S., et al.: Determinants of chest radiography sensitivity for covid-19: a multi-institutional study in the united states. Radiol.: Cardiothoracic Imaging 2(5), e200337 (2020)
Wang, W., et al.: Detection of sars-cov-2 in different types of clinical specimens. JAMA 323(18), 1843–1844 (2020)
Watson, O.J., et al.: Global impact of the first year of covid-19 vaccination: a mathematical modelling study. The Lancet Infectious Diseases (2022)
WHO. Covid-19 epidemiological update – 12 April 2024. https://www.who.int/publications/m/item/covid-19-epidemiological-update-edition-166. Accessed 15 Jan 2024
Xie, F., et al.: Duda-net: a double u-shaped dilated attention network for automatic infection area segmentation in covid-19 lung CT images. Int. J. Comput. Assist. Radiol. Surg. 16(3) (2021)
Zhang, Q., et al.: Segmentation of infected region in ct images of covid-19 patients based on qc-hc u-net. Sci. Rep. 11(1), 22854 (2021)
Zhao, Q., et al.: Lcov-net: a lightweight neural network for covid-19 pneumonia lesion segmentation from 3d CT images. In: 2021 IEEE 18th International Symposium on Biomedical Imaging (ISBI), pp. 42–45 (2021)
Zhao, X., et al.: D2A U-Net: automatic segmentation of COVID-19 CT slices based on dual attention and hybrid dilated convolution. Comput. Biol. Med. 135, 104526 (2021)
Zhou, L., et al.: A rapid, accurate and machine-agnostic segmentation and quantification method for CT-based covid-19 diagnosis. IEEE Trans. Med. Imaging 39(8), 2638–2652 (2020)
Zhou, T., et al.: The ensemble deep learning model for novel covid-19 on ct images. Appl. Soft Comput. 98, 106885 (2021)
Zhou, T., et al.: Automatic Covid-19 CT segmentation using u-net integrated spatial and channel attention mechanism. Int. J. Imaging Syst. Technol. 31(1), 16–27 (2021)
Acknowledgments
This work was carried out with the support of the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) - Finance Code 001. We are also grateful for the support of Fundação de Amparo à Pesquisa do Estado do Piauí (FAPEPI) -http://www.fapepi.pi.gov.br.
Ethics declarations
Disclosure of Interests
The authors have no competing interests to declare relevant to the content of this article.
© 2025 The Author(s), under exclusive license to Springer Nature Switzerland AG
Marques, J.V.M., de Araújo Gonçalves, C., de Carvalho Filho, A.O., de Melo Souza Veras, R., Veloso e Silva, R.R. (2025). Automated Segmentation of Computed Tomography Images for COVID-19 Patient Evaluation. In: Paes, A., Verri, F.A.N. (eds) Intelligent Systems. BRACIS 2024. Lecture Notes in Computer Science(), vol 15414. Springer, Cham. https://doi.org/10.1007/978-3-031-79035-5_9
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-79034-8
Online ISBN: 978-3-031-79035-5