Abstract
Wind turbines capture the kinetic energy of the wind, and the blades are the component most susceptible to damage. Unplanned stops result in significant losses, which highlights the need to detect failures early. As a step in the preventive maintenance procedure, hundreds of color photographs of the blades are taken for subsequent analysis by an expert. In this work, we present a method to highlight surface damage in wind turbine blades through a classification and localization process. A new dataset was created from images of wind turbine blades, each divided into uniform slices and labeled by an expert according to the type of fault identified. We then apply class balancing and data augmentation methods before fine-tuning a general-purpose pre-trained deep convolutional neural network. The best model was used to classify and locate damage in the blade images. As a result, we obtained an overall precision of 96.1%, accuracy of 97%, and recall of 94.5% when classifying the presence of damage. We show that our method can be integrated into wind blade monitoring tasks, helping the specialist highlight and identify images containing damage.
1 Introduction
Wind energy is one of the most important sustainable energy sources. Since the 1970s, recurring oil crises and growing concern about oil scarcity have driven the development of large wind turbines (WTs) for energy production [1]. The main components of a WT are: the set of blades, responsible for capturing the kinetic energy of the wind; the nacelle, which houses the essential equipment for energy production; the tower, responsible for supporting the nacelle and blades; and the transformer, which connects the WT to the electrical grid [2].
The set of blades is considered the most important part of the WT, as it is responsible for capturing the kinetic energy of the wind, in addition to being the most expensive component. Blades are generally made of composite materials (carbon fiber, fiberglass, among others), which are characterized by low weight, mechanical resistance, and flexibility [3]. Blades are also the components most susceptible to damage, accounting for 15.19% of all incidents recorded up to 31 March 2023 [4], with an average of 3,800 incidents per year [5]. The main types of damage are shell detachment, adhesive joint failure, sandwich panel detachment, delamination due to tensile load, fiber breakage, and cracks, among others. The most common causes of damage are strong winds, storms, lightning, ice, and problems with materials and assembly [6].
Blade damage can lead to unscheduled shutdown of the WT, generating large maintenance and operation costs or even total loss of the equipment, with major financial impact [8]. It is therefore very important to identify this damage as soon as possible, in order to prevent it from spreading and to avoid accidents. The use of early damage detection techniques as monitoring tools can significantly reduce maintenance costs and increase equipment availability. Many techniques are employed, namely strain measurement, acoustic emission, ultrasound, vibration, thermography, and computer vision. The last is often preferred because it is non-invasive, low-cost, has no environmental impact, is highly accurate, and is quickly applied, although good-quality images are required for effective results [9].
Several recent studies use computer vision to identify damage to wind blades. Moreno et al. [10] proposed a deep learning vision-based approach for detecting wind turbine blade (WTB) damage using an unmanned aerial vehicle (UAV). Three types of damage were considered: lightning impact, wear, and fracture. The authors used 78 public images collected from the Internet to train the model, which was validated on a 3D-printed wind blade prototype containing simulated damage. The UAV was simulated using a webcam mounted on a robotic arm. The model achieved a final accuracy of 81.25%.
Yang et al. [11] applied a deep learning classification method using ResNet50 [12] as the backbone to identify blade damage. A UAV was used to acquire 1,594 images from 20 different types of wind blades; 557,900 small crops were then generated, of which 13,200 were selected for training and testing. The performance of the algorithm was compared with the AlexNet [13] network, demonstrating a better result, with a final accuracy of 95.58%. The model was able to classify five types of images: normal, background, cracks, holes, and mixed damage.
Shihavuddin et al. [14] captured images by drone inspection and applied data augmentation techniques to train a Faster R-CNN model [15] with an Inception-ResNet-V2 backbone to detect wind blade damage. The acquired images were manually labeled by experts into four classes: leading edge erosion, vortex generator (VG) panel, VG panel with missing teeth, and lightning receptor. The authors found that the use of data augmentation techniques and deep CNN architectures greatly improved model performance, and the proposed method achieved 81.10% mean average precision (mAP). Another contribution of the work was a public image dataset from a wind turbine inspection, containing 701 unlabeled high-definition images, called the DTU Drone Inspection Dataset [16].
Foster et al. [17] extended the work of Shihavuddin et al. [14] and created a new public dataset, this time labeled. The authors generated more than 13,000 crops of the original images, each \(586 \times 371\) pixels, with 3,000 labels from the damage and dirt classes. The objective of this work was also to detect wind blade damage using bounding boxes. The authors compared the performance of Faster R-CNN models with a ResNet-101 backbone against YOLOv5 [18] on this dataset, applying data augmentation techniques to the images, and concluded that the best model was YOLOv5s.
The studies presented above focus on improving the accuracy of classification models. In the proposal presented in this article, the objective is to classify and locate faults in images obtained from a local wind farm, paying special attention not only to precision but also to accuracy and recall.
Nowadays, in these local wind farms, inspection is carried out by technical teams. Often, the technician responsible for acquiring the images does not have the knowledge needed to correctly identify the damage contained in the blades. Their responsibility is to capture several photos of each wind turbine and send them to a damage specialist, who is usually located at the company's headquarters. This specialist must analyze a very large number of photos, filter only those that contain damage, identify its specific type and location, and finally decide on the best plan for its correction. The specialist must evaluate each image fragment in search of non-uniformities, a task that is repetitive, exhausting, and prone to evaluation errors after hours of work. We therefore use computer vision techniques to classify external structural WTB damage from color photographs, with the aim of reducing the number of images analyzed by experts by discarding undamaged images and indicating the damage present in the remaining ones.
The paper is structured as follows: Sect. 2 presents concepts about the types and possible causes of damage to wind turbine blades. Section 3 presents the techniques, parameters, and processes used to classify and locate WTB damage. The experimental results are detailed in Sect. 4, and Sect. 5 presents the concluding remarks.
2 Wind Turbine Blades Damage
Damage to wind turbine blades can occur due to two factors: the first is related to manufacturing processes and/or human errors; the second is caused by external factors such as lightning strikes, ice, uneven loads, moisture, strong wind gusts, among others. Damage is detrimental as it affects the energy generation performance of the wind turbine, leading to reduced energy production and, in extreme cases, total equipment loss.
WTBs have lightning protection devices, but these are not entirely effective in preventing all lightning strikes, which can result in damage to the blade structure. Damage caused by lightning strikes usually includes holes and burns around the point where the lightning struck the blade [19]; its severity depends on the extent of the impact. Figure 1A illustrates an example of damage from a lightning strike. The blades are also subject to repeated and constant collisions with particles such as raindrops, hail, sand, insects, dust, salt, and oil. This leads to erosion/corrosion of the area first exposed to the wind, known as the leading edge of the blade. Erosion affects the aerodynamic efficiency of the blade, which can reduce electricity generation capacity. Erosion damage is small and superficial in the initial stages, but over time it grows and extends along the entire leading edge. Figure 1B illustrates an example of erosion.
Damage to paint is often superficial and is considered cosmetic, as it does not compromise the structure of the blade. Examples include paint peeling and oil stains, among others. Over time, this type of damage can progress to erosion damage. It usually does not require maintenance intervention but should be monitored. Figure 1C illustrates an example of cosmetic damage. Crack damage is typically the result of material fatigue in the blade components due to prolonged use [8]. Cracks initially start small and shallow but progressively enlarge and deepen, potentially compromising the entire structure of the blade. This type of damage is characterized by a thin, longitudinal shape. Figure 1D illustrates an example of a crack.
3 Proposed Method
The architecture of the proposed method can be subdivided into two stages: the model training stage and the damage identification stage. These two stages are presented below.
3.1 Architecture of the Training Stage
The training stage aims to introduce the necessary steps to obtain a trained model that will be used in the next stage, damage identification. Figure 2 illustrates the architecture of the training stage. Details of all these steps will be shown as follows.
Images Dataset. Photographs of damaged wind blades were made available by a company that provides technical support to several wind farms. Table 1 summarizes the quantity and the minimum and maximum resolution in pixels of the images by damage type. The photographs have varying sizes and large areas unrelated to damage, such as sky background or undamaged regions, and some contain overlaid text indicating the date and time of the photo.
Cropping and Labeling Images. The dataset used in this paper contains high-resolution images in order to better capture damage details. Since the pre-trained general-purpose models used in this work require much smaller images as input, directly resizing the originals could make some of the smaller damages indistinguishable.
Instead, we decided to crop the high-resolution images and tackle the problem of damage classification in each of the crops. The original image is received as input. Then, to define the position of the cuts in the image, a grid is calculated that serves as a guide for making the cuts. To prevent a damaged region from being split across grid cells, we define an overlap between the generated grids. Thus, three parameters are needed to generate the grids: the image size, the grid size, and the overlap size. The overlap size parameter defines the minimum overlap to be considered. If the image size is not evenly divisible by the size of the overlapping grids, the algorithm adds another set of grids (horizontal or vertical) and increases the overlap to achieve a better fit of the grids to the image.
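The grid computation just described can be sketched as follows. This is a minimal illustration under the stated parameters (image size, grid size, minimum overlap), not the authors' actual implementation; the function names are ours, and a grid size larger than the minimum overlap is assumed:

```python
def grid_positions(length, grid, min_overlap):
    """Start offsets of cells of size `grid` along one axis of size `length`,
    overlapping by at least `min_overlap` pixels. Offsets are spread evenly,
    so the actual overlap may exceed the minimum and the last cell ends
    exactly at `length` (assumes grid > min_overlap)."""
    if length <= grid:
        return [0]
    step = grid - min_overlap
    # ceiling division: number of cells needed to cover the axis
    n = -(-(length - grid) // step) + 1
    last = length - grid
    return [round(i * last / (n - 1)) for i in range(n)]

def grid_crops(width, height, grid, min_overlap):
    """All (x, y) top-left corners of the crops covering a width x height image."""
    return [(x, y)
            for y in grid_positions(height, grid, min_overlap)
            for x in grid_positions(width, grid, min_overlap)]
```

For a 1000 × 600 image with 300-pixel crops and a 50-pixel minimum overlap, this yields a 4 × 3 grid whose adjacent crops overlap by at least 50 pixels.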
Using this approach, we built an image dataset containing 1,828 crops. All generated crops were contextually analyzed by a wind blade damage specialist appointed by the wind turbine manufacturing company. Each crop received a label indicating the class to which the image belongs. The final dataset includes 1,243 normal, 195 lightning, 151 erosion, 51 cosmetic, and 188 crack images.
Figure 3 presents the cropping and labeling procedure. The original image passed as an input parameter is displayed in Fig. 3A. Figure 3B shows several crops generated from the grid with overlap. Finally, Fig. 3C highlights in red the crops labeled "lightning damage", while the non-highlighted crops are labeled "normal".
Datasets.
The newly labeled crops were arranged into a dataset used to design the classification models. We reserve 25% of the crops (457 images) for the validation set; the test set contains 15% (206 images), and the remainder (1,165 images) is used for the training set. All sets are stratified by the available classes. In the model training stage, detailed in Sect. 3.1, the training and test sets are re-sampled for each training repetition.
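A stratified split of this kind can be sketched in a few lines. The function below is an illustrative, framework-free reimplementation (in practice a library utility such as scikit-learn's `train_test_split` with `stratify` would typically be used), and the fractions shown match the text:

```python
import random
from collections import defaultdict

def stratified_split(labels, val_frac=0.25, test_frac=0.15, seed=42):
    """Split sample indices into train/val/test sets, preserving the
    per-class proportions. `labels` is one class name per crop."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for idx, lab in enumerate(labels):
        by_class[lab].append(idx)
    train, val, test = [], [], []
    for lab, idxs in by_class.items():
        rng.shuffle(idxs)
        n_val = round(len(idxs) * val_frac)
        n_test = round(len(idxs) * test_frac)
        val += idxs[:n_val]
        test += idxs[n_val:n_val + n_test]
        train += idxs[n_val + n_test:]  # remainder goes to training
    return train, val, test
```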
There is a large imbalance between the classes, with the normal class having the largest number of samples. Thus, a new dataset is created by applying class balancing and data augmentation based on image transformations. The balancing method used is oversampling, a data-level method [20].
The data augmentation process applies methods from the single-image, model-free approach, as defined by [21]. The following transformation functions are applied: vertical flip, horizontal flip, brightness adjustment with a random \(\delta \in [-0.10, 0.10]\), and contrast adjustment with a random factor in [0.7, 1.3]. The brightness delta is added to each color channel of the image, changing its brightness; the contrast factor is likewise applied to each color channel. Each of these functions has a 30% probability of being applied, and for each image, one randomly chosen function has its probability raised to 100%. That is, at least one transformation function is always applied to the image.
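The per-image selection rule (30% per function, one randomly chosen function forced to 100%) and the parameter ranges can be sketched as below; the transform names and function signatures are illustrative, not taken from the authors' code:

```python
import random

TRANSFORMS = ["vertical_flip", "horizontal_flip", "brightness", "contrast"]

def choose_transforms(rng=random):
    """Pick the augmentation functions to apply to one image: each has a
    30% chance, and one randomly chosen transform is always forced on,
    so every image receives at least one transformation."""
    forced = rng.choice(TRANSFORMS)
    return [t for t in TRANSFORMS if t == forced or rng.random() < 0.30]

def random_params(rng=random):
    """Draw the parameter values described in the text."""
    return {
        "brightness_delta": rng.uniform(-0.10, 0.10),  # added to each channel
        "contrast_factor": rng.uniform(0.7, 1.3),      # multiplied per channel
    }
```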
Models. The models are built using TensorFlow 2.12.1, a free and open-source machine learning framework focused mainly on deep learning.
A transfer learning process based on progressive learning [22] is used to optimize model training time. Four models pre-trained on the ImageNet [23] dataset were used; Table 2 lists these models. The pre-trained layers are frozen, and a new output layer with softmax activation is added to classify the five defined classes. This last layer is then fine-tuned using the two generated datasets: with and without balancing and augmentation. Each of the four initial models thus generates two refined models, for a total of eight models.
Metrics.
One of the initial problems faced by a specialist in wind turbine blade damage is having to analyze thousands of photographs produced by technical teams in the field, when only a minority of the images show damage requiring in-depth analysis. To facilitate the specialist's work, it is necessary to select the images with damage and provide a prior classification of the damage they contain. It is therefore very important that the model can precisely identify whether or not an image shows damage.
The metrics analyzed in the model construction process are accuracy, recall, and precision. Models are optimized to reduce the cross-entropy loss. We performed 5 training realizations (repetitions) for each model, in order to evaluate the robustness of the models to the initialization of the last layer's weights based on the mean and standard deviation of the adopted metrics. For each of the five repetitions, the metrics are computed on the test set for the classification between the normal class and the damage classes (lightning, erosion, cosmetic, or crack). The best repetition is then chosen according to the precision metric.
This choice is motivated by the goal of using the model to select only the photos with damage, which are then reviewed by an expert. It is therefore important to minimize the number of false negatives, which occur when images with damage are classified as normal; conversely, a false positive occurs when normal images are classified as damaged. Indicating the absence of a failure when one is established or in progress can have catastrophic results.
As a tiebreaker criterion, the accuracy metric is considered first, followed by recall. Once the best repetition of each model is chosen, the models are evaluated again using the validation set. Finally, the model with the best precision, according to the selection criteria, is evaluated in detail by type of damage (normal, lightning, erosion, cosmetic, or crack).
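The selection rule above (precision first, ties broken by accuracy, then recall) amounts to a lexicographic maximum over the repetitions, which can be sketched as follows; the dictionary keys are illustrative:

```python
def best_repetition(results):
    """Pick the best training repetition: highest precision, with ties
    broken by accuracy and then recall. `results` is a list of dicts
    holding the three metrics for each repetition."""
    return max(results, key=lambda r: (r["precision"], r["accuracy"], r["recall"]))
```

Python compares the key tuples element by element, which implements the stated priority order directly.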
3.2 Architecture of the Damage Identification Stage
The previous best model is used in the damage identification stage, as illustrated in Fig. 4. The steps in Fig. 4A–D were detailed in the previous sections, while Fig. 4E–G will be detailed in the following sections.
Classification of Original Images. The classification models are trained and evaluated using the various crops generated from the original images. The crops are classified using the model with the best generalization and, finally, a vote over the crop results defines a single class for the original image to which they belong. In this analysis, damage predictions take priority over the normal class: for example, if all crops of an original image are predicted as normal except a single crop predicted as erosion, the original image is classified as erosion. This process is used to quickly screen images containing damage.
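The damage-priority vote can be sketched as below; this is our reading of the rule in the text (any damage prediction overrides normal, and among damage classes the most frequent wins), with illustrative class names:

```python
from collections import Counter

DAMAGE = {"lightning", "erosion", "cosmetic", "crack"}

def image_class(crop_predictions):
    """Assign a single class to an original image from its crop predictions.
    The image is 'normal' only when every crop is predicted normal;
    otherwise the most frequent damage class among the crops wins."""
    damage_votes = Counter(p for p in crop_predictions if p in DAMAGE)
    if not damage_votes:
        return "normal"
    return damage_votes.most_common(1)[0][0]
```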
Locating Image Damage. The classification of the crops also allows mapping the damage onto the input blade image. In the classification process, the softmax output layer of the best model produces normalized neuron outputs whose sum equals 1. This value is used as a class intensity for each crop, making it possible to locate the damage in the original image using a per-class heat map.
This damage localization process is illustrated in Fig. 5. An image of lightning damage is shown in Fig. 5A, with the damaged region highlighted and the cutting guide grid shown in sky blue; the region is classified as lightning with an intensity of 0.9938. In Fig. 5B, a heat map is displayed using the intensity value of each crop for the lightning class: the closer the value is to 1.00, the redder the crop area becomes, and the closer to 0.00, the bluer it appears. The final result of the lightning damage localization is presented in Fig. 5C.
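The blue-to-red coloring can be sketched as a simple linear mapping of each crop's softmax intensity to an RGB value. The exact color scale used by the authors is not specified, so this linear interpolation is only an approximation:

```python
def heat_color(intensity):
    """Map a softmax class intensity in [0, 1] to an RGB triple on a
    linear blue-to-red scale: 0.0 -> pure blue, 1.0 -> pure red."""
    intensity = min(max(intensity, 0.0), 1.0)  # clip out-of-range values
    return (round(255 * intensity), 0, round(255 * (1.0 - intensity)))

def class_heatmap(crop_intensities):
    """Color every crop of the image by its intensity for the chosen class."""
    return [heat_color(v) for v in crop_intensities]
```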
4 Experimental Results
4.1 Training Models
All trained models use the following parameters: a learning rate of 0.001, 200 epochs, the Adam optimizer, GPU parallelism, and callbacks. The callbacks used are as follows: a 20% reduction in the learning rate when a plateau is reached in the test set loss metric for 5 epochs, with a minimum learning rate limit of 0.0001; and early stopping of training when the test set loss metric does not decrease after 20 epochs, with the best weights found being restored. The synchronous distributed training strategy is used on two GPUs on the same machine.
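The interaction of the two callbacks can be illustrated with a simplified, framework-free replay of a test-loss curve. This mirrors the behavior of Keras's `ReduceLROnPlateau` and `EarlyStopping` only approximately (it ignores details such as cooldown and exact wait-reset semantics), so treat it as a sketch of the rules stated above rather than the training code:

```python
def simulate_training(losses, lr=0.001, patience_lr=5, patience_stop=20,
                      factor=0.8, min_lr=0.0001):
    """Replay per-epoch test losses through the two callback rules:
    a 20% learning-rate cut after 5 epochs without improvement (with a
    floor of 0.0001) and early stopping after 20 epochs without
    improvement. Returns (final_lr, best_epoch, stop_epoch)."""
    best, best_epoch, wait = float("inf"), 0, 0
    for epoch, loss in enumerate(losses):
        if loss < best:
            best, best_epoch, wait = loss, epoch, 0
        else:
            wait += 1
            if wait % patience_lr == 0:          # plateau: shrink the LR
                lr = max(lr * factor, min_lr)
            if wait >= patience_stop:            # give up; best weights restored
                return lr, best_epoch, epoch
    return lr, best_epoch, len(losses) - 1
```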
The models were trained using 5 repetitions, with the training and test sets being regenerated with new samples each time to evaluate robustness. The training metrics, evaluated on the test set, were used to determine the best model for evaluation on the validation set.
4.2 Test Dataset Results
Results of the models evaluated on the test set for Normal and Damage classes are detailed in Table 3. The best average values are highlighted in bold. The suffix ‘DE’ indicates that the model is trained using the unbalanced training set. The suffix ‘BA’ indicates that training is carried out on the balanced training set, with the application of data augmentation.
The ResNet50-DE model, trained with the unbalanced dataset, has the highest average values for precision (0.990) and accuracy (0.979), while the EfficientNetB2-DE model has the best recall (0.992). When the standard deviation of the precision metric is taken into account, ResNet50-DE is statistically tied with EfficientNetB2-DE (0.976±0.013), EfficientNetB2-BA (0.984±0.011), and VGG16-BA (0.981±0.013).
Comparing the balanced and unbalanced data for each backbone, we observe that the balanced data perform better in the precision metric in three out of four backbones (EfficientNetB2-BA, MobileNetV2-BA, and VGG16-BA). The behavior reverses when analyzing the accuracy and recall metrics, with the unbalanced data performing better in most cases.
4.3 Validation Dataset Results
The best result among the repetitions for each model is selected for evaluation on the validation set. Table 4 summarizes the results obtained based on the classification of the Normal and Damage classes evaluated on the validation set.
The results show that the models achieving the best generalization are EfficientNetB2-BA and VGG16-BA, both with a precision of 0.993. The best accuracy (0.982) and recall (0.997) results are achieved by the MobileNetV2-DE model. Models based on balanced data have better precision in three cases (EfficientNetB2-BA, MobileNetV2-BA, and VGG16-BA), with a tie between ResNet50-DE and ResNet50-BA. According to the tie-breaking criterion indicated in Sect. 3.1, the best model found is EfficientNetB2-BA.
4.4 Input Images Result
The model with the best generalization, EfficientNetB2-BA, is used to classify all generated crops, and voting is applied, as described in Sect. 3.2, to determine a single class for each original image. Figure 6 illustrates the final classification results of the original images, showing an average precision of 96.1%, an average recall of 94.9%, and an overall accuracy of 97.0%. The average is calculated by applying the metrics individually to each class and then computing an unweighted average of these values.
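The unweighted (macro) averaging described above — per-class metrics computed first, then averaged without class weights — can be sketched as follows; the helper is an illustrative reimplementation, not the evaluation code used in the experiments:

```python
def macro_metrics(y_true, y_pred, classes):
    """Per-class precision/recall from label lists, combined into
    unweighted (macro) averages, plus overall accuracy."""
    precisions, recalls = [], []
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precisions.append(tp / (tp + fp) if tp + fp else 0.0)
        recalls.append(tp / (tp + fn) if tp + fn else 0.0)
    accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
    return (sum(precisions) / len(classes),
            sum(recalls) / len(classes),
            accuracy)
```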
5 Conclusions
In this work, we presented a method to identify wind turbine blade damage of four types: lightning, erosion, cosmetic, or crack. The results show that among the eight trained models, the EfficientNetB2-BA model achieved the best generalization performance for classifying crops of the original images into the normal or damage classes, with a precision of 99.3%, as well as excellent performance in the detailed classification of the classes (lightning, erosion, normal, cosmetic, and crack). When applied to the classification of the provided original images, we obtained an overall precision of 96.1%, accuracy of 97%, and recall of 94.5%.
As evidenced by the results, the proposal can assist in image screening and analysis tasks, providing a tool for the specialist to identify damage in wind blades.
In future work, we aim to enhance the precision of damage localization, using the quadtree technique. With this method, we anticipate achieving more precise localization of damaged areas, thereby improving the computing time and overall detection process. Additionally, we plan to enhance the visualization of the damage location through the implementation of a salience map. This will provide a more intuitive and detailed representation of the damage, facilitating better analysis and interpretation. These advancements are expected to contribute to the effectiveness of image-based damage assessment methodologies.
References
Farias, L.M., Sellitto, M.A.: Uso da energia ao longo da história: evolução e perspectivas futuras. Revista Liberato 12(17), 07–16 (2011)
Lage, E.S., Processi, L.D.: Panorama do setor de energia eólica. Banco Nacional de Desenvolvimento Econômico e Social (2013)
Mishnaevsky, L., Branner, K., Petersen, H.N., Beauson, J., McGugan, M., Sørensen, B.F.: Materials for wind turbine blades: an overview. Materials 10(11), 1285 (2017)
Yang, Z., et al.: Detection of wind turbine blade abnormalities through a deep learning model integrating VAE and neural ODE. Ocean Eng. 302, 117689 (2024)
Mishnaevsky, J.R.L.: Root causes and mechanisms of failure of wind turbine blades: overview. Materials 15(9), 2959 (2022)
Du, Y., Zhou, S., Jing, X., Peng, Y., Wu, H., Kwok, N.: Damage detection techniques for wind turbine blades: a review. Mech. Syst. Signal Process. 141, 106445 (2020)
Kaewniam, P., Cao, M., Alkayem, N.F., Li, D.M.E.: Recent advances in damage detection of wind turbine blades: a state-of-the-art review. Renew. Sustain. Energy Rev. 167, 112723 (2022)
Wang, W., Xue, Y., He, C., Zhao, Y.: Review of the typical damage and damage-detection methods of large wind turbine blades. Energies 15(15), 5672 (2022)
Kaewniam, P., Cao, M., Alkayem, N.F., Li, D.M.E.: Recent advances in damage detection of wind turbine blades: a state-of-the-art review. Renew. Sustain. Energy Rev. 167, 112723 (2022)
Moreno, S., Peña, M., Toledo, A., Treviño, R., Ponce, H.: A new vision-based method using deep learning for damage inspection in wind turbine blades. In: IEEE 2018 15th International Conference on Electrical Engineering, Computing Science and Automatic Control (CCE), pp. 1–5. IEEE, Mexico City (2018)
Yang, P., Dong, C., Zhao, X., Chen, X.: The surface damage identifications of wind turbine blades based on ResNet50 algorithm. In: IEEE 2020 39th Chinese Control Conference (CCC), pp. 6340–6344. IEEE, Shenyang (2020)
He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 770–778. IEEE, Las Vegas (2016)
Krizhevsky, A., Sutskever, I., Hinton, G.E.: ImageNet classification with deep convolutional neural networks. In: Advances in Neural Information Processing Systems, vol. 25 (2012)
Shihavuddin, A.S.M., et al.: Wind turbine surface damage detection by deep learning aided drone inspection analysis. Energies 12(4), 676 (2019)
Ren, S., He, K., Girshick, R., Sun, J.: Faster R-CNN: towards real-time object detection with region proposal networks. Adv. Neural Inf. Process. Syst. 28 (2015)
Shihavuddin, A.S.M., Chen, X.: DTU - Drone inspection images of wind turbine. Mendeley Data V2 (2018). https://doi.org/10.17632/hd96prn3nc.2
Foster, A., Best, O., Gianni, M., Khan, A., Collins, K., Sharma, S.: Drone footage wind turbine surface damage detection. In: 2022 IEEE 14th Image, Video, and Multidimensional Signal Processing Workshop (IVMSP), pp. 1–5. IEEE, Piscataway (2022)
Redmon, J., Divvala, S., Girshick, R., Farhadi, A.: You only look once: unified, real-time object detection. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 779–788. IEEE, Las Vegas (2016)
Garolera, A.C., Madsen, S.F., Nissim, M., Myers, J.D., Holboell, J.: Lightning damage to wind turbine blades from wind farms in the US. IEEE Trans. Power Deliv. 31(3), 1043–1049 (2014)
Buda, M., Maki, A., Mazurowski, M.A.: A systematic study of the class imbalance problem in convolutional neural networks. Neural Netw. 106, 249–259 (2018)
Xu, M., Yoon, S., Fuentes, A., Park, D.S.: A comprehensive survey of image augmentation techniques for deep learning. Pattern Recogn. 2023, 109347 (2023)
Iman, M., Arabnia, H.R., Rasheed, K.: A review of deep transfer learning and recent advancements. Technol. MDPI 11(2), 40 (2023)
Russakovsky, O., et al.: ImageNet large scale visual recognition challenge. Int. J. Comput. Vision 115, 211–252 (2015)
Tan, M., Le, Q.: EfficientNet: rethinking model scaling for convolutional neural networks. In: PMLR International Conference on Machine Learning, pp. 6105–6114. Long Beach, California (2019)
Sandler, M., Howard, A., Zhu, M., Zhmoginov, A., Chen, L.C.: MobileNetV2: inverted residuals and linear bottlenecks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 4510–4520. IEEE, Salt Lake City (2018)
Simonyan, K., Zisserman, A.: Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556 (2014)
Acknowledgments
The authors are grateful to Talisson Araujo Figueiredo for supporting the data labeling process. Their expertise significantly contributed to the quality and rigor of this research. Without their support, this work would not have been possible.
Ethics declarations
Disclosure of Interests
The authors have no competing interests to declare that are relevant to the content of this article.
Copyright information
© 2025 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
de Oliveira, A.R.A., de Sá Medeiros, C.M., Ramalho, G.L.B. (2025). Damage Identification of Wind Turbine Blades. In: Paes, A., Verri, F.A.N. (eds) Intelligent Systems. BRACIS 2024. Lecture Notes in Computer Science, vol. 15414. Springer, Cham. https://doi.org/10.1007/978-3-031-79035-5_13
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-79034-8
Online ISBN: 978-3-031-79035-5