Open Access
Recognition of Vibration Dampers Based on Deep Learning Method in UAV Images

Jingjing LIU, Chuanyang LIU, Yiquan WU, Zuo SUN


Summary:

As one of the electrical components in transmission lines, the vibration damper plays a role in suppressing power line vibration, and its recognition is an important task for intelligent inspection. However, due to complex background interference in aerial images, current deep learning algorithms for vibration damper detection often lack accuracy and robustness. To detect vibration dampers more accurately, in this study an improved You Only Look Once (YOLO) model is proposed. Firstly, a damper dataset containing 1900 samples with different scenarios was created. Secondly, the backbone network of YOLOv4 was improved by combining the Res2Net module and Dense blocks, reducing computational consumption and improving training speed. Then, an improved path aggregation network (PANet) structure was introduced into YOLOv4, combining top-down and bottom-up feature fusion strategies to achieve feature enhancement. Finally, the proposed YOLO model and comparative models were trained and tested on the damper dataset. The experimental results and analysis indicate that the proposed model is more effective and robust than the comparative models. More importantly, the average precision (AP) of this model reaches 98.8%, which is 6.2% higher than that of the original YOLOv4 model, and its prediction speed is 62 frames per second (FPS), 5 FPS faster than that of the YOLOv4 model.

Publication
IEICE TRANSACTIONS on Information Vol.E107-D No.12 pp.1504-1516
Publication Date
2024/12/01
Publicized
2024/07/30
Online ISSN
1745-1361
DOI
10.1587/transinf.2024EDP7015
Type of Manuscript
PAPER
Category
Artificial Intelligence, Data Mining

1.  Introduction

The increasing dependence of modern society on electricity poses significant challenges to the patrol and maintenance of power grids. To address this challenge and ensure the safe and reliable operation of the power system, it is necessary to conduct regular inspections of transmission lines [1]. The collaborative inspection mode, which relies mainly on UAV inspection supplemented by manual inspection, greatly improves the safety and reliability of inspection tasks and has gradually become a normalized patrol method in some countries [2], [3]. Consequently, the recognition and fault diagnosis of electrical components in transmission lines based on UAV images has become a hot research direction [4]-[6].

The vibration damper is an important metal fitting in high-voltage transmission lines, and its main function is to suppress power line vibration [7]. When a power line vibrates, the vibration dampers also move up and down, generating forces that are out of phase with, or even opposite to, the vibration of the power line. This reduces the amplitude of the vibration and may even eliminate it, ensuring the normal operation of the transmission line. However, vibration dampers are installed outdoors, where they are exposed to wind, sunlight, and rain all year round, making them susceptible to rust. Once a vibration damper rusts, it may fall off as the power line vibrates, which not only endangers people or objects under the transmission line, but also affects its normal operation. Therefore, it is necessary to identify vibration dampers in aerial images of transmission lines [8]. Aerial images of vibration dampers in transmission lines are shown in Fig. 1, where the dampers are marked with red rectangular boxes.

Fig. 1  Aerial images of vibration dampers

In recent years, with the rapid development of artificial intelligence technology, a large number of methods have been proposed for object detection in aerial images of high-voltage transmission lines [9]. According to statistics, the electrical components studied for detection in transmission lines over the past decade mainly fall into four categories: insulators [10]-[12], power lines [13], [14], power towers [15], and metal fittings [16]. Early researchers mostly applied traditional image processing methods to UAV images first, and then performed object detection and fault diagnosis on the processed data. Reference [17] proposed a recognition method for iced insulators based on texture feature description operators. Reference [7] combined histogram equalization, morphological processing, and the RGB color model to detect rust defects on vibration dampers. Reference [18] applied Haar-like features and cascaded AdaBoost classifiers to identify dampers in complex backgrounds. Reference [19] utilized spatial morphology features to detect faults in glass and ceramic insulators.

Traditional image processing methods have produced many valuable research results; however, they typically perform detection by processing fixed features of a single object (such as edges, color, texture, or contours), which makes them difficult to apply to detecting multiple objects simultaneously. In addition, traditional methods are not robust in the complex and ever-changing environment of high-voltage transmission lines; they are easily affected by factors such as light intensity, filming angle, scale changes, and foreign-object occlusion, making them unsuitable for practical detection tasks.

Currently, with the development of object detection technology based on deep learning, representative network algorithms (e.g., Regions with Convolutional Neural Networks (R-CNN) [20], Faster R-CNN [21], Region-based Fully Convolutional Networks (R-FCN) [22], Mask R-CNN [23], Single Shot MultiBox Detector (SSD) [24], YOLO [25], [26], RetinaNet [27], CornerNet [28], CenterNet [29], etc.) have achieved excellent results on ImageNet, MS COCO, Pascal VOC, and other standard datasets. Therefore, applying deep learning technology to object detection and fault diagnosis of transmission lines has also become the mainstream trend in this field [3], [4], [10]-[12].

According to whether region candidate boxes are generated, object detection algorithms based on deep learning are mainly divided into two categories: two-stage algorithms and one-stage algorithms. The two-stage algorithm transforms the object detection process into region proposal generation followed by classification, achieving good detection performance on public datasets; the representative algorithm is Faster R-CNN. Reference [8] used Faster R-CNN for defect detection of vibration dampers; the detection accuracies for four kinds of defects were all above 90%, and the average accuracy over all defects was 5.4% higher than that of the SSD algorithm. Reference [30] performed insulator detection based on Faster R-CNN, with precision and recall rates of 96.6% and 97.1%, respectively. In order to improve the accuracy of vibration damper detection, Ref. [31] used ResNeXt-101 as the backbone network of Cascade R-CNN; the detection accuracy on the test dataset reached 91.2%, which is 5% higher than that of the RetinaNet algorithm. To detect occluded insulators, Ref. [32] improved the Faster R-CNN algorithm using an anchor generation method and non-maximum suppression (NMS), and the average accuracy of insulator detection increased by 17% compared to the original model. Although two-stage algorithms achieve higher detection accuracy than one-stage algorithms, they cannot meet the requirements of real-time detection due to the complexity of the network models and their large number of parameters.

The one-stage algorithm predicts the final result in a single step without generating candidate regions; it is an end-to-end object detection method, and the representative algorithms are SSD and YOLO. Reference [6] applied the SSD model to an automatic transmission line detection system, achieving good results (the average accuracy of insulator detection reached 94.12%, and the running time per image was only 23 ms). Reference [33] used the YOLOv2 algorithm to identify insulators in aerial images with background interference; the model was first trained on the training set and then evaluated on the testing set, reaching an average recognition accuracy of 88% and an average prediction speed of about 25 FPS. Reference [34] improved the YOLOv3 model by using ResNet50 as the backbone network, achieving better detection results than the original YOLOv3 model, e.g., 50 FPS and 89.96% accuracy. In our previous work [35], YOLOv3 was improved with a Cross Stage Partial Network (CSPNet) and DenseNet to detect defects in insulators; compared with the original YOLOv3 model, the precision and recall of the proposed model improved significantly. Reference [36] employed YOLOv4 to detect and recognize targets in autonomous driving; the detection accuracy of YOLOv4 was higher than that of the SSD and YOLOv3 models, and its inference time was only 21 ms. The experimental results showed that the YOLOv4 model can meet real-time requirements and be applied to vision-based real-time object detection and recognition tasks.

In summary, the YOLO model can meet the requirements of real-time detection tasks and balances detection accuracy and speed well, making it one of the preferred deep learning algorithms in the current engineering field. Nevertheless, there are still many difficulties in recognizing vibration dampers in UAV images with deep learning algorithms, such as dampers of different sizes, complex and variable backgrounds, small targets, and insufficient samples. To address these issues and detect vibration dampers more accurately, in this study an improved YOLOv4 model is proposed for vibration damper detection in aerial images. Specifically, to address the issue of insufficient samples, we collected UAV images of transmission lines and constructed a dataset named Damper. To improve the sensitivity and feature extraction ability of YOLOv4 for small targets, Res2Net modules were introduced into the network, replacing the original ResNet modules. To enhance the fusion of features from vibration dampers of different sizes, an improved PANet was adopted in the proposed network.

The remainder of this study is organized as follows. A detailed description of the proposed network (improved YOLOv4) is presented in Sect. 2. Experimental results and discussion are presented in Sect. 3. Finally, the conclusion of this study is given in Sect. 4.

2.  Materials and Methods

Presently, transmission line inspection based on UAVs has become an important means of power inspection, and each inspection generates massive, high-definition images. However, the large number of images and videos captured by UAVs must be checked by human eyes, with potential hidden dangers judged from experience. The sheer volume of images and videos imposes an enormous workload on manual inspection, making it difficult to grasp the operating state of transmission lines in a timely manner. Meanwhile, differences in professional skill, visual fatigue, and limited attention easily lead to errors and omissions in human inspection. To address these issues, an improved YOLOv4 model is proposed for autonomous real-time detection of vibration dampers in aerial images. Firstly, a camera-equipped UAV inspects transmission lines automatically, and the collected images are stored on a local memory card or transmitted to a local server. Secondly, the aerial images captured by the UAV are cropped and labeled, and the dataset named Damper is created; it is divided into a training set and a testing set, used to train and test the improved YOLOv4, respectively. Finally, UAV images are sent to the trained network model for real-time vibration damper detection. The process of the proposed method is shown in Fig. 2.

Fig. 2  Process of the proposed method for vibration dampers real-time detection.

2.1  The Entire Architecture of the Proposed Model

Currently, thanks to the effectiveness of deep learning technology in extracting image features for high-dimensional representation, it has achieved explosive growth in the field of object detection. At the same time, a series of object detection algorithms based on deep learning have been proposed, inspiring researchers in various fields to develop their own applications on this basis. The YOLO model may be the most popular object detection algorithm in practical applications, and improving the network on the basis of YOLO can easily achieve the expected results.

The YOLOv3 algorithm can meet the requirements of real-time detection tasks and has outstanding advantages in average accuracy and detection speed; it has become one of the preferred deep learning algorithms in the current engineering field. The YOLOv4 algorithm made many improvements and optimizations on the basis of YOLOv3, enabling it to achieve better detection results with the same execution efficiency. YOLOv4 is a one-stage object detection algorithm with strong real-time performance, which consists of three parts: a backbone network for feature extraction, a neck for feature fusion, and a detection head for classification and regression. YOLOv4 builds on YOLOv3’s “Darknet 53\(+\)FPN\(+\)YOLO-Head” structure, integrating the model ideas and training techniques of excellent deep neural network algorithms from recent years. On the basis of Darknet 53, the backbone network of YOLOv4 integrates the idea of the Cross Stage Partial Network (CSPNet) [37] to form CSPDarknet 53, which reduces network computation while maintaining accuracy. The neck of YOLOv4 adopts PANet [38] and Spatial Pyramid Pooling (SPP) [39], which fuse the deep and shallow features output from the backbone network, alleviating the loss of shallow features caused by Feature Pyramid Networks (FPN) [40].

Vibration dampers in aerial images are small and subject to background interference, and UAV-based detection has high real-time requirements. Therefore, this paper takes vibration dampers in aerial images as the research object and, on the basis of the YOLOv4 object detection algorithm, combines Res2Net [41], DenseNet [42], and an improved PANet structure to form an improved YOLOv4 model. In this work, the k-means\(++\) clustering algorithm is first used to optimize the anchor boxes, and the optimized YOLOv4 model is used as the baseline. Specifically, in the backbone network, Res2Net modules are introduced into CSPDarknet to replace the ResNet modules, improving sensitivity to small targets and enlarging the receptive fields. In addition, Dense blocks are applied to the \(52 \times 52\) and \(26 \times 26\) feature layers to achieve feature reuse at low resolution, improving the feature extraction ability. In the neck, to obtain more localization information about vibration dampers, the effective feature layers \(13 \times 13\), \(26 \times 26\), \(52 \times 52\), and \(104 \times 104\) are employed to fuse shallow, high-resolution features with high-level, low-resolution semantic features. Furthermore, to avoid gradient vanishing and enhance the semantic information at different resolutions, Residual units replace the five sequential convolutions in PANet. The entire structure of the proposed YOLO model is shown in Fig. 3; it is composed of the backbone network, SPP, PANet, and YOLO heads.
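The paper does not give the details of its k-means\(++\) anchor optimization, but the general procedure can be sketched as follows: cluster the ground-truth box (width, height) pairs with k-means++ seeding and Lloyd updates. This minimal NumPy sketch uses the standard Euclidean distance for clarity (YOLO implementations often use a \(1-\mathrm{IoU}\) distance instead); the function name and toy data are illustrative, not from the paper.

```python
import numpy as np

def kmeans_pp_anchors(boxes, k=9, iters=50, seed=0):
    """Cluster (width, height) pairs into k anchor boxes.

    boxes : (N, 2) array of ground-truth box widths and heights.
    Uses k-means++ seeding followed by standard Lloyd iterations.
    """
    rng = np.random.default_rng(seed)
    # k-means++ seeding: prefer points far from existing centers
    centers = [boxes[rng.integers(len(boxes))]]
    for _ in range(k - 1):
        d2 = np.min([((boxes - c) ** 2).sum(1) for c in centers], axis=0)
        centers.append(boxes[rng.choice(len(boxes), p=d2 / d2.sum())])
    centers = np.array(centers, dtype=float)
    # Lloyd iterations: assign each box to its nearest center, re-average
    for _ in range(iters):
        dists = ((boxes[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = dists.argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = boxes[labels == j].mean(0)
    return centers[np.argsort(centers.prod(1))]  # sort by area, as YOLO does

# toy usage on synthetic damper-like box sizes (pixels)
wh = np.random.default_rng(1).uniform(10, 120, size=(200, 2))
anchors = kmeans_pp_anchors(wh, k=9)
```

The nine resulting centers would then replace the default YOLOv4 anchors before training.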

Fig. 3  The entire structure of the proposed YOLO model.

2.2  Feature Extraction Network of the Proposed Model

As the number of CNN layers increases, the resolution of feature maps decreases accordingly. CNNs can continuously transform local and shallow information into global and high-level semantic information, thereby extracting high-quality image features. The deeper the network layer, the better the feature extraction. However, when the network model reaches saturation, the accuracy of detection begins to decline, so blindly increasing the network depth will lead to higher training errors. To address this issue, Ref. [43] proposed the residual network structure ResNet, which adopted shortcut connections to obtain deeper network structures and alleviate gradient vanishing. In order to enhance the ability of feature expression, YOLOv3 employed ResNet in the process of feature extraction. To further expand the receptive fields of each network layer, this study applies Res2Net modules to the backbone of the YOLO model to extract more effective features and fully utilize the features within a single layer.

2.2.1  The Structure of Res2Net and CSPR

The Res2Net module mainly uses a set of cascaded Convs \(3 \times 3\), which gives the module a certain depth and residual connectivity. In the feature extraction network of the improved YOLOv4, all the Convs \(3 \times 3\) in the original Residual unit are replaced with the Res2Net module, as shown in Fig. 4.

Fig. 4  The structure of Res2Net module in improved YOLOv4.

The working process of the Res2Net module is as follows. Firstly, a Conv \(1 \times 1\) performs a transition operation between layers on the input feature maps. The output feature maps are then split into 4 groups, each group being a convolution channel Xi, \(\mathrm{i} \in \{1,\, 2,\, 3,\, 4\}\); the output of each convolution channel is denoted Yi, \(\mathrm{i} \in \{1,\, 2,\, 3,\, 4\}\). Secondly, a Conv \(3 \times 3\), denoted Ki, is applied in each channel with \(\mathrm{i} \in \{2,\, 3,\, 4\}\). Specifically, the first channel X1 does not undergo a convolution operation and retains the output of the original Conv \(1 \times 1\). Except for channels X1 and X2, each channel first fuses the convolution output of the previous channel, then applies its convolution operation Ki to the fused features, and the extracted features are output to the corresponding channel Yi. As the channel index increases, the features extracted by the later channels become richer and carry more high-level semantic information. The output of channel Yi can be expressed by Formula (1). Finally, all the feature maps of the channels Yi are concatenated together, and feature fusion is realized through a Conv \(1 \times 1\).

\[\begin{equation*} \mathrm{Yi} = \left\{ \begin{array}{@{\,}c@{}} \mathrm{X}1,\ \mathrm{i} = 1 \\ \mathrm{K}2 (\mathrm{X}2),\ \mathrm{i}= 2 \\ \mathrm{Ki} (\mathrm{Xi} + \mathrm{Y}(\mathrm{i} - 1)),\ 2 < \mathrm{i} \leq 4 \end{array} \right. \tag{1} \end{equation*}\]
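The hierarchy of Formula (1) can be sketched as a small PyTorch module. This is a minimal illustration of the split-fuse-convolve pattern, not the paper's exact configuration; the class name, channel count, and kernel widths are assumptions.

```python
import torch
import torch.nn as nn

class Res2NetBlock(nn.Module):
    """Minimal sketch of the Res2Net hierarchy in Formula (1).

    The input passes a 1x1 conv, is split into 4 groups X1..X4;
    Y1 = X1 unchanged, Y2 = K2(X2), and each later group is fused
    with the previous output before its 3x3 conv: Yi = Ki(Xi + Y(i-1)).
    """
    def __init__(self, channels=64, scales=4):
        super().__init__()
        assert channels % scales == 0
        self.scales = scales
        w = channels // scales
        self.reduce = nn.Conv2d(channels, channels, 1)  # transition conv
        self.convs = nn.ModuleList(
            [nn.Conv2d(w, w, 3, padding=1) for _ in range(scales - 1)])
        self.fuse = nn.Conv2d(channels, channels, 1)    # final 1x1 fusion

    def forward(self, x):
        xs = torch.chunk(self.reduce(x), self.scales, dim=1)
        ys = [xs[0]]                       # Y1 = X1
        y = self.convs[0](xs[1])           # Y2 = K2(X2)
        ys.append(y)
        for i in range(2, self.scales):    # Yi = Ki(Xi + Y(i-1))
            y = self.convs[i - 1](xs[i] + y)
            ys.append(y)
        return self.fuse(torch.cat(ys, dim=1))

block = Res2NetBlock(channels=64)
out = block(torch.randn(1, 64, 13, 13))  # spatial size and channels preserved
```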

The Res2Net module divides the feature maps into multiple groups for feature extraction, which has two advantages. On the one hand, information redundancy is reduced: each group processes only 1/4 of the feature maps compared with the original Conv \(3 \times 3\). On the other hand, compared with a cascaded feature extraction operation over all feature dimensions, hierarchical feature extraction reduces the number of network parameters and, thanks to the grouping structure, does not introduce much extra computation. Moreover, the Conv \(3 \times 3\) on each group of feature maps in Res2Net can utilize the previous feature maps, so its output obtains a larger receptive field. Therefore, the Res2Net module increases the scale of a single layer, expands the range of the receptive fields, and better utilizes contextual information. By fully combining contextual information, classifiers can more easily detect specific targets. Meanwhile, extracting features in a multi-scale manner enhances the semantic representation and feature extraction capabilities of the network.

CSPNet is a backbone network proposed by Wang et al. in 2020 that enhances the learning ability of CNNs. In this study, CSPNet and the Res2Net module are combined into the CSPR structure to extract image features of the vibration dampers. The structure of CSPR is shown in Fig. 5.

Fig. 5  The structure of CSPR.

Specifically, a Conv \(3 \times 3\) with stride 2 reduces the size of the input feature layers by half. Then, the feature layers are divided into two parts: Shortconv and Mainconv. Shortconv is a large residual edge that is connected directly to the end after a convolution operation. In Mainconv, the main part, the Res2Net module is repeated n times: the number of channels is adjusted by a Conv \(1 \times 1\), feature maps are extracted by a set of Convs \(3 \times 3\), the output feature maps are concatenated with the small residual edge, and the number of channels is adjusted by another Conv \(1 \times 1\) to match Shortconv. Finally, Shortconv and Mainconv are connected to form the output feature layers. The values of n in the different feature layers of the feature extraction network are 1, 2, 4, 4, and 4, respectively.

2.2.2  The Architecture of Dense Blocks

In UAV images of vibration dampers, due to the influence of complex backgrounds, illumination, over-exposure, and different filming distances and angles, it is difficult to recognize vibration dampers accurately. In addition, the feature extraction network uses fewer dimensions at each down-sampling step and loses more feature information, resulting in insufficient target information in the feature maps of the detection layer. To solve this problem, image feature maps are reused in the feature extraction network through Dense blocks. Specifically, Dense-blk1 is added in front of the \(52 \times 52\) effective feature layers and Dense-blk2 is added in front of the \(26 \times 26\) effective feature layers. A Dense block makes the feature extraction network more compact; as shown in Fig. 6, each convolution layer obtains the information of all previous convolution layers, that is, the Lth layer receives all the previous (L-1) layers as input, as expressed in Eq. (2).

\[\begin{equation*} \mathrm{X}_{\mathrm{L}} = \mathrm{H}_{\mathrm{L}} ([\mathrm{X}_0,\, \mathrm{X}_1, \cdots, \mathrm{X}_{\mathrm{L}-1}]) \tag{2} \end{equation*}\]

\([\mathrm{X}_0,\, \mathrm{X}_1, \cdots, \mathrm{X}_{\mathrm{L}-1}]\) represents the output feature layers of \(\mathrm{X}_0,\, \mathrm{X}_1, \cdots, \mathrm{X}_{\mathrm{L}-1}\) concatenated in series. \(\mathrm{H}_{\mathrm{L}}\) denotes a nonlinear transformation function; BN-ReLU-Conv (\(1 \times 1\)) and BN-ReLU-Conv (\(3 \times 3\)) are the commonly used functions, composed of batch normalization (BN), a rectified linear unit (ReLU), and convolution (Conv).

Fig. 6  The architecture of Dense block.

In the network architecture of Dense-blk1, firstly, a Conv \(1 \times 1\) with 128 filters is used to adjust the number of feature layers, and the input feature layers X0 are \(52 \times 52 \times 128\). Then the functions H are represented by BN-ReLU-Conv (\(1 \times 1 \times 32\)) and BN-ReLU-Conv (\(3 \times 3 \times 32\)), yielding the feature layers [X0, X1], [X0, X1, X2], and [X0, X1, X2, X3], whose feature maps are \(52 \times 52 \times 160\), \(52 \times 52 \times 192\), and \(52 \times 52 \times 224\), respectively. After applying BN-ReLU-Conv (\(1 \times 1 \times 32\)) and BN-ReLU-Conv (\(3 \times 3 \times 32\)) to the layers [X0, X1, X2, X3], the feature layers X0, X1, X2, X3, and X4 are concatenated as [X0, X1, X2, X3, X4]; the output feature layers [X0, X1, X2, X3, X4] are \(52 \times 52 \times 256\) and are used as the input feature layers of CSPR (\(52 \times 52\)). Similarly, in the network architecture of Dense-blk2, a Conv \(1 \times 1\) with 256 filters is used to adjust the number of feature layers, and the input feature layers X0 are \(26 \times 26 \times 256\). BN-ReLU-Conv (\(1 \times 1 \times 64\)) and BN-ReLU-Conv (\(3 \times 3 \times 64\)) are employed as the nonlinear transformation functions. The final output feature layers are \(26 \times 26 \times 512\) and are used as the input feature layers of CSPR (\(26 \times 26\)).
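The channel counts above follow directly from the Dense block's concatenation rule: each layer appends a fixed growth rate (32 for Dense-blk1, 64 for Dense-blk2) to everything produced so far. A small bookkeeping helper (illustrative, not implementation code) makes this arithmetic explicit:

```python
def dense_block_channels(c_in, growth, num_layers):
    """Channel count after each concatenation in a Dense block.

    Layer L receives all previous outputs, so the channel count
    grows by `growth` per layer: c_in + g, c_in + 2g, ...
    """
    return [c_in + growth * i for i in range(1, num_layers + 1)]

# Dense-blk1: input 52x52x128, growth rate 32, four layers
blk1 = dense_block_channels(128, 32, 4)   # [160, 192, 224, 256]
# Dense-blk2: input 26x26x256, growth rate 64, four layers
blk2 = dense_block_channels(256, 64, 4)   # ends at 512, as stated above
```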

2.3  Feature Pyramid Network

Considering that the feature extraction network performs a series of convolutional operations and down-sampling, it has rich global semantic information. In order to further obtain multi-scale local feature information, the SPP structure is added after the last feature layer of the feature extraction network. The SPP structure can greatly increase the receptive fields of the last feature layer and isolate the most important context features, thereby obtaining richer local feature information.

Generally speaking, shallow feature layers have rich detail and positional information. However, as the feature layers deepen, positional information decreases at lower resolution while semantic information increases. In this study, a feature fusion strategy is proposed to improve the feature pyramid network. The feature layer P2 (\(104 \times 104\)), Residual modules, and shortcut connections of the effective feature layers (C3, C4, C5) are introduced into the feature pyramid network, and feature representation is enhanced through top-down and bottom-up fusion strategies to further achieve feature reuse. The structure of the feature pyramid network is shown in Fig. 7.

Fig. 7  The structure of feature pyramid network.

As shown in Fig. 7, the feature fusion strategy for multi-scale prediction is as follows. Firstly, four effective feature layers (C2, C3, C4, C5) are extracted from the backbone network of CSPRs. Secondly, SPP is performed on the feature layers C5 to obtain the feature layers P5, and P5 after up-sampling is concatenated with C4 after a Conv \(1 \times 1\) operation, yielding the feature layers P4; the feature layers P3 and P2 are obtained similarly. Then, P2 (after a Residual module and down-sampling), P3 (after a Residual module), and the effective feature layers C3 are fused to generate the large feature layers (LFL, \(52 \times 52\)). P3 (after a Residual module and down-sampling), P4 (after a Residual module), and the effective feature layers C4 are fused to generate the medium feature layers (MFL, \(26 \times 26\)). P4 (after a Residual module and down-sampling), P5 (after a Residual module), and the effective feature layers C5 are fused to obtain the small feature layers (SFL, \(13 \times 13\)). Finally, LFL (\(52 \times 52\)), MFL (\(26 \times 26\)), and SFL (\(13 \times 13\)) are connected to the YOLO Heads (\(52 \times 52\), \(26 \times 26\), \(13 \times 13\)) via Residual modules, respectively.
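The resolution flow of this fusion strategy can be checked with simple size bookkeeping. The sketch below tracks only the spatial sizes (channels omitted); it is an illustration of the top-down and bottom-up paths, not network code.

```python
# Spatial sizes of the effective feature layers from the backbone
C = {"C2": 104, "C3": 52, "C4": 26, "C5": 13}

def upsample(s):    # x2 up-sampling in the top-down path
    return s * 2

def downsample(s):  # stride-2 down-sampling in the bottom-up path
    return s // 2

# Top-down path: P5 -> P4 -> P3 -> P2 (upsample, then concat with Ci)
P5 = C["C5"]         # 13 (SPP leaves the spatial size unchanged)
P4 = upsample(P5)    # 26, concatenated with C4
P3 = upsample(P4)    # 52, concatenated with C3
P2 = upsample(P3)    # 104, concatenated with C2

# Bottom-up path: fuse downsampled Pi with P(i+1) and C(i+1)
LFL = downsample(P2)  # 52, fused with P3 and C3
MFL = downsample(P3)  # 26, fused with P4 and C4
SFL = downsample(P4)  # 13, fused with P5 and C5
```

The three outputs match the YOLO Head sizes \(52 \times 52\), \(26 \times 26\), and \(13 \times 13\).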

3.  Experiments Results and Discussion

This study is conducted in an experimental environment with the Windows 10 system, using the PyTorch 1.8.0 deep learning framework and an NVIDIA GeForce RTX 3080 GPU for training and testing.

3.1  Dataset Preparation

At present, there is no publicly available dataset for vibration damper detection, so data collection is an important part of this study. First, a camera-equipped UAV is used to automatically inspect the transmission lines, and the captured images are stored on a local memory card or transmitted to a local server. Then, a total of 1900 UAV images covering common aerial scenes are cropped, as shown in Fig. 8. Finally, a dataset named Damper is constructed. All the UAV images are resized to \(416 \times 416\) pixels, and the ground truth of the vibration dampers is labeled with the LabelImg tool. Among them, 1330 images are randomly selected as the training set, and the remaining 570 images are assigned to the testing set.
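The random 1330/570 split described above (a 70/30 split of the 1900 images) can be sketched as follows; the seed is arbitrary, since the paper does not specify one.

```python
import random

def split_dataset(image_ids, train_size=1330, seed=42):
    """Random train/test split of the Damper dataset:
    1900 images -> 1330 training / 570 testing."""
    ids = list(image_ids)
    random.Random(seed).shuffle(ids)  # deterministic shuffle for repeatability
    return ids[:train_size], ids[train_size:]

train_ids, test_ids = split_dataset(range(1900))
```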

Fig. 8  The samples of vibration damper with diverse scenes.

3.2  Evaluation Metrics

In order to deploy the research results of vibration damper detection on embedded devices for on-line transmission line inspection, it is necessary to evaluate the performance of the network model and clarify its advantages and disadvantages. In deep learning classification tasks, there are four common prediction results: True Positive (TP), False Positive (FP), True Negative (TN), and False Negative (FN). Specifically, TP is the number of positive samples predicted correctly, FP is the number of negative samples predicted to be positive, TN is the number of negative samples predicted correctly, and FN is the number of positive samples predicted to be negative. TP, FP, TN, and FN describe the relationship between the prediction results and the ground truth.

The commonly used performance evaluation indexes for target detection include Precision, Recall, F1-score, and average precision (AP), defined in Formulas (3)-(6). Specifically, Precision is the ratio of correctly predicted objects to all predicted objects, measuring the precision of the network model. Recall is the ratio of correctly predicted objects to all objects that should be predicted, measuring the recall rate of the network model. The F1-score is the harmonic mean of Precision and Recall. AP is the area under the Precision-Recall curve, which takes Precision as the vertical axis and Recall as the horizontal axis.

\[\begin{align} & \mathit{Precision} = \frac{TP}{TP + FP} \tag{3} \\ & \mathit{Recall} = \frac{TP}{TP + FN} \tag{4} \\ & \mathit{F1} - \mathit{score} = \frac{2 \mathit{Precision} \cdot \mathit{Recall}} {\mathit{Precision} + \mathit{Recall}} \tag{5} \\ & AP = \int_0^1 P(R) dR \tag{6} \end{align}\]
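Formulas (3)-(6) translate directly into code. In this sketch the AP integral of Eq. (6) is approximated by the trapezoidal rule over sampled (Recall, Precision) points; note that detection benchmarks often use interpolated variants of AP instead.

```python
def precision(tp, fp):
    # Eq. (3): fraction of predictions that are correct
    return tp / (tp + fp)

def recall(tp, fn):
    # Eq. (4): fraction of ground-truth objects that are found
    return tp / (tp + fn)

def f1_score(p, r):
    # Eq. (5): harmonic mean of precision and recall
    return 2 * p * r / (p + r)

def average_precision(recalls, precisions):
    """Eq. (6), approximated numerically: area under the P-R curve.
    `recalls` must be sorted in increasing order."""
    ap = 0.0
    for i in range(1, len(recalls)):
        ap += (recalls[i] - recalls[i - 1]) * \
              (precisions[i] + precisions[i - 1]) / 2
    return ap

# toy example: 90 TP, 10 FP, 5 FN
p = precision(90, 10)   # 0.9
r = recall(90, 5)       # 90/95
f1 = f1_score(p, r)
```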

3.3  Experimental Results and Analysis
3.3.1  Comparison of Different Network Models

In order to verify the performance of the proposed model and evaluate its effectiveness in vibration damper detection, comparative experiments are performed on the proposed model and mainstream target detection network models (Faster R-CNN, SSD, YOLOv3, YOLOv4, YOLOv4-tiny, YOLOv5s, and YOLOx). For a fair comparison, all the network models are trained and then tested on the Damper dataset; the experimental results for vibration damper detection are listed in Table 1.

Table 1  The experimental effects of eight network models.

Specifically, the Precision values of the eight detection models are: Faster R-CNN (89.2%), SSD (88.1%), YOLOv3 (90.3%), YOLOv4 (91.2%), YOLOv4-tiny (81.8%), YOLOv5s (92.8%), YOLOx (98.3%), and the proposed model (98.1%). It can be seen that the detection precision of YOLOv3, YOLOv4, YOLOv5s, YOLOx, and the proposed model all exceed 90%. Among them, the detection precision of the proposed model is as high as 98.1%, only 0.2% lower than that of YOLOx. The Recall rates of all detection models are: Faster R-CNN (91.5%), SSD (90.2%), YOLOv3 (92.1%), YOLOv4 (95.4%), YOLOv4-tiny (83.7%), YOLOv5s (94.6%), YOLOx (97.5%), and the proposed model (98.7%). Numerically, the proposed model has the highest recall rate, reaching 98.7%, which is about 3%, 4%, and 1% higher than YOLOv4, YOLOv5s, and YOLOx, respectively. The F1-score values of the eight detection models are: Faster R-CNN (90.3%), SSD (89.1%), YOLOv3 (91.2%), YOLOv4 (93.3%), YOLOv4-tiny (82.7%), YOLOv5s (93.7%), YOLOx (97.9%), and the proposed model (98.4%). It can be seen that the proposed model has the highest F1-score, reaching 98.4%, which is 8.1%, 7.2%, 5.1%, 4.7%, and 0.5% higher than that of Faster R-CNN, YOLOv3, YOLOv4, YOLOv5s, and YOLOx, respectively.

The AP values of all the detection models are: Faster R-CNN (90.0%), SSD (89.3%), YOLOv3 (90.5%), YOLOv4 (92.6%), YOLOv4-tiny (79.2%), YOLOv5s (94.4%), YOLOx (98.7%), and the proposed model (98.8%). It can be seen that the AP value of the proposed model is the highest among the eight models, 6.2% higher than that of YOLOv4. Faster R-CNN is representative of two-stage detection networks, and its AP value for vibration damper detection is 8.8% lower than that of the proposed model. As one of the excellent representatives of one-stage detection networks, SSD has an AP value 9.5% lower than that of the improved YOLOv4. These results indicate that the proposed detection model is more accurate than the comparative network models.

The experimental results show that the proposed model performs well in vibration damper detection. All indicators of the proposed model are superior to those of YOLOv4, and its detection performance is even slightly better than that of the excellent object detection algorithm YOLOx. In summary, the proposed model produces more accurate and reliable detection results and has advantages over mainstream object detection algorithms in vibration damper detection.

Vibration damper images captured by UAV usually contain diverse backgrounds, e.g., sky, farmland, forest, and buildings. Moreover, vibration dampers in UAV images appear at multiple scales because of the different filming angles and distances in real-world applications. Some typical images with different backgrounds are selected to validate the accuracy of the proposed model in vibration damper detection, as shown in Fig. 9, which exhibits experimental scenes with backgrounds of sky, farmland, forest, and buildings, as well as a scene in which the color of the damper is close to the background. As can be seen from the figure, all the vibration dampers in these images are accurately predicted by the proposed model with a high level of confidence.

Fig. 9  Experimental results with different scenes conducted by the proposed model.

3.3.2  Ablation Experiments

The ablation experiment is a common experimental method in deep learning, mainly used to analyze the influence of different network parts on the proposed model. To further analyze the impact of each improvement made to the YOLOv4 model, ablation experiments are conducted in this study. The improvements are divided into five groups, each trained and tested separately. The first group is the original YOLOv4 model. In the second group, Res2Net modules are introduced into the feature extraction network to replace the original ResNet modules. In the third group, Dense blocks are added to the backbone network on the basis of the second group. The fourth group uses the improved feature pyramid network structure on the basis of the third group. The fifth group adds Residual modules on the basis of the fourth group; that is, the fifth group is the model proposed in this study. The experimental results of the five groups are shown in Table 2.
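Since each group enables one more component than the previous one, such runs are naturally organized as cumulative feature toggles. One way to express this (field names are hypothetical, not from the paper's released code):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AblationConfig:
    """Feature toggles for one ablation run (field names are illustrative)."""
    res2net: bool = False        # group 2+: Res2Net modules replace ResNet
    dense_blocks: bool = False   # group 3+: Dense blocks in the backbone
    improved_fpn: bool = False   # group 4+: shortcut connections in the FPN
    residual_head: bool = False  # group 5: extra Residual modules

# Groups 1-5 as described above: each group adds one component to the previous.
GROUPS = [
    AblationConfig(),                                                    # 1: YOLOv4
    AblationConfig(res2net=True),                                        # 2
    AblationConfig(res2net=True, dense_blocks=True),                     # 3
    AblationConfig(res2net=True, dense_blocks=True, improved_fpn=True),  # 4
    AblationConfig(res2net=True, dense_blocks=True, improved_fpn=True,
                   residual_head=True),                                  # 5: proposed
]
```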

Table 2  Comparison of ablation experimental results.

From Table 2, it can be seen that in the first experiment, the AP of the original YOLOv4 in vibration damper detection is 92.6% at 57 FPS. In the second experiment, the introduction of Res2Net modules into the feature extraction network increases the AP by 3.5% and the FPS by 7. Compared with the ResNet module, the Res2Net module offers better semantic representation and stronger feature extraction capability; it eliminates most computational bottlenecks and reduces memory consumption, thereby improving both detection speed and accuracy. In the third experiment, Dense blocks are added on the basis of the second group, achieving an AP of 96.5% at 63 FPS. Although this sacrifices 1 FPS compared with the second group, the AP is 0.4% higher, because the Dense blocks enable feature reuse at low resolution. In the fourth experiment, the improvement of the feature pyramid network on the basis of the third group raises the AP by nearly 2% over the third group while reducing the detection speed by 1 FPS. Shortcut connections between the effective feature layers are introduced into the original feature pyramid network to further achieve feature reuse, and the feature representation is enhanced through the top-down and bottom-up fusion strategy, thereby further improving the detection accuracy of the prediction network. In the fifth experiment, three Residual modules are added on the basis of the fourth group, increasing the AP by another 0.4%. Compared with the original YOLOv4, the overall improvement is significant, and better real-time performance is also achieved, with an AP increase of 6.2% and an FPS increase of 5.
In conclusion, the improvement strategy proposed in this study based on the YOLOv4 network model is of great significance for improving the detection performance of vibration dampers in complex scenes.
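The Res2Net module's gain comes from its hierarchical, scale-wise processing: the input channels are split into groups, and each group after the second is convolved together with the previous group's output before all outputs are concatenated. A toy sketch of this data flow, with a simple scaling function standing in for the 3x3 convolution (the real module operates on feature-map tensors):

```python
def res2net_block(groups, conv):
    """Hierarchical scale-wise processing as in Res2Net:
    y1 = x1; y2 = conv(x2); y_i = conv(x_i + y_{i-1}) for i >= 3,
    with the outputs concatenated channel-wise."""
    outputs = [list(groups[0])]              # first split: identity
    for i, x in enumerate(groups[1:]):
        if i == 0:
            mixed = list(x)                  # second split: convolved alone
        else:
            mixed = [a + b for a, b in zip(x, outputs[-1])]
        outputs.append(conv(mixed))
    return [v for y in outputs for v in y]   # channel-wise concatenation

# Stand-in "convolution": simple scaling, purely for illustration.
conv = lambda v: [0.5 * x for x in v]
out = res2net_block([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]], conv)
# → [1.0, 2.0, 1.5, 2.0, 3.25, 4.0]
```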

To verify the accuracy and robustness of the proposed method, the YOLOv4(1), YOLOv4(2), and YOLOv4(5) models are used to perform vibration damper detection in aerial images. Some typical images with different scenes are selected to exhibit the visual performance of the proposed model and the comparative models, as shown in Fig. 10 and Fig. 11. In Fig. 10, experimental scenes with backgrounds of sky, farmland, trees, and buildings are exhibited from the first row to the fourth row, and an experimental scene in which the vibration damper's color is similar to the background is shown in the fifth row. Figure 11 exhibits the small-target detection capability of the models. In both Fig. 10 and Fig. 11, the first column shows the results predicted by the original YOLOv4, i.e., YOLOv4(1), the second column shows the results predicted by YOLOv4(2), and the third column shows the results predicted by the model proposed in this study, i.e., YOLOv4(5).

Fig. 10  Comparisons of prediction results under diverse scenes.

Fig. 11  Comparisons of prediction results with small targets detection.

As can be seen from Fig. 10, the prediction results obtained by YOLOv4(1) in the first column exhibit a certain degree of missed detection (circled by the red ellipses) and false detection (circled by the yellow ellipse): the vibration dampers in images with different scenes are not completely predicted, and there is a false detection in the image with a building background. In the second column, the vibration dampers in the images with backgrounds of sky, trees, and buildings are not completely predicted by YOLOv4(2), while all the vibration dampers in the third-column images are correctly predicted by YOLOv4(5). In Fig. 11, there are multiple vibration dampers in the images with different scenes, which can be regarded as a small-target detection task. In the first column, the small-target detection ability of the YOLOv4(1) model is poor, and some vibration dampers are not correctly predicted. In the second column, although the YOLOv4(2) model has better small-object detection ability than YOLOv4(1), it cannot fully predict all dampers in the images. In the third column, the YOLOv4(5) model has the best small-object detection ability among the three models and can predict almost all multi-scale vibration dampers. Therefore, it can be concluded that the proposed model achieves good damper detection performance in different scenarios.

3.3.3  Experiments on VOC2012 Dataset

To verify the generalization performance of the proposed model, the PASCAL VOC2012 dataset is used. It contains 23,403 labeled images; 21,063 images are randomly selected as the training and validation set to train the network models, and the remaining 2,340 images are assigned to the testing set to test the detection accuracy of the models. Taking the YOLOv4 model as the baseline, the comparison results of YOLOv4 and the proposed model on the VOC2012 dataset are shown in Table 3. Compared with the original YOLOv4 model, the mAP of the improved model increases by 1.67%, and the detection accuracy of most targets is improved, which proves the effectiveness of the improved YOLO model based on YOLOv4.
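A random 90/10 partition like the one described above can be reproduced with a few lines of standard-library Python (the file names below are placeholders, not the actual VOC2012 files):

```python
import random

def split_dataset(items, train_ratio=0.9, seed=42):
    """Randomly partition a labeled image list into train+val and test sets."""
    rng = random.Random(seed)      # fixed seed for a reproducible split
    shuffled = list(items)
    rng.shuffle(shuffled)
    cut = round(len(shuffled) * train_ratio)
    return shuffled[:cut], shuffled[cut:]

# Placeholder file names standing in for the 23,403 labeled images.
images = [f"img_{i:05d}.jpg" for i in range(23403)]
trainval, test = split_dataset(images)
print(len(trainval), len(test))  # 21063 2340
```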

Table 3  Comparison of detection results on the VOC2012 dataset.

4.  Conclusions

In this study, aiming at the problems of background interference and small targets in intelligent inspection tasks, an accurate and robust YOLO model based on the YOLOv4 algorithm is proposed for vibration damper detection in UAV-based aerial images. First of all, we collect and create a novel dataset named Damper, which contains 1900 aerial images captured by UAV in diverse scenes. After that, to reduce the network's computational consumption and improve the training speed, Res2Net modules are introduced into the CSPDarknet network to replace the original ResNet modules, and Dense blocks are employed in the \(52 \times 52\) and \(26 \times 26\) feature layers to further improve the feature extraction ability at low resolution. An improved PANet structure is introduced into the proposed YOLO model, combined with a top-down and bottom-up feature fusion strategy to achieve feature enhancement. Finally, the proposed YOLO model and the comparative models are trained and tested on the Damper dataset. Experiments with the comparative models show that the AP of the proposed YOLO model is 8.8% and 9.5% higher than that of Faster R-CNN and SSD, respectively, and its Precision is nearly 9% and 10% higher, proving that the proposed model is superior to Faster R-CNN and SSD. More importantly, compared with the YOLOv4 model, the AP of the proposed YOLO model increases by 6.2%, its Precision increases by 6.9%, and its prediction speed improves by 5 FPS. Consequently, it can be concluded that the proposed YOLO model achieves good performance in vibration damper detection and is expected to be deployed and applied in real-time inspection of transmission lines.

In a future study, the primary task is to further expand the Damper dataset while considering lighting and other factors, so that the proposed model can adapt to the detection of more types of vibration dampers. In addition, to achieve online inspection of transmission lines by UAV and to further improve the accuracy of electrical component identification and fault diagnosis, designing an end-to-end object detection model based on a lightweight Transformer-YOLO is of great research significance.

Acknowledgments

This work was supported by Natural Research Project of College in Anhui Province under grant 2023AH052358, 2024AH051365; Collaborative Innovation Project of University in Anhui Province under grant GXXT-2022-088; Excellent Scientific Research and Innovation Team of Anhui Colleges under grant 2022AH010098.

Data Availability

No data were used to support this study.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

References

[1] V.N. Nguyen, R. Jenssen, and D. Roverso, “Automatic autonomous vision-based power line inspection: A review of current status and the potential role of deep learning,” International Journal of Electrical Power & Energy Systems, vol.99, pp.107-120, 2018. DOI: 10.1016/j.ijepes.2017.12.016.

[2] N. Norodin, K. Nakamura, and M. Hotta, “Effects of lossy mediums for resonator-coupled type wireless power transfer system using conventional single- and dual-spiral resonators,” IEICE Trans. Electron., vol.E105-C, no.3, pp.110-117, 2022. DOI: 10.1587/transele.2021ECP5025.

[3] Y. Li, S. Wei, X. Liu, Y. Luo, Y. Li, and F. Shuang, “An improved insulator and spacer detection algorithm based on dual network and SSD,” IEICE Trans. Inf. & Syst., vol.E106-D, no.5, pp.662-672, 2023. DOI: 10.1587/transinf.2022DLP0062.

[4] J. Han, Z. Yang, H. Xu, G. Hu, C. Zhang, H. Li, S. Lai, and H. Zeng, “Search Like an Eagle: A cascaded model for insulator missing faults detection in aerial images,” Energies, vol.13, no.3, p.713, 2020. DOI: 10.3390/en13030713.

[5] J. Bian, X. Hui, X. Zhao, and M. Tan, “A monocular vision-based perception approach for unmanned aerial vehicle close proximity transmission tower inspection,” International Journal of Advanced Robotic Systems, vol.16, no.1, 2019. DOI: 10.1177/1729881418820227.

[6] X. Miao, X. Liu, J. Chen, S. Zhuang, J. Fan, and H. Jiang, “Insulator detection in aerial images for transmission line inspection using single shot multibox detector,” IEEE Access, vol.7, pp.9945-9956, 2019. DOI: 10.1109/ACCESS.2019.2891123.

[7] W. Song, D. Zuo, B. Deng, H. Zhang, K. Xue, and H. Hu, “Corrosion defect detection of earthquake hammer for high voltage transmission line,” Chinese Journal of Scientific Instrument, vol.37, pp.113-117, 2016.

[8] K. Zhang, Q. Hou, and W. Huang, “Defect detection of anti-vibration hammer based on improved Faster R-CNN,” Proc. 7th IFEEA, pp.889-893, 2020. DOI: 10.1109/IFEEA51475.2020.00186.

[9] Z. Liu, X. Miao, J. Chen, and H. Jiang, “Review of visible image intelligent processing for transmission line inspection,” Power System Technology, vol.44, pp.1058-1069, 2020.

[10] C. Liu, Y. Wu, J. Liu, and Z. Sun, “Improved YOLOv3 network for insulator detection in aerial images with diverse background interference,” Electronics, vol.10, no.7, p.771, 2021. DOI: 10.3390/electronics10070771.

[11] K. Zhang, S. Qian, J. Zhou, C. Xie, J. Du, and Y. Tao, “ARFNet: adaptive receptive field network for detecting insulator self-explosion defects,” Signal Image and Video Processing, vol.16, no.8, pp.2211-2219, 2022. DOI: 10.1007/s11760-022-02186-3.

[12] X. Tao, D. Zhang, Z. Wang, X. Liu, H. Zhang, and D. Xu, “Detection of Power Line Insulator Defects Using Aerial Images Analyzed With Convolutional Neural Networks,” IEEE Transactions on Systems Man Cybernetics-Systems, pp.1486-1498, 2018. DOI: 10.1109/TSMC.2018.2871750.

[13] L. Zhao, X. Wang, H. Yao, and M. Tian, “Survey of power line extraction methods based on visible light aerial image,” Power System Technology, vol.45, pp.1536-1546, 2020. DOI: 10.13335/j.1000-3673.pst.2020.0300a.

[14] V.N. Nguyen, R. Jenssen, and D. Roverso, “LS-Net: fast single-shot line-segment detector,” Machine Vision and Applications, vol.32, no.1, pp.1-16, 2021. DOI: 10.48550/arXiv.1912.09532.

[15] F. Li, J. Xin, T. Chen, L. Xin, Z. Wei, Y. Li, Y. Zhang, H. Jin, Y. Tu, X. Zhou, and H. Liao, “An Automatic Detection Method of Bird’s Nest on Transmission Line Tower Based on Faster_RCNN,” IEEE Access, vol.8, pp.164214-164221, 2020. DOI: 10.1109/ACCESS.2020.3022419.

[16] H. Yang, T. Guo, P. Shen, F. Chen, W. Wang, X. Liu, “Anti-vibration hammer detection in UAV image,” Proc. 2nd ICPRE, pp.204-207, 2017. DOI: 10.1109/ICPRE.2017.8390528.

[17] L. Yang, X. Jiang, Y. Hao, L. Li, H. Li, R. Li, and B. Luo, “Recognition of natural ice types on in-service glass insulators based on texture feature descriptor,” IEEE Trans. Dielectr. Electr. Insul., vol.24, no.1, pp.535-542, 2017. DOI: 10.1109/TDEI.2016.006049.

[18] L. Jin, S. Yan, and Y. Liu, “Vibration damper recognition based on Haar-like features and cascade AdaBoost classifier,” Journal of System Simulation, vol.24, pp.1806-1809, 2012.

[19] Y. Zhai, R. Chen, Q. Yang, X. Li, and Z. Zhao, “Insulator fault detection based on spatial morphological features of aerial images,” IEEE Access, vol.6, pp.35316-35326, 2018. DOI: 10.1109/ACCESS.2018.2846293.

[20] R. Girshick, J. Donahue, T. Darrell, and J. Malik, “Rich feature hierarchies for accurate object detection and semantic segmentation,” Proc. CVPR, pp.580-587, 2014. DOI: 10.1109/cvpr.2014.81.

[21] S. Ren, K. He, R. Girshick, and J. Sun, “Faster R-CNN: Towards real-time object detection with region proposal networks,” IEEE Trans. Pattern Anal. Mach. Intell., vol.39, no.6, pp.1137-1149, 2017. DOI: 10.1109/TPAMI.2016.2577031.

[22] M. Tomaszewski, P. Michalski, and J. Osuchowski, “Evaluation of Power Insulator Detection Efficiency with the Use of Limited Training Dataset,” Applied Sciences, vol.10, no.6, p.2104, 2020. DOI: 10.3390/app10062104.

[23] K. He, G. Gkioxari, P. Dollár, and R. Girshick, “Mask R-CNN,” IEEE Trans. Pattern Anal. Mach. Intell., vol.42, no.2, pp.386-397, 2020. DOI: 10.1109/TPAMI.2018.2844175.

[24] W. Liu, D. Anguelov, D. Erhan, C. Szegedy, S. Reed, C.Y. Fu, and A.C. Berg, “SSD: Single shot multibox detector,” Proc. ECCV, pp.21-37, 2016. DOI: 10.1007/978-3-319-46448-0_2.

[25] J. Redmon and A. Farhadi, “Yolov3: An incremental improvement,” arXiv preprint arXiv:1804.02767, 2018.

[26] A. Bochkovskiy, C. Wang, and H. Liao, “YOLOv4: optimal speed and accuracy of object detection,” arXiv preprint arXiv:2004.10934, 2020.

[27] T.-Y. Lin, P. Goyal, R. Girshick, K. He, and P. Dollár, “Focal loss for dense object detection,” IEEE Trans. Pattern Anal. Mach. Intell., vol.42, no.2, pp.2999-3007, 2017. DOI: 10.1109/TPAMI.2018.2858826.

[28] H. Law and J. Deng, “CornerNet: Detecting objects as paired keypoints,” International Journal of Computer Vision, vol.128, no.3, pp.642-656, 2020. DOI: 10.1007/978-3-030-01264-9_45.

[29] K. Duan, S. Bai, L. Xie, H. Qi, Q. Huang, and Q. Tian, “CenterNet: keypoint triplets for object detection,” Proc. ICCV, pp.6568-6577, 2019. DOI: 10.1109/ICCV.2019.00667.

[30] Z. Ling, R.C. Qiu, Z. Jin, Y. Zhang, and C. Lei, “An accurate and real-time method of self-blast glass insulator location based on Faster R-CNN and U-net with aerial images,” CSEE Journal of power and energy systems, pp.474-482, 2019. DOI: 10.48550/arXiv.1801.05143.

[31] W. Bao, Y. Ren, D. Liang, X. Yang, and Q. Xu, “Defect detection algorithm of anti-vibration hammer based on improved Cascade R-CNN,” Proc. ICHCI, pp.294-297, 2020. DOI: 10.1109/ICHCI51889.2020.00070.

[32] Z. Zhao, Z. Zhen, L. Zhang, Y. Qi, Y. Kong, and K. Zhang, “Insulator Detection Method in Inspection Image Based on Improved Faster R-CNN,” Energies, vol.12, no.7, p.1204, 2019. DOI: 10.3390/en12071204.

[33] D. Sadykova, D. Pernebayeva, M. Bagheri, and A. James, “IN-YOLO: real-time detection of outdoor high voltage insulators using UAV imaging,” IEEE Trans. Power Del., vol.35, no.3, pp.1599-1601, 2019. DOI: 10.1109/TPWRD.2019.2944741.

[34] J. Han, Z. Yang, Q. Zhang, C. Chen, H. Li, S. Lai, G. Hu, C. Xu, H. Xu, D. Wang, and R. Chen, “A method of insulator faults detection in aerial images for high-voltage transmission lines inspection,” Applied Sciences, vol.9, no.10, p.2009, 2019. DOI: 10.3390/app9102009.

[35] C. Liu, Y. Wu, J. Liu, Z. Sun, and H. Xu, “Insulator faults detection in aerial images from high-voltage transmission lines based on deep learning model,” Applied Sciences, vol.11, no.10, p.4647, 2021. DOI: 10.3390/app11104647.

[36] Y. Li, H. Wang, L.M. Dang, T.N. Nguyen, D. Han, A. Lee, I. Jang, and H. Moon, “A deep learning-based hybrid framework for object detection and recognition in autonomous driving,” IEEE Access, vol.8, pp.194228-194239, 2020. DOI: 10.1109/ACCESS.2020.3033289.

[37] C.-Y. Wang, H.-Y. Mark Liao, Y.-H. Wu, P.-Y. Chen, J.-W. Hsieh, and I.-H. Yeh, “CSPNet: A new backbone that can enhance learning capability of CNN,” Proc. CVPRW, pp.1571-1580, 2020. DOI: 10.1109/CVPRW50498.2020.00203.

[38] S. Liu, L. Qi, H. Qin, J. Shi, and J. Jia, “Path aggregation network for instance segmentation,” Proc. CVPR, pp.8759-8768, 2018. DOI: 10.1109/CVPR.2018.00913.

[39] Z. Huang, J. Wang, X. Fu, T. Yu, Y. Guo, and R. Wang, “DC-SPP-YOLO: Dense connection and Spatial Pyramid Pooling based YOLO for object detection,” Information Sciences, vol.522, pp.241-258, 2020. DOI: 10.1016/j.ins.2020.02.067.

[40] T.-Y. Lin, P. Dollár, R. Girshick, K. He, B. Hariharan, and S. Belongie, “Feature Pyramid Networks for object detection,” Proc. CVPR, pp.936-944, 2017. DOI: 10.1109/CVPR.2017.106.

[41] S.-H. Gao, M.-M. Cheng, K. Zhao, X.-Y. Zhang, M.-H. Yang, and P. Torr, “Res2Net: A New Multi-Scale Backbone Architecture,” IEEE Trans. Pattern Anal. Mach. Intell., vol.43, no.2, pp.652-662, 2021. DOI: 10.1109/TPAMI.2019.2938758.

[42] G. Huang, Z. Liu, L. Van Der Maaten, and K.Q. Weinberger, “Densely Connected Convolutional Networks,” Proc. CVPR, pp.2261-2269, 2017. DOI: 10.1109/CVPR.2017.243.

[43] K. He, X. Zhang, S. Ren, and J. Sun, “Identity mappings in deep residual networks,” Proc. ECCV, pp.630-645, 2016. DOI: 10.1007/978-3-319-46493-0_38.

Authors

Jingjing LIU
  Chizhou University

received her M.S. degree in signal processing from Nanjing University of Aeronautics and Astronautics in 2010. She is currently a lecturer at the School of Electrical and Mechanical Engineering, Chizhou University. Her research interests include digital signal processing, machine vision, and deep learning.

Chuanyang LIU
  Nanjing University of Aeronautics and Astronautics

received his M.S. degree in power electronic technology from Nanjing University of Aeronautics and Astronautics in 2010. He is currently a doctoral candidate at Nanjing University of Aeronautics and Astronautics. His research interests include image processing, machine vision, and deep learning.

Yiquan WU
  Nanjing University of Aeronautics and Astronautics

received his doctorate from Nanjing University of Aeronautics and Astronautics in 1998. He is currently a professor and Ph.D. supervisor at Nanjing University of Aeronautics and Astronautics. His main research fields are remote sensing image processing and understanding, infrared target detection and recognition, vision detection and image measurement, and video processing and intelligent analysis.

Zuo SUN
  Chizhou University

received his master's degree in power electronics and power transmission engineering from Southeast University in 2008. He is currently a professor at the School of Electrical and Mechanical Engineering, Chizhou University. His main research directions are microcomputer measurement and control, power quality analysis, and smart grid monitoring.
