
CN111784685A - Power transmission line defect image identification method based on cloud edge cooperative detection - Google Patents

Power transmission line defect image identification method based on cloud edge cooperative detection

Info

Publication number
CN111784685A
CN111784685A
Authority
CN
China
Prior art keywords
frame
network
image
candidate
transmission line
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010691927.XA
Other languages
Chinese (zh)
Other versions
CN111784685B (en)
Inventor
吴晟
唐远富
徐晓晖
甘湘砚
肖剑
田建伟
徐先勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
State Grid Corp of China SGCC
Electric Power Research Institute of State Grid Hunan Electric Power Co Ltd
State Grid Hunan Electric Power Co Ltd
Original Assignee
State Grid Corp of China SGCC
Electric Power Research Institute of State Grid Hunan Electric Power Co Ltd
State Grid Hunan Electric Power Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by State Grid Corp of China SGCC, Electric Power Research Institute of State Grid Hunan Electric Power Co Ltd, State Grid Hunan Electric Power Co Ltd filed Critical State Grid Corp of China SGCC
Priority to CN202010691927.XA priority Critical patent/CN111784685B/en
Publication of CN111784685A publication Critical patent/CN111784685A/en
Application granted granted Critical
Publication of CN111784685B publication Critical patent/CN111784685B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0004 Industrial image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/23 Clustering techniques
    • G06F 18/232 Non-hierarchical techniques
    • G06F 18/2321 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F 18/23213 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/048 Activation functions
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y04 INFORMATION OR COMMUNICATION TECHNOLOGIES HAVING AN IMPACT ON OTHER TECHNOLOGY AREAS
    • Y04S SYSTEMS INTEGRATING TECHNOLOGIES RELATED TO POWER NETWORK OPERATION, COMMUNICATION OR INFORMATION TECHNOLOGIES FOR IMPROVING THE ELECTRICAL POWER GENERATION, TRANSMISSION, DISTRIBUTION, MANAGEMENT OR USAGE, i.e. SMART GRIDS
    • Y04S 10/00 Systems supporting electrical power generation, transmission or distribution
    • Y04S 10/50 Systems or methods supporting the power network operation or management, involving a certain degree of interaction with the load-side end user applications

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a power transmission line defect image identification method based on cloud-edge cooperative detection. The method comprises the following steps, executed while an unmanned aerial vehicle or an inspection terminal serving as the edge end performs inspection work: collecting a power transmission line inspection image; classifying the collected inspection image as a distant-view image or a close-view image by means of a classification model; if the image is classified as a distant-view image, identifying and locating defects of large equipment components with a defect detection model deployed at the edge end; if the image is classified as a close-view image, calling a defect detection model deployed in the cloud to identify and locate defects of small equipment components. The invention balances recognition speed, recognition accuracy and localization accuracy, comprehensively identifies multiple defect types, reduces the labor intensity of operators, and improves the efficiency, automation and intelligence of power transmission line inspection.

Description

Power transmission line defect image identification method based on cloud edge cooperative detection
Technical Field
The invention relates to the technical field of digital image recognition, in particular to deep-learning-based intelligent detection of power transmission line defect images, and specifically to a power transmission line defect image identification method based on cloud-edge cooperative detection.
Background
In recent years, unmanned aerial vehicle (UAV) technology has been widely applied in the power industry. In power transmission line inspection in particular, UAVs carrying high-definition cameras overcome many shortcomings of traditional working methods and have become an important force in keeping the power grid safe and stable. UAVs greatly reduce the labor intensity of operators, but they also bring new problems. UAV inspection generates a large volume of image data that is growing exponentially, yet there is currently no effective method for quickly screening these images and extracting the information they contain; only time-consuming and labor-intensive manual review is available. The number and skill of available operators cannot keep up with current business needs, which severely limits further improvements in working efficiency.
With the rise of big data and artificial intelligence, some universities and enterprises have tried to solve this problem with image recognition based on deep neural networks and have developed corresponding recognition software. However, transmission lines contain numerous devices with complex structures, and different devices, or even different defect types of the same device, differ greatly in appearance and digital representation. Existing software therefore falls short in recognition accuracy, coverage and other respects, and cannot meet practical requirements.
Disclosure of Invention
The technical problem to be solved by the invention is as follows: in view of the problems in the prior art, the invention provides a power transmission line defect image identification method based on cloud-edge cooperative detection that balances recognition speed, recognition accuracy and localization accuracy, comprehensively identifies multiple defect types, reduces the labor intensity of operators, and improves the efficiency, automation and intelligence of power transmission line inspection.
In order to solve the above technical problem, the invention adopts the following technical scheme:
a power transmission line defect image identification method based on cloud-edge cooperative detection, comprising the following steps, executed while an unmanned aerial vehicle or an inspection terminal serving as the edge end performs inspection work:
1) collecting a power transmission line inspection image;
2) classifying the collected inspection image as a distant-view image or a close-view image by means of a classification model;
3) if the image is classified as a distant-view image, identifying and locating defects of large equipment components with a defect detection model deployed at the edge end; if the image is classified as a close-view image, calling a defect detection model deployed in the cloud to identify and locate defects of small equipment components.
Optionally, the classification model in step 2) is a ResNet-50 classification model comprising five multi-block convolutional layers and a fully connected layer; the output of the fully connected layer is converted by a sigmoid function into a two-class probability tensor over the distant-view and close-view classes, and the class with the larger probability is selected as the predicted class of the input image.
Optionally, the detailed steps of step 2) include: the input power transmission line inspection image is processed by the 5 multi-block convolutional layers to obtain a 32x-downsampled feature map, which is then classified as a distant-view image or a close-view image by the fully connected layer and the sigmoid function.
Optionally, step 2) is preceded by the following step of training the ResNet-50 classification model: establishing training samples containing distant-view and close-view images; in each training iteration, processing the sample images with the 5 multi-block convolutional layers to obtain 32x-downsampled feature maps, classifying them as distant-view or close-view with the fully connected layer and the sigmoid function, constructing the classification loss with a cross-entropy function, and updating the network parameters by stochastic gradient descent; the training of the ResNet-50 classification model is completed after multiple iterations.
Optionally, the defect detection model deployed at the edge end in step 3) is a YOLOv3 model, and the defect detection model deployed in the cloud is a Faster R-CNN model.
Optionally, step 3) is preceded by the following steps of training the YOLOv3 model:
3.1A) constructing training set samples from the distant-view images and their annotation files; clustering the sizes of all labeled boxes in the training set images with the k-means method to form 9 anchor box sizes;
3.2A) selecting a training sample image, scaling it to a uniform size, extracting image features with a Darknet-53 backbone network, and forming three groups of feature maps downsampled by 32x, 16x and 8x respectively; assigning the 9 anchor box sizes to the three groups of feature maps, 3 per group, with large anchor sizes assigned to the small feature maps and small anchor sizes assigned to the large feature maps;
3.3A) according to the anchor sizes assigned in step 3.2A), generating a series of anchor boxes on the original image with each feature-map pixel as an anchor point, 3 boxes per pixel; computing the intersection-over-union (IOU) between each anchor box and each labeled box; if an anchor box has the largest IOU with a labeled box, that anchor box predicts the object contained in the labeled box; each anchor box predicts one bounding box containing 4 position parameters (center abscissa x, center ordinate y, width w, height h), 1 objectness confidence score, and a conditional class probability score for each class;
3.4A) constructing the bounding-box regression loss with a mean-squared-error function, constructing the confidence and class probability losses with a cross-entropy function, and taking their sum as the total loss; judging whether the total loss is below a preset threshold; if not, back-propagating to obtain the gradient of every network layer parameter, updating the parameters with the set learning rate, then jumping back to step 3.2A) to start the next round of training; if so, the training of the YOLOv3 model is complete.
Optionally, the Faster R-CNN model comprises, connected in sequence, a feature extraction network, a candidate region extraction network, a candidate box screening layer, a region-of-interest pooling layer, and a classification network. The feature extraction network uses the convolutional part of the VGG-16 network as its backbone; the candidate region extraction network consists of two parallel 1 x 1 convolutional layers; the candidate box screening layer re-ranks and filters the boxes output by the candidate region extraction network to obtain the candidate boxes most likely to contain targets; the region-of-interest pooling layer resizes candidate boxes of different sizes to a common size to meet the input requirement of the fully connected layers; the classification network consists of two parallel fully connected layers, one responsible for classifying the input candidate boxes to obtain their specific class labels, the other responsible for a second regression over the candidate boxes to obtain their precise position coordinates;
the following steps of separately training the candidate region extraction network and the classification network are also performed before step 3):
3.1B) extracting features from the input image with the convolutional part of the VGG-16 network as the backbone, and taking the output of the last convolutional layer of the VGG-16 network as the shared feature map;
3.2B) feeding the shared feature map into the candidate region extraction network, generating a series of anchor boxes on the original image with each feature-map pixel as an anchor point, 9 boxes per pixel; computing the IOU between each anchor box and the labeled boxes and assigning labels to the anchor boxes: an IOU greater than 0.7, or the highest IOU with some labeled box, gives label '1', meaning the anchor box contains a foreground object; an IOU less than 0.3 gives label '0', meaning the anchor box contains background; randomly selecting 128 anchors of class '1' and 128 anchors of class '0', and constructing the softmax binary classification loss with a cross-entropy function; for all anchor boxes of class '1', constructing the bounding-box regression loss with the smooth L1 function; the training of the candidate region extraction network is completed by minimizing the total loss;
3.3B) after the candidate region extraction network is trained, computing scores for the anchor boxes and converting them into foreground/background probabilities with a softmax function; applying the regression to the anchor boxes to obtain position-corrected boxes; taking the top M boxes by foreground probability, removing boxes that exceed the image boundary or are too small in area, removing duplicate boxes with non-maximum suppression (NMS), and taking the top N boxes by foreground probability as candidate boxes;
3.4B) feeding the candidate boxes extracted in step 3.3B) together with the shared feature map obtained in step 3.1B) into the region-of-interest pooling layer to obtain candidate box feature maps of uniform size, which are then fed into the classification network;
3.5B) in the classification network, computing the IOU between each candidate box and the labeled boxes and assigning a specific class label to each candidate box: an IOU greater than 0.5 gives '1', meaning the candidate box contains a foreground object; an IOU between 0.1 and 0.5 gives '0', meaning it contains background; randomly selecting 32 candidate boxes of class '1' and 96 of class '0', constructing the softmax multi-class loss with a cross-entropy function, constructing the bounding-box regression loss with the smooth L1 function for all class-'1' candidate boxes, then computing the total loss of the classification network; the training of the classification network is completed by minimizing the total loss;
the following steps of training the Faster R-CNN model as a whole are also performed before step 3):
3.1C) constructing training set samples from the close-view images and their annotation files;
3.2C) initializing the VGG-16 network and training the candidate region extraction network;
3.3C) initializing the VGG-16 network, and training the classification network with the candidate boxes output by the candidate region extraction network of step 3.2C);
3.4C) fixing the VGG-16 network of step 3.3C), and training the candidate region extraction network again;
3.5C) fixing the VGG-16 network of step 3.3C), and training the classification network again with the candidate boxes output by the candidate region extraction network of step 3.4C).
In addition, the invention also provides a power transmission line defect image recognition device based on cloud-edge cooperative detection. The device is an unmanned aerial vehicle or an inspection terminal and comprises at least a microprocessor and a memory, wherein the microprocessor is programmed or configured to execute the steps of the above power transmission line defect image recognition method based on cloud-edge cooperative detection, or the memory stores a computer program programmed or configured to execute that method.
In addition, the invention also provides a power transmission line defect image identification system based on cloud-edge cooperative detection, comprising at least a microprocessor and a memory, wherein the microprocessor is programmed or configured to execute the steps of the above power transmission line defect image identification method based on cloud-edge cooperative detection, or the memory stores a computer program programmed or configured to execute that method.
In addition, the invention also provides a computer-readable storage medium storing a computer program programmed or configured to execute the above power transmission line defect image identification method based on cloud-edge cooperative detection.
Compared with the prior art, the invention has the following advantages:
1. The method comprises the following steps, executed while the unmanned aerial vehicle or inspection terminal serving as the edge end performs inspection work: collecting a power transmission line inspection image; classifying it as a distant-view or close-view image with a classification model; if distant-view, identifying and locating large equipment component defects with the defect detection model deployed at the edge end; if close-view, calling the defect detection model deployed in the cloud to identify and locate small equipment component defects. The invention balances recognition speed, recognition accuracy and localization accuracy, comprehensively identifies multiple defect types, reduces the labor intensity of operators, and improves the efficiency, automation and intelligence of power transmission line inspection.
2. The invention classifies the inspection images taken by the unmanned aerial vehicle or inspection terminal at the edge end with a classification model, and identifies and locates defects of large and small equipment components of the power transmission line from the distant-view and close-view images respectively. This effectively reduces model complexity, makes the models easier to train, and improves detection performance. In particular, the identification of large-component defects can be completed at the job site. Once the computing power of the edge hardware and the performance of the algorithm models improve further, all detection models can be deployed entirely at the edge end, so that defect identification is performed during inspection itself and the whole workflow is completed without manual intervention.
3. The edge device in the method can be an unmanned aerial vehicle, an inspection terminal such as an inspection robot, or various other embedded or portable terminal devices (such as a smartphone), which gives the method good universality.
Drawings
FIG. 1 is a schematic diagram of a basic flow of a method according to an embodiment of the present invention.
Detailed Description
As shown in FIG. 1, the method for identifying power transmission line defect images based on cloud-edge cooperative detection in this embodiment comprises the following steps, executed while an unmanned aerial vehicle or an inspection terminal serving as the edge end performs inspection work:
1) collecting a power transmission line inspection image;
2) classifying the collected inspection image as a distant-view image or a close-view image by means of a classification model;
3) if the image is classified as a distant-view image, identifying and locating defects of large equipment components with a defect detection model deployed at the edge end; if the image is classified as a close-view image, calling a defect detection model deployed in the cloud to identify and locate defects of small equipment components.
It should be noted that what counts as a large or small equipment component is determined by the corresponding defect detection model. Considering the limited computing power of the edge end, the defect detection model deployed at the edge detects large equipment components, such as the channels, towers, auxiliary facilities and foundations contained in distant-view images, while the defect detection model deployed in the cloud detects small equipment components, such as the insulator sheets, hanging points, wires and fittings contained in close-view images, exploiting the cloud's strong computing power. The method of this embodiment therefore splits the inspection images into two classes and applies different models for defect detection in a cloud-edge cooperative manner, which balances recognition speed, recognition accuracy and localization accuracy, comprehensively identifies multiple defect types, effectively reduces technical difficulty, improves detection accuracy, reduces the labor intensity of operators, and improves the efficiency, automation and intelligence of power transmission line inspection. The defect detection model deployed in the cloud, which identifies and locates small equipment component defects, can be invoked in real time or after the field work is finished.
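To make the routing concrete, the following minimal Python sketch illustrates the cloud-edge dispatch described above. It is an illustration only: the classifier and detector objects, their predict/detect methods, and the cloud endpoint URL are assumed names, not part of the patent.

    import requests  # assumed transport for calling the cloud service

    def process_inspection_image(image_path, classifier, edge_detector,
                                 cloud_url="https://cloud.example/detect"):
        """Route one inspection image: distant view to the edge YOLOv3 model,
        close view to the cloud Faster R-CNN service."""
        label = classifier.predict(image_path)  # "distant" or "close"
        if label == "distant":
            # Large components (channels, towers, auxiliary facilities,
            # foundations) are detected directly on the edge device.
            return edge_detector.detect(image_path)
        # Small components (insulator sheets, hanging points, wires, fittings)
        # go to the cloud model, in real time or batched after the flight.
        with open(image_path, "rb") as f:
            return requests.post(cloud_url, files={"image": f}).json()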
The classification model in step 2) classifies the collected power transmission line inspection images. In this embodiment, the classification model is a ResNet-50 classification model comprising five multi-block convolutional layers and a fully connected layer; the output of the fully connected layer is converted by a sigmoid function into a two-class probability tensor over the distant-view and close-view classes, and the class with the larger probability is selected as the predicted class of the input image.
In this embodiment, the detailed steps of step 2) are as follows: a single inspection image taken by the unmanned aerial vehicle is first scaled to a set size; image features are then extracted by the 5 multi-block convolutional layers; the two-class (distant-view or close-view) probabilities are output by 1 fully connected layer and a sigmoid function; finally, the class with the larger probability is assigned to the image as its label.
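A minimal sketch of such a two-class distant/close classifier, assuming a PyTorch/torchvision environment (replacing the final fully connected layer with a 2-way head, the variable names, and the readiness of the input tensor are illustrative assumptions):

    import torch
    import torchvision

    # ResNet-50 backbone: its five convolutional stages yield a 32x-downsampled
    # feature map; the final fully connected layer is replaced by a 2-way head.
    model = torchvision.models.resnet50(weights=None)
    model.fc = torch.nn.Linear(model.fc.in_features, 2)
    model.eval()

    def classify(image_tensor):
        """image_tensor: shape (1, 3, H, W), already scaled to the set size."""
        with torch.no_grad():
            probs = torch.sigmoid(model(image_tensor))  # two-class probabilities
        return "distant" if probs[0, 0] > probs[0, 1] else "close"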
In this embodiment, step 2) is preceded by a step of training the ResNet-50 classification model: establishing training samples containing distant-view and close-view images; in each training iteration, processing the sample images with the 5 multi-block convolutional layers to obtain 32x-downsampled feature maps, classifying them as distant-view or close-view with the fully connected layer and the sigmoid function, constructing the classification loss with a cross-entropy function, and updating the network parameters by stochastic gradient descent; the training of the ResNet-50 classification model is completed after multiple iterations.
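A corresponding training loop might look like the following sketch, reusing the model above and assuming a loader that yields image batches with one-hot distant/close labels (the loss pairing of sigmoid plus cross-entropy is expressed via BCEWithLogitsLoss; all hyperparameters are illustrative):

    import torch

    criterion = torch.nn.BCEWithLogitsLoss()         # sigmoid + cross-entropy
    optimizer = torch.optim.SGD(model.parameters(),  # stochastic gradient descent
                                lr=0.01, momentum=0.9)
    model.train()
    for epoch in range(20):                  # multiple iterations complete training
        for images, labels in loader:        # labels: (N, 2) one-hot distant/close
            optimizer.zero_grad()
            loss = criterion(model(images), labels.float())
            loss.backward()                  # gradients of every layer's parameters
            optimizer.step()                 # update by the set learning rate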
In this embodiment, the defect detection model deployed at the edge end in step 3) is the YOLOv3 model, and the defect detection model deployed in the cloud is the Faster R-CNN model.
The YOLOv3 model identifies a distant-view inspection image as follows. The image is scaled to a set size, image features are extracted with a Darknet-53 backbone network, and three groups of feature maps downsampled by 32x, 16x and 8x are formed. The 9 anchor box sizes are assigned to the three groups of feature maps, 3 per group, with large anchor sizes assigned to the small feature maps and small anchor sizes to the large feature maps. According to the assigned anchor sizes, a series of anchor boxes is generated on the original image with each feature-map pixel as an anchor point (3 boxes per pixel). Each anchor box predicts one bounding box containing 4 position parameters (center abscissa x, center ordinate y, width w, height h), 1 objectness confidence score, and a conditional class probability score for each class. All boxes are sorted by confidence, boxes below a threshold are removed, and non-maximum suppression (NMS) is then applied per class to the remaining boxes to remove duplicates. The finally retained boxes are displayed on the original image, completing the identification. This embodiment uses the YOLOv3 model for fast detection of large equipment component defects; the confidence constraint effectively guarantees the accuracy of the automatic identification, and dividing the image into multiple regions helps locate large-component defects quickly. The distant-view detection model is based on the YOLOv3 algorithm and can identify and locate large equipment component defects in distant-view images. YOLOv3 is a one-stage algorithm with a clear advantage in computation speed; deployed at the edge end it can fully exploit that speed, even achieving real-time detection. YOLOv3 also recognizes large targets well, making it the first choice for the edge recognition algorithm.
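The per-class NMS step mentioned above can be sketched in plain Python as follows (boxes are [x1, y1, x2, y2]; the 0.5 overlap threshold is an assumed value):

    def iou(a, b):
        """Intersection-over-union of two boxes given as [x1, y1, x2, y2]."""
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
        area_a = (a[2] - a[0]) * (a[3] - a[1])
        area_b = (b[2] - b[0]) * (b[3] - b[1])
        return inter / (area_a + area_b - inter + 1e-9)

    def nms(boxes, scores, iou_threshold=0.5):
        """Keep the highest-scoring boxes, dropping any box that overlaps
        an already-kept box by more than the threshold."""
        order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
        keep = []
        for i in order:
            if all(iou(boxes[i], boxes[j]) < iou_threshold for j in keep):
                keep.append(i)
        return keep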
The close-view detection model in this embodiment is based on the Faster R-CNN model and can identify and locate small, dense equipment defects in close-view images. Small, dense components usually occupy only a small fraction of the original image; after repeated pooling and downsampling, very little of their information remains on the feature map, making them hard to identify and locate accurately. Through two rounds of classification and bounding-box regression, the Faster R-CNN model can detect small equipment component defects accurately. Faster R-CNN extracts candidate boxes with a region proposal network and then classifies the candidate regions, making it a two-stage algorithm. Although its detection speed is slower than that of one-stage algorithms, it has the advantage in detection accuracy, especially for small, dense components. The most common transmission line defects, including loose bolts, frayed wire strands and missing split pins, are small-component defects, and a two-stage algorithm is needed to identify them reliably.
In this embodiment, UAV inspection images of the power transmission line are collected; images of channels, whole towers, tower bodies, tower heads, signboards, foundations and the like are assigned to the distant-view class, and images of insulator strings, hanging points and the like to the close-view class. Clearly shot, well-framed images in both classes are screened and labeled, finally forming three training sample data sets. The inspection images are annotated with the picture name, picture class (distant view or close view), defective component name, defect type, defect position (the horizontal and vertical coordinates of the top-left and bottom-right vertices of the labeled box), storage path and so on. The annotation information is written into a json file with the same name as the picture, and together they form the training sample data. This sample set provides training samples for the deep learning models, facilitating the training of both the classification model and the defect recognition models.
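An annotation record of the kind described might be written as in the sketch below; the field names and example values are illustrative assumptions, since the patent only lists which pieces of information are recorded:

    import json

    annotation = {
        "picture_name": "tower_0001.jpg",
        "picture_class": "distant_view",      # distant view or close view
        "defect_component": "tower",
        "defect_type": "bird_nest",
        "defect_box": [120, 340, 480, 720],   # top-left x, y; bottom-right x, y
        "storage_path": "/data/inspection/tower_0001.jpg",
    }

    # The annotation is written into a json file named after the picture.
    with open("tower_0001.json", "w", encoding="utf-8") as f:
        json.dump(annotation, f, ensure_ascii=False, indent=2)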
In this embodiment, step 3) is preceded by the following steps of training the YOLOv3 model:
3.1A) constructing training set samples from the distant-view images and their annotation files; clustering the sizes of all labeled boxes in the training set images with the k-means method to form 9 anchor box sizes (a sketch of this clustering follows these steps);
3.2A) selecting a training sample image, scaling it to a uniform size, extracting image features with a Darknet-53 backbone network, and forming three groups of feature maps downsampled by 32x, 16x and 8x respectively; assigning the 9 anchor box sizes to the three groups of feature maps, 3 per group, with large anchor sizes assigned to the small feature maps and small anchor sizes assigned to the large feature maps;
3.3A) according to the anchor sizes assigned in step 3.2A), generating a series of anchor boxes on the original image with each feature-map pixel as an anchor point, 3 boxes per pixel; computing the intersection-over-union (IOU) between each anchor box and each labeled box; if an anchor box has the largest IOU with a labeled box, that anchor box predicts the object contained in the labeled box; each anchor box predicts one bounding box containing 4 position parameters (center abscissa x, center ordinate y, width w, height h), 1 objectness confidence score, and a conditional class probability score for each class;
3.4A) constructing the bounding-box regression loss with a mean-squared-error function, constructing the confidence and class probability losses with a cross-entropy function, and taking their sum as the total loss; judging whether the total loss is below a preset threshold; if not, back-propagating to obtain the gradient of every network layer parameter, updating the parameters with the set learning rate, then jumping back to step 3.2A) to start the next round of training; if so, the training of the YOLOv3 model is complete.
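The anchor clustering of step 3.1A) can be sketched as follows, assuming scikit-learn is available (plain k-means over the labeled box sizes, as the step specifies; the seed and the area-based sorting are assumptions):

    import numpy as np
    from sklearn.cluster import KMeans

    def anchor_sizes(label_boxes, k=9):
        """label_boxes: (width, height) of every labeled box in the training set.
        Returns k anchor box sizes, sorted from small to large by area."""
        wh = np.asarray(label_boxes, dtype=float)
        km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(wh)
        centers = km.cluster_centers_
        return centers[np.argsort(centers[:, 0] * centers[:, 1])]

    # The 9 sizes are then split 3/3/3 across the 32x, 16x and 8x feature maps,
    # with the largest anchors assigned to the smallest (32x-downsampled) map.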
In this embodiment, the ResNet-50 classification model and the YOLOv3 model can be trained on a computer in advance and then ported to a high-performance microcomputer chip, which is mounted on the UAV body and integrated with the UAV through the OSDK. Preferably, the UAV's microcomputer chip is equipped with a GPU with at least 6 GB of video memory, and the UAV's endurance is at least 30 minutes when carrying a 300 g payload. While the UAV executes its line patrol task and takes inspection images, the microcomputer chip automatically reads each image according to a preset program and first calls the ResNet-50 classification model to classify it; if the image belongs to the close-view class it is skipped, and if it belongs to the distant-view class the YOLOv3 model is called to detect the large equipment components in it. When a defect is detected, the defective part is marked with a box, the image is pushed to a receiver to alert the operator and be displayed, and the image and defect information are stored separately.
In this embodiment, the Faster R-CNN model and the associated computing hardware are deployed on a cloud platform to form a cloud application service. After the field patrol work is finished, the operator stores the high-definition images taken by the UAV on a local computer and applies for the cloud application service, which mobilizes software and hardware resources to detect the close-view images by cloud computing, identifies whether the small equipment components have defects, and feeds results such as the defective component, the defect type and the defect position back to the operator. The hardware configured on the cloud platform can serve multiple concurrent requests, with at least 64 GB of CPU memory, at least 32 GB of GPU video memory, and at least 10 TB of disk.
In this embodiment, the Faster R-CNN model comprises, connected in sequence, a feature extraction network, a candidate region extraction network, a candidate box screening layer (proposal layer), a region-of-interest pooling layer (ROI pooling layer), and a classification network. The feature extraction network uses the convolutional part of the VGG-16 network as its backbone; the candidate region extraction network consists of two parallel 1 x 1 convolutional layers; the candidate box screening layer re-ranks and filters the boxes output by the candidate region extraction network to obtain the candidate boxes most likely to contain targets; the region-of-interest pooling layer resizes candidate boxes of different sizes to a common size to meet the input requirement of the fully connected layers; the classification network consists of two parallel fully connected layers, one classifying the input candidate boxes to obtain their specific class labels, the other performing a second regression over the candidate boxes to obtain their precise position coordinates. The Faster R-CNN model identifies a close-view inspection image as follows. Features are extracted from the close-view image with the convolutional part of the VGG-16 network as the backbone, forming a 16x-downsampled shared feature map. The feature map is fed into the candidate region extraction network, which generates a series of anchor boxes on the original image with each feature-map pixel as an anchor point, computes each anchor box's foreground/background probability through the classification branch, and computes each anchor box's regression offsets through the regression branch to obtain the corresponding boxes. The boxes are sorted by foreground probability; the top M are taken; boxes that exceed the image boundary or are too small are removed; duplicate boxes are removed with non-maximum suppression; and the top N boxes by foreground probability are taken as candidate boxes. The candidate boxes and the shared feature map are fed together into the region-of-interest pooling layer to obtain candidate box feature maps of uniform size, which are then fed into the classification network. In the classification network, the multi-class probabilities of each candidate box are computed through the classification branch and the class with the largest probability is taken as the candidate box's label, while the regression branch computes each candidate box's regression offsets to obtain the corresponding precisely corrected box. The corrected boxes are sorted by class and probability, duplicates are removed with non-maximum suppression, and the final corrected boxes, class labels and probability values above a probability threshold are output and displayed on the original image, completing the classification and localization task.
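The proposal screening described above (top M by score, boundary and size filtering, NMS, top N) can be sketched as follows, reusing the iou/nms helpers from the earlier sketch; the values of M, N and the minimum size are assumptions, not specified by the patent:

    def filter_proposals(boxes, fg_scores, img_w, img_h,
                         top_m=6000, top_n=300, min_size=16):
        """boxes: [x1, y1, x2, y2] lists; fg_scores: foreground probabilities."""
        # Top M boxes by foreground probability.
        order = sorted(range(len(boxes)),
                       key=lambda i: fg_scores[i], reverse=True)[:top_m]
        # Remove boxes beyond the image boundary or with too small an area.
        kept = [i for i in order
                if boxes[i][0] >= 0 and boxes[i][1] >= 0
                and boxes[i][2] <= img_w and boxes[i][3] <= img_h
                and (boxes[i][2] - boxes[i][0]) * (boxes[i][3] - boxes[i][1])
                    >= min_size ** 2]
        cand_boxes = [boxes[i] for i in kept]
        cand_scores = [fg_scores[i] for i in kept]
        # Remove duplicates with NMS, then take the top N as candidate boxes.
        keep = nms(cand_boxes, cand_scores, iou_threshold=0.7)
        return [cand_boxes[j] for j in keep][:top_n]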
In this embodiment, the following steps of separately training the candidate region extraction network and the classification network (the training of the feature extraction network is folded into the training of these two networks) are also performed before step 3):
3.1B) extracting features from the input image with the convolutional part of the VGG-16 network as the backbone, and taking the output of the last convolutional layer of the VGG-16 network as the shared feature map;
3.2B) feeding the shared feature map into the candidate region extraction network, generating a series of anchor boxes on the original image with each feature-map pixel as an anchor point, 9 boxes per pixel; computing the IOU between each anchor box and the labeled boxes and assigning labels to the anchor boxes (a sketch of this labeling rule follows these steps): an IOU greater than 0.7, or the highest IOU with some labeled box, gives label '1', meaning the anchor box contains a foreground object; an IOU less than 0.3 gives label '0', meaning the anchor box contains background; randomly selecting 128 anchors of class '1' and 128 anchors of class '0', and constructing the softmax binary classification loss with a cross-entropy function; for all anchor boxes of class '1', constructing the bounding-box regression loss with the smooth L1 function; the training of the candidate region extraction network is completed by minimizing the total loss;
3.3B) after the candidate region extraction network is trained, computing scores for the anchor boxes and converting them into foreground/background probabilities with a softmax function; applying the regression to the anchor boxes to obtain position-corrected boxes; taking the top M boxes by foreground probability, removing boxes that exceed the image boundary or are too small in area, removing duplicate boxes with non-maximum suppression (NMS), and taking the top N boxes by foreground probability as candidate boxes;
3.4B) feeding the candidate boxes extracted in step 3.3B) together with the shared feature map obtained in step 3.1B) into the region-of-interest pooling layer to obtain candidate box feature maps of uniform size, which are then fed into the classification network;
3.5B) in the classification network, computing the IOU between each candidate box and the labeled boxes and assigning a specific class label to each candidate box: an IOU greater than 0.5 gives '1', meaning the candidate box contains a foreground object; an IOU between 0.1 and 0.5 gives '0', meaning it contains background; randomly selecting 32 candidate boxes of class '1' and 96 of class '0', constructing the softmax multi-class loss with a cross-entropy function, constructing the bounding-box regression loss with the smooth L1 function for all class-'1' candidate boxes, then computing the total loss of the classification network; the training of the classification network is completed by minimizing the total loss;
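The anchor labeling rule of step 3.2B) can be sketched as follows, reusing the iou helper above (the 0.7/0.3 thresholds come from the description; using -1 for anchors that are neither foreground nor background is an assumed convention):

    def label_anchors(anchors, gt_boxes, hi=0.7, lo=0.3):
        """One label per anchor: 1 = foreground, 0 = background, -1 = not sampled."""
        # The anchor with the highest IOU for each labeled box is foreground.
        best_per_gt = {max(range(len(anchors)),
                           key=lambda i: iou(anchors[i], g))
                       for g in gt_boxes}
        labels = []
        for i, a in enumerate(anchors):
            best = max((iou(a, g) for g in gt_boxes), default=0.0)
            if i in best_per_gt or best > hi:
                labels.append(1)   # anchor contains a foreground object
            elif best < lo:
                labels.append(0)   # anchor contains background
            else:
                labels.append(-1)  # ignored when sampling the loss
        return labels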
the following steps of training the Faster R-CNN model as a whole are also performed before step 3):
3.1C) constructing training set samples from the close-view images and their annotation files;
3.2C) initializing the VGG-16 network and training the candidate region extraction network;
3.3C) initializing the VGG-16 network, and training the classification network with the candidate boxes output by the candidate region extraction network of step 3.2C);
3.4C) fixing the VGG-16 network of step 3.3C), and training the candidate region extraction network again;
3.5C) fixing the VGG-16 network of step 3.3C), and training the classification network again with the candidate boxes output by the candidate region extraction network of step 3.4C).
In this embodiment, UAV inspection images of the power transmission line are collected in advance; component images of channels, whole towers, tower bodies, tower heads, signboards, foundations and the like are assigned to the distant-view class, and component images of insulators, hanging points and the like to the close-view class, forming data set 1 for training the classification model. Clearly shot, well-framed images in both classes are screened and labeled separately: the distant-view images are labeled mainly with large equipment component defects, and the close-view images mainly with small equipment component defects. The annotation files are stored in json format and contain the defect type, the defect coordinates (the horizontal and vertical coordinates of the top-left and bottom-right corners), the image class, the image name and other information. The defect images and annotation files together form data sets 2 and 3 for training the target detection models. Each of the three data sets is divided into a training set, a test set and a validation set in the ratio 80% : 10% : 10%, on which the ResNet-50 classification model, the YOLOv3 target detection model and the Faster R-CNN target detection model are trained respectively. During training, the network loss is computed by forward propagation and the gradient of each parameter by backward propagation, and the parameters are then updated by the set learning step, completing one epoch of training. Forward computation is then performed on the test set, and the network loss there measures the model's generalization. Training is completed over multiple rounds of iteration, and the recognition performance of the models is finally checked on the validation set. The trained ResNet-50 classification model and YOLOv3 target detection model are deployed on a DJI Manifold 2 high-performance onboard chip, which is mounted on the body of a DJI M210 RTK UAV and communicates with the UAV platform through a USB interface. While the UAV executes its line patrol task and takes inspection images, the Manifold 2 automatically acquires each image and calls the ResNet-50 classification model to classify it. If the image is distant-view, the YOLOv3 model is then called to detect whether it contains defects. When an equipment defect is detected, it is marked with a box and pushed to a receiver to display an alert to the operator. The Faster R-CNN model is deployed on a cloud platform as a cloud application service; to serve multiple concurrent requests, the hardware configuration is at least 64 GB of CPU memory and 32 GB of GPU video memory. After the field patrol work is finished, the operator stores the high-definition images taken by the UAV on a local computer and applies for the cloud application service, which mobilizes software and hardware resources to detect the close-view images by cloud computing and feeds the results back to the operator.
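The 80% : 10% : 10% split described above can be sketched with the standard library alone (the shuffle seed is an assumption):

    import random

    def split_dataset(samples, seed=0):
        """Split samples into 80% training, 10% test and 10% validation sets."""
        samples = samples[:]
        random.Random(seed).shuffle(samples)
        n = len(samples)
        n_train, n_test = int(0.8 * n), int(0.1 * n)
        return (samples[:n_train],
                samples[n_train:n_train + n_test],
                samples[n_train + n_test:])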
In summary, because the transmission line has a complex structure and numerous devices, and different devices, or even different defect types of the same device, differ hugely in appearance, it is difficult to identify many defective components with a single algorithm model. This embodiment provides a new identification method for power transmission line defect images based on cloud-edge cooperative detection: the inspection images are divided by image classification into distant-view and close-view images, and different algorithm models detect defective components in each. For larger components such as channels, towers, auxiliary facilities and foundations, the YOLOv3 model performs identification at the edge end; for smaller components such as insulator sheets, hanging points, wires and fittings, the Faster R-CNN model performs identification in the cloud. The method balances recognition speed, recognition accuracy and localization accuracy, comprehensively identifies multiple defect types, reduces the labor intensity of operators, and improves the efficiency, automation and intelligence of power transmission line inspection.
In addition, this embodiment also provides a power transmission line defect image recognition device based on cloud-edge cooperative detection. The device is an unmanned aerial vehicle or an inspection terminal and comprises at least a microprocessor and a memory, wherein the microprocessor is programmed or configured to execute the steps of the above power transmission line defect image recognition method based on cloud-edge cooperative detection, or the memory stores a computer program programmed or configured to execute that method.
In addition, this embodiment also provides a power transmission line defect image recognition system based on cloud-edge cooperative detection, comprising at least a microprocessor and a memory, wherein the microprocessor is programmed or configured to execute the steps of the above power transmission line defect image recognition method based on cloud-edge cooperative detection, or the memory stores a computer program programmed or configured to execute that method.
In addition, this embodiment also provides a computer-readable storage medium storing a computer program programmed or configured to execute the above power transmission line defect image identification method based on cloud-edge cooperative detection.
As will be appreciated by one skilled in the art, embodiments of the present application may be provided as a method, a system, or a computer program product. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Furthermore, the present application may take the form of a computer program product embodied on one or more computer-readable storage media (including, but not limited to, disk storage, CD-ROM and optical storage) containing computer-usable program code. The present application is described with reference to flowcharts and/or block diagrams of methods, apparatus (systems) and computer program products according to its embodiments, where computer program instructions executed via a processor create means for implementing the functions specified in the flowchart flows and/or block diagram blocks. These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams. These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus, causing a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer-implemented process, such that the instructions executed on the computer or other programmable apparatus provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
The above is only a preferred embodiment of the present invention, and the scope of protection of the present invention is not limited to the above embodiment; all technical solutions falling under the idea of the present invention belong to its scope of protection. It should be noted that various modifications and refinements that those skilled in the art may make without departing from the principle of the invention are also considered to be within its scope of protection.

Claims (10)

1. A power transmission line defect image identification method based on cloud-edge cooperative detection, characterized by comprising the following steps, executed while an unmanned aerial vehicle or an inspection terminal serving as the edge end performs inspection work:
1) collecting a power transmission line inspection image;
2) classifying the collected inspection image as a distant-view image or a close-view image by means of a classification model;
3) if the image is classified as a distant-view image, identifying and locating defects of large equipment components with a defect detection model deployed at the edge end; if the image is classified as a close-view image, calling a defect detection model deployed in the cloud to identify and locate defects of small equipment components.
2. The method for identifying power transmission line defect images based on cloud-edge cooperative detection according to claim 1, wherein the classification model in step 2) is a ResNet-50 classification model comprising five multi-block convolutional layers and a fully connected layer; the output of the fully connected layer is converted by a sigmoid function into a two-class probability tensor over the distant-view and close-view classes, and the class with the larger probability is selected as the predicted class of the input image.
3. The power transmission line defect image identification method based on cloud edge cooperative detection as claimed in claim 2, wherein the detailed steps of step 2) comprise: processing the input power transmission line inspection image through the five multi-block convolutional layers to obtain a 32×-downsampled feature map, and then classifying the feature map as a distant-view image or a close-view image through the fully connected layer and the sigmoid function.
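(Illustration, not part of the claims.) A minimal PyTorch sketch of the classifier in claims 2 and 3, assuming the torchvision ResNet-50 as the five-stage, 32×-downsampling backbone; applying a sigmoid over the two outputs follows the wording of the claim, although softmax is the more common choice for mutually exclusive classes.

    import torch
    import torch.nn as nn
    from torchvision import models

    backbone = models.resnet50(weights=None)              # five conv stages, 32x downsampling
    backbone.fc = nn.Linear(backbone.fc.in_features, 2)   # two classes: distant / close

    def predict_scene(image_batch):
        # image_batch: float tensor of shape (N, 3, H, W), already normalized
        probs = torch.sigmoid(backbone(image_batch))      # per-class probabilities
        return probs.argmax(dim=1)                        # 0 = distant view, 1 = close view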
4. The power transmission line defect image identification method based on cloud edge cooperative detection as claimed in claim 3, wherein step 2) is preceded by a step of training the ResNet-50 classification model: establishing training samples containing distant-view images and close-view images; during each round of iterative training, processing the images in the training samples through the five multi-block convolutional layers to obtain 32×-downsampled feature maps, then classifying the feature maps as distant-view or close-view images through the fully connected layer and the sigmoid function, constructing a classification loss with a cross-entropy function, and updating the network parameters by stochastic gradient descent; and completing the training of the ResNet-50 classification model through multiple iterations.
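(Illustration, not part of the claims.) A training-loop sketch consistent with claim 4, continuing the backbone defined in the previous sketch; the learning rate, momentum, and data loader are assumptions.

    import torch
    import torch.nn as nn

    criterion = nn.CrossEntropyLoss()                     # classification loss of claim 4
    optimizer = torch.optim.SGD(backbone.parameters(),    # stochastic gradient descent
                                lr=0.01, momentum=0.9)

    def train_epoch(loader):
        backbone.train()
        for images, labels in loader:                     # labels: 0 = distant, 1 = close
            optimizer.zero_grad()
            loss = criterion(backbone(images), labels)    # cross-entropy on FC outputs
            loss.backward()                               # gradients by back-propagation
            optimizer.step()                              # parameter update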
5. The power transmission line defect image identification method based on cloud edge cooperative detection as claimed in claim 1, wherein the defect detection model deployed at the edge end in step 3) is a YOLOv3 model, and the defect detection model deployed at the cloud end is a Faster R-CNN model.
6. The power transmission line defect image identification method based on cloud edge cooperative detection as claimed in claim 5, wherein step 3) is preceded by the following steps for training the YOLOv3 model:
3.1A) constructing training set samples from the distant-view images and their annotation files; clustering the sizes of all annotation boxes in the training set sample images by the k-means clustering method to form 9 anchor box sizes of different scales (a clustering sketch is given after this claim);
3.2A) selecting a training sample image and scaling it to a uniform size, extracting image features with a Darknet-53 backbone network to form three groups of feature maps at 32×, 16×, and 8× downsampling respectively; assigning the 9 anchor box sizes to the three groups of feature maps, 3 sizes per group, with the large anchor box sizes assigned to the small feature maps and the small anchor box sizes assigned to the large feature maps;
3.3A) according to the anchor box sizes assigned in step 3.2A), generating a series of anchor boxes on the original image with each pixel of the feature maps as an anchor point, each pixel generating 3 boxes; computing the intersection-over-union (IoU) of each anchor box with each annotation box, and letting the anchor box with the largest IoU predict the object contained in that annotation box; each anchor box predicts one bounding box, and each bounding box contains 4 location parameters, namely the box center abscissa x, the box center ordinate y, the box width w, and the box height h, together with 1 object confidence score and a conditional class probability score for each class;
3.4A) constructing the box regression loss with a mean square error function, constructing the confidence and class probability losses with a cross-entropy function, and taking their sum as the total loss (a loss sketch is also given after this claim); judging whether the total loss is below a preset threshold: if not, obtaining the gradient of each network layer's parameters by back-propagation, updating the parameters according to the set learning rate, and jumping back to step 3.2A) to start the next round of training; if so, training of the YOLOv3 model is complete.
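(Illustration, not part of the claims.) The clustering sketch referenced in step 3.1A): pure-NumPy k-means over annotation-box (width, height) pairs. Claim 6 only specifies k-means; the 1 − IoU distance used here is the metric from the YOLO papers and is an assumption.

    import numpy as np

    def kmeans_anchors(wh, k=9, iters=100, seed=0):
        """Cluster (width, height) pairs of annotation boxes into k anchor sizes."""
        rng = np.random.default_rng(seed)
        centroids = wh[rng.choice(len(wh), k, replace=False)]
        for _ in range(iters):
            # IoU between each box and each centroid, boxes aligned at the origin
            inter = np.minimum(wh[:, None, 0], centroids[None, :, 0]) * \
                    np.minimum(wh[:, None, 1], centroids[None, :, 1])
            union = wh[:, None].prod(-1) + centroids[None, :].prod(-1) - inter
            assign = np.argmax(inter / union, axis=1)     # max IoU = min (1 - IoU)
            new = np.array([wh[assign == j].mean(axis=0) if (assign == j).any()
                            else centroids[j] for j in range(k)])
            if np.allclose(new, centroids):
                break
            centroids = new
        return centroids[np.argsort(centroids.prod(axis=1))]  # 9 sizes, sorted by area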
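(Illustration, not part of the claims.) The loss sketch referenced in step 3.4A): mean square error for box regression and cross-entropy (binary form) for the confidence and class probabilities, summed into a total loss. Tensor shapes and the masking of no-object cells are omitted assumptions.

    import torch.nn.functional as F

    def yolov3_total_loss(pred_box, true_box, pred_conf, true_conf, pred_cls, true_cls):
        box_loss  = F.mse_loss(pred_box, true_box)                            # box regression
        conf_loss = F.binary_cross_entropy_with_logits(pred_conf, true_conf)  # objectness
        cls_loss  = F.binary_cross_entropy_with_logits(pred_cls, true_cls)    # class scores
        return box_loss + conf_loss + cls_loss                                # total loss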
7. The power transmission line defect image identification method based on cloud edge cooperative detection as claimed in claim 5, wherein the Faster R-CNN model comprises a feature extraction network, a candidate region extraction network, a candidate box screening layer, a region-of-interest pooling layer, and a classification network connected in sequence, wherein the feature extraction network takes the convolutional part of a VGG-16 network as its backbone; the candidate region extraction network consists of two parallel 1×1 convolutional layers; the candidate box screening layer re-ranks and screens the boxes output by the candidate region extraction network to obtain the candidate boxes most likely to contain targets; the region-of-interest pooling layer unifies candidate boxes of different sizes to the same size to meet the input requirement of the fully connected layers; the classification network consists of two parallel fully connected layers, one of which is responsible for classifying the input candidate boxes to obtain their specific class labels, and the other of which is responsible for a second regression calculation on the candidate boxes to obtain their accurate position coordinates;
the method further comprises the following steps of separately training the candidate region extraction network and the classification network before step 3):
3.1B) performing feature extraction on the input image with the convolutional part of the VGG-16 network as the backbone, and taking the output of the last convolutional layer of the VGG-16 network as a shared feature map;
3.2B) inputting the shared feature map into the candidate region extraction network, and generating a series of anchor boxes on the original image with each pixel of the feature map as an anchor point, each pixel generating 9 boxes; computing the IoU of each anchor box with the annotation boxes and assigning labels to the anchor boxes: an anchor box with an IoU greater than 0.7, or with the highest IoU for some annotation box, is labeled '1', meaning it contains a foreground object; an anchor box with an IoU less than 0.3 is labeled '0', meaning it contains background; randomly selecting 128 anchor boxes of class '1' and 128 anchor boxes of class '0', and constructing a softmax two-class classification loss with a cross-entropy function; for all anchor boxes of class '1', constructing a box regression loss with the smooth-L1 function (a smooth-L1 sketch is given after this claim), and completing the training of the candidate region extraction network by minimizing the total loss;
3.3B) after the training of the candidate region extraction network is complete, computing scores for the anchor boxes and converting them into foreground/background probabilities through a softmax function, performing regression calculation on the anchor boxes to obtain position-corrected boxes, taking the top M boxes by foreground probability, removing boxes that exceed the image boundary or whose area is too small, then removing duplicate boxes by non-maximum suppression (NMS) (an IoU/NMS sketch is given after this claim), and taking the top N boxes by foreground probability as candidate boxes;
3.4B) inputting the candidate boxes extracted in step 3.3B) together with the shared feature map obtained in step 3.1B) into the region-of-interest pooling layer to obtain candidate-box feature maps of uniform size, and then inputting the candidate-box feature maps into the classification network;
3.5B) computing, in the classification network, the IoU of the candidate boxes with the annotation boxes and assigning a specific class label to each candidate box: a candidate box with an IoU greater than 0.5 is labeled '1', meaning it contains a foreground object; a candidate box with an IoU between 0.1 and 0.5 is labeled '0', meaning it contains background; randomly selecting 32 candidate boxes of class '1' and 96 candidate boxes of class '0', constructing a softmax multi-class classification loss with a cross-entropy function, constructing a box regression loss with the smooth-L1 function for all candidate boxes of class '1', then computing the total loss of the classification network, and completing the training of the classification network by minimizing the total loss;
the method further comprises the following steps of training the Faster R-CNN model before step 3):
3.1C) constructing training set samples from the close-view images and their annotation files;
3.2C) initializing the VGG-16 network and training the candidate region extraction network;
3.3C) initializing the VGG-16 network, and training the classification network with the candidate boxes output by the candidate region extraction network in step 3.2C);
3.4C) fixing the VGG-16 network of step 3.3C) and training the candidate region extraction network again;
3.5C) fixing the VGG-16 network of step 3.3C) and training the classification network again with the candidate boxes output by the candidate region extraction network in step 3.4C).
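(Illustration, not part of the claims.) The IoU/NMS sketch referenced in steps 3.2B) and 3.3B): IoU between boxes in (x1, y1, x2, y2) form, and greedy non-maximum suppression that keeps the highest-scoring box and drops overlapping duplicates; the 0.7 threshold is illustrative.

    import numpy as np

    def iou(box, boxes):
        """IoU of one (x1, y1, x2, y2) box against an (N, 4) array of boxes."""
        x1 = np.maximum(box[0], boxes[:, 0]); y1 = np.maximum(box[1], boxes[:, 1])
        x2 = np.minimum(box[2], boxes[:, 2]); y2 = np.minimum(box[3], boxes[:, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area = lambda b: (b[..., 2] - b[..., 0]) * (b[..., 3] - b[..., 1])
        return inter / (area(box) + area(boxes) - inter)

    def nms(boxes, scores, iou_thresh=0.7):
        """Greedy NMS: keep the best box, drop boxes overlapping it, repeat."""
        order = np.argsort(scores)[::-1]
        keep = []
        while order.size > 0:
            i = order[0]
            keep.append(i)
            rest = order[1:]
            order = rest[iou(boxes[i], boxes[rest]) <= iou_thresh]
        return keep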
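(Illustration, not part of the claims.) The smooth-L1 sketch referenced in steps 3.2B) and 3.5B): quadratic for small errors and linear for large ones, so outlier boxes do not dominate the gradient; this is equivalent to torch.nn.functional.smooth_l1_loss.

    import torch

    def smooth_l1(pred, target, beta=1.0):
        diff = (pred - target).abs()
        return torch.where(diff < beta,
                           0.5 * diff ** 2 / beta,    # quadratic region, |diff| < beta
                           diff - 0.5 * beta).mean()  # linear region, |diff| >= beta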
8. A power transmission line defect image recognition device based on cloud edge cooperative detection, the device being an unmanned aerial vehicle or an inspection terminal and comprising at least a microprocessor and a memory, characterized in that the microprocessor is programmed or configured to execute the steps of the power transmission line defect image identification method based on cloud edge cooperative detection according to any one of claims 1 to 7, or the memory stores a computer program programmed or configured to execute the power transmission line defect image identification method based on cloud edge cooperative detection according to any one of claims 1 to 7.
9. A power transmission line defect image recognition system based on cloud edge cooperative detection, comprising at least a microprocessor and a memory, characterized in that the microprocessor is programmed or configured to execute the steps of the power transmission line defect image identification method based on cloud edge cooperative detection according to any one of claims 1 to 7, or the memory stores a computer program programmed or configured to execute the power transmission line defect image identification method based on cloud edge cooperative detection according to any one of claims 1 to 7.
10. A computer-readable storage medium, characterized in that a computer program is stored in the computer-readable storage medium, the computer program being programmed or configured to execute the power transmission line defect image identification method based on cloud edge cooperative detection according to any one of claims 1 to 7.
CN202010691927.XA 2020-07-17 2020-07-17 Power transmission line defect image identification method based on cloud edge cooperative detection Active CN111784685B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010691927.XA CN111784685B (en) 2020-07-17 2020-07-17 Power transmission line defect image identification method based on cloud edge cooperative detection


Publications (2)

Publication Number Publication Date
CN111784685A true CN111784685A (en) 2020-10-16
CN111784685B CN111784685B (en) 2023-08-18

Family

ID=72764232

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010691927.XA Active CN111784685B (en) 2020-07-17 2020-07-17 Power transmission line defect image identification method based on cloud edge cooperative detection

Country Status (1)

Country Link
CN (1) CN111784685B (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102446228A (en) * 2010-09-30 2012-05-09 深圳市雅都软件股份有限公司 Three-dimensional space visualization display method and system for power transmission line
CN104811608A (en) * 2014-01-28 2015-07-29 聚晶半导体股份有限公司 Image capturing apparatus and image defect correction method thereof
CN108810620A (en) * 2018-07-18 2018-11-13 腾讯科技(深圳)有限公司 Method for identifying key time points in a video, computer device, and storage medium
CN109657596A (en) * 2018-12-12 2019-04-19 天津卡达克数据有限公司 Vehicle appearance component recognition method based on deep learning
WO2020134943A1 (en) * 2018-12-25 2020-07-02 阿里巴巴集团控股有限公司 Car insurance automatic payout method and system
CN110033453A (en) * 2019-04-18 2019-07-19 国网山西省电力公司电力科学研究院 Aerial image fault detection method for power transmission and transformation line insulators based on improved YOLOv3
CN111400536A (en) * 2020-03-11 2020-07-10 无锡太湖学院 Low-cost tomato leaf disease identification method based on lightweight deep neural network

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
CHEN C et al.: "Obtaining world coordinate information of UAV in GNSS denied environments", Sensors, vol. 20, no. 8, pages 1-24 *
FENG Xiaoyu et al.: "Air target detection based on improved Faster R-CNN", Acta Optica Sinica, vol. 38, no. 6, article 0615004 *
FENG Xiaoyu et al.: "Air target detection based on improved Faster R-CNN", Acta Optica Sinica, no. 6, pages 250-258 *
LIU Mengxi et al.: "Research on deep belief network for classification and recognition of weld defect images", Measurement & Control Technology, vol. 37, no. 8, pages 5-9 *
ZHANG Yuyan et al.: "Internal defect detection method of metal lattice structures based on CT images", Acta Metrologica Sinica, vol. 41, no. 5, pages 544-550 *

Cited By (41)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112367400A (en) * 2020-11-12 2021-02-12 广东电网有限责任公司 Intelligent inspection method and system for power internet of things with edge cloud coordination
CN113177570A (en) * 2020-11-20 2021-07-27 广东电网有限责任公司广州供电局 Method for detecting abnormity of electric tower bolt based on FasterRCNN cascade
CN112419401A (en) * 2020-11-23 2021-02-26 上海交通大学 Aircraft surface defect detection system based on cloud edge cooperation and deep learning
WO2022111219A1 (en) * 2020-11-30 2022-06-02 华南理工大学 Domain adaptation device operation and maintenance system and method
CN112734703A (en) * 2020-12-28 2021-04-30 佛山市南海区广工大数控装备协同创新研究院 PCB defect optimization method by utilizing AI cloud collaborative detection
CN112837282A (en) * 2021-01-27 2021-05-25 上海交通大学 Small sample image defect detection method based on cloud edge cooperation and deep learning
CN112966608A (en) * 2021-03-05 2021-06-15 哈尔滨工业大学 Target detection method, system and storage medium based on edge-side cooperation
CN113052820A (en) * 2021-03-25 2021-06-29 贵州电网有限责任公司 Circuit equipment defect identification method based on neural network technology
CN113326871A (en) * 2021-05-19 2021-08-31 天津理工大学 Cloud edge cooperative meniscus detection method and system
CN113515829A (en) * 2021-05-21 2021-10-19 华北电力大学(保定) Situation sensing method for transmission line hardware defects under extremely cold disasters
CN113515829B (en) * 2021-05-21 2023-07-21 华北电力大学(保定) Situation awareness method for transmission line hardware defects under extremely cold disasters
CN113408087A (en) * 2021-05-25 2021-09-17 国网湖北省电力有限公司检修公司 Substation inspection method based on cloud side system and video intelligent analysis
CN113408087B (en) * 2021-05-25 2023-03-24 国网湖北省电力有限公司检修公司 Substation inspection method based on cloud side system and video intelligent analysis
CN113255605A (en) * 2021-06-29 2021-08-13 深圳市城市交通规划设计研究中心股份有限公司 Pavement disease detection method and device, terminal equipment and storage medium
CN113486779A (en) * 2021-07-01 2021-10-08 国网北京市电力公司 Panoramic intelligent inspection system for power transmission line
CN113592839A (en) * 2021-08-06 2021-11-02 广东电网有限责任公司 Distribution network line typical defect diagnosis method and system based on improved fast RCNN
CN113592839B (en) * 2021-08-06 2023-01-13 广东电网有限责任公司 Distribution network line typical defect diagnosis method and system based on improved fast RCNN
CN114140447A (en) * 2021-12-06 2022-03-04 国网新疆电力有限公司信息通信公司 Cloud edge cooperation technology-based power equipment image identification method and system
CN114359285A (en) * 2022-03-18 2022-04-15 南方电网数字电网研究院有限公司 Power grid defect detection method and device based on visual context constraint learning
CN114359285B (en) * 2022-03-18 2022-07-29 南方电网数字电网研究院有限公司 Power grid defect detection method and device based on visual context constraint learning
CN114397306A (en) * 2022-03-25 2022-04-26 南方电网数字电网研究院有限公司 Power grid grading ring hypercomplex category defect multi-stage model joint detection method
CN114815881A (en) * 2022-04-06 2022-07-29 国网浙江省电力有限公司宁波供电公司 Intelligent inspection method based on edge calculation and unmanned aerial vehicle inspection cooperation
CN114815881B (en) * 2022-04-06 2024-08-02 国网浙江省电力有限公司宁波供电公司 Intelligent inspection method based on cooperation of edge calculation and unmanned aerial vehicle inspection
CN114943904A (en) * 2022-06-07 2022-08-26 国网江苏省电力有限公司泰州供电分公司 Operation monitoring method based on unmanned aerial vehicle inspection
CN114972721A (en) * 2022-06-13 2022-08-30 中国科学院沈阳自动化研究所 Power transmission line insulator string recognition and positioning method based on deep learning
CN115131307A (en) * 2022-06-23 2022-09-30 腾讯科技(深圳)有限公司 Article defect detection method and related device
CN114926667B (en) * 2022-07-20 2022-11-08 安徽炬视科技有限公司 Image identification method based on cloud edge cooperation
CN114926667A (en) * 2022-07-20 2022-08-19 安徽炬视科技有限公司 Image identification method based on cloud edge-end cooperation
CN115220479B (en) * 2022-09-20 2022-12-13 山东大学 Dynamic and static cooperative power transmission line refined inspection method and system
CN115220479A (en) * 2022-09-20 2022-10-21 山东大学 Dynamic and static cooperative power transmission line refined inspection method and system
CN115272981A (en) * 2022-09-26 2022-11-01 山东大学 Cloud-edge co-learning power transmission inspection method and system
US11836968B1 (en) * 2022-12-08 2023-12-05 Sas Institute, Inc. Systems and methods for configuring and using a multi-stage object classification and condition pipeline
US12002256B1 (en) 2022-12-08 2024-06-04 Sas Institute Inc. Systems and methods for configuring and using a multi-stage object classification and condition pipeline
WO2024123723A1 (en) * 2022-12-08 2024-06-13 Sas Institute Inc. Systems and methods for configuring and using a multi-stage object classification and condition pipeline
GB2627894A (en) * 2022-12-08 2024-09-04 Sas Inst Inc Systems and methods for configuring and using a multi-stage object classification and condition pipeline
CN115965627B (en) * 2023-03-16 2023-06-09 中铁电气化局集团有限公司 Micro component detection system and method applied to railway operation
CN115965627A (en) * 2023-03-16 2023-04-14 中铁电气化局集团有限公司 Micro component detection system and method applied to railway operation
CN116579609A (en) * 2023-05-15 2023-08-11 三峡科技有限责任公司 Illegal operation analysis method based on inspection process
CN116579609B (en) * 2023-05-15 2023-11-14 三峡科技有限责任公司 Illegal operation analysis method based on inspection process
CN116703926A (en) * 2023-08-08 2023-09-05 苏州思谋智能科技有限公司 Defect detection method, defect detection device, computer equipment and computer readable storage medium
CN116703926B (en) * 2023-08-08 2023-11-03 苏州思谋智能科技有限公司 Defect detection method, defect detection device, computer equipment and computer readable storage medium

Also Published As

Publication number Publication date
CN111784685B (en) 2023-08-18

Similar Documents

Publication Publication Date Title
CN111784685B (en) Power transmission line defect image identification method based on cloud edge cooperative detection
CN110059694B (en) Intelligent identification method for character data in complex scene of power industry
CN107742093B (en) Real-time detection method, server and system for infrared image power equipment components
CN114743119B (en) High-speed rail contact net hanger nut defect detection method based on unmanned aerial vehicle
CN108509954A (en) A kind of more car plate dynamic identifying methods of real-time traffic scene
CN107273832B (en) License plate recognition method and system based on integral channel characteristics and convolutional neural network
CN111179249A (en) Power equipment detection method and device based on deep convolutional neural network
CN111145174A (en) 3D target detection method for point cloud screening based on image semantic features
CN107609485A (en) The recognition methods of traffic sign, storage medium, processing equipment
CN112581443A (en) Light-weight identification method for surface damage of wind driven generator blade
CN112560675B (en) Bird visual target detection method combining YOLO and rotation-fusion strategy
CN110929795B (en) Method for quickly identifying and positioning welding spot of high-speed wire welding machine
CN115205264A (en) High-resolution remote sensing ship detection method based on improved YOLOv4
CN110599453A (en) Panel defect detection method and device based on image fusion and equipment terminal
CN110059539A (en) A kind of natural scene text position detection method based on image segmentation
CN109934170B (en) Mine resource statistical method based on computer vision
CN114781514A (en) Floater target detection method and system integrating attention mechanism
CN110992307A (en) Insulator positioning and identifying method and device based on YOLO
CN113033516A (en) Object identification statistical method and device, electronic equipment and storage medium
CN115619719A (en) Pine wood nematode infected wood detection method based on improved Yolo v3 network model
CN112528058B (en) Fine-grained image classification method based on image attribute active learning
CN111540203B (en) Method for adjusting green light passing time based on fast-RCNN
CN109657540A (en) Withered tree localization method and system
CN113076889B (en) Container lead seal identification method, device, electronic equipment and storage medium
CN110163081A (en) SSD-based real-time regional intrusion detection method, system and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant