
CN118691847B - Transformer substation defect detection method, system and storage medium based on positive sample image - Google Patents


Info

Publication number
CN118691847B
CN118691847B CN202411169130.8A
Authority
CN
China
Prior art keywords
image
positive sample
points
detected
difference
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202411169130.8A
Other languages
Chinese (zh)
Other versions
CN118691847A (en)
Inventor
李倩
曹思远
周彦朝
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Changsha Nengchuan Information Technology Co ltd
Original Assignee
Changsha Nengchuan Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Changsha Nengchuan Information Technology Co ltd filed Critical Changsha Nengchuan Information Technology Co ltd
Priority to CN202411169130.8A priority Critical patent/CN118691847B/en
Publication of CN118691847A publication Critical patent/CN118691847A/en
Application granted granted Critical
Publication of CN118691847B publication Critical patent/CN118691847B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Landscapes

  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of digital twinning of substations, and in particular to a method and a system for detecting defects of a substation based on a positive sample image. The method comprises the following steps: S1, extracting key feature point information through the SIFT algorithm; S2, matching key feature points with the knnMatch feature matching algorithm; S3, calculating difference points; S4, acquiring candidate difference areas; and S5, calculating the degree of difference with a hash perception algorithm. Compared with traditional image processing algorithms, an improved clustering machine learning algorithm is added to process intermediate data, and the final judgment refers to image information coding, so that environmental interference is reduced, the probability of misjudgment is generally lowered, and accuracy is improved.

Description

Transformer substation defect detection method, system and storage medium based on positive sample image
Technical Field
The invention relates to the technical field of digital twinning of substations, in particular to a positive sample image-based substation defect detection method, a positive sample image-based substation defect detection system and a storage medium.
Background
The digital twin transformer substation is a technology for simulating and copying each component part and operation state of an actual transformer substation by utilizing an advanced digital technology, and the information of physical equipment, electrical parameters, operation state and the like of the actual transformer substation is digitalized by using a mathematical modeling and simulation technology to form a virtual model completely consistent with the actual transformer substation.
The centralized monitoring function based on digital twinning can realize virtual mapping and on-line monitoring of equipment states and environment states, and defect images can be detected quickly and automatically by an artificial intelligence image recognition algorithm, thereby replacing the original manual reporting mode with machine inspection.
Currently, the core problems of positive-sample-based defect detection in a digital twin transformer substation are the judgment of image similarity and the localization of difference regions.
Methods for calculating image similarity generally fall into two categories: conventional image algorithms and deep-learning-based approaches. The histogram method based on pixel distribution, the cosine similarity method based on feature vectors, the global hash method based on image coding distance, and the SIFT and ORB algorithms based on feature point matching belong to the conventional category, while the twin (Siamese) neural network belongs to the deep learning category.
Similarly, for locating differences, the difference region between the sample image and the image to be detected can be found and aligned through feature point matching or image coding. For defect detection in specific scenes within the transformer substation, a deep-learning object detection approach can also be adopted.
However, conventional methods are limited in efficiency and accuracy, generalize less well than deep learning methods, and extract some complex image features poorly. Deep learning methods, although accurate and generalizable, require a large amount of scene-specific image data for training; substation defect samples are difficult to collect, and a deep learning model trained on a small sample size has weak recognition ability.
Disclosure of Invention
Accordingly, the present invention is directed to a method and a system for detecting defects of a transformer substation based on a positive sample image, so as to solve at least one of the above-mentioned problems.
In order to achieve the above purpose, a transformer substation defect detection method based on a positive sample image comprises the following steps:
Step S1, extracting key feature point information: after converting a positive sample template image and an image to be detected into gray level images, acquiring key feature points and calculating descriptors of the key feature points from the positive sample template image and the image to be detected respectively by using a SIFT algorithm; each descriptor comprises 128-dimensional feature vector information formed from the gradient amplitudes and directions of the pixels in a 16 × 16 window centered on the key feature point, together with the position, direction and scale of the key feature point;
Step S2, matching key feature points: adopting a knnMatch feature matching algorithm to carry out one-to-many matching on descriptors of all key feature points of the positive sample template image and descriptors of all key feature points of the image to be detected, obtaining matching point pairs, removing outlier matching point pairs through a RANSAC algorithm, and calculating a homography matrix of the image to be detected on the positive sample template image; the homography matrix comprises the corresponding relation of coordinates of points between the image to be detected and the positive sample template image;
step S3, calculating a difference point:
S31, selecting S first detection points on the positive sample template image, affining the S first detection points to corresponding positions on the image to be detected through the homography matrix, and obtaining S second detection points corresponding to the S first detection points;
s32, calculating descriptors of the S first detection points in the positive sample template image and descriptors of the S second detection points in the image to be detected respectively through the SIFT algorithm;
S33, calculating Euclidean distance between the feature vectors of each pair of the first detection point and the second detection point, and defining the pair of the first detection point and the second detection point as difference points if the Euclidean distance is larger than a preset difference threshold;
Step S4, obtaining candidate difference areas: acquiring candidate difference areas on the image to be detected and the positive sample template image according to the coordinates of the difference points;
step S5, calculating the difference degree: and respectively calculating the difference degree of each pair of candidate difference regions on the image to be detected and the positive sample template image by using a hash perception algorithm, and judging the difference region with the difference degree larger than a set threshold value as the defect of the image to be detected.
Further, the step S1 specifically includes:
S11, converting the positive sample template image and the image to be detected into a gray level image;
S12, constructing a multi-scale space: constructing a Gaussian pyramid for the converted gray level image, performing Gaussian smoothing on the original image to remove high-frequency noise, downsampling the smoothed image, and repeating the filtering and downsampling on the downsampled image to obtain a plurality of groups of images, wherein each group of images comprises a plurality of layers of images; the scale space of a two-dimensional image is defined as: L(x, y, σ) = G(x, y, σ) * I(x, y); the differential scale space is defined as: D(x, y, σ) = L(x, y, kσ) - L(x, y, σ); where G(x, y, σ) is the Gaussian kernel, I(x, y) is the image, * denotes convolution, σ is the standard deviation of the Gaussian normal distribution, x is the horizontal axis coordinate, and y is the vertical axis coordinate;
s13, detecting local extreme points with direction information in the multiple groups of images through different-scale DoG space detection to serve as key feature points;
S14, acquiring the descriptors of the key feature points.
Further, the step S13 includes: comparing each pixel point in the multiple groups of images with the scale space corresponding to the pixel point and all adjacent points in the adjacent scale space, and taking the pixel point as an extreme point when the pixel value of the pixel point is larger or smaller than all the adjacent points; and taking extreme points existing under different scales of the plurality of groups of images as the key characteristic points.
Further, the step S14 includes:
acquiring scale information and position information of the key feature points in the images with different scales;
determining the direction information of the key feature point through the gradient distribution characteristics of the neighborhood pixels of the key feature point;
And taking the gradient amplitudes and gradient directions of the pixels in a 16 × 16 window centered on the key feature point, dividing the pixels in the window into 16 block units, computing an 8-direction histogram for each block unit, and concatenating the histograms to form the 128-dimensional feature vector information of the key feature point.
Further, the step S2 includes:
adopting a knnMatch feature matching algorithm to carry out one-to-many matching on descriptors of all key feature points of the positive sample template image and descriptors of all key feature points of the image to be detected, and taking k=2 in the knnMatch feature matching algorithm to obtain 2 descriptors which are nearest to each other and next nearest to each other in feature space between the image to be detected and the positive sample template image;
When the ratio of the similarity distance of the feature space between the nearest neighbor descriptor and the next-nearest neighbor descriptor is between 0.4 and 0.6, determining the key feature points corresponding to the nearest neighbor descriptor and the next-nearest neighbor descriptor as matching point pairs;
And removing outlier matching point pairs through a RANSAC algorithm, and calculating a homography matrix of the image to be detected on the positive sample template image.
Further, in the step S31, detection points are selected on the positive sample template image at a fixed interval i, so that the straight-line distance between each detection point and its adjacent detection points in the up-down and left-right directions is i; the number of the first detection points is s = (w/i + 1) × (h/i + 1), where w and h are the length and width of the positive sample template image respectively.
Further, the step S4 includes:
S41, taking all the difference point data as a data set Q, where n is the number of difference points, and recording the Euclidean distances between each point and all points in the data set Q as an n × n distance matrix; sorting the elements of each row of the matrix in ascending order, so that the distance vector D1 formed by the elements of the 1st column represents the distance from each object to itself, which is 0, and the elements of the K-th column form the vector Dk of the K-nearest-neighbor distances of all points; averaging the elements in the vector Dk to obtain its K-average nearest-neighbor distance D, which is taken as a candidate Eps parameter; calculating all the K-average nearest-neighbor distances D gives the Eps parameter list;
S42, for the Eps parameter list, sequentially determining the number of Eps-neighborhood objects corresponding to each candidate Eps parameter, and calculating the mathematical expectation of the numbers of Eps-neighborhood objects of all objects as the neighborhood density value, i.e. the MinPts parameter, of the data set Q;
S43, sequentially selecting the elements of the different vectors Dk as the Eps parameter together with the corresponding MinPts parameter, inputting them into the DBSCAN algorithm to perform cluster analysis on the data set Q, and obtaining the number of clusters generated under different K values; when the number of generated clusters is the same for three consecutive times, the clustering result is considered to have stabilized, and that cluster number N is recorded as the optimal number;
S44, continuing to execute step S43 until the number of generated clusters is no longer N, and selecting the maximum K value that still yields N clusters as the optimal K value; the K-average nearest-neighbor distance D corresponding to the optimal K value is the optimal Eps parameter, and the corresponding MinPts parameter is the optimal MinPts parameter;
S45, applying the optimal Eps parameter and the optimal MinPts parameter selected for the image to be detected to obtain its clustering result, and taking the circumscribed rectangle of each clustering area of the image to be detected as a candidate difference area;
S46, calculating average offset of the coordinate offsets between all the matching point pairs, and finding out a region corresponding to the candidate difference region of the image to be detected from the positive sample template image according to the average offset to serve as the candidate difference region of the positive sample template image.
Further, the step S5 includes:
S51, carrying out hash perception processing on the image to be detected and the candidate difference area on the positive sample template image to generate a corresponding hash code;
s52, calculating the Hamming distance between the two hash codes;
S531, if the Hamming distance of all the candidate difference areas is less than 5, judging that the whole graph is defect-free;
S532, if there are candidate difference areas with a Hamming distance greater than or equal to 5, selecting the candidate difference area with the largest Hamming distance, together with the candidate difference areas whose Hamming distances differ from the largest Hamming distance by no more than 2, as the defect areas of the image to be detected.
The invention also provides a transformer substation defect detection system based on the positive sample image, which comprises a memory, a processor and a computer program stored in the memory and capable of running on the processor, wherein the steps of the transformer substation defect detection method based on the positive sample image are realized when the processor executes the computer program.
The invention also provides a computer readable storage medium storing a computer program which when executed by a processor implements the steps of the positive sample image based substation defect detection method as described in any one of the above.
In order to reduce the high data cost that deep learning techniques impose on substation defect detection, the transformer substation defect detection method based on the positive sample image adopts, as a whole, an optimization of the traditional image processing approach. Matching point pairs between the positive sample image and the image to be detected are screened out by extracting key points with SIFT features, the difference points are preliminarily divided into several candidate difference areas, all candidate difference areas are then encoded with a hash perception algorithm, and the final judgment of area differences and the locking of abnormal positions are determined by threshold setting. This avoids the high data requirement of deep learning methods and improves the accuracy of traditional image algorithms. Compared with deep learning methods, the approach reduces data cost and avoids the time-consuming and labor-intensive collection and labeling of substation defect samples; compared with traditional image processing algorithms, an improved clustering machine learning algorithm is added to process intermediate data, and image information coding is introduced into the final judgment to reduce environmental interference, which generally lowers the probability of misjudgment and improves accuracy.
Drawings
Other features, objects and advantages of the present invention will become more apparent upon reading the following detailed description of non-limiting embodiments, made with reference to the accompanying drawings.
Fig. 1 is a schematic flow chart of a transformer substation defect detection method based on a positive sample image.
Fig. 2 is a sub-flowchart of step S3 in fig. 1.
Fig. 3 is a sub-flowchart of step S4 in fig. 1.
Detailed Description
The following is a clear and complete description of the technical method of the present patent in conjunction with the accompanying drawings, and it is evident that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be made by those skilled in the art based on the embodiments of the present invention without making any inventive effort, are intended to fall within the scope of the present invention.
Furthermore, the drawings are merely schematic illustrations of the present invention and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and repeated description thereof is omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in software, in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
It will be understood that, although the terms "first," "second," etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another element. For example, a first element could be termed a second element, and, similarly, a second element could be termed a first element, without departing from the scope of example embodiments. The term "and/or" as used herein includes any and all combinations of one or more of the associated listed items.
In order to achieve the above objective, referring to fig. 1 to 3, the present invention provides a method for detecting a defect of a transformer substation based on a positive sample image, comprising the following steps:
Step S1, extracting key feature point information: after converting a positive sample template image and an image to be detected into gray level images, acquiring key feature points and calculating descriptors of the key feature points from the positive sample template image and the image to be detected respectively by using a SIFT algorithm; each descriptor comprises 128-dimensional feature vector information formed from the gradient amplitudes and directions of the pixels in a 16 × 16 window centered on the key feature point, together with the position, direction and scale of the key feature point;
Step S2, matching key feature points: adopting a knnMatch feature matching algorithm to carry out one-to-many matching on descriptors of all key feature points of the positive sample template image and descriptors of all key feature points of the image to be detected, obtaining matching point pairs, removing outlier matching point pairs through a RANSAC algorithm, and calculating a homography matrix of the image to be detected on the positive sample template image; the homography matrix comprises the corresponding relation of coordinates of points between the image to be detected and the positive sample template image;
step S3, calculating a difference point:
S31, selecting S first detection points on the positive sample template image, affining the S first detection points to corresponding positions on the image to be detected through the homography matrix, and obtaining S second detection points corresponding to the S first detection points;
s32, calculating descriptors of the S first detection points in the positive sample template image and descriptors of the S second detection points in the image to be detected respectively through the SIFT algorithm;
S33, calculating Euclidean distance between the feature vectors of each pair of the first detection point and the second detection point, and defining the pair of the first detection point and the second detection point as difference points if the Euclidean distance is larger than a preset difference threshold;
Step S4, obtaining candidate difference areas: acquiring candidate difference areas on the image to be detected and the positive sample template image according to the coordinates of the difference points;
step S5, calculating the difference degree: and respectively calculating the difference degree of each pair of candidate difference regions on the image to be detected and the positive sample template image by using a hash perception algorithm, and judging the difference region with the difference degree larger than a set threshold value as the defect of the image to be detected.
Specifically, in step S1, after the positive sample template image and the image to be detected are acquired, the feature information of the positive sample template image is sufficiently acquired, so as to perform vector correlation calculation with the corresponding feature information in the image to be detected, thereby matching key feature points with similar content. After converting a positive sample template image and an image to be detected into a gray level image, acquiring key feature points and descriptors for calculating the key feature points from the positive sample template image and the image to be detected respectively by using a SIFT algorithm.
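By way of non-limiting illustration only, step S1 may be sketched in Python with OpenCV as follows; the library calls and parameter choices are illustrative assumptions of one possible implementation and do not form part of the claimed method.

```python
import cv2

# Illustrative sketch of step S1 (assumes OpenCV >= 4.4, where SIFT is available).
def extract_keypoints(template_bgr, detect_bgr):
    sift = cv2.SIFT_create()
    gray_t = cv2.cvtColor(template_bgr, cv2.COLOR_BGR2GRAY)   # positive sample template image
    gray_d = cv2.cvtColor(detect_bgr, cv2.COLOR_BGR2GRAY)     # image to be detected
    kp_t, des_t = sift.detectAndCompute(gray_t, None)         # key feature points + 128-dim descriptors
    kp_d, des_d = sift.detectAndCompute(gray_d, None)
    return (gray_t, kp_t, des_t), (gray_d, kp_d, des_d)
```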
Further, in a preferred embodiment, the step S1 specifically includes:
s11, converting the positive sample template image and the image to be detected into gray level images.
S12, constructing a multi-scale space: constructing a Gaussian pyramid for the converted gray level image, performing Gaussian smoothing on the original image to remove high-frequency noise, downsampling the smoothed image, and repeating the filtering and downsampling on the downsampled image to obtain a plurality of groups of images, wherein each group of images comprises a plurality of layers of images; the scale space of a two-dimensional image is defined as: L(x, y, σ) = G(x, y, σ) * I(x, y); the differential scale space is defined as: D(x, y, σ) = L(x, y, kσ) - L(x, y, σ), where G(x, y, σ) is the Gaussian kernel, I(x, y) is the image, and * denotes convolution.
In particular, those skilled in the art are familiar with how the SIFT algorithm obtains key feature points from an image and calculates their descriptors, so only a brief explanation is given here. First, a Gaussian pyramid is constructed for the picture: Gaussian smoothing is performed on the original image to remove high-frequency noise, and the smoothed image is then downsampled, which reduces the picture size. Repeating the filtering and downsampling, one image generates several groups of images, and each group contains several layers. The scales of the layers within one group of the Gaussian pyramid differ, that is, the Gaussian parameter σ used differs, where σ is the standard deviation of the Gaussian normal distribution; each group of the Gaussian pyramid is a Gaussian scale space.
The bottom-layer image of each group is obtained by downsampling, with a step of 2, the image of scale 2σ in the previous group. After the Gaussian pyramid has been constructed, adjacent Gaussian-space images are subtracted to obtain the DoG (Difference of Gaussians) pyramid.
The scale space of a two-dimensional image is defined as: L(x, y, σ) = G(x, y, σ) * I(x, y), where G(x, y, σ) is the Gaussian kernel, I(x, y) is the input image and * denotes convolution.
The differential (DoG) scale space is defined as: D(x, y, σ) = L(x, y, kσ) - L(x, y, σ).
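As a non-limiting sketch of the two definitions above (the number of octaves, the number of layers and the base scale σ0 = 1.6 are illustrative assumptions, not values prescribed by the method):

```python
import cv2
import numpy as np

def dog_pyramid(gray, octaves=4, layers=5, sigma0=1.6):
    # L(x, y, σ) = G(x, y, σ) * I(x, y) approximated per octave by Gaussian blurring,
    # D(x, y, σ) = L(x, y, kσ) - L(x, y, σ) by subtracting adjacent blurred layers.
    k = 2.0 ** (1.0 / (layers - 2))
    img = gray.astype(np.float32)
    pyramid = []
    for _ in range(octaves):
        gauss = [cv2.GaussianBlur(img, (0, 0), sigma0 * k ** i) for i in range(layers)]
        pyramid.append([gauss[i + 1] - gauss[i] for i in range(layers - 1)])
        # next octave: downsample the layer whose scale is 2 * sigma0 by a step of 2
        img = cv2.resize(gauss[layers - 2], (img.shape[1] // 2, img.shape[0] // 2),
                         interpolation=cv2.INTER_NEAREST)
    return pyramid
```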
S13, detecting local extreme points with direction information in the multiple groups of images through different-scale DoG space detection to serve as key feature points.
Further, the step S13 includes:
comparing each pixel point in the multiple groups of images with the corresponding scale space of the pixel point and all adjacent points of the adjacent scale space, and when the pixel value of the pixel point is larger or smaller than that of all the adjacent points, the pixel point is taken as an extreme point, and usually the extreme points are very prominent points and cannot disappear due to the change of illumination conditions, such as corner points, edge points, bright points of dark areas and dark points of bright areas;
and taking extreme points existing under different scales of the plurality of groups of images as the key characteristic points.
S14, acquiring the descriptors of the key feature points.
Further, the step S14 includes:
s141, acquiring scale information and position information of the key feature points in the images with different scales, specifically, finding the key feature points existing under different scales, and obtaining the scale image with the feature points.
S142, determining the direction information of the key feature point through the gradient distribution characteristics of its neighborhood pixels.
Specifically, in order to achieve image rotation invariance, assignment of a direction of key feature points is required. The direction parameters of the key feature points are usually determined by using the gradient distribution characteristics of the neighborhood pixels of the key feature points, and then the stable direction of the local structure of the key feature points is obtained by using the gradient histogram of the image.
The gradient amplitude and direction of each pixel are calculated in the neighborhood centered on the key feature point with a radius of 3 × 1.5σ, and the gradient amplitudes are then accumulated in a histogram. The horizontal axis of the histogram is the gradient direction, the vertical axis is the accumulated gradient amplitude for that direction, and the direction corresponding to the highest peak of the histogram is taken as the direction of the key feature point.
The formulas for rotating the coordinates and the gradient angle so that the X-axis is aligned with the main direction Oris are:
xrot = x·cos(Oris) - y·sin(Oris);
yrot = x·sin(Oris) + y·cos(Oris);
thetarot = theta - Oris.
S143, taking the gradient amplitudes and gradient directions of the pixels in the 16 × 16 window centered on the key feature point, dividing the pixels in the window into 16 block units, computing an 8-direction histogram for each block unit, and concatenating the histograms to form the 128-dimensional feature vector information of the key feature point.
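The 16-block, 8-direction layout of S143 can be pictured with the following simplified NumPy sketch; it omits the Gaussian weighting and trilinear interpolation of the full SIFT descriptor and assumes the 16 × 16 patch has already been rotated to the main direction:

```python
import numpy as np

def descriptor_128(patch16):
    # patch16: 16x16 grayscale patch centered on the key feature point
    gy, gx = np.gradient(patch16.astype(np.float64))
    mag = np.hypot(gx, gy)                                    # gradient amplitude
    ang = np.mod(np.arctan2(gy, gx), 2 * np.pi)               # gradient direction in [0, 2π)
    cells = []
    for by in range(4):                                       # 4 x 4 grid of 4x4-pixel block units
        for bx in range(4):
            m = mag[4 * by:4 * by + 4, 4 * bx:4 * bx + 4].ravel()
            a = ang[4 * by:4 * by + 4, 4 * bx:4 * bx + 4].ravel()
            hist, _ = np.histogram(a, bins=8, range=(0, 2 * np.pi), weights=m)
            cells.append(hist)                                # 8-direction histogram per block unit
    desc = np.concatenate(cells)                              # 16 x 8 = 128 dimensions
    return desc / (np.linalg.norm(desc) + 1e-12)              # normalize against illumination change
```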
Further, the step S2 includes:
adopting a knnMatch feature matching algorithm to carry out one-to-many matching on descriptors of all key feature points of the positive sample template image and descriptors of all key feature points of the image to be detected, and taking k=2 in the knnMatch feature matching algorithm to obtain 2 descriptors which are nearest to each other and next nearest to each other in feature space between the image to be detected and the positive sample template image;
When the ratio of the similarity distance of the feature space between the nearest neighbor descriptor and the next-nearest neighbor descriptor is between 0.4 and 0.6, determining the key feature points corresponding to the nearest neighbor descriptor and the next-nearest neighbor descriptor as matching point pairs;
And removing outlier matching point pairs from all proper matching point pairs through a RANSAC algorithm, and calculating a homography matrix of the image to be detected on the positive sample template image.
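By way of non-limiting illustration, step S2 may be sketched as follows, assuming the descriptors produced in step S1; the RANSAC reprojection threshold of 5.0 pixels is an illustrative assumption, while the 0.4-0.6 ratio interval follows the description above.

```python
import cv2
import numpy as np

def match_and_homography(kp_t, des_t, kp_d, des_d):
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    pairs = matcher.knnMatch(des_t, des_d, k=2)               # one-to-many matching, k = 2
    good = [p[0] for p in pairs
            if len(p) == 2 and 0.4 < p[0].distance / p[1].distance < 0.6]
    src = np.float32([kp_t[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_d[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    # RANSAC removes outlier matching point pairs; H maps template coordinates onto the image to be detected
    H, inlier_mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    return H, good, inlier_mask
```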
Further, in the step S31, detection points are selected on the positive sample template image at a fixed interval i, so that the straight-line distance between each detection point and its adjacent detection points in the up-down and left-right directions is i; the number of the first detection points is s = (w/i + 1) × (h/i + 1), where w and h are the length and width of the positive sample template image respectively.
In the step S33, the preset difference threshold may be a fixed value; alternatively, a pair of first and second detection points may be defined as difference points when the Euclidean distance between their feature vectors exceeds 50%-80% (e.g., 70%) of the maximum Euclidean distance among all pairs of first and second detection points.
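A minimal sketch of steps S31-S33 follows, assuming the homography H from step S2; the grid interval, the keypoint size used when recomputing descriptors and the 70% threshold are illustrative assumptions (in practice the number of descriptors returned by compute() should be checked against the number of supplied points).

```python
import cv2
import numpy as np

def difference_points(gray_t, gray_d, H, interval=40, kp_size=16, ratio=0.7):
    sift = cv2.SIFT_create()
    h, w = gray_t.shape[:2]
    # S31: first detection points on a regular grid of the template (with a border margin)
    xs = np.arange(kp_size, w - kp_size, interval, dtype=np.float32)
    ys = np.arange(kp_size, h - kp_size, interval, dtype=np.float32)
    pts_t = np.array([[x, y] for y in ys for x in xs], dtype=np.float32)
    pts_d = cv2.perspectiveTransform(pts_t.reshape(-1, 1, 2), H).reshape(-1, 2)
    kp_t = [cv2.KeyPoint(float(x), float(y), kp_size) for x, y in pts_t]
    kp_d = [cv2.KeyPoint(float(x), float(y), kp_size) for x, y in pts_d]
    # S32: descriptors of the first and second detection points
    _, des_t = sift.compute(gray_t, kp_t)
    _, des_d = sift.compute(gray_d, kp_d)
    # S33: Euclidean distance per pair, thresholded at e.g. 70% of the maximum distance
    dist = np.linalg.norm(des_t.astype(np.float32) - des_d.astype(np.float32), axis=1)
    return pts_d[dist > ratio * dist.max()]                   # difference points on the image to be detected
```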
Alternatively, in the step S4, the candidate difference area may be obtained by directly taking a region of preset size around each found difference point.
Further, in a preferred embodiment, the step S4 includes:
S41, taking all the difference point data as a data set Q, where n is the number of difference points, and recording the Euclidean distances between each point and all points in the data set Q as an n × n distance matrix; sorting the elements of each row of the matrix in ascending order, so that the distance vector D1 formed by the elements of the 1st column represents the distance from each object to itself, which is 0, and the elements of the K-th column form the vector Dk of the K-nearest-neighbor distances of all points; averaging the elements in the vector Dk to obtain its K-average nearest-neighbor distance D, which is taken as a candidate Eps parameter; calculating all the K-average nearest-neighbor distances D gives the Eps parameter list;
S42, for the Eps parameter list, sequentially determining the number of Eps-neighborhood objects corresponding to each candidate Eps parameter, and calculating the mathematical expectation of the numbers of Eps-neighborhood objects of all objects as the neighborhood density value, i.e. the MinPts parameter, of the data set Q;
S43, sequentially selecting the elements of the different vectors Dk (namely the candidate values in the generated Eps parameter list) as the Eps parameter together with the corresponding MinPts parameter, inputting them into the DBSCAN algorithm to perform cluster analysis on the data set Q, and obtaining the number of clusters generated under different K values; when the number of generated clusters is the same for three consecutive times, the clustering result is considered to have stabilized, and that cluster number N is recorded as the optimal number;
S44, continuing to execute step S43 until the number of generated clusters is no longer N, and selecting the maximum K value that still yields N clusters as the optimal K value; the K-average nearest-neighbor distance D corresponding to the optimal K value is the optimal Eps parameter, and the corresponding MinPts parameter is the optimal MinPts parameter;
S45, applying the optimal Eps parameter and the optimal MinPts parameter selected for the image to be detected to obtain its clustering result, and taking the circumscribed rectangle of each clustering area of the image to be detected as a candidate difference area;
S46, calculating average offset of the coordinate offsets between all the matching point pairs, and finding out a region corresponding to the candidate difference region of the image to be detected from the positive sample template image according to the average offset to serve as the candidate difference region of the positive sample template image.
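By way of non-limiting illustration, the adaptive parameter selection of S41-S44 may be sketched as follows, assuming scikit-learn's DBSCAN; the stopping rule of three identical consecutive cluster counts follows the description, and everything else (helper name, fallback behavior) is an illustrative assumption.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def adaptive_dbscan_params(points):
    # points: n x 2 array of difference point coordinates (data set Q)
    n = len(points)
    dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)   # n x n distance matrix
    dist_sorted = np.sort(dist, axis=1)               # each row ascending; column 0 is the self-distance 0
    history, stable_n = [], None
    for k in range(1, n):
        eps = float(dist_sorted[:, k].mean())         # S41: K-average nearest-neighbor distance as candidate Eps
        minpts = int(round((dist <= eps).sum(axis=1).mean()))   # S42: expected Eps-neighborhood size as MinPts
        labels = DBSCAN(eps=eps, min_samples=max(minpts, 1)).fit_predict(points)
        clusters = len(set(labels)) - (1 if -1 in labels else 0)
        history.append((k, eps, minpts, clusters))
        if stable_n is None and len(history) >= 3 and len({h[3] for h in history[-3:]}) == 1:
            stable_n = clusters                       # S43: cluster count identical three consecutive times
        elif stable_n is not None and clusters != stable_n:
            # S44: the largest K that still produced stable_n clusters gives the optimal Eps / MinPts
            _, eps_opt, minpts_opt, _ = next(h for h in reversed(history) if h[3] == stable_n)
            return eps_opt, minpts_opt, stable_n
    last = history[-1]
    return last[1], last[2], last[3]                  # fallback if the count never destabilizes
```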
Further, the step S5 includes:
S51, carrying out hash perception processing on the image to be detected and the candidate difference area on the positive sample template image to generate a corresponding hash code.
Specifically, the image (candidate difference region) is first reduced to a fixed-size pixel image; this removes image detail, keeps only basic information such as structure and brightness, and discards differences caused by size and proportion. The reduced image is converted into a gray image and a DCT transform is applied; the 64 low-frequency coefficients (an 8 × 8 block) are kept and their DCT average value is computed. Each coefficient is compared with this average: coefficients greater than or equal to the DCT average are marked 1, those smaller are marked 0, the comparison results are combined into a 64-bit integer, and every four bits form one hexadecimal digit of the hash code.
The calculation formula of the DCT transformation is as follows:
F(u, v) = c(u)·c(v)·Σ_{i=0}^{N-1} Σ_{j=0}^{N-1} f(i, j)·cos[(i + 0.5)π·u/N]·cos[(j + 0.5)π·v/N], where c(u) = √(1/N) for u = 0 and c(u) = √(2/N) otherwise;
f(i, j) is the original signal, F(u, v) is the coefficient after DCT transformation, N is the number of points of the original signal, and c(u), c(v) are the compensation coefficients.
S52, calculating the Hamming distance between the two hash codes.
Specifically, the Hamming distance is the number of bit positions at which the two hash codes differ, and it measures the degree of difference between the two images: in general, the larger the Hamming distance, the larger the difference. Given the Hamming distance d between the hash values of two images, the similarity of the two pictures is (64 - d)/64; two pictures are generally considered similar when the Hamming distance is less than 5.
S531, if the Hamming distance of all the candidate difference areas is less than 5, judging that the whole graph is defect-free;
S532, if there are candidate difference areas with a Hamming distance greater than or equal to 5, selecting the candidate difference area with the largest Hamming distance, together with the candidate difference areas whose Hamming distances differ from the largest Hamming distance by no more than 2, as the defect areas of the image to be detected.
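Step S5 may be sketched like this, assuming a DCT-based perceptual hash on 32 × 32 gray patches; the thresholds 5 and 2 follow S531/S532, while the resize size and helper names are illustrative assumptions.

```python
import cv2
import numpy as np

def perceptual_hash(gray_region, hash_size=8):
    # S51: shrink, DCT, keep the 8x8 low-frequency block, threshold at its mean -> 64-bit code
    small = cv2.resize(gray_region, (32, 32), interpolation=cv2.INTER_AREA).astype(np.float32)
    low = cv2.dct(small)[:hash_size, :hash_size]
    return (low > low.mean()).flatten()

def judge_defects(template_regions, detect_regions):
    # S52: Hamming distance = number of differing bit positions between the two hash codes
    dists = [int(np.count_nonzero(perceptual_hash(a) != perceptual_hash(b)))
             for a, b in zip(template_regions, detect_regions)]
    if all(d < 5 for d in dists):
        return []                                     # S531: whole image judged defect-free
    d_max = max(dists)
    # S532: keep the largest-distance region and those within 2 of the maximum (with distance >= 5)
    return [i for i, d in enumerate(dists) if d >= 5 and d_max - d <= 2]
```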
In order to reduce the data cost imposed by deep learning techniques on substation defect detection, the transformer substation defect detection method based on the positive sample image exploits the rich feature information contained in the sample image of a normal scene: this feature information is used to match key information between the image to be detected and the positive sample template image, a differential calculation is performed, and the difference is quantified into a numerical value for judgment.
The invention adopts, as a whole, an optimization of the traditional image processing approach and combines the advantages of several common image processing algorithms. Compared with deep learning methods, it reduces data cost and avoids the time-consuming and labor-intensive collection and labeling of substation defect samples; compared with traditional image processing algorithms, an improved clustering machine learning algorithm is added to process intermediate data, and image information coding is introduced into the final judgment to reduce environmental interference, which generally lowers the probability of misjudgment and improves accuracy. Matching point pairs between the positive sample image and the image to be detected are screened out by extracting key points with SIFT features, the difference points are preliminarily divided into several candidate difference areas, all candidate difference areas are then encoded with a hash perception algorithm, and the final judgment of area differences and the locking of abnormal positions are determined by threshold setting, thereby avoiding the high data requirement of deep learning methods and improving the accuracy of traditional image algorithms.
The invention also provides a transformer substation defect detection system based on the positive sample image, which comprises a memory, a processor and a computer program stored in the memory and capable of running on the processor, wherein the steps of the transformer substation defect detection method based on the positive sample image are realized when the processor executes the computer program.
The computer program may be divided into one or more modules/units, which are stored in the memory and executed by the processor to accomplish the present invention, for example. The one or more modules/units may be a series of computer program instruction segments capable of performing the specified functions, which instruction segments describe the execution of the computer program in the computer.
The Processor may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory may be an internal storage unit, such as a hard disk or an internal memory; the memory may also be an external storage device, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card. Further, the memory may include both an internal storage unit and an external storage device. The memory is used for storing the computer program and other programs and data required by the terminal device, and may also be used to temporarily store data that has been output or is to be output.
The invention also provides a computer readable storage medium storing a computer program which when executed by a processor implements the steps of the positive sample image based substation defect detection method as described in any one of the above.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-described division of the functional units and modules is illustrated, and in practical application, the above-described functional distribution may be performed by different functional units and modules according to needs, i.e. the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-described functions. The functional units and modules in the embodiment may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit, where the integrated units may be implemented in a form of hardware or a form of a software functional unit. In addition, the specific names of the functional units and modules are only for distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working process of the units and modules in the above system may refer to the corresponding process in the foregoing method embodiment, which is not described herein again.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and in part, not described or illustrated in any particular embodiment, reference is made to the related descriptions of other embodiments.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided herein, it should be understood that the disclosed apparatus/system and method may be implemented in other ways. For example, the apparatus/terminal device embodiments described above are merely illustrative, e.g., the division of the modules or units is merely a logical function division, and there may be additional divisions in actual implementation, e.g., multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. Alternatively, the coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection via interfaces, devices or units, which may be in electrical, mechanical or other forms.
The units described as separate units may or may not be physically separate, and units shown as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, each functional unit in the embodiments of the present invention may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated modules/units, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a computer readable storage medium. Based on such understanding, the present invention may implement all or part of the flow of the methods of the above embodiments by instructing related hardware through a computer program; the computer program may be stored in a computer readable storage medium, and when executed by a processor it implements the steps of each of the method embodiments described above. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, or the like. The computer readable medium may include any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so forth. It should be noted that the content contained in the computer readable medium may be appropriately adjusted according to the requirements of legislation and patent practice in the relevant jurisdictions; for example, in certain jurisdictions, according to legislation and patent practice, the computer readable medium does not include electrical carrier signals and telecommunication signals.
The present embodiments are, therefore, to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein.
The foregoing is only a specific embodiment of the invention to enable those skilled in the art to understand or practice the invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. The transformer substation defect detection method based on the positive sample image is characterized by comprising the following steps of:
Step S1, extracting key feature point information: after converting a positive sample template image and an image to be detected into gray level images, acquiring key feature points and calculating descriptors of the key feature points from the positive sample template image and the image to be detected respectively by using a SIFT algorithm; each descriptor comprises 128-dimensional feature vector information formed from the gradient amplitudes and directions of the pixels in a 16 × 16 window centered on the key feature point, together with the position, direction and scale of the key feature point;
Step S2, matching key feature points: adopting a knnMatch feature matching algorithm to carry out one-to-many matching on descriptors of all key feature points of the positive sample template image and descriptors of all key feature points of the image to be detected, obtaining matching point pairs, removing outlier matching point pairs through a RANSAC algorithm, and calculating a homography matrix of the image to be detected on the positive sample template image; the homography matrix comprises the corresponding relation of coordinates of points between the image to be detected and the positive sample template image;
step S3, calculating a difference point:
S31, selecting S first detection points on the positive sample template image, affining the S first detection points to corresponding positions on the image to be detected through the homography matrix, and obtaining S second detection points corresponding to the S first detection points;
s32, calculating descriptors of the S first detection points in the positive sample template image and descriptors of the S second detection points in the image to be detected respectively through the SIFT algorithm;
S33, calculating Euclidean distance between the feature vectors of each pair of the first detection point and the second detection point, and defining the pair of the first detection point and the second detection point as difference points if the Euclidean distance is larger than a preset difference threshold;
Step S4, obtaining candidate difference areas: acquiring candidate difference areas on the image to be detected and the positive sample template image according to the coordinates of the difference points;
step S5, calculating the difference degree: and respectively calculating the difference degree of each pair of candidate difference regions on the image to be detected and the positive sample template image by using a hash perception algorithm, and judging the difference region with the difference degree larger than a set threshold value as the defect of the image to be detected.
2. The positive sample image-based substation defect detection method according to claim 1, wherein the step S1 specifically includes:
S11, converting the positive sample template image and the image to be detected into a gray level image;
S12, constructing a multi-scale space: constructing a Gaussian pyramid for the converted gray level image, performing Gaussian smoothing on the original image to remove high-frequency noise, downsampling the smoothed image, and repeating the filtering and downsampling on the downsampled image to obtain a plurality of groups of images, wherein each group of images comprises a plurality of layers of images; the scale space of a two-dimensional image is defined as: L(x, y, σ) = G(x, y, σ) * I(x, y); the differential scale space is defined as: D(x, y, σ) = L(x, y, kσ) - L(x, y, σ); where G(x, y, σ) is the Gaussian kernel, I(x, y) is the image, * denotes convolution, σ is the standard deviation of the Gaussian normal distribution, x is the horizontal axis coordinate, and y is the vertical axis coordinate;
S13, detecting local extreme points with direction information in the multiple groups of images through different-scale DoG space detection to serve as the key feature points;
S14, acquiring the descriptors of the key feature points.
3. The positive sample image-based substation defect detection method according to claim 2, wherein the step S13 includes: comparing each pixel point in the multiple groups of images with the scale space corresponding to the pixel point and all adjacent points in the adjacent scale space, and taking the pixel point as an extreme point when the pixel value of the pixel point is larger or smaller than all the adjacent points; and taking the extreme points existing under different scales of the plurality of groups of images as the key characteristic points.
4. The positive sample image-based substation defect detection method according to claim 3, wherein the step S14 includes:
acquiring scale information and position information of the key feature points in the images with different scales;
determining the direction information of the key feature point through the gradient distribution characteristics of the neighborhood pixels of the key feature point;
And taking the gradient amplitudes and gradient directions of the pixels in a 16 × 16 window centered on the key feature point, dividing the pixels in the window into 16 block units, computing an 8-direction histogram for each block unit, and concatenating the histograms to form the 128-dimensional feature vector information of the key feature point.
5. The positive sample image-based substation defect detection method according to claim 1, wherein the step S2 comprises:
adopting a knnMatch feature matching algorithm to carry out one-to-many matching on descriptors of all key feature points of the positive sample template image and descriptors of all key feature points of the image to be detected, and taking k=2 in the knnMatch feature matching algorithm to obtain 2 descriptors which are nearest to each other and next nearest to each other in feature space between the image to be detected and the positive sample template image;
When the ratio of the similarity distance of the feature space between the nearest neighbor descriptor and the next-nearest neighbor descriptor is between 0.4 and 0.6, determining the key feature points corresponding to the nearest neighbor descriptor and the next-nearest neighbor descriptor as matching point pairs;
And removing outlier matching point pairs through a RANSAC algorithm, and calculating a homography matrix of the image to be detected on the positive sample template image.
6. The method for detecting defects in a transformer substation based on a positive sample image according to claim 1, wherein in the step S31, detection points are selected on the positive sample template image at a fixed interval i, so that each detection point has a straight-line distance i from its adjacent detection points in the up-down and left-right directions; the number of the first detection points is s = (w/i + 1) × (h/i + 1), and w and h are the length and width of the positive sample template image respectively.
7. The positive sample image-based substation defect detection method according to claim 1, wherein the step S4 includes:
S41, taking all the difference point data as a data set Q, where n is the number of difference points, and recording the Euclidean distances between each point and all points in the data set Q as an n × n distance matrix; sorting the elements of each row of the matrix in ascending order, so that the distance vector D1 formed by the elements of the 1st column represents the distance from each object to itself, which is 0, and the elements of the K-th column form the vector Dk of the K-nearest-neighbor distances of all points; averaging the elements in the vector Dk to obtain its K-average nearest-neighbor distance D, which is taken as a candidate Eps parameter; calculating all the K-average nearest-neighbor distances D gives the Eps parameter list;
S42, for the Eps parameter list, sequentially determining the number of Eps-neighborhood objects corresponding to each candidate Eps parameter, and calculating the mathematical expectation of the numbers of Eps-neighborhood objects of all objects as the neighborhood density value, i.e. the MinPts parameter, of the data set Q;
S43, sequentially selecting the elements of the different vectors Dk as the Eps parameter together with the corresponding MinPts parameter, inputting them into the DBSCAN algorithm to perform cluster analysis on the data set Q, and obtaining the number of clusters generated under different K values; when the number of generated clusters is the same for three consecutive times, the clustering result is considered to have stabilized, and that cluster number N is recorded as the optimal number;
S44, continuing to execute step S43 until the number of generated clusters is no longer N, and selecting the maximum K value that still yields N clusters as the optimal K value; the K-average nearest-neighbor distance D corresponding to the optimal K value is the optimal Eps parameter, and the corresponding MinPts parameter is the optimal MinPts parameter;
S45, applying the optimal Eps parameter and the optimal MinPts parameter selected for the image to be detected to obtain its clustering result, and taking the circumscribed rectangle of each clustering area of the image to be detected as a candidate difference area;
S46, calculating average offset of the coordinate offsets between all the matching point pairs, and finding out a region corresponding to the candidate difference region of the image to be detected from the positive sample template image according to the average offset to serve as the candidate difference region of the positive sample template image.
8. The positive sample image-based substation defect detection method according to claim 1, wherein the step S5 includes:
S51, carrying out hash perception processing on the image to be detected and the candidate difference area on the positive sample template image to generate a corresponding hash code;
s52, calculating the Hamming distance between the two hash codes;
S531, if the Hamming distance of all the candidate difference areas is less than 5, judging that the whole graph is defect-free;
S532, if there are candidate difference areas with a Hamming distance greater than or equal to 5, selecting the candidate difference area with the largest Hamming distance, together with the candidate difference areas whose Hamming distances differ from the largest Hamming distance by no more than 2, as the defect areas of the image to be detected.
9. A positive sample image based substation defect detection system, comprising a memory, a processor, a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the computer program, implements the steps of the positive sample image based substation defect detection method according to any of claims 1 to 8.
10. A computer readable storage medium storing a computer program, characterized in that the computer program when executed by a processor implements the steps of the positive sample image based substation defect detection method according to any one of claims 1 to 8.
CN202411169130.8A 2024-08-23 2024-08-23 Transformer substation defect detection method, system and storage medium based on positive sample image Active CN118691847B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202411169130.8A CN118691847B (en) 2024-08-23 2024-08-23 Transformer substation defect detection method, system and storage medium based on positive sample image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202411169130.8A CN118691847B (en) 2024-08-23 2024-08-23 Transformer substation defect detection method, system and storage medium based on positive sample image

Publications (2)

Publication Number Publication Date
CN118691847A CN118691847A (en) 2024-09-24
CN118691847B true CN118691847B (en) 2024-10-29

Family

ID=92769923

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202411169130.8A Active CN118691847B (en) 2024-08-23 2024-08-23 Transformer substation defect detection method, system and storage medium based on positive sample image

Country Status (1)

Country Link
CN (1) CN118691847B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111553265A (en) * 2020-04-27 2020-08-18 河北天元地理信息科技工程有限公司 Method and system for detecting internal defects of drainage pipeline
CN111639713A (en) * 2020-06-01 2020-09-08 广东小天才科技有限公司 Page turning detection method and device, electronic equipment and storage medium

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9141871B2 (en) * 2011-10-05 2015-09-22 Carnegie Mellon University Systems, methods, and software implementing affine-invariant feature detection implementing iterative searching of an affine space
CN113160029B (en) * 2021-03-31 2022-07-05 海南大学 Medical image digital watermarking method based on perceptual hashing and data enhancement
CN114332498A (en) * 2021-12-30 2022-04-12 中北大学 Multi-size image change detection device and method based on multi-feature extraction
CN115147723B (en) * 2022-07-11 2023-05-09 武汉理工大学 Inland ship identification and ranging method, inland ship identification and ranging system, medium, equipment and terminal
CN117132648B (en) * 2023-04-28 2024-07-12 荣耀终端有限公司 Visual positioning method, electronic equipment and computer readable storage medium
CN116667531B (en) * 2023-05-19 2024-09-13 国网江苏省电力有限公司泰州供电分公司 Acousto-optic-electric collaborative inspection method and device based on digital twin transformer substation
CN117253062A (en) * 2023-09-27 2023-12-19 湖南科技大学 Relay contact image characteristic quick matching method under any gesture
CN117076935B (en) * 2023-10-16 2024-02-06 武汉理工大学 Digital twin-assisted mechanical fault data lightweight generation method and system
CN117592332B (en) * 2023-11-21 2024-11-01 江苏省特种设备安全监督检验研究院 Digital twinning-based gearbox model high-fidelity method, system and storage medium
CN118334373A (en) * 2024-04-17 2024-07-12 华能海南发电股份有限公司文昌风电厂 SIFT algorithm-based wind turbine generator set cabin feature point matching method and system

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111553265A (en) * 2020-04-27 2020-08-18 河北天元地理信息科技工程有限公司 Method and system for detecting internal defects of drainage pipeline
CN111639713A (en) * 2020-06-01 2020-09-08 广东小天才科技有限公司 Page turning detection method and device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN118691847A (en) 2024-09-24

Similar Documents

Publication Publication Date Title
CN109299720B (en) Target identification method based on contour segment spatial relationship
US20220309637A1 (en) Electronic substrate defect detection
US8160366B2 (en) Object recognition device, object recognition method, program for object recognition method, and recording medium having recorded thereon program for object recognition method
CN107103317A (en) Fuzzy license plate image recognition algorithm based on image co-registration and blind deconvolution
CN109635733B (en) Parking lot and vehicle target detection method based on visual saliency and queue correction
CN104537376A (en) A method, a relevant device, and a system for identifying a station caption
CN114170418B (en) Multi-feature fusion image retrieval method for automobile harness connector by means of graph searching
CN107578011A (en) The decision method and device of key frame of video
CN108932518A (en) A kind of feature extraction of shoes watermark image and search method of view-based access control model bag of words
Muhammad et al. A non-intrusive method for copy-move forgery detection
Babu et al. Texture and steerability based image authentication
CN108205657A (en) Method, storage medium and the mobile terminal of video lens segmentation
Lecca et al. Comprehensive evaluation of image enhancement for unsupervised image description and matching
CN108830283B (en) Image feature point matching method
CN110765993B (en) SEM graph measuring method based on AI algorithm
Zhang et al. Multi-scale segmentation strategies in PRNU-based image tampering localization
CN109544614B (en) Method for identifying matched image pair based on image low-frequency information similarity
CN108960246B (en) Binarization processing device and method for image recognition
Khalid et al. Image de-fencing using histograms of oriented gradients
CN103577826A (en) Target characteristic extraction method, identification method, extraction device and identification system for synthetic aperture sonar image
CN118691847B (en) Transformer substation defect detection method, system and storage medium based on positive sample image
CN111127407B (en) Fourier transform-based style migration forged image detection device and method
Partio et al. An ordinal co-occurrence matrix framework for texture retrieval
CN108960285B (en) Classification model generation method, tongue image classification method and tongue image classification device
CN116681647A (en) Color-coated sheet surface defect detection method and device based on unsupervised generation

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant