
CN109784396A - Method for identifying switching-on and switching-off states - Google Patents


Info

Publication number
CN109784396A
CN109784396A (application CN201910014074.3A)
Authority
CN
China
Prior art keywords
image
opening
closing
candidate region
gradient
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910014074.3A
Other languages
Chinese (zh)
Inventor
朱兵
王弈心
陆子清
闫琛
廖婕
韦佳贝
姚书龙
唐志勇
陈成全
潘卫国
陈晖
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
CRSC Research and Design Institute Group Co Ltd
Original Assignee
CRSC Research and Design Institute Group Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by CRSC Research and Design Institute Group Co Ltd filed Critical CRSC Research and Design Institute Group Co Ltd
Priority to CN201910014074.3A priority Critical patent/CN109784396A/en
Publication of CN109784396A publication Critical patent/CN109784396A/en
Pending legal-status Critical Current


Landscapes

  • Image Analysis (AREA)

Abstract

A switching-on/off state identification method comprises the following steps: step 1, collecting a number of opening and closing sample images and training an SVM multi-classifier with them; step 2, collecting the opening and closing image to be identified and converting it into a grayscale image; step 3, coarsely positioning the grayscale image to obtain a coarse positioning candidate region, accurately positioning the grayscale image to obtain a plurality of accurate positioning candidate regions, and screening out, from among the coarse positioning candidate region and the plurality of accurate positioning candidate regions, a final target image for identifying the opening and closing state; step 4, preprocessing the acquired target image; and step 5, extracting HOG features from the target image and feeding the computed HOG descriptor into the SVM multi-classifier to obtain the final recognition result. The method effectively improves the accuracy of target-region positioning and of opening/closing state identification.

Description

Method for identifying switching-on and switching-off states
Technical Field
The invention belongs to the field of image recognition, and particularly relates to a method for recognizing the opening and closing states.
Background
The power industry is closely tied to daily life, and the switching devices of a transformer substation are among the most basic equipment in that industry; their opening and closing are vital to power supply. In recent years, failures to detect the opening/closing action or to recognize the switch position have repeatedly prevented normal power transmission, causing great economic losses to daily life and industrial production.
At present there are two main types of opening/closing detection. The first is manual inspection. However, substation switches are mostly in the field, workers are generally far away, and an incompletely closed (or opened) switch cannot be dealt with in time, so the power supply system cannot respond promptly. Moreover, manual inspection usually consumes much manpower and time, and errors are likely in long, high-intensity work. The manual method therefore suffers from high labor intensity, low efficiency, insufficient coverage, poor reliability and high risk. In recent years, with the spread of inspection robots, opening/closing detection has gradually become intelligent. Replacing manual inspection with a power inspection robot brings high efficiency, high reliability and other advantages. However, most existing methods rely on traditional image processing; they perform poorly under changing illumination, and generally each illumination condition needs its own set of parameters, so a more universal detection and identification method is needed to handle different illumination and pose conditions.
Disclosure of Invention
Aiming at these problems, the invention provides a method for automatically identifying the opening and closing state based on an SVM classifier.
A method for identifying the opening and closing states comprises the following steps:
step 1, collecting a plurality of opening and closing sample images, and training an SVM multi-classifier by using the sample images;
step 2, collecting opening and closing images to be identified, and converting the opening and closing images into gray images;
step 3, carrying out coarse positioning on the gray level image to obtain a coarse positioning candidate area; accurately positioning the gray level image to obtain a plurality of accurate positioning candidate areas; and screening out, from among the coarse positioning candidate area and the plurality of accurate positioning candidate areas, a final target image for identifying the opening and closing state;
step 4, preprocessing the acquired target image;
and 5, extracting HOG characteristics from the target image, and sending the HOG characteristic operator obtained by calculation into an SVM multi-classifier to obtain a final recognition result.
Further, the training of the SVM multi-classifier using the sample image specifically includes:
step 1-1, taking the sample image as a positive and negative training sample set;
step 1-2, extracting HOG characteristics of the positive and negative training sample set;
and 1-3, assigning sample labels to all samples of the positive and negative training sample sets, and sending the HOG features and sample labels of the training sample sets into the SVM for training.
Further, the taking the sample image as a positive and negative training sample set specifically includes:
(1) taking pictures in which the opening and closing switch is in place as the positive sample set, and pictures in which the opening and closing switch is not in place as the negative sample set;
(2) cutting the picture, and deleting redundant information outside the opening and closing image area;
(3) the picture is scaled to m pixels long and n pixels wide, with m and n each ranging from 36 to 64.
Further, the specific method for extracting the HOG features of the positive and negative training sample sets comprises the following steps:
(1) converting the color image into a gray image;
(2) carrying out Gamma correction on the gray level image to reduce local shadow and illumination variation of the image; the formula of the Gamma correction is:
I(x,y) = I(x,y)^gamma    (1)
wherein I(x,y) represents the pixel value at row x, column y of the image, and gamma takes a value between 0 and 1;
(3) the gradient of each pixel of the image is calculated according to the following formula:
Gx(x,y) = H(x+1,y) - H(x-1,y)    (2)
Gy(x,y) = H(x,y+1) - H(x,y-1)    (3)
wherein Gx(x,y), Gy(x,y) and H(x,y) respectively represent the horizontal gradient, the vertical gradient and the pixel value at pixel (x,y) of the image; the gradient magnitude G(x,y) and gradient direction α(x,y) at pixel (x,y) are then obtained as:
G(x,y) = sqrt(Gx(x,y)^2 + Gy(x,y)^2)    (4)
α(x,y) = arctan(Gy(x,y) / Gx(x,y))    (5)
(4) dividing the image into square cells with side length a pixels and creating a gradient direction histogram for each cell: the 360 degrees of gradient direction are divided into k direction blocks, the direction range of the ith direction block being [(i-1)·360/k, i·360/k) degrees; the gradient direction of each pixel in the cell is counted, and if it belongs to a certain direction block, the gradient's magnitude is added to the count of that direction block;
(5) combining the cells into blocks and normalizing the gradient histograms within each block: the gradient histogram of each cell is rewritten in vector form, and all gradient vectors within a block are concatenated to form the block's gradient direction histogram vector; this vector is multiplied by the corresponding normalization factor, calculated as:
factor = 1 / sqrt(||v||_2^2 + e^2)    (6)
wherein v represents the vector before normalization, ||v||_2 represents the 2-norm of v, and e represents a small constant;
(6) concatenating the normalized vectors of all blocks in the image to obtain the HOG features of the training sample set.
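As an illustration of formulas (2)-(5), the per-pixel gradients can be computed with central differences. This is a minimal numpy sketch; the edge-replicating border padding and the use of the full-range arctan2 are assumptions not specified in the text:

```python
import numpy as np

def gradients(H):
    """Per-pixel gradients of a grayscale image H, as in formulas (2)-(5).

    Returns the horizontal gradient Gx, vertical gradient Gy,
    magnitude G and direction alpha (degrees). Borders are handled
    by replicating edge pixels before differencing.
    """
    H = np.asarray(H, dtype=np.float64)
    padded = np.pad(H, 1, mode="edge")
    Gx = padded[1:-1, 2:] - padded[1:-1, :-2]   # H(x+1, y) - H(x-1, y)
    Gy = padded[2:, 1:-1] - padded[:-2, 1:-1]   # H(x, y+1) - H(x, y-1)
    G = np.hypot(Gx, Gy)                        # formula (4): magnitude
    alpha = np.degrees(np.arctan2(Gy, Gx))      # formula (5), full-range variant
    return Gx, Gy, G, alpha

# Brightness rising left to right: horizontal gradient 2, vertical 0.
ramp = np.tile(np.arange(5.0), (5, 1))
Gx, Gy, G, alpha = gradients(ramp)
```

These magnitudes and directions are exactly what the cell histograms of step (4) accumulate.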
Further, the sending the HOG features and the sample labels of the positive and negative training sample sets into the SVM for training specifically comprises:
(1) determining the training target of the SVM, namely finding an optimal hyperplane that classifies the positive and negative samples, expressed mathematically as:
min_{w,b,ξ} (1/2)·||w||^2 + D·Σ_i ξ_i,  subject to  y_i(w·x_i + b) ≥ 1 - ξ_i,  ξ_i ≥ 0    (7)
where w represents a vector perpendicular to the hyperplane, ||w|| represents the norm of w, ξ_i represents a relaxation (slack) variable, a non-negative number, D is a parameter controlling the weight of the two terms in the objective function, x_i represents the HOG feature of the ith sample, y_i represents the sample label of the ith sample, and b represents a constant;
(2) constructing the Lagrangian function:
L(w,b,ξ,α,r) = (1/2)·||w||^2 + D·Σ_i ξ_i - Σ_i α_i·[y_i(w·x_i + b) - 1 + ξ_i] - Σ_i r_i·ξ_i    (8)
wherein α_i represents a Lagrange multiplier and r_i = D - α_i; letting
θ(α) = min_{w,b,ξ} L(w,b,ξ,α,r)    (9)
the objective function is transformed into
d* = max_{α_i ≥ 0} θ(α)    (10)
wherein d* represents the optimal value of the objective function;
(3) letting L be minimized with respect to w, b and ξ, i.e.:
∂L/∂w = 0  ⇒  w = Σ_i α_i·y_i·x_i,   ∂L/∂b = 0  ⇒  Σ_i α_i·y_i = 0,   ∂L/∂ξ_i = 0  ⇒  D - α_i - r_i = 0    (11)
and substituting equation (11) into equation (8), the objective function is transformed into:
d* = max_α [ Σ_i α_i - (1/2)·Σ_i Σ_j α_i·α_j·y_i·y_j·<x_i,x_j> ],  subject to  Σ_i α_i·y_i = 0,  0 ≤ α_i ≤ D    (12)
wherein <x_i,x_j> represents the inner product of x_i and x_j;
(4) solving the Lagrange multipliers α_i with the SMO algorithm: a heuristic algorithm selects a pair of Lagrange multipliers α_i, α_j; with the other parameters fixed, the value of α_i that makes the objective extreme is determined and α_j is expressed in terms of α_i; this is repeated until the objective function converges;
(5) determining the optimal hyperplane according to the optimal values of the Lagrange multipliers:
w* = Σ_i α_i*·y_i·x_i,   b* = y_j - Σ_i α_i*·y_i·<x_i,x_j>  (for any support vector x_j)    (13)
wherein α_i* represents the optimal value of the Lagrange multiplier, and w*, b* respectively represent the direction of the optimal hyperplane and its offset from the origin;
(6) obtaining the classification decision function, i.e. the trained SVM classifier:
f(x) = sign( Σ_i α_i*·y_i·<x_i,x> + b* )    (14)
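To make the objective of formula (7) and the decision rule of formula (14) concrete, the sketch below trains a linear soft-margin SVM. Note the substitution: the patent solves the dual with SMO, while this example uses plain subgradient descent on the primal purely for brevity, and the toy 2-D points stand in for HOG feature vectors:

```python
import numpy as np

def train_linear_svm(X, y, D=1.0, lr=0.01, epochs=500):
    """Minimise (1/2)*||w||^2 + D*sum(xi_i), the soft-margin
    objective of formula (7), by subgradient descent. y must be +/-1."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        active = margins < 1                       # samples with xi_i > 0
        grad_w = w - D * (y[active, None] * X[active]).sum(axis=0)
        grad_b = -D * y[active].sum()
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

def predict(w, b, X):
    """Decision function of formula (14): sign(w . x + b)."""
    return np.sign(np.asarray(X, dtype=float) @ w + b)

# Two linearly separable clusters standing in for HOG vectors.
X = np.array([[2.0, 2.0], [2.5, 1.8], [-2.0, -2.0], [-1.8, -2.4]])
y = np.array([1.0, 1.0, -1.0, -1.0])
w, b = train_linear_svm(X, y)
```

The learned (w, b) plays the role of (w*, b*) from formula (13); in the full method the inputs would be the HOG descriptors of the positive and negative sample sets.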
further, the coarse positioning to obtain the coarse positioning candidate region is to perform coarse positioning on the opening and closing target region in the picture to be detected through a Mellin Fourier transform and phase correlation method to obtain the coarse positioning candidate region.
Further, the accurate positioning of the grayscale image to obtain a plurality of accurate positioning candidate regions specifically includes:
and accurately positioning the target switching-on and switching-off regions by using a machine learning method, and sending the image to be detected into a trained classifier to obtain a plurality of target candidate regions.
Further, the screening out, from among the coarse positioning candidate region and the plurality of accurate positioning candidate regions, of a final target image for identifying the opening/closing state specifically comprises:
calculating the confidence coefficient of each accurate positioning candidate region, and selecting the candidate region with the highest confidence coefficient from the plurality of accurate positioning candidate regions;
and comparing the candidate region with the maximum confidence coefficient with the rough positioning candidate region, and selecting a final target image.
Further, the calculating the confidence of each accurate positioning candidate region specifically includes:
calculating an intersection ratio parameter IOU of each accurate positioning candidate area and the rough positioning candidate area;
calculating a perceptual hash index pHash of each accurate positioning candidate region and an opening and closing region image in the template image;
calculating the mutual information index I(G_X, H_Y) between each accurate positioning candidate region and the template image;
The confidence coefficient calculation formula is as follows:
Confidence = 1 - (pHash + 1/I(G_X, H_Y)) / (IOU + D)    (20)
wherein D is a constant;
the template image is an acquired opening and closing in-place image.
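Once the three indexes are available, formula (20) itself is a single expression; the numeric values below are purely illustrative:

```python
def confidence(iou, phash, mutual_info, D=1.0):
    """Formula (20): Confidence = 1 - (pHash + 1/I) / (IOU + D).

    D keeps the denominator positive when IOU is 0; a larger overlap
    and mutual information (and a smaller pHash distance) raise the score.
    """
    return 1.0 - (phash + 1.0 / mutual_info) / (iou + D)

# A well-aligned candidate vs. one that misses the coarse region entirely.
good = confidence(iou=0.7, phash=0.1, mutual_info=2.0)
bad = confidence(iou=0.0, phash=0.6, mutual_info=1.0)
```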
Further, the step of comparing the candidate region with the maximum confidence with the rough positioning candidate region to select a final target image specifically includes:
if the candidate region with the maximum confidence simultaneously satisfies that its intersection ratio parameter IOU is less than the set threshold dIOU and that (pHash + 1/I(G_X, H_Y)) is greater than the set threshold, the coarse positioning candidate region is taken as the final target image; otherwise, the candidate region with the maximum confidence is taken as the final target image.
Further, the intersection ratio parameter IOU is calculated as follows:
IOU = Area(C ∩ n_i) / Area(C ∪ n_i)    (15)
wherein C is the coarse positioning candidate region and n_i is the ith accurate positioning candidate region.
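Assuming the candidate regions are axis-aligned rectangles given by (x1, y1, x2, y2) corner coordinates (the patent does not fix a representation), the intersection ratio of formula (15) can be computed as:

```python
def iou(box_a, box_b):
    """Formula (15): intersection area over union area of two
    axis-aligned boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))   # overlap width
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))   # overlap height
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

coarse = (0, 0, 10, 10)   # coarse positioning candidate C
fine = (5, 5, 15, 15)     # one accurate positioning candidate n_i
```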
Further, the calculation method of the perceptual hash index pHash is as follows:
the accurate positioning candidate region and the template image are scaled to the same size and subjected to cosine transformation; a low-frequency region at the upper-left corner of the transformed image is selected, and the direct-current component at coordinate (0,0) is removed, yielding an N-dimensional feature vector in total; the Hamming distance between the feature vectors of the accurate positioning candidate region and the template image is then calculated as the perceptual hash index.
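The pHash computation can be sketched as below. The DCT and DC-removal follow the description; thresholding the 63 low-frequency coefficients against their median to obtain comparable bit vectors is a common pHash convention and an assumption here, since the text only specifies the transform and the Hamming distance:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix (used in place of a DCT library call)."""
    k = np.arange(n)[:, None]
    x = np.arange(n)[None, :]
    C = np.cos(np.pi * (2 * x + 1) * k / (2 * n)) * np.sqrt(2.0 / n)
    C[0] /= np.sqrt(2.0)
    return C

def phash(img, hash_size=8):
    """DCT of a 32x32 gray image; keep the top-left low-frequency block,
    drop the DC term at (0,0), threshold at the median -> 63-bit vector."""
    img = np.asarray(img, dtype=np.float64)
    C = dct_matrix(img.shape[0])
    freq = C @ img @ C.T
    low = freq[:hash_size, :hash_size].ravel()[1:]   # 63 dims, DC removed
    return (low > np.median(low)).astype(np.uint8)

def hamming(h1, h2):
    """Hamming distance between two bit vectors: the pHash index."""
    return int(np.count_nonzero(h1 != h2))

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(32, 32))
same = hamming(phash(img), phash(img))   # identical images -> distance 0
```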
Further, the mutual information index I(G_X, H_Y) is calculated as follows:
H(G_X) = -Σ_g p(g)·log p(g)    (16)
H(H_Y) = -Σ_h p(h)·log p(h)    (17)
H(G_X, H_Y) = -Σ_{g,h} p(g,h)·log p(g,h)    (18)
I(G_X, H_Y) = H(G_X) + H(H_Y) - H(G_X, H_Y)    (19)
wherein p(g) = G_X(g)/(W·H) and p(h) = H_Y(h)/(W·H), G_X(g) and H_Y(h) respectively denote the number of pixels with gray level g in the template image and gray level h in the accurate positioning candidate region, p(g,h) is their joint distribution, and W, H respectively denote the width and height of the candidate region image.
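Formulas (16)-(19) amount to the standard entropy form of mutual information over gray-level distributions; the sketch below computes it from a joint histogram (the 8-bin gray-level quantization is an assumption made for brevity):

```python
import numpy as np

def mutual_information(a, b, bins=8):
    """I(A;B) = H(A) + H(B) - H(A,B) over the joint gray-level
    histogram of two equally sized images, as in formulas (16)-(19)."""
    a = np.asarray(a).ravel()
    b = np.asarray(b).ravel()
    joint, _, _ = np.histogram2d(a, b, bins=bins, range=[[0, 256], [0, 256]])
    p_ab = joint / joint.sum()                     # joint distribution p(g,h)
    p_a = p_ab.sum(axis=1)                         # marginal p(g)
    p_b = p_ab.sum(axis=0)                         # marginal p(h)
    nz = p_ab > 0
    h_ab = -np.sum(p_ab[nz] * np.log2(p_ab[nz]))            # joint entropy
    h_a = -np.sum(p_a[p_a > 0] * np.log2(p_a[p_a > 0]))     # template entropy
    h_b = -np.sum(p_b[p_b > 0] * np.log2(p_b[p_b > 0]))     # candidate entropy
    return h_a + h_b - h_ab

rng = np.random.default_rng(1)
img = rng.integers(0, 256, size=(16, 16))
self_mi = mutual_information(img, img)          # identical images: maximal MI
noise_mi = mutual_information(img, rng.integers(0, 256, size=(16, 16)))
```

An image compared with itself yields the largest value, which is why a larger I(G_X, H_Y) raises the confidence of formula (20).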
Further, the preprocessing of the opening and closing target image specifically comprises:
carrying out histogram equalization on the target image, and increasing the overall contrast of the image;
and performing Gaussian filtering on the equalized image to eliminate Gaussian noise on the image.
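Both preprocessing operations can be sketched with numpy alone; in practice OpenCV's equalizeHist and GaussianBlur would typically be used, so this hand-rolled version is illustrative only:

```python
import numpy as np

def equalize(img):
    """Histogram equalisation via the cumulative distribution:
    spreads the gray levels to raise overall contrast."""
    img = np.asarray(img, dtype=np.uint8)
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]
    lut = np.clip(np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255.0), 0, 255)
    return lut.astype(np.uint8)[img]

def gaussian_blur(img, sigma=1.0):
    """Separable Gaussian filtering to suppress Gaussian noise."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-(x ** 2) / (2 * sigma ** 2))
    kernel /= kernel.sum()
    img = np.asarray(img, dtype=np.float64)
    padded = np.pad(img, radius, mode="edge")
    rows = np.apply_along_axis(lambda r: np.convolve(r, kernel, "valid"), 1, padded)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, "valid"), 0, rows)

# Four nearly identical gray levels get stretched across the full range.
low_contrast = np.tile(np.array([100, 101, 102, 103], dtype=np.uint8), (4, 1))
stretched = equalize(low_contrast)
```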
Further, the collecting of the opening and closing image to be identified specifically comprises: the power inspection robot reaches the appointed inspection point through positioning navigation and photographs the opening and closing image of the power equipment at the inspection site.
By adopting this opening and closing state positioning method, the inspection robot automatically acquires field images, improving working efficiency and reducing manual work; combining coarse positioning with accurate positioning improves the accuracy of target-region acquisition and guarantees correct identification of the opening/closing state; image preprocessing and the above positioning scheme reduce adverse environmental effects on image recognition. Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description, claims and appended drawings.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly introduced below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
Fig. 1 is a flow chart of switching-on/off state identification according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
As shown in fig. 1, the switching-on/off state recognition of the present invention includes the following steps:
step 1, collecting a plurality of opening and closing sample images, and training an SVM multi-classifier by using the sample images;
step 2, collecting opening and closing images to be identified, and converting the opening and closing images into gray images;
step 3, carrying out coarse positioning on the gray level image to obtain a coarse positioning candidate area; accurately positioning the gray level image to obtain a plurality of accurate positioning candidate areas; and screening out, from among the coarse positioning candidate area and the plurality of accurate positioning candidate areas, a final target image for identifying the opening and closing state;
step 4, preprocessing the acquired opening and closing target image;
and 5, performing pixel adjustment on the image, sliding a sliding window with the length of m pixels and the width of n pixels on the image, extracting HOG characteristics from the window, and sending the HOG characteristic operator obtained by calculation into an SVM multi-classifier to obtain a final recognition result.
The above steps are described in detail below.
Step 1, collecting 100000 pictures containing the opening and closing switch as the sample set, and training the SVM multi-classifier with this pre-collected image set; for each inspection point, an image shot at that point with the switch centered is selected as the template image;
the method for training the SVM multi-classifier is as follows:
(1) classifying and processing the sample set:
(a) taking the opening and closing pictures which are opened and closed in place as a positive sample set, and taking the opening and closing pictures which are not opened and closed in place as a negative sample set;
(b) cutting the picture and deleting redundant information outside the opening and closing image area;
(c) scaling the picture to a square 48 pixels in both length and width;
(2) extracting HOG characteristics of the positive and negative sample images, specifically:
(a) converting the color image into a gray image;
(b) and performing Gamma correction on the gray level image to reduce local shadow and illumination change of the image.
Formula (1) is the Gamma correction formula, wherein gamma is taken as 0.5;
(c) calculating the horizontal and vertical gradients of each pixel of the image according to the formulas (2) and (3), and then calculating the gradient amplitude and the gradient direction at the pixel point (x, y) according to the formulas (4) and (5);
(d) the image is divided into square cells with side length 8 pixels, and a gradient direction histogram is created for each cell: the 360 degrees of gradient direction are divided into 9 direction blocks, the gradient direction of each pixel in the cell is counted, and if it belongs to a certain direction block, the corresponding gradient magnitude is added to that block's count; the cells are then combined into blocks with side length 16 pixels, and the gradient histograms are normalized within each block to reduce the influence of illumination, shadow and edges on the gradients; the normalized vectors of all blocks in the image are concatenated to obtain its HOG features;
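The bookkeeping of step (d), 9-bin magnitude-weighted voting per cell followed by block normalization, can be sketched as follows; numpy is assumed, and e is the small constant of the normalization factor 1/sqrt(||v||_2^2 + e^2):

```python
import numpy as np

def cell_histogram(G, alpha, k=9):
    """Magnitude-weighted gradient-direction histogram of one cell.

    alpha is in degrees; with k=9 direction blocks, block i covers
    [(i-1)*40, i*40) degrees, and each pixel adds its gradient
    magnitude G to the count of the block its direction falls in.
    """
    bins = ((np.asarray(alpha) % 360.0) // (360.0 / k)).astype(int)
    hist = np.zeros(k)
    for b, g in zip(bins.ravel(), np.asarray(G, dtype=float).ravel()):
        hist[b] += g
    return hist

def block_normalize(v, e=1e-5):
    """Multiply the concatenated block vector by the normalization
    factor 1 / sqrt(||v||_2^2 + e^2)."""
    v = np.asarray(v, dtype=np.float64)
    return v / np.sqrt(np.sum(v * v) + e * e)

# A toy 2x2 cell: every pixel has magnitude 1 and direction 10 degrees,
# so all four votes land in the first direction block.
hist = cell_histogram(G=np.ones((2, 2)), alpha=np.full((2, 2), 10.0))
block = block_normalize(np.concatenate([hist, hist]))  # a two-cell block
```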
(3) giving sample labels to all positive and negative samples, and sending the HOG characteristics and the sample labels of the positive and negative samples into the SVM for training, and the method specifically comprises the following steps:
(1) the training target of the SVM is to find an optimal hyperplane which can realize classification on positive and negative samples, and the mathematical form of the optimal hyperplane can be expressed by an equation (7);
(2) constructing a Lagrange function as an equation (8), and converting the target function into an equation (10) according to an equation (9);
(3) minimizing L with respect to w, b and ξ as in equation (11), substituting equation (11) into equation (8), and converting the objective function into equation (12);
(4) solving the optimal value of the Lagrange multiplier by using an SMO algorithm;
(5) determining an optimal hyperplane according to the optimal value of the Lagrange multiplier and the formula (13);
(6) and obtaining a classification decision function formula (14), namely a trained SVM classifier:
Step 2: the inspection robot reaches the specified inspection point through positioning navigation with a navigation error of 5 cm, acquires the switching-on and switching-off image, and reads it in as a grayscale image. The robot transmits the collected image to the image processing and identification terminal through communication equipment; the terminal processes and identifies the image, and the robot simultaneously transmits the labeling information of the image corresponding to the inspection point.
Step 3, carrying out coarse positioning and accurate positioning on the target area to be detected and screening the target candidate regions to obtain the opening and closing switch, specifically comprising the following steps:
1) coarsely positioning the target opening and closing area in the picture to be detected with the Fourier-Mellin transform and phase correlation method to obtain the coarse positioning candidate region;
2) accurately positioning an image to be detected by using a machine learning Adaboost classifier trained in advance to obtain a plurality of accurate positioning candidate regions;
3) computing the intersection ratio parameter IOU between each accurate positioning candidate region and the coarse positioning candidate region, performing perceptual hash calculation between each accurate positioning candidate region image and the opening/closing region image in the template image to obtain the perceptual hash index, and calculating the mutual information index between each accurate positioning candidate region image and the template image;
the method for calculating the perception Hash pHash index specifically comprises the following steps:
the method comprises the steps of scaling an opening and closing area image in an accurately positioned candidate area image and an intercepted template image to 32 x 32, performing cosine transformation, selecting an 8 x 8 area at the upper left corner of the image after cosine transformation, removing direct current components of coordinates (0,0) to obtain 63-dimensional feature vectors, and calculating Hamming distances of the feature vectors of the image A and the image B to serve as a perceptual Hash pHash index;
calculating a mutual information index by using formulas (16) to (19);
calculating an intersection ratio parameter IOU by using a formula (15) to respectively obtain three intersection ratio parameter indexes (0.7,0.0 and 0.0);
weighting the three indexes obtained above, the intersection ratio IOU, the mutual information and the perceptual hash pHash, according to formula (20) to obtain the confidence of each accurate positioning candidate region; evaluating candidate-region confidence with these three weighted indexes improves positioning accuracy.
All candidate regions are sorted by confidence from high to low, and the region with the maximum confidence is taken as the optimal region of accurate positioning, i.e. the candidate region with the maximum confidence.
One of the candidate region with the maximum confidence and the coarse positioning candidate region is then selected as the final target image region:
if the candidate region with the maximum confidence simultaneously satisfies that its intersection ratio parameter IOU is less than the set threshold dIOU and that (pHash + 1/I(G_X, H_Y)) is greater than the set threshold, the coarse positioning candidate region is taken as the final target image; otherwise, the candidate region with the maximum confidence is taken as the final target image.
The invention screens out the optimal target area by combining two modes of coarse positioning and precise positioning, improves the positioning accuracy and provides a basis for correctly identifying the opening and closing states.
Step 4, preprocessing the acquired on-site opening and closing images, comprising:
(1) carrying out histogram equalization on the target image, and increasing the overall contrast of the image to make the image clearer;
(2) and performing Gaussian filtering on the equalized image to eliminate Gaussian noise on the image.
The image preprocessing operation reduces the influence of adverse factors of the image acquisition field environment, and provides favorable conditions for the opening and closing state identification.
Step 5, pixel adjustment is carried out on the opening and closing target image, a sliding window 48 pixels in both length and width is slid over the image, HOG features are extracted from each window and sent into the SVM for judgment, yielding the final detection result.
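The window sweep of step 5 can be sketched as below; the 48-pixel window follows the text, while the 16-pixel stride is an assumption, since the patent does not state the step size:

```python
import numpy as np

def sliding_windows(img, win=48, step=16):
    """Yield (x, y, patch) for every win x win window inside the image;
    each patch would then be HOG-encoded and sent to the SVM."""
    h, w = img.shape
    for y in range(0, h - win + 1, step):
        for x in range(0, w - win + 1, step):
            yield x, y, img[y:y + win, x:x + win]

img = np.zeros((96, 96))
patches = list(sliding_windows(img))   # 4 x 4 window positions
```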
Although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (15)

1. A switching-on/off state identification method is characterized by comprising the following steps:
step 1, collecting a plurality of opening and closing sample images, and training an SVM multi-classifier by using the sample images;
step 2, collecting opening and closing images to be identified, and converting the opening and closing images into gray images;
step 3, carrying out coarse positioning on the gray level image to obtain a coarse positioning candidate area; accurately positioning the gray level image to obtain a plurality of accurate positioning candidate areas; and screening out, from among the coarse positioning candidate area and the plurality of accurate positioning candidate areas, a final target image for identifying the opening and closing state;
step 4, preprocessing the acquired target image;
and 5, extracting HOG characteristics from the target image, and sending the HOG characteristic operator obtained by calculation into an SVM multi-classifier to obtain a final recognition result.
2. The opening/closing state recognition method according to claim 1, wherein the training of the SVM multi-classifier using the sample image specifically comprises:
step 1-1, taking the sample image as a positive and negative training sample set;
step 1-2, extracting HOG characteristics of the positive and negative training sample set;
and 1-3, assigning sample labels to all samples of the positive and negative training sample sets, and sending the HOG features and sample labels of the training sample sets into the SVM for training.
3. The opening and closing state recognition method according to claim 2, wherein the taking of the sample image as a positive and negative training sample set specifically comprises:
(1) taking pictures in which the opening and closing switch is in place as the positive sample set, and pictures in which the opening and closing switch is not in place as the negative sample set;
(2) cutting the picture, and deleting redundant information outside the opening and closing image area;
(3) the picture is scaled to m pixels long and n pixels wide, with m and n each ranging from 36 to 64.
4. The opening and closing state recognition method according to claim 2, wherein the specific method for extracting the HOG features of the positive and negative training sample sets is as follows:
(1) converting the color image into a gray image;
(2) carrying out Gamma correction on the gray level image to reduce local shadow and illumination variation of the image; the formula of the Gamma correction is:
I(x,y) = I(x,y)^gamma    (1)
wherein I(x,y) represents the pixel value at row x, column y of the image, and gamma takes a value between 0 and 1;
(3) the gradient of each pixel of the image is calculated according to the following formula:
Gx(x,y) = H(x+1,y) - H(x-1,y)    (2)
Gy(x,y) = H(x,y+1) - H(x,y-1)    (3)
wherein Gx(x,y), Gy(x,y) and H(x,y) respectively represent the horizontal gradient, the vertical gradient and the pixel value at pixel (x,y) of the image; the gradient magnitude G(x,y) and gradient direction α(x,y) at pixel (x,y) are then obtained as:
G(x,y) = sqrt(Gx(x,y)^2 + Gy(x,y)^2)    (4)
α(x,y) = arctan(Gy(x,y) / Gx(x,y))    (5)
(4) dividing the image into square cells with side length a pixels and creating a gradient direction histogram for each cell: the 360 degrees of gradient direction are divided into k direction blocks, the direction range of the ith direction block being [(i-1)·360/k, i·360/k) degrees; the gradient direction of each pixel in the cell is counted, and if it belongs to a certain direction block, the gradient's magnitude is added to the count of that direction block;
(5) combining the unit cells into blocks, rewriting the gradient histogram corresponding to each unit cell into a vector form by the intra-block normalized gradient histogram, and connecting all gradient vectors in each block in series to form a gradient direction histogram vector of the block; multiplying the vector by a corresponding normalization factor, wherein the calculation formula of the normalization factor is as follows:
wherein v represents a vector that has not been normalized, | v | | | luminance2A norm of order 2 representing v, e representing a constant;
(6) connecting the normalized vectors of all blocks in the image in series to obtain the HOG feature of the training sample set.
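Steps (2)-(6) above can be sketched end to end in Python with NumPy. This is a minimal illustration, not the patented implementation: it assumes 8x8 cells, k = 9 direction blocks over 360 degrees, and treats each cell as its own normalization block (a full HOG normalizes over overlapping multi-cell blocks); all function names and parameter defaults are illustrative.

```python
import numpy as np

def gamma_correct(gray, gamma=0.5):
    # Formula (1): I(x,y) = I(x,y)^gamma, applied on values scaled to [0, 1]
    return np.power(gray.astype(np.float64) / 255.0, gamma)

def gradients(H):
    # Formulas (2)-(3): central differences; image borders are left at zero for brevity
    Gx = np.zeros_like(H)
    Gy = np.zeros_like(H)
    Gx[1:-1, :] = H[2:, :] - H[:-2, :]            # H(x+1,y) - H(x-1,y), x indexing rows
    Gy[:, 1:-1] = H[:, 2:] - H[:, :-2]            # H(x,y+1) - H(x,y-1)
    mag = np.sqrt(Gx ** 2 + Gy ** 2)              # gradient magnitude G(x,y)
    ang = np.degrees(np.arctan2(Gy, Gx)) % 360.0  # gradient direction in [0, 360)
    return mag, ang

def cell_histogram(mag, ang, k=9):
    # Step (4): add each pixel's gradient magnitude to its direction block's count
    hist = np.zeros(k)
    bins = np.minimum((ang / (360.0 / k)).astype(int), k - 1)
    np.add.at(hist, bins.ravel(), mag.ravel())
    return hist

def l2_normalize(v, e=1e-6):
    # Step (5): multiply by the normalization factor 1 / sqrt(||v||_2^2 + e^2)
    return v / np.sqrt(np.sum(v ** 2) + e ** 2)

def hog_feature(gray, cell=8, k=9):
    # Steps (4)-(6): per-cell histograms, normalized and concatenated
    mag, ang = gradients(gamma_correct(gray))
    h, w = gray.shape
    feats = [l2_normalize(cell_histogram(mag[i:i + cell, j:j + cell],
                                         ang[i:i + cell, j:j + cell], k))
             for i in range(0, h - cell + 1, cell)
             for j in range(0, w - cell + 1, cell)]
    return np.concatenate(feats)
```

On a 16x16 grayscale patch this yields four cells of nine bins each, i.e. a 36-dimensional descriptor that can be fed to the SVM of claim 5.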
5. The opening and closing state recognition method according to claim 2, wherein the step of sending the HOG features and the sample labels of the positive and negative training sample sets into the SVM for training specifically comprises the steps of:
(1) determining the training target of the SVM, namely finding the optimal hyperplane that separates the positive and negative samples:
min over w, b, ξ of (1/2)·||w||^2 + D·Σi ξi
subject to yi·(w·xi + b) ≥ 1 - ξi, ξi ≥ 0
wherein w represents a vector perpendicular to the hyperplane, ||w|| represents the norm of w, ξi represents a relaxation variable and is a non-negative number, D is a parameter controlling the weight of the two terms in the objective function, xi represents the HOG feature of the ith sample, yi represents the sample label of the ith sample, and b represents a constant;
(2) constructing the Lagrangian function:
L(w,b,ξ,α,r) = (1/2)·||w||^2 + D·Σi ξi - Σi αi·[yi·(w·xi + b) - 1 + ξi] - Σi ri·ξi (8)
wherein αi represents a Lagrange multiplier and ri = D - αi; with αi ≥ 0 and ri ≥ 0, the objective function is transformed into
d* = max over α,r of min over w,b,ξ of L(w,b,ξ,α,r)
wherein d* represents the optimal value of the objective function;
(3) minimizing L with respect to w, b, ξ, i.e. setting the partial derivatives to zero:
∂L/∂w = 0 => w = Σi αi·yi·xi
∂L/∂b = 0 => Σi αi·yi = 0 (11)
∂L/∂ξi = 0 => D - αi - ri = 0
substituting equation (11) into equation (8), the objective function is transformed into:
max over α of Σi αi - (1/2)·Σi Σj αi·αj·yi·yj·<xi, xj>
subject to Σi αi·yi = 0, 0 ≤ αi ≤ D
wherein <xi, xj> represents the inner product of xi and xj;
(4) solving for the optimal Lagrange multipliers αi with the SMO algorithm: a heuristic is used to select a pair of Lagrange multipliers αi, αj; with αi and αj the only free variables and all other parameters fixed, the value of αi at which the objective is extremal is determined, and αj is expressed in terms of αi; these steps are repeated until the objective function converges;
(5) determining the optimal hyperplane according to the optimal values of the Lagrange multipliers:
w* = Σi αi*·yi·xi
b* = yj - Σi αi*·yi·<xi, xj>
wherein αi* represents the optimal value of the Lagrange multiplier, and w*, b* respectively represent the direction of the optimal hyperplane and its offset from the origin;
(6) obtaining the classification decision function, namely the trained SVM classifier:
f(x) = sign(w*·x + b*)
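The training objective of step (1) can be exercised on a small numeric example. The sketch below is a minimal stand-in that minimizes the same soft-margin objective (1/2)||w||^2 + D·Σξi by sub-gradient descent on the hinge loss rather than by the SMO dual solver of step (4); the function names, learning rate, and epoch count are illustrative choices, not part of the claimed method.

```python
import numpy as np

def train_linear_svm(X, y, D=1.0, lr=0.01, epochs=200):
    # Sub-gradient descent on (1/2)||w||^2 + D * sum_i max(0, 1 - y_i(w.x_i + b))
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        for i in range(n):
            if y[i] * (X[i] @ w + b) < 1:        # sample violates the margin
                w -= lr * (w - D * y[i] * X[i])  # hinge term contributes
                b += lr * D * y[i]
            else:
                w -= lr * w                      # only the regularizer contributes
    return w, b

def predict(w, b, X):
    # Decision function f(x) = sign(w.x + b)
    return np.sign(X @ w + b)
```

In the patented pipeline X would hold the HOG features of the positive and negative samples and y their opening/closing labels; on a toy linearly separable set the learned f(x) classifies both classes correctly.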
6. The opening and closing state identification method according to claim 1, wherein the coarse positioning to obtain the coarse positioning candidate region is performed by coarsely positioning the opening and closing target region in the picture to be detected through the Fourier-Mellin transform and phase correlation method to obtain the coarse positioning candidate region.
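The phase correlation half of claim 6 can be sketched directly with FFTs: the normalized cross-power spectrum of two images has an inverse transform that peaks at their relative translation. The snippet below illustrates that principle only; it omits the log-polar resampling that the full Fourier-Mellin method uses to also recover rotation and scale, and the function name is illustrative.

```python
import numpy as np

def phase_correlate(shifted, reference):
    # Normalized cross-power spectrum: R = F(a) * conj(F(b)) / |F(a) * conj(F(b))|
    A = np.fft.fft2(shifted)
    B = np.fft.fft2(reference)
    R = A * np.conj(B)
    R /= np.abs(R) + 1e-12             # keep only the phase difference
    corr = np.real(np.fft.ifft2(R))    # impulse at the translation offset
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    return dy, dx
```

For a pure cyclic shift the peak is exact; matching a real template against a scene crop additionally needs windowing and sub-pixel peak interpolation.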
7. The opening and closing state identification method according to claim 1, wherein the accurate positioning of the grayscale images to obtain a plurality of accurate positioning candidate regions specifically comprises:
and accurately positioning the target switching-on and switching-off regions by using a machine learning method, and sending the image to be detected into a trained classifier to obtain a plurality of target candidate regions.
8. The opening/closing state recognition method according to claim 1, 6 or 7, wherein the step of screening out the final target image for recognizing the opening/closing state between the coarse positioning candidate region and the plurality of accurate positioning candidate regions is specifically as follows:
calculating the confidence coefficient of each accurate positioning candidate region, and selecting the candidate region with the highest confidence coefficient from the plurality of accurate positioning candidate regions;
and comparing the candidate region with the maximum confidence coefficient with the rough positioning candidate region, and selecting a final target image.
9. The opening/closing state identification method according to claim 8, wherein the calculating the confidence of each accurate positioning candidate region specifically includes:
calculating an intersection ratio parameter IOU of each accurate positioning candidate area and the rough positioning candidate area;
calculating a perceptual hash index pHash of each accurate positioning candidate region and an opening and closing region image in the template image;
calculating the mutual information index I(G(X), H(Y)) of each accurate positioning candidate region and the template image;
The confidence coefficient calculation formula is as follows:
Confidence = 1 - (pHash + 1/I(G(X), H(Y))) / (IOU + D) (20)
wherein D is a constant;
the template image is an acquired opening and closing in-place image.
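Formula (20) combines the three indices into one score: higher mutual information and IOU raise the confidence, while a larger perceptual-hash distance lowers it, and the constant D keeps the denominator away from zero when IOU vanishes. A direct transcription (argument names are illustrative):

```python
def confidence(iou, phash, mutual_info, D=1.0):
    # Formula (20): Confidence = 1 - (pHash + 1/I) / (IOU + D)
    return 1.0 - (phash + 1.0 / mutual_info) / (iou + D)
```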
10. The opening/closing state recognition method according to claim 9, wherein the step of comparing the candidate region with the highest confidence with the rough-positioning candidate region and selecting a final target image comprises:
if the candidate region with the highest confidence simultaneously satisfies that its intersection ratio parameter IOU is less than the set threshold dIOU and that (pHash + 1/I(G(X), H(Y))) is greater than the set threshold, the coarse positioning candidate region is used as the final target image; otherwise, the candidate region with the highest confidence is used as the final target image.
11. The opening/closing state recognition method according to claim 9, wherein the calculation manner of the intersection ratio parameter IOU is:
IOU = |C ∩ ni| / |C ∪ ni|
wherein C is the coarse positioning candidate region and ni is the ith accurate positioning candidate region.
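For axis-aligned rectangular candidate regions the intersection ratio reduces to simple coordinate arithmetic. A minimal sketch assuming boxes given as (x1, y1, x2, y2) corners:

```python
def iou(box_a, box_b):
    # |A ∩ B| / |A ∪ B| for axis-aligned boxes (x1, y1, x2, y2)
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)
```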
12. The opening/closing state identification method according to claim 9, wherein the perceptual hash index pHash is calculated in a manner that:
scaling the accurate positioning candidate region and the template image to the same size, performing discrete cosine transformation, selecting the low-frequency region at the upper left corner of the transformed image, removing the direct-current component at coordinate (0,0) to obtain an N-dimensional feature vector in total, and calculating the Hamming distance between the feature vectors of the accurate positioning candidate region and the template image as the perceptual hash index.
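The pHash steps of claim 12 can be sketched as follows. This is a minimal illustration assuming square resized inputs and an 8x8 low-frequency block (so N = 63 after dropping the DC term); the DCT matrix is built by hand to avoid external dependencies, and the median-threshold binarization is a common pHash convention rather than a detail taken from the claim.

```python
import numpy as np

def dct2(img):
    # 2-D DCT-II of a square image via the orthonormal transform matrix
    n = img.shape[0]
    k = np.arange(n)
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0, :] = np.sqrt(1.0 / n)
    return C @ img @ C.T

def phash_bits(img, low=8):
    # Keep the upper-left low-frequency block, drop the DC component at (0,0),
    # then binarize against the median to form the feature vector
    d = dct2(img.astype(np.float64))[:low, :low].ravel()[1:]
    return (d > np.median(d)).astype(np.uint8)

def hamming(a, b):
    # Hamming distance between two bit vectors = the perceptual hash index
    return int(np.sum(a != b))
```

Identical regions give distance 0; the more the candidate region's low-frequency structure departs from the template's, the larger the distance and, via formula (20), the lower the confidence.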
13. The opening and closing state identification method according to claim 9, wherein the mutual information index I(G(X), H(Y)) is calculated as:
I(G(X), H(Y)) = Σx Σy p(x,y)·log( p(x,y) / (p(x)·p(y)) )
wherein G(X), H(Y) respectively represent the pixel values of the template image and the accurate positioning candidate region, p denotes their gray-level distributions, and W, H respectively represent the width and height of the candidate region image.
14. The opening and closing state identification method according to claim 9, wherein the preprocessing of the opening and closing target image is specifically:
carrying out histogram equalization on the target image, and increasing the overall contrast of the image;
and performing Gaussian filtering on the equalized image to eliminate Gaussian noise on the image.
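The two preprocessing steps of claim 14 can be sketched with plain NumPy (in practice OpenCV's equalizeHist and GaussianBlur would typically be used; these hand-rolled versions and their parameter defaults are illustrative only):

```python
import numpy as np

def hist_equalize(gray):
    # Map gray levels through the normalized cumulative histogram to raise contrast
    hist = np.bincount(gray.ravel(), minlength=256)
    cdf = np.cumsum(hist).astype(np.float64)
    cdf_min = cdf[cdf > 0].min()
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min + 1e-12) * 255).astype(np.uint8)
    return lut[gray]

def gaussian_blur(gray, sigma=1.0, radius=2):
    # Separable Gaussian filtering to suppress Gaussian noise
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x ** 2 / (2 * sigma ** 2))
    kernel /= kernel.sum()
    padded = np.pad(gray.astype(np.float64), radius, mode='edge')
    rows = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode='valid'), 1, padded)
    out = np.apply_along_axis(lambda c: np.convolve(c, kernel, mode='valid'), 0, rows)
    return np.round(out).astype(np.uint8)
```

Equalization stretches the target image's gray levels over the full 0-255 range before the smoothing pass removes sensor noise, matching the order stated in the claim.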
15. The opening and closing state identification method according to claim 1, wherein the collecting of the opening and closing images to be identified specifically comprises: the power inspection robot reaches an appointed inspection point through positioning navigation and shoots an opening and closing image of the inspection field power equipment.
CN201910014074.3A 2019-01-08 2019-01-08 Method for identifying switching-on and switching-off states Pending CN109784396A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910014074.3A CN109784396A (en) 2019-01-08 2019-01-08 Method for identifying switching-on and switching-off states

Publications (1)

Publication Number Publication Date
CN109784396A true CN109784396A (en) 2019-05-21

Family

ID=66499255

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910014074.3A Pending CN109784396A (en) 2019-01-08 2019-01-08 Method for identifying switching-on and switching-off states

Country Status (1)

Country Link
CN (1) CN109784396A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN203367177U (en) * 2013-07-17 2013-12-25 施耐德电气华电开关(厦门)有限公司 Breaker opening and closing state indicating device
CN104200219A (en) * 2014-08-20 2014-12-10 深圳供电局有限公司 Automatic identification method and device for switch position indication of transformer substation disconnecting link position
CN108537154A (en) * 2018-03-28 2018-09-14 天津大学 Transmission line of electricity Bird's Nest recognition methods based on HOG features and machine learning
CN108564024A (en) * 2018-04-10 2018-09-21 四川超影科技有限公司 Switch identification method applied to power station environment
CN109344768A (en) * 2018-09-29 2019-02-15 南京理工大学 Pointer breaker recognition methods based on crusing robot

Non-Patent Citations (8)

* Cited by examiner, † Cited by third party
Title
ROSS GIRSHICK et al.: "Rich feature hierarchies for accurate object detection and semantic segmentation", 2014 IEEE Conference on Computer Vision and Pattern Recognition *
HOU Honghua: "Digital Image Processing and Analysis", 30 September 2011, Beijing Institute of Technology Press *
YAO Rui: "Robust Object Detection and Tracking in Complex Environments", 31 May 2015, China University of Mining and Technology Press *
DAI Xiance et al.: "Research on image matching method based on Fourier-Mellin transform", Infrared Technology *
LI Shuai: "Research on moving object recognition and tracking technology based on machine learning", China Master's Theses Full-text Database *
LI Haibiao et al.: "Template matching hash target tracking based on between-class variance and discrete cosine transform", Electronics Optics & Control *
YANG Jie et al.: "Video Object Detection and Tracking and Their Applications", 31 August 2012, Shanghai Jiao Tong University Press *
YAN Zhigang: "Support Vector Machine Theory and Methods for Spatial Data Mining and Knowledge Discovery of Mine Water Hazards", 31 October 2018, China University of Mining and Technology Press *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111178395A (en) * 2019-12-12 2020-05-19 平高集团有限公司 Isolation switch state identification method and device
CN111178395B (en) * 2019-12-12 2023-04-07 平高集团有限公司 Isolation switch state identification method and device
CN112418226A (en) * 2020-10-23 2021-02-26 济南信通达电气科技有限公司 Method and device for identifying opening and closing states of fisheyes
CN112418226B (en) * 2020-10-23 2022-11-25 济南信通达电气科技有限公司 Method and device for identifying opening and closing states of fisheyes
CN113239837A (en) * 2021-05-21 2021-08-10 华南农业大学 Machine learning-based green tomato identification method in natural environment
CN113780191A (en) * 2021-09-14 2021-12-10 西安西电开关电气有限公司 Method and system for identifying opening and closing state image of starting drag switch of power station
CN113780191B (en) * 2021-09-14 2024-05-10 西安西电开关电气有限公司 Method and system for identifying opening and closing state image of power station start dragging switch

Similar Documents

Publication Publication Date Title
CN109447949A (en) Insulated terminal defect identification method based on crusing robot
CN108961235B (en) Defective insulator identification method based on YOLOv3 network and particle filter algorithm
CN108537154B (en) Power transmission line bird nest identification method based on HOG characteristics and machine learning
CN109784396A (en) Method for identifying switching-on and switching-off states
CN110059694A (en) The intelligent identification Method of lteral data under power industry complex scene
CN109344768A (en) Pointer breaker recognition methods based on crusing robot
CN111539330B (en) Transformer substation digital display instrument identification method based on double-SVM multi-classifier
CN109344766A (en) Slide block type breaker recognition methods based on crusing robot
CN113947590A (en) Surface defect detection method based on multi-scale attention guidance and knowledge distillation
CN107392237B (en) Cross-domain foundation cloud picture classification method based on migration visual information
CN110889332A (en) Lie detection method based on micro expression in interview
Laga et al. Image-based plant stornata phenotyping
CN112446370A (en) Method for recognizing text information of nameplate of power equipment
CN109934221B (en) Attention mechanism-based automatic analysis, identification and monitoring method and system for power equipment
CN106485273A (en) A kind of method for detecting human face based on HOG feature and DNN grader
CN109255336A (en) Arrester recognition methods based on crusing robot
CN104200226B (en) Particle filter method for tracking target based on machine learning
CN111199250A (en) Transformer substation air switch state checking method and device based on machine learning
CN117115790A (en) Automatic instrument image identification and classification method for inspection robot
CN116311201A (en) Substation equipment state identification method and system based on image identification technology
CN108268854B (en) Teaching assistance big data intelligent analysis method based on feature recognition
CN107944453A (en) Based on Hu not bushing detection methods of bending moment and support vector machines
CN109165592B (en) Real-time rotatable face detection method based on PICO algorithm
Lin et al. A traffic sign recognition method based on deep visual feature
CN113673534B (en) RGB-D image fruit detection method based on FASTER RCNN

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190521