CN117392201A - Target paper bullet hole identification and target reporting method based on visual detection - Google Patents
- Publication number: CN117392201A (application CN202311199001.9A)
- Authority: CN (China)
- Prior art keywords: image, target, bullet hole, bullet, gray
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T7/62 — Analysis of geometric attributes of area, perimeter, diameter or volume
- G06F17/11 — Complex mathematical operations for solving equations, e.g. nonlinear equations, general mathematical optimization problems
- G06F17/16 — Matrix or vector computation, e.g. matrix-matrix or matrix-vector multiplication, matrix factorization
- G06T3/40 — Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/60 — Rotation of whole images or parts thereof
- G06T5/30 — Erosion or dilatation, e.g. thinning
- G06T7/11 — Region-based segmentation
- G06T7/90 — Determination of colour characteristics
Abstract
The invention provides a target paper bullet hole identification and target reporting method based on visual detection, comprising the following steps: an optical filter is mounted in front of the camera, an original image of the target surface is acquired through the camera, and geometric correction is applied to the original image. The corrected image is then preprocessed in sequence by graying, binary segmentation and morphological operations, yielding a binary image containing the bullet-hole target surface. Contour extraction and fitting of the target surface give the bullseye coordinates and the target radius; the valid target face is screened by area to generate a mask, and the original image is segmented by the mask region, leaving the final target area for bullet hole detection. Finally, the Euclidean distance from each bullet hole to the bullseye is compared with the radius of the ring of each ring value; a bullet hole on a ring line is scored by the side with the larger ring value, i.e. the higher ring value counts as the valid result. The technical scheme of the invention solves the problems in the prior art that bullet hole positions on the target surface cannot be effectively detected and that illumination has a great influence.
Description
Technical Field
The invention relates to the technical field of computer vision, and in particular to a target paper bullet hole identification and target reporting method based on visual detection.
Background
In recent years, the rapid development of information technology has driven automation and intelligence in many fields. In current shooting training, scores are still reported and tallied manually, which is time-consuming, unsafe and prone to human error. With the continued progress of computer vision technology, applying it to automatic bullet hole identification so that targets are reported automatically instead of manually is an inevitable trend.
At present, domestic automatic target reporting systems fall into the following types according to their working principle: double-layer electrode short-circuit sampling systems, photoelectric automatic target reporting systems, acousto-electric positioning systems, optical-fiber-coded systems, electrode-embedded systems and the like; however, these systems require complex installation and incur high later maintenance costs. Automatic target reporting based on visual detection in the prior art consists mainly of four parts: image preprocessing, target ring extraction, bullet hole identification and ring value judgment. Bullet hole identification directly determines the accuracy of the final report and is the core of the whole system. Template-matching methods compute feature vectors of bullet holes and non-holes and compare them with the feature vector of each pixel in the image under test; they require a large set of accurate bullet hole templates and a relatively large amount of computation. Methods based on fuzzy theory identify bullet holes from the difference between bullet hole gray values and background gray values; they are simple and fast but ignore target-surface distortion. Subtraction of time-sequence images identifies a bullet hole by comparing consecutive frames, treating positions with a non-zero difference as bullet holes. Image-fusion methods based on the wavelet transform decompose the two images separately; since the wavelet coefficients of the image containing a new shot change most, the edges and position of the new shot are highlighted, strengthening interference resistance, but the algorithm is complex and time-consuming. Methods based on the gray features of bullet holes exploit the gray difference between the hole region and the background, gathering hole gray statistics and extracting holes by segmentation; the algorithm is simple but strongly affected by illumination.
Therefore, there is a need for a target paper bullet hole identification and target reporting method based on visual detection that can effectively detect bullet hole positions on the target surface and can be used in an outdoor environment.
Disclosure of Invention
The main object of the invention is to provide a target paper bullet hole identification and target reporting method based on visual detection, so as to solve the problems in the prior art that bullet hole positions on the target surface cannot be effectively detected and that illumination strongly affects recognition.
In order to achieve the above purpose, the invention provides a target paper bullet hole identification and target reporting method based on visual detection, which comprises the following steps:
S1, mounting an optical filter in front of a camera, acquiring an original image of the target surface through the camera, performing image preprocessing on the original image, and performing geometric correction on the image.
S2, further preprocessing the image, performing in sequence: graying, binary segmentation and morphological operations.
S3, obtaining a binary image containing the bullet-hole target surface from the corrected and preprocessed image, and performing contour extraction and fitting on the target surface to obtain the bullseye coordinates and the target radius; screening out the valid target face by area to generate a mask, and then segmenting the original image by the mask region, leaving the final target area for bullet hole detection.
S4, calculating the Euclidean distance between each bullet hole and the bullseye, and then comparing the calculated distance with the radius of the ring of each ring value; a bullet hole on a ring line is scored by the side with the larger ring value, i.e. the higher ring value counts as the valid result.
Further, the geometric correction of the image in step S1 uses a perspective transformation, lifting the two-dimensional picture into three-dimensional space and then projecting it back onto the two-dimensional plane. The specific process is as follows:

assume the original coordinate point is $(x, y)$ and the transformed coordinate is $(X, Y, Z)$; the perspective transformation matrix equation is expressed as formula (1):

$$[X \quad Y \quad Z] = [x \quad y \quad 1] \cdot A, \qquad A = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix} \tag{1}$$

where $A$ is the perspective transformation matrix, which can be split into four parts: the first part $\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}$ is a linear transformation, mainly used for scaling and rotating the image; the second part $[a_{31} \quad a_{32}]$ performs translation; the third part $[a_{13} \quad a_{23}]^T$ produces the perspective effect; the fourth part, the parameter $a_{33}$, is fixed at 1.

The transformed coordinates $(X, Y, Z)$ are three-dimensional; dividing by $Z$ projects them back onto the two-dimensional plane, giving the new two-dimensional coordinates $(x', y')$ as shown in formulas (2) and (3):

$$x' = \frac{X}{Z} \tag{2}$$

$$y' = \frac{Y}{Z} \tag{3}$$

Setting $a_{33} = 1$ and expanding gives formula (4):

$$x' = \frac{a_{11}x + a_{21}y + a_{31}}{a_{13}x + a_{23}y + 1}, \qquad y' = \frac{a_{12}x + a_{22}y + a_{32}}{a_{13}x + a_{23}y + 1} \tag{4}$$
further, in step S2, the image is subjected to gray-scale preprocessing specifically:
the gray image can be obtained by carrying out weighted average on the RGB three components according to the formula (5)
Gray(i,j)=0.299*R(i,j)+0.578*G(i,j)+0.114*B(i,j) (5);
Where R (i, j), G (i, j), B (i, j) represent pixel values of red, green, and blue channels, respectively, and Gray (i, j) represents a single channel pixel value after Gray scale.
Further, in step S2, the grayed image is subjected to binary segmentation, the process being as follows:

assume the pixel value of the gray image is f(x, y) and the segmentation threshold is T; the segmented image g(x, y) is defined as shown in formula (6):

$$g(x, y) = \begin{cases} 255, & f(x, y) \ge T \\ 0, & f(x, y) < T \end{cases} \tag{6}$$
further, in step S2, morphological operations are performed on the binary-segmented image, specifically:
in gray level morphology, the image functions f (x, y) and b (x, y) are used to represent the target image and the structural element respectively, and the (x, y) represents the coordinates of the pixel point in the image, so that the mathematical expression of morphological operation is shown as formulas (7) and (8), wherein g 1 (x, y) is a gray scale image g after etching 2 (x, y) is an expanded gray scale image:
further, the process of contour extraction and fitting of the target surface in step S3 is as follows:
S3.1, assume the input is a binary image $f = \{f_{i,j}\}$ of size H × W, where H represents the width of the image and W the height. First raster-scan the input binary image from top to bottom and left to right to find points satisfying the boundary condition: if $f_{i,j} = 1$ and $f_{i,j-1} = 0$, then (i, j) is an outer-boundary start point.
S3.2, after the boundary tracking starting point is found, boundary tracking is executed, and a complete boundary is found.
S3.3, after the complete boundary is obtained, let the connected domain be D, take the center points of the pixels on the boundary of D, connect them to form an approximately circular contour, and calculate the contour area using Green's formula.
S3.4, screen the obtained contours by area, keep only the innermost ring contour, fit it to a circle, and obtain the bullseye coordinates and radius.
Further, the bullet hole detection step in step S3 is:

S3.5, performing contour extraction on the binary image processed in step S2, screening out the valid target face by area to eliminate the influence of off-target bullet holes on detection, then segmenting the original image by the mask region to remove the background outside the target face, and performing bullet hole detection on the segmented image;
S3.6, first graying the segmented image to obtain a gray image, then applying median filtering to the gray image to remove ring lines and background fold noise, and then performing binarization;
and S3.7, performing contour detection and morphological closing operation on the binarized image, namely performing expansion and corrosion on the image, and finally performing minimum circle fitting on the detected bullet hole contour, and simultaneously obtaining the centroid coordinates of each bullet hole.
Further, step S4 specifically includes:
S4.1, calculating the Euclidean distance $d_i$ between each bullet hole and the bullseye using the Euclidean distance formula shown in (9), where the bullseye center is $O = (x_r, y_r)$, the bullseye radius is $r$, and the coordinates of each bullet hole are $B_i = (x_i, y_i)$:

$$d_i = \sqrt{(x_i - x_r)^2 + (y_i - y_r)^2} \tag{9}$$
S4.2, comparing the Euclidean distance with the radius of the ring of each ring value; a bullet hole on a ring line is scored by the side with the larger ring value, i.e. the higher ring value counts as the valid result. Denoting by $R_s$ the outer radius of the ring with value $s$ (so that $R_{10} < R_9 < \dots < R_1$, with $R_{11} = 0$), the judgment formula is shown as formula (10):

$$S_i = \begin{cases} s, & R_{s+1} < d_i \le R_s \\ 0, & d_i > R_1 \end{cases} \tag{10}$$

where $i$ indexes the bullet holes and $S_i$ is the ring value; the boundary case $d_i = R_s$ yields the value $s$, the larger of the two adjacent ring values.
The invention has the following beneficial effects:
the bullet hole detection method provided by the invention has higher bullet hole identification accuracy due to the large difference between the bullet hole and the gray value of the area where the bullet hole is positioned, and can be used in an outdoor environment.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are needed in the description of the embodiments or the prior art will be briefly described, and it is obvious that the drawings in the description below are some embodiments of the present invention, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art. In the drawings:
fig. 1 shows a flow chart of a target paper bullet hole identification and target reporting method based on visual detection.
Fig. 2 shows an image after the graying of step S2 of the present invention.
Fig. 3 shows an image after the binary segmentation of step S2 of the present invention.
Fig. 4 shows an image after the morphological operations of step S2 of the present invention.
Fig. 5 shows the result of the bullseye extraction of step S3 of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will now be described clearly and completely with reference to the accompanying drawings; obviously, the described embodiments are only some, not all, of the embodiments of the invention. All other embodiments obtained by those skilled in the art from the embodiments of the invention without inventive effort fall within the scope of the invention.
The target paper bullet hole identification and target reporting method based on visual detection shown in fig. 1 specifically comprises the following steps:
S1, mounting an optical filter in front of a camera, acquiring an original image of the target surface through the camera, performing image preprocessing on the original image, and performing geometric correction on the image. First, an original image of the target surface is acquired by the camera; the captured image must be processed before subsequent detection. The main purpose of image preprocessing is to remove irrelevant information, enhance the detectability of relevant information, simplify the data as much as possible, and improve the reliability of feature extraction and recognition. To avoid damage to the camera during shooting, the camera is placed below the target paper and looks up at an angle, so geometric correction of the image is required.
S2, further preprocessing the image, performing in sequence: graying, binary segmentation and morphological operations. After geometric correction of the original image, a rough target-surface area image is obtained; to further extract the target-surface region of interest, the corrected image must be preprocessed further by graying, binary segmentation and morphological operations.
S3, obtaining a binary image containing the bullet-hole target surface from the corrected and preprocessed image, and performing contour extraction and fitting on the target surface to obtain the bullseye coordinates and the target radius; screening out the valid target face by area to generate a mask, and then segmenting the original image by the mask region, leaving the final target area for bullet hole detection.
S4, calculating the Euclidean distance between each bullet hole and the bullseye, and then comparing the calculated distance with the radius of the ring of each ring value; a bullet hole on a ring line is scored by the side with the larger ring value, i.e. the higher ring value counts as the valid result.
Specifically, the geometric correction of the image in step S1 uses a perspective transformation, lifting the two-dimensional picture into three-dimensional space and then projecting it back onto the two-dimensional plane. The specific process is as follows:

assume the original coordinate point is $(x, y)$ and the transformed coordinate is $(X, Y, Z)$; the perspective transformation matrix equation is expressed as formula (1):

$$[X \quad Y \quad Z] = [x \quad y \quad 1] \cdot A, \qquad A = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix} \tag{1}$$

where $A$ is the perspective transformation matrix, which can be split into four parts: the first part $\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}$ is a linear transformation, mainly used for scaling and rotating the image; the second part $[a_{31} \quad a_{32}]$ performs translation; the third part $[a_{13} \quad a_{23}]^T$ produces the perspective effect; the fourth part, the parameter $a_{33}$, is fixed at 1.

The transformed coordinates $(X, Y, Z)$ are three-dimensional; dividing by $Z$ projects them back onto the two-dimensional plane, giving the new two-dimensional coordinates $(x', y')$ as shown in formulas (2) and (3):

$$x' = \frac{X}{Z} \tag{2}$$

$$y' = \frac{Y}{Z} \tag{3}$$

Setting $a_{33} = 1$ and expanding gives formula (4):

$$x' = \frac{a_{11}x + a_{21}y + a_{31}}{a_{13}x + a_{23}y + 1}, \qquad y' = \frac{a_{12}x + a_{22}y + a_{32}}{a_{13}x + a_{23}y + 1} \tag{4}$$
Four pairs of corresponding points are therefore needed to solve for the parameters of the perspective transformation matrix A. Compared with linear and affine transformations, the perspective transformation is more flexible: by lifting the operation into three-dimensional space it gains a dimension, while still preserving straightness, i.e. straight lines in the original image remain straight after the transformation.
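By way of illustration only (not part of the original patent text), the geometric correction step can be sketched with OpenCV in Python as below; the image file name, corner coordinates and output size are placeholder assumptions:

```python
# Illustrative sketch of step S1's geometric correction (perspective transform).
# The four source corners would come from detecting the target sheet; here they
# are placeholder values.
import cv2
import numpy as np

def correct_perspective(image, src_corners, out_size=(800, 800)):
    """Warp the obliquely viewed target sheet to a fronto-parallel view."""
    w, h = out_size
    dst_corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    # Four point pairs fully determine the 3x3 matrix A (with a33 fixed to 1).
    A = cv2.getPerspectiveTransform(np.float32(src_corners), dst_corners)
    return cv2.warpPerspective(image, A, (w, h))

img = cv2.imread("target.jpg")                           # placeholder file name
corners = [[120, 80], [950, 60], [990, 900], [90, 930]]  # placeholder corners
corrected = correct_perspective(img, corners)
```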
After geometric correction of the original image, a rough target-surface area image is obtained; to further extract the target-surface region of interest, the corrected image is preprocessed by graying, binary segmentation and morphological operations.
Specifically, the graying preprocessing of the image in step S2 is as follows:

Image graying is the process of converting a color image into a gray image. A gray image contains only one channel, whose value represents the gray level. During graying, the RGB values of each pixel are combined into a single gray value, the three components being weighted according to their importance and other criteria. Since the human eye is most sensitive to green and least sensitive to blue, taking the weighted average of the three RGB components according to formula (5) yields the gray image shown in fig. 2:

Gray(i,j) = 0.299*R(i,j) + 0.587*G(i,j) + 0.114*B(i,j) (5);

where R(i,j), G(i,j) and B(i,j) represent the pixel values of the red, green and blue channels respectively, and Gray(i,j) represents the single-channel pixel value after graying; the weights 0.299, 0.587 and 0.114 reflect the sensitivity of the human eye to the different colors. The gray image obtained by this algorithm looks natural and matches human perception.
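By way of illustration (not part of the patent text), formula (5) can be sketched in Python; the file name is a placeholder:

```python
# Illustrative sketch of the graying of formula (5). OpenCV loads images in
# BGR order; the explicit weighted sum below matches cv2.COLOR_BGR2GRAY.
import cv2
import numpy as np

img = cv2.imread("target_corrected.jpg")   # placeholder file name
b, g, r = cv2.split(img.astype(np.float32))
gray = (0.299 * r + 0.587 * g + 0.114 * b).astype(np.uint8)
gray_cv = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # same weights, built in
```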
After graying, a 256-level gray image is obtained. Binary segmentation then selects a suitable threshold to turn the 256 brightness levels into a binary image that still reflects the overall and local characteristics of the image, as shown in fig. 3. The advantage is that the set properties of the image depend only on the positions of pixels with value 0 or 255; multi-level pixel values are no longer involved, processing is simplified, and the amount of data to process and compress is small.
Specifically, in step S2 the grayed image is subjected to binary segmentation, the process being as follows:

assume the pixel value of the gray image is f(x, y) and the segmentation threshold is T; the segmented image g(x, y) is defined as shown in formula (6):

$$g(x, y) = \begin{cases} 255, & f(x, y) \ge T \\ 0, & f(x, y) < T \end{cases} \tag{6}$$
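By way of illustration (not part of the patent text), the thresholding of formula (6) can be sketched as follows; T = 127 is a placeholder value:

```python
# Illustrative sketch of the binary segmentation of formula (6). Passing
# cv2.THRESH_OTSU instead lets OpenCV pick T automatically from the histogram.
import cv2

gray = cv2.imread("target_gray.png", cv2.IMREAD_GRAYSCALE)  # placeholder file
T = 127
_, binary = cv2.threshold(gray, T, 255, cv2.THRESH_BINARY)
# Automatic alternative:
# _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
```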
after the image is subjected to binarization processing, the obtained binary image has a lot of noise points, the boundary information of a target area is fuzzy, the binary image is processed by using morphological operation to carry out image enhancement, and the morphological core operation is morphological operation which changes the shape and the characteristics of the image by a mode of carrying out specific operation on structural elements and the image. The structural element is a small and predefined shape of the image matrix that can be logically operated on with the pixels in the image. Common morphological operations are: corrosion and expansion.
Specifically, in step S2 morphological operations are performed on the binary-segmented image, as follows:

in gray-level morphology, the image functions f(x, y) and b(x, y) denote the target image and the structural element respectively, and (x, y) denotes the coordinates of a pixel in the image; the morphological operations are then expressed mathematically by formulas (7) and (8), where $g_1(x, y)$ is the gray image after erosion and $g_2(x, y)$ is the gray image after dilation:

$$g_1(x, y) = (f \ominus b)(x, y) = \min_{(s,t)} \{ f(x+s, y+t) - b(s, t) \} \tag{7}$$

$$g_2(x, y) = (f \oplus b)(x, y) = \max_{(s,t)} \{ f(x-s, y-t) + b(s, t) \} \tag{8}$$
the corrosion and the expansion are commonly used together, the corrosion is carried out firstly and then the expansion is called open operation, and the method is mainly used for deleting some small targets, separating some thinner joints and smoothing the boundaries of some larger objects; the first expansion and then erosion is called a closed operation and is mainly used to fill some tiny holes, connect the nearby objects, and smooth the object boundary. The binary image is inverted before the closing operation, and the value of the target area is set to be 1, so as to highlight the target area to be processed. And then connecting the missing parts of the target surface areas in the binary image by using a closed operation to form a relatively complete target surface area, as shown in fig. 4.
After the preceding correction and preprocessing of the original image, a binary image containing the bullet-hole target surface is obtained. Contours are first extracted from the target surface, the detected contours are filtered by area so that only the innermost ring remains, and this contour is then fitted to a circle to obtain the bullseye coordinates and target radius; the result is shown in fig. 5.
Specifically, the process of contour extraction and fitting of the target surface in step S3 is as follows:
S3.1, assume the input is a binary image $f = \{f_{i,j}\}$ of size H × W, where H represents the width of the image and W the height. First raster-scan the input binary image from top to bottom and left to right to find points satisfying the boundary condition: if $f_{i,j} = 1$ and $f_{i,j-1} = 0$, then (i, j) is an outer-boundary start point.
S3.2, after the boundary tracking starting point is found, boundary tracking is executed, and a complete boundary is found.
S3.3, after the complete boundary is obtained, let the connected domain be D, take the center points of the pixels on the boundary of D, connect them to form an approximately circular contour, and calculate the contour area using Green's formula.
S3.4, screen the obtained contours by area, keep only the innermost ring contour, fit it to a circle, and obtain the bullseye coordinates and radius.
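By way of illustration (not part of the patent text), steps S3.1 to S3.4 can be sketched with OpenCV's contour routines; the file name and area bounds are placeholders that would be tuned to the image resolution:

```python
# Illustrative sketch: extract contours, filter by area to keep the innermost
# ring, and fit a circle to obtain the bullseye centre and radius.
import cv2

closed = cv2.imread("target_closed.png", cv2.IMREAD_GRAYSCALE)  # placeholder
contours, _ = cv2.findContours(closed, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
rings = [c for c in contours if 500 < cv2.contourArea(c) < 50000]
inner = min(rings, key=cv2.contourArea)            # innermost ring contour
(cx, cy), radius = cv2.minEnclosingCircle(inner)   # bullseye centre and radius
```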
Specifically, the bullet hole detection step in step S3 is as follows:

S3.5, performing contour extraction on the binary image processed in step S2, screening out the valid target face by area to eliminate the influence of off-target bullet holes on detection, then segmenting the original image by the mask region to remove the background outside the target face, and performing bullet hole detection on the segmented image; the valid target face means the target face with all other irrelevant background removed, containing only the ring-value region.
S3.6, firstly carrying out graying on the segmented image to obtain a gray image, then carrying out median filtering on the gray image to eliminate loop lines and background fold noise points, and then carrying out binarization processing;
and S3.7, performing contour detection and morphological closing operation on the binarized image, namely performing expansion and corrosion on the image, and finally performing minimum circle fitting on the detected bullet hole contour, and simultaneously obtaining the centroid coordinates of each bullet hole.
Specifically, step S4 includes:
S4.1, calculating the Euclidean distance $d_i$ between each bullet hole and the bullseye using the Euclidean distance formula shown in (9), where the bullseye center is $O = (x_r, y_r)$, the bullseye radius is $r$, and the coordinates of each bullet hole are $B_i = (x_i, y_i)$:

$$d_i = \sqrt{(x_i - x_r)^2 + (y_i - y_r)^2} \tag{9}$$
S4.2, comparing the Euclidean distance with the radius of the ring of each ring value; a bullet hole on a ring line is scored by the side with the larger ring value, i.e. the higher ring value counts as the valid result. Denoting by $R_s$ the outer radius of the ring with value $s$ (so that $R_{10} < R_9 < \dots < R_1$, with $R_{11} = 0$), the judgment formula is shown as formula (10):

$$S_i = \begin{cases} s, & R_{s+1} < d_i \le R_s \\ 0, & d_i > R_1 \end{cases} \tag{10}$$

where $i$ indexes the bullet holes and $S_i$ is the ring value; the boundary case $d_i = R_s$ yields the value $s$, the larger of the two adjacent ring values.
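By way of illustration (not part of the patent text), a minimal sketch of the scoring in step S4; ring_radii, which maps each ring value to the outer radius of its ring in pixels, is a placeholder that would be derived from the fitted target radius:

```python
# Illustrative sketch of step S4: score each hole by its Euclidean distance to
# the bullseye, awarding the higher ring value when the hole lies on a ring line.
import math

def score_hole(hole, centre, ring_radii):
    d = math.hypot(hole[0] - centre[0], hole[1] - centre[1])  # formula (9)
    for value, radius in sorted(ring_radii.items(), reverse=True):
        if d <= radius:    # "<=": a hole on the ring line takes the higher value
            return value
    return 0               # outside the outermost ring

ring_radii = {10: 30, 9: 60, 8: 90, 7: 120, 6: 150}  # placeholder radii
print(score_hole((25, 40), (0, 0), ring_radii))       # distance ~47 -> ring 9
```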
The accuracy of bullet hole detection directly determines the reliability of an automatic target reporting system. The accuracy of the visual bullet hole detection algorithm designed in this method has two parts: first, the extraction of the valid target face; second, the bullet hole recognition accuracy after the valid target face has been extracted. The factors affecting recognition accuracy are mainly the imaging quality of the camera, illumination, and how densely the bullet holes cluster. For easier scoring statistics, this embodiment marks the bullet holes with colors to distinguish their ring values; the relationship between color and ring value is shown in table 1 below. In the test, performance statistics were gathered on 10 successfully segmented target-surface images, and the ring values of all bullet holes on each target surface were summed to give its score, as shown in table 2.
TABLE 1 bullet hole and color marking
Table 2 score statistics
In the bullet hole identification method provided by the invention, recognition relies on the gray value of a bullet hole differing from that of its surrounding area, and the larger the difference, the higher the recognition accuracy. During testing, when the target paper was directly illuminated by sunlight during the day, the target surface became overexposed: bullet holes that could previously be identified went undetected because of the illumination, which strongly affected the recognition result. To address this problem, the invention places an optical filter with a passband of 532 nm ± 10 nm in front of the image acquisition device to remove visible light of other wavebands, thereby improving recognition precision.
The method provided by the invention can effectively detect bullet holes in the target rings and calculate the score, and can be used in an outdoor environment. Exploiting the fact that the rings of a chest-ring target are concentric and differ in radius, the method computes the Euclidean distance between the bullseye and each bullet hole and compares it with the radii of the ring values for judgment. In addition, to reduce the influence of outdoor illumination on recognition, an optical filter is placed in front of the image acquisition device to filter out visible light outside a certain waveband.
It should be understood that the above description is not intended to limit the invention to the particular embodiments disclosed; the invention is intended to cover modifications, adaptations, additions and alternatives falling within its spirit and scope.
Claims (8)
1. The target paper bullet hole identification and target reporting method based on visual detection is characterized by comprising the following steps of:
S1, mounting an optical filter in front of a camera, acquiring an original image of the target surface through the camera, performing image preprocessing on the original image, and performing geometric correction on the image;
S2, further preprocessing the image, performing in sequence: graying, binary segmentation and morphological operations;
S3, obtaining a binary image containing the bullet-hole target surface from the corrected and preprocessed image, and performing contour extraction and fitting on the target surface to obtain the bullseye coordinates and the target radius; screening out the valid target face by area to generate a mask, and then segmenting the original image by the mask region, leaving the final target area for bullet hole detection;
S4, calculating the Euclidean distance between each bullet hole and the bullseye, and then comparing the calculated distance with the radius of the ring of each ring value; a bullet hole on a ring line is scored by the side with the larger ring value, i.e. the higher ring value counts as the valid result.
2. The target paper bullet hole identification and target reporting method based on visual detection according to claim 1, wherein the geometric correction of the image in step S1 uses a perspective transformation, lifting the two-dimensional picture into three-dimensional space and then projecting it back onto the two-dimensional plane, the specific process being:
assume the original coordinate point is $(x, y)$ and the transformed coordinate is $(X, Y, Z)$; the perspective transformation matrix equation is expressed as formula (1):

$$[X \quad Y \quad Z] = [x \quad y \quad 1] \cdot A, \qquad A = \begin{bmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{bmatrix} \tag{1}$$

where $A$ is the perspective transformation matrix, which can be split into four parts: the first part $\begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}$ is a linear transformation, mainly used for scaling and rotating the image; the second part $[a_{31} \quad a_{32}]$ performs translation; the third part $[a_{13} \quad a_{23}]^T$ produces the perspective effect; the fourth part, the parameter $a_{33}$, is fixed at 1;

the transformed coordinates $(X, Y, Z)$ are three-dimensional; dividing by $Z$ projects them back onto the two-dimensional plane, giving the new two-dimensional coordinates $(x', y')$ as shown in formulas (2) and (3):

$$x' = \frac{X}{Z} \tag{2}$$

$$y' = \frac{Y}{Z} \tag{3}$$

setting $a_{33} = 1$ and expanding gives formula (4):

$$x' = \frac{a_{11}x + a_{21}y + a_{31}}{a_{13}x + a_{23}y + 1}, \qquad y' = \frac{a_{12}x + a_{22}y + a_{32}}{a_{13}x + a_{23}y + 1} \tag{4}$$
3. The target paper bullet hole identification and target reporting method based on visual detection according to claim 1, wherein the graying preprocessing of the image in step S2 is specifically:

the gray image is obtained by taking a weighted average of the three RGB components according to formula (5):

Gray(i,j) = 0.299*R(i,j) + 0.587*G(i,j) + 0.114*B(i,j) (5);

where R(i,j), G(i,j) and B(i,j) represent the pixel values of the red, green and blue channels respectively, and Gray(i,j) represents the single-channel pixel value after graying.
4. The target paper bullet hole identification and target reporting method based on visual detection according to claim 3, wherein in step S2 the grayed image is subjected to binary segmentation, the process of binary segmentation being:

let the pixel value of the gray image be Gray(x, y) and the segmentation threshold be T; the segmented image g(x, y) is defined as shown in formula (6):

$$g(x, y) = \begin{cases} 255, & \mathrm{Gray}(x, y) \ge T \\ 0, & \mathrm{Gray}(x, y) < T \end{cases} \tag{6}$$
5. The target paper bullet hole identification and target reporting method based on visual detection according to claim 4, wherein in step S2 morphological operations are performed on the binary-segmented image, specifically:

in gray-level morphology, the image functions f(x, y) and b(x, y) denote the target image and the structural element respectively, and (x, y) denotes the coordinates of a pixel in the image; the morphological operations are then expressed mathematically by formulas (7) and (8), where $g_1(x, y)$ is the gray image after erosion and $g_2(x, y)$ is the gray image after dilation:

$$g_1(x, y) = (f \ominus b)(x, y) = \min_{(s,t)} \{ f(x+s, y+t) - b(s, t) \} \tag{7}$$

$$g_2(x, y) = (f \oplus b)(x, y) = \max_{(s,t)} \{ f(x-s, y-t) + b(s, t) \} \tag{8}$$
6. The target paper bullet hole identification and target reporting method based on visual detection according to claim 1, wherein the process of contour extraction and fitting of the target surface in step S3 is as follows:

S3.1, assume the input is a binary image $f = \{f_{i,j}\}$ of size H × W, where H represents the width of the image and W the height; first raster-scan the input binary image from top to bottom and left to right to find points satisfying the boundary condition: if $f_{i,j} = 1$ and $f_{i,j-1} = 0$, then (i, j) is an outer-boundary start point;
s3.2, after finding the boundary tracking starting point, executing boundary tracking to find a complete boundary;
S3.3, after the complete boundary is obtained, let the connected domain be D, take the center points of the pixels on the boundary of D, connect them to form an approximately circular contour, and calculate the contour area using Green's formula;
S3.4, screen the obtained contours by area, keep only the innermost ring contour, fit it to a circle, and obtain the bullseye coordinates and radius.
7. The target paper bullet hole identification and target reporting method based on visual detection according to claim 6, wherein the bullet hole detection step in step S3 is:
S3.5, performing contour extraction on the binary image processed in step S2, screening out the valid target face by area to eliminate the influence of off-target bullet holes on detection, then segmenting the original image by the mask region to remove the background outside the target face, and performing bullet hole detection on the segmented image;
S3.6, first graying the segmented image to obtain a gray image, then applying median filtering to the gray image to remove ring lines and background fold noise, and then performing binarization;
S3.7, performing contour detection and a morphological closing operation, i.e. dilation followed by erosion, on the binarized image, and finally fitting a minimum enclosing circle to each detected bullet hole contour, thereby obtaining the centroid coordinates of each bullet hole.
8. The target paper bullet hole identification and target reporting method based on visual detection according to claim 1, wherein step S4 specifically includes:

S4.1, calculating the Euclidean distance $d_i$ between each bullet hole and the bullseye using the Euclidean distance formula shown in (9), where the bullseye center is $O = (x_r, y_r)$, the bullseye radius is $r$, and the coordinates of each bullet hole are $B_i = (x_i, y_i)$:

$$d_i = \sqrt{(x_i - x_r)^2 + (y_i - y_r)^2} \tag{9}$$
S4.2, comparing the Euclidean distance with the radius of the ring of each ring value; a bullet hole on a ring line is scored by the side with the larger ring value, i.e. the higher ring value counts as the valid result. Denoting by $R_s$ the outer radius of the ring with value $s$ (so that $R_{10} < R_9 < \dots < R_1$, with $R_{11} = 0$), the judgment formula is shown as formula (10):

$$S_i = \begin{cases} s, & R_{s+1} < d_i \le R_s \\ 0, & d_i > R_1 \end{cases} \tag{10}$$

where $i$ indexes the bullet holes and $S_i$ is the ring value; the boundary case $d_i = R_s$ yields the value $s$, the larger of the two adjacent ring values.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311199001.9A CN117392201A (en) | 2023-09-18 | 2023-09-18 | Target paper bullet hole identification and target reporting method based on visual detection |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202311199001.9A CN117392201A (en) | 2023-09-18 | 2023-09-18 | Target paper bullet hole identification and target reporting method based on visual detection |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117392201A true CN117392201A (en) | 2024-01-12 |
Family
ID=89465647
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202311199001.9A Pending CN117392201A (en) | 2023-09-18 | 2023-09-18 | Target paper bullet hole identification and target reporting method based on visual detection |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117392201A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN118470106A (en) * | 2024-04-30 | 2024-08-09 | 无锡奥润激光技术有限公司 | Automatic target reporting method and system suitable for laser simulated shooting |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |