CN111639562A - Intelligent positioning method for palm region of interest - Google Patents
- Publication number: CN111639562A
- Application number: CN202010416921.1A
- Authority
- CN
- China
- Prior art keywords
- palm
- circle
- center
- foreground
- formula
- Prior art date
- Legal status: Granted
Classifications
- G — PHYSICS; G06 — COMPUTING, CALCULATING OR COUNTING; G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/1388 — Detecting the live character of the finger, i.e. distinguishing from a fake or cadaver finger, using image processing
- G06V10/25 — Determination of region of interest [ROI] or a volume of interest [VOI]
- G06V10/30 — Noise filtering
- G06V10/44 — Local feature extraction by analysis of parts of the pattern, e.g. edges, contours, corners; connectivity analysis
- G06V10/56 — Extraction of image or video features relating to colour
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Human Computer Interaction (AREA)
- Image Analysis (AREA)
- Collating Specific Patterns (AREA)
Abstract
The invention relates to an intelligent positioning method for a palm region of interest, which comprises the following steps: 1) collect infrared images of the left and right palms; 2) preprocess each palm image to obtain a foreground binary image; 3) perform edge detection on the foreground binary image to obtain a palm contour map, and thin the contour; 4) calculate the center of gravity of the foreground palm in the foreground binary image; 5) calculate the intersection points of the vertical line through the center of gravity with the two edges of the thinned palm contour, and compute the distance between the two intersection points; 6) taking that distance as the initial diameter and its midpoint as the circle center, draw a circle on the foreground binary image as the initial palm center circle; 7) iterate the palm center circle; 8) calculate the circumscribed rectangle of the final palm center circle, which is the region of interest of the palm image. The method shows good superiority in palm ROI positioning accuracy, completeness, and palm vein recognition rate.
Description
Technical Field
The invention belongs to the technical field of palm vein recognition and information security, and particularly relates to an intelligent positioning method for a palm region of interest.
Background
With the development of AI technology, intelligent products such as fingerprint payment, face payment and iris payment have emerged and entered everyday life, bringing great convenience to social activities. Against this background, palm vein biometric identification has risen rapidly: it uses the palm vein, an inherent biological characteristic that cannot be copied, to verify identity. Implementing this recognition technique involves many steps, among which the positioning of the region of interest (ROI) of the palm image is particularly important. If the ROI is positioned inaccurately or incompletely, the recognition algorithm fails or the pass rate drops sharply.
Many extraction algorithms for the palm region of interest already exist: some obtain key point positions from corner points on the palm contour and segment the ROI accordingly; some locate the ROI from texture information in the palm grayscale image; and some segment the ROI by clustering palm information.
For example, Chinese patent CN102163282B discloses a method and an apparatus for acquiring a region of interest of a palm print image. The method includes: a correction step, namely binarizing the palm print image and rotating the palm in the binarized image to the vertical direction; a palm form determination step, namely detecting and classifying the palm form and determining whether the palm binary image shows a fully opened or a closed palm; a key point positioning step, namely locating key points of the palm binary image according to the determined palm form; and an acquisition step, namely obtaining the region of interest of the palm print image from the located key points.
For example, Chinese patent CN107016323A discloses a method and an apparatus for positioning a palm region of interest, used to accurately locate the ROI in a palm image. The method provided by that patent comprises: acquiring N palm image samples and marking the real key point positions on each sample, where N is a positive integer; training on the N palm image samples and their real key point positions to obtain a cascaded regressor; and locating the target key point positions in the palm image to be recognized with the cascaded regressor, then determining the region of interest (ROI) from the target key point positions.
These methods have their respective advantages and disadvantages: their ROI positioning is fast, but in a small number of cases the palm ROI is positioned inaccurately or incompletely, which causes subsequent palm vein verification to fail.
Disclosure of Invention
The invention aims to solve the inaccurate and incomplete ROI positioning of prior-art palm vein recognition methods, and provides an intelligent positioning method that extracts the ROI more completely and accurately, thereby improving the verification pass rate.
In order to achieve the purpose, the technical scheme provided by the invention is as follows:
the invention relates to an intelligent positioning method for a palm region of interest, which comprises the following steps:
1) respectively collecting infrared images of a left palm and a right palm;
2) respectively preprocessing the palm images to obtain a foreground binary image;
3) carrying out edge detection on the foreground binary image to obtain a palm profile image, and refining the profile;
4) calculating the gravity center of the foreground palm in the foreground binary image;
5) calculating the intersection points of the vertical line through the center of gravity with the two edges of the thinned palm contour, and calculating the distance between the two intersection points;
6) taking the distance between the two intersection points as the initial diameter and its midpoint as the circle center, drawing a circle on the foreground binary image as the initial palm center circle;
7) performing palm center circle iteration: calculating the distance and angle from coordinate positions in the foreground binary image to the circle center, and updating the radius and center of the palm center circle to obtain the final palm center circle;
8) and calculating a circumscribed rectangle of the final palm center circle, wherein the circumscribed rectangle is the region of interest of the palm image.
Preferably, the preprocessing of the palm image in the step 2) includes denoising, size normalization and thresholding; the denoising uses median filtering denoising; the size normalization uses bilinear interpolation processing; the thresholding takes the global gray level mean value of the image as a threshold value T, and a foreground binary image is obtained through calculation according to the threshold value T.
Preferably, the calculation formula of the threshold T is:

T = (1/(m*n)) * Σ_{i=1..m} Σ_{j=1..n} Image_ij (1),

where Image_ij represents the pixel value of the preprocessed palm gray image at row i, column j, m × n is the size of the image, and T is the global gray-level mean of the image;

the formula for obtaining the foreground binary image is:

BW_ij = 1 if Image_ij > T, else 0 (2),

where BW_ij represents the pixel value of the foreground binary image at row i, column j.
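As a minimal sketch of this thresholding step (function and variable names are ours, not the patent's), assuming an 8-bit grayscale image:

```python
import numpy as np

def binarize_by_global_mean(image):
    """Threshold a grayscale palm image at its global gray-level mean,
    following formulas (1)-(2): T is the mean over all m*n pixels and
    BW_ij = 1 where Image_ij exceeds T (foreground), else 0."""
    T = image.mean()                    # global gray-level mean, formula (1)
    bw = (image > T).astype(np.uint8)   # formula (2)
    return bw, T

# toy 2x2 "image": the two bright pixels become foreground
img = np.array([[200.0, 200.0], [10.0, 10.0]])
bw, T = binarize_by_global_mean(img)
# T = 105.0, bw = [[1, 1], [0, 0]]
```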
Preferably, in step 3), Sobel edge detection is used to detect the edges of the binary image, and a skeleton thinning method is used to thin the contour edges.
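A small self-contained illustration of Sobel edge detection on a binary image (a loop-based sketch of our own; a real system would use an optimized library routine, and skeleton thinning is omitted here):

```python
import numpy as np

def sobel_edges(bw):
    """Mark pixels where the Sobel gradient magnitude of a binary image
    is non-zero. Border pixels are left unmarked for simplicity."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])  # horizontal gradient kernel
    ky = kx.T                                            # vertical gradient kernel
    h, w = bw.shape
    edges = np.zeros((h, w), dtype=bool)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            win = bw[y - 1:y + 2, x - 1:x + 2]
            gx = float((win * kx).sum())
            gy = float((win * ky).sum())
            edges[y, x] = (gx != 0.0) or (gy != 0.0)
    return edges

# a 3x3 block of foreground inside a 5x5 image: its boundary is an edge,
# its interior center is not
bw = np.zeros((5, 5), dtype=np.uint8)
bw[1:4, 1:4] = 1
edges = sobel_edges(bw)
```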
Preferably, the formulas for calculating the center of gravity of the foreground palm in the foreground binary image in step 4) are:

Centroid_x = (1/N) * Σ_{i=1..N} x_i (3),

Centroid_y = (1/N) * Σ_{i=1..N} y_i (4),

where x_i, y_i are the horizontal and vertical coordinates of the i-th foreground pixel in the foreground binary image; N is the total number of foreground pixels; Centroid denotes the center of gravity of the foreground object, with Centroid_x and Centroid_y its abscissa and ordinate.
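The center-of-gravity computation of formulas (3)-(4) reduces to averaging the coordinates of the foreground pixels; a sketch (names are illustrative):

```python
import numpy as np

def foreground_centroid(bw):
    """Center of gravity (Centroid_x, Centroid_y) of the foreground,
    i.e. the mean of the x (column) and y (row) coordinates of all
    pixels with value 1, per formulas (3)-(4)."""
    ys, xs = np.nonzero(bw)   # row and column indices of foreground pixels
    return float(xs.mean()), float(ys.mean())

# foreground pixels at (x, y) = (1, 0) and (3, 2): centroid is (2.0, 1.0)
bw = np.zeros((3, 4), dtype=np.uint8)
bw[0, 1] = 1
bw[2, 3] = 1
cx, cy = foreground_centroid(bw)
```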
Preferably, the intersection points of the vertical line through the center of gravity with the two edges of the thinned palm contour in step 5) are P1 and P2 respectively, distributed on either side of the center, calculated as:

P1_x = Centroid_x (5),

P1_y = Contour_y, if Contour_x = Centroid_x && Contour_y < Centroid_y (6),

P2_x = Centroid_x (7),

P2_y = Contour_y, if Contour_x = Centroid_x && Contour_y > Centroid_y (8),

where P1_x, P1_y are the abscissa and ordinate of point P1, P2_x, P2_y are those of point P2, Contour denotes the contour of the foreground object, and Contour_x, Contour_y denote the abscissas and ordinates of points on the contour.
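A sketch of this step under the simplifying assumption that the thinned contour crosses the centroid's column above and below the center of gravity (we take the extreme crossings if there are several, which the patent does not specify):

```python
def contour_intersections(contour, centroid):
    """Find P1 (above) and P2 (below) where the vertical line through the
    center of gravity meets the contour, per formulas (5)-(8).
    `contour` is a list of (x, y) points; `centroid` is (Centroid_x, Centroid_y)."""
    cx, cy = centroid
    above = [y for (x, y) in contour if x == cx and y < cy]
    below = [y for (x, y) in contour if x == cx and y > cy]
    p1 = (cx, min(above))   # topmost crossing above the centroid (an assumption)
    p2 = (cx, max(below))   # bottommost crossing below the centroid
    return p1, p2

# square contour around the centroid (2, 2)
square = [(1, 1), (2, 1), (3, 1), (3, 2), (3, 3), (2, 3), (1, 3), (1, 2)]
p1, p2 = contour_intersections(square, (2, 2))
# p1 = (2, 1), p2 = (2, 3)
```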
Preferably, the center and radius of the initial palm center circle in step 6) are calculated as:

cenLoc_x = Centroid_x (9),

cenLoc_y = (P1_y + P2_y)/2 (10),

Radius_0 = (P2_y - P1_y)/2 (11),

where cenLoc_x and cenLoc_y represent the abscissa and ordinate of the initial circle center, and Radius_0 is the initial radius of the circle.
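Formulas (9)-(11) follow directly from P1 and P2; checking with the numbers from the embodiment below (P1 = (241, 179), P2 = (241, 520)):

```python
def initial_palm_circle(p1, p2):
    """Initial palm center circle per formulas (9)-(11): the center sits
    midway between P1 and P2 on the centroid's column, and the radius is
    half the distance between them."""
    cen_x = p1[0]                    # formula (9)
    cen_y = (p1[1] + p2[1]) / 2      # formula (10)
    radius = (p2[1] - p1[1]) / 2     # formula (11)
    return (cen_x, cen_y), radius

center, radius = initial_palm_circle((241, 179), (241, 520))
# center = (241, 349.5), radius = 170.5 -- matching the embodiment
```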
Preferably, the specific step of palm center circle iteration in step 7) includes:
7.1) set the maximum number of iterations kMax, set the iteration end condition, and initialize the iteration count k = 0;
7.2) judging whether the current circle meets an iteration ending condition, if so, ending the iteration, and if not, starting the iteration;
7.3) increment the iteration count, k = k + 1, and calculate the distance and angle between the in-circle background and the circle center:

d_ij = sqrt((cenLoc_x - i)^2 + (cenLoc_y - j)^2) (12),

θ_s0 = arctan((cenLoc_x - i)/(cenLoc_y - j)) (13),

where i and j are the horizontal and vertical coordinates in the foreground binary image, d_ij denotes the distance from position (i, j) to the circle center, and θ_s0 denotes the angle of the vector from the current position (i, j) to the circle center;
7.4) store the in-circle background coordinate Hole_s0 and calculate its diagonal position Hole_m0:

Hole_s0 = {i, j}, if d_ij ≤ Radius_0 && BW_ij = 0 (14),

Hole_m0 = {cenLoc_x ± sin(θ_s0)*Radius_0, cenLoc_y ± cos(θ_s0)*Radius_0} (15),

where Hole_s0 denotes the stored in-circle background coordinate, a one-dimensional array of length 2, and Hole_m0 denotes the diagonal coordinate position of Hole_s0 on the circle, also a one-dimensional array of length 2; the sign in formula (15) is chosen so that Hole_m0 lies on the opposite side of the circle center from Hole_s0;
7.5) if the diagonal position is the background, update the radius:

Radius_k = (1/2) * sqrt((Hole_m0(1) - Hole_s0(1))^2 + (Hole_m0(2) - Hole_s0(2))^2) (16),

where Radius_k denotes half the distance between Hole_m0 and Hole_s0, and the subscript k denotes the k-th iteration;
if the diagonal position is the foreground, update the circle center:

cenLoc_k = {(Hole_m0(1) + Hole_s0(1))/2, (Hole_m0(2) + Hole_s0(2))/2} (17),

where cenLoc_k denotes the center coordinates between Hole_m0 and Hole_s0;
7.6) judge whether the current circle meets the end condition or the maximum number of iterations has been reached; if neither condition is met, return to step 7.3) for the next iteration; if either condition is met, end the iteration.
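The iteration of steps 7.1)-7.6) can be sketched as follows. This is our reading of the update rules (the shrink uses half the chord between the background point and its diagonal, the recentering uses their midpoint), so treat it as an illustration rather than the patent's exact implementation:

```python
import math

def iterate_palm_circle(bw, center, radius, k_max=50):
    """Shrink or move the palm center circle until it contains no
    background pixel (value 0) or k_max iterations have run."""
    h, w = len(bw), len(bw[0])
    cx, cy = center
    for _ in range(k_max):
        hole = None
        for i in range(w):              # i = x (column), j = y (row)
            for j in range(h):
                if math.hypot(cx - i, cy - j) <= radius and bw[j][i] == 0:
                    hole = (i, j)       # Hole_s0: a background point in the circle
                    break
            if hole:
                break
        if hole is None:                # end condition: no background in circle
            break
        # diagonal position Hole_m0: the circle point opposite the hole (formula 15)
        theta = math.atan2(cx - hole[0], cy - hole[1])     # formula (13)
        mx, my = cx + math.sin(theta) * radius, cy + math.cos(theta) * radius
        mi, mj = int(round(mx)), int(round(my))
        if 0 <= mi < w and 0 <= mj < h and bw[mj][mi] == 0:
            # diagonal is background too: shrink to half the chord (formula 16)
            radius = math.hypot(mx - hole[0], my - hole[1]) / 2
        else:
            # diagonal is foreground: recenter midway (formula 17)
            cx, cy = (mx + hole[0]) / 2, (my + hole[1]) / 2
    return (cx, cy), radius

# a circle already free of background is returned unchanged
bw = [[1] * 20 for _ in range(20)]
final_center, final_radius = iterate_palm_circle(bw, (10, 10), 5)
```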
Preferably, the iteration end condition in steps 7.1) and 7.2) is that there is no background inside the current circle, i.e. no pixel inside the circle has the value 0.
Preferably, the step 8) of calculating the circumscribed rectangle of the final palm center circle includes calculating the coordinates of the upper left corner of the rectangle and calculating the side length of the rectangle;
the calculation formula of the coordinates of the upper left corner of the rectangle is:

rectLoc = {cenLoc_k(1,1) - Radius_k, cenLoc_k(1,2) - Radius_k} (18),

where rectLoc is the coordinate of the upper left corner of the rectangle;

the calculation formula of the rectangle side length is:

sLength = 2 * Radius_k (19),
in the formula, sLength represents the side length of the circumscribed rectangle.
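Formulas (18)-(19) in code, checked against the embodiment's final circle (center (249, 355), radius 160):

```python
def circumscribed_rect(center, radius):
    """Axis-aligned square circumscribing the final palm center circle:
    upper-left corner per formula (18), side length per formula (19)."""
    top_left = (center[0] - radius, center[1] - radius)
    side = 2 * radius
    return top_left, side

top_left, side = circumscribed_rect((249, 355), 160)
# top_left = (89, 195), side = 320 -- the ROI reported in the embodiment
```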
Compared with the prior art, the technical scheme provided by the invention has the following beneficial effects:
the intelligent positioning method of the palm interested region carries out the positioning of the ROI by a palm center circle iteration positioning method. Compared with the existing palm ROI positioning method, the positioning method has high ROI positioning accuracy and coverage rate, so that the palm vein identification rate and the safety level are improved, the left hand and the right hand do not need to be distinguished in the positioning process, and certain intelligence is achieved.
Drawings
FIG. 1 is a flow chart of a method for intelligently locating a region of interest in a palm according to the present invention;
FIG. 2 is a left and right palm vein image acquired by the present invention;
FIG. 3 is a foreground binary image obtained by the present invention;
FIG. 4 is a refined outline view of the palm vein of the present invention;
FIG. 5 is a schematic view of the present invention showing the position of the initial circle;
FIG. 6 is a flowchart of the present invention for iteratively positioning the ROI on the metacarpal circle;
FIG. 7 is a schematic diagram of the circle position after the iteration is finished according to the present invention;
fig. 8 shows a region of interest obtained by the present invention.
Detailed Description
For further understanding of the present invention, the present invention will be described in detail with reference to examples, which are provided for illustration of the present invention but are not intended to limit the scope of the present invention.
With reference to fig. 1, an intelligent positioning method for a palm region of interest includes the following steps:
1) The left and right palm vein images of one person are collected, as shown in FIG. 2; the image size is 672 × 896 pixels.
2) Each palm image is preprocessed with denoising, size normalization and thresholding: denoising uses median filtering; size normalization uses bilinear interpolation to normalize the image to 480 × 640 pixels; thresholding takes the global gray-level mean of the image as the threshold T and computes the foreground binary image from T, as shown in FIG. 3.

The calculation formula of the threshold T is:

T = (1/(m*n)) * Σ_{i=1..m} Σ_{j=1..n} Image_ij (1),

where Image_ij represents the pixel value of the preprocessed palm gray image at row i, column j, m × n is the size of the image, and T is the global gray-level mean of the image; in this embodiment n = 640, m = 480, and the calculated threshold is T = 165.

The formula for obtaining the foreground binary image is:

BW_ij = 1 if Image_ij > T, else 0 (2),

where BW_ij represents the pixel value of the foreground binary image at row i, column j.
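A minimal sketch of the median-filter denoising used in this preprocessing step (border handling is our simplification: border pixels are copied through unfiltered):

```python
import numpy as np

def median_denoise(img, k=3):
    """k x k median filtering; isolated impulse noise is removed because
    the median of a mostly-uniform neighborhood ignores single outliers."""
    out = img.copy()
    r = k // 2
    h, w = img.shape
    for y in range(r, h - r):
        for x in range(r, w - r):
            out[y, x] = np.median(img[y - r:y + r + 1, x - r:x + r + 1])
    return out

# a single bright noise pixel in a dark image is suppressed
noisy = np.zeros((5, 5))
noisy[2, 2] = 255.0
clean = median_denoise(noisy)
```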
3) Sobel edge detection is performed on the foreground binary image to obtain the palm contour map, and the contour edges are thinned with a skeleton thinning method; the thinned palm vein contour map is shown in FIG. 4.
4) Calculate the center of gravity of the foreground palm in the foreground binary image:

Centroid_x = (1/N) * Σ_{i=1..N} x_i (3),

Centroid_y = (1/N) * Σ_{i=1..N} y_i (4),

where x_i, y_i are the horizontal and vertical coordinates of the i-th foreground pixel in the foreground binary image; N is the total number of foreground pixels, N = 60000 in this embodiment; Centroid denotes the center of gravity of the foreground object, with Centroid_x and Centroid_y its abscissa and ordinate. In this example the computed values are Centroid_x = 241.5 and Centroid_y = 359.8.
5) Calculate the intersection points P1 and P2 of the vertical line through the center of gravity with the two edges of the thinned palm contour; the two points lie on either side of the center:

P1_x = Centroid_x (5),

P1_y = Contour_y, if Contour_x = Centroid_x && Contour_y < Centroid_y (6),

P2_x = Centroid_x (7),

P2_y = Contour_y, if Contour_x = Centroid_x && Contour_y > Centroid_y (8),

where P1_x, P1_y are the abscissa and ordinate of point P1, P2_x, P2_y are those of point P2, Contour denotes the contour of the foreground object, and Contour_x, Contour_y denote the abscissas and ordinates of points on the contour. Substituting Centroid_x = 241.5 and Centroid_y = 359.8 yields P1_x = 241, P1_y = 179, P2_x = 241 and P2_y = 520; the distance between the two intersection points is then calculated.
6) Taking the distance between the two intersection points as the initial diameter and its midpoint as the circle center, draw a circle on the foreground binary image as the initial palm center circle:

cenLoc_x = Centroid_x (9),

cenLoc_y = (P1_y + P2_y)/2 (10),

Radius_0 = (P2_y - P1_y)/2 (11),

where cenLoc_x and cenLoc_y represent the abscissa and ordinate of the initial circle center, and Radius_0 is the initial radius of the circle. Substituting the data of P1 and P2 into formulas (9), (10) and (11) gives cenLoc_x = 241, cenLoc_y = 349.5 and Radius_0 = 170.5; the resulting initial circle position is shown in FIG. 5.
7) Perform the palm center circle iteration: calculate the distance and angle from coordinate positions in the foreground binary image to the circle center, and update the radius and center of the palm center circle to obtain the final palm center circle. The iteration steps are shown in FIG. 6 and comprise:
7.1) set the maximum number of iterations kMax and the iteration end condition, with the initial iteration count k = 0; the end condition is that no background exists inside the current circle, i.e. no pixel inside the circle is 0;
7.2) judge whether the current circle meets the iteration end condition; if so, end the iteration, skip the rest of step 7) and go directly to step 8); if not, start the iteration and execute step 7.3);
7.3) increment the iteration count, k = k + 1, and calculate the distance and angle between the in-circle background and the circle center:

d_ij = sqrt((cenLoc_x - i)^2 + (cenLoc_y - j)^2) (12),

θ_s0 = arctan((cenLoc_x - i)/(cenLoc_y - j)) (13),

where i and j are the horizontal and vertical coordinates in the foreground binary image, d_ij denotes the distance from position (i, j) to the circle center, and θ_s0 denotes the angle of the vector from the current position (i, j) to the circle center; in this embodiment d_ij = 85 and θ_s0 = 150° are calculated;
7.4) store the in-circle background coordinate Hole_s0 and calculate its diagonal position Hole_m0:

Hole_s0 = {i, j}, if d_ij ≤ Radius_0 && BW_ij = 0 (14),

Hole_m0 = {cenLoc_x ± sin(θ_s0)*Radius_0, cenLoc_y ± cos(θ_s0)*Radius_0} (15),

where Hole_s0 denotes the stored in-circle background coordinate, a one-dimensional array of length 2, and Hole_m0 denotes the diagonal coordinate position of Hole_s0 on the circle, also a one-dimensional array of length 2; the sign in formula (15) is chosen so that Hole_m0 lies on the opposite side of the circle center from Hole_s0. For example, when k = 1 the calculation gives Hole_s0 = {110, 250} and Hole_m0 = {326.25, 497.15};
7.5) if the diagonal position is the background, update the radius:

Radius_k = (1/2) * sqrt((Hole_m0(1) - Hole_s0(1))^2 + (Hole_m0(2) - Hole_s0(2))^2) (16),

where Radius_k denotes half the distance between Hole_m0 and Hole_s0, and the subscript k denotes the k-th iteration; for example, when k = 1, Radius_k = 160 is calculated;
If the diagonal position is the foreground, update the circle center:

cenLoc_k = {(Hole_m0(1) + Hole_s0(1))/2, (Hole_m0(2) + Hole_s0(2))/2} (17),

where cenLoc_k denotes the center coordinates between Hole_m0 and Hole_s0; for example, when k = 1, cenLoc_k = {249, 355} is calculated;
7.6) judge whether the current circle meets the end condition or the maximum number of iterations has been reached; if neither condition is met, return to step 7.3) for the next iteration; if either condition is met, end the iteration. The specific position of the circle after the iteration ends is shown in FIG. 7.
8) Calculating a circumscribed rectangle of the final palm center circle, including calculating coordinates of the upper left corner of the rectangle and calculating the side length of the rectangle;
the calculation formula of the coordinates of the upper left corner of the rectangle is:

rectLoc = {cenLoc_k(1,1) - Radius_k, cenLoc_k(1,2) - Radius_k} (18),

where rectLoc is the coordinate of the upper left corner of the rectangle;

the calculation formula of the rectangle side length is:

sLength = 2 * Radius_k (19),
in the formula, sLength represents the side length of the circumscribed rectangle;
in this embodiment, rectLoc {89, 195} and sLength ═ 320 are obtained through calculation; as shown in the right-hand diagram of fig. 8, the circumscribed rectangle is the region of interest (ROI) of the palm image.
This embodiment collected palm vein image data from 50 persons, each contributing one left-hand and one right-hand palm vein image, for 100 palm images in total. On the one hand, 100 ROI images were obtained by applying the positioning algorithm of the invention, without distinguishing left from right hands in the process. On the other hand, the ROI was manually annotated on the 100 palm images; since the annotation task is simple, the manually annotated ROI can in principle be regarded as the ideal result.
The following three experiments were performed and analyzed on these two sets of ROIs.
Experiment one: comparing the manually annotated ROIs with the ROIs located by the method of this embodiment, the region overlap ratio of the two was 99.33%.
Experiment two: the two kinds of ROI regions were each put through recognition testing with the same palm matching algorithm; with zero false recognitions, the recognition rate was 96.25% for the manually annotated ROIs and 96.18% for the ROIs located by this embodiment.
Experiment three: following the specification of Chinese patent CN107016323A, the training-based palm ROI positioning method of that patent was reproduced, yielding a cascaded regressor for palm ROI positioning. The 100 palm images collected here were positioned with this regressor to obtain 100 palm ROI regions; experiments one and two were then repeated on these ROI regions against the manually annotated ROIs, giving: a region overlap ratio of 94.27%, and, with zero false recognitions, a recognition rate of 93.59% for the ROIs located by the method of Chinese patent CN107016323A.
The above data show that the ROI acquired by the method of the invention is complete: its deviation from the manually annotated ROI is within 1%, whereas the deviation of the method of Chinese patent CN107016323A from the manual annotation is about 5%, so the ROI located by the invention is closer to the manual annotation. For the pass rate of the palm matching algorithm, the ROI of the invention differs from the manual annotation by less than 0.1%, while that of Chinese patent CN107016323A differs by about 2.6%, showing that the method of the invention effectively maintains the recognition rate.
These data demonstrate the superiority of the intelligent positioning method for the palm region of interest in terms of accuracy and completeness of palm ROI positioning.
The present invention has been described in detail with reference to the embodiments, but the description is only for the preferred embodiments of the present invention and should not be construed as limiting the scope of the present invention. All equivalent changes and modifications made within the scope of the present invention shall fall within the scope of the present invention.
Claims (10)
1. An intelligent positioning method for a palm region of interest, characterized in that it comprises the following steps:
1) respectively collecting infrared images of a left palm and a right palm;
2) respectively preprocessing the palm images to obtain a foreground binary image;
3) carrying out edge detection on the foreground binary image to obtain a palm profile image, and refining the profile;
4) calculating the gravity center of the foreground palm in the foreground binary image;
5) calculating the intersection points of the vertical line through the center of gravity with the two edges of the thinned palm contour, and calculating the distance between the two intersection points;
6) taking the distance between the two intersection points as the initial diameter and its midpoint as the circle center, drawing a circle on the foreground binary image as the initial palm center circle;
7) performing palm center circle iteration: calculating the distance and angle from coordinate positions in the foreground binary image to the circle center, and updating the radius and center of the palm center circle to obtain the final palm center circle;
8) and calculating a circumscribed rectangle of the final palm center circle, wherein the circumscribed rectangle is the region of interest of the palm image.
2. The intelligent positioning method for palm region of interest according to claim 1, characterized in that: respectively preprocessing the palm images in the step 2), including denoising, size normalization and thresholding; the denoising uses median filtering denoising; the size normalization uses bilinear interpolation processing; the thresholding takes the global gray level mean value of the image as a threshold value T, and a foreground binary image is obtained through calculation according to the threshold value T.
3. The intelligent positioning method for the palm region of interest according to claim 2, characterized in that: the calculation formula of the threshold T is:

T = (1/(m*n)) * Σ_{i=1..m} Σ_{j=1..n} Image_ij (1),

where Image_ij represents the pixel value of the preprocessed palm gray image at row i, column j, m × n is the size of the image, and T is the global gray-level mean of the image;

the formula for obtaining the foreground binary image is:

BW_ij = 1 if Image_ij > T, else 0 (2),

where BW_ij represents the pixel value of the foreground binary image at row i, column j.
4. The intelligent positioning method for the palm region of interest according to claim 1, characterized in that: in step 3), Sobel edge detection is used to detect the edges of the binary image, and a skeleton thinning method is used to thin the contour edges.
5. The intelligent positioning method for the palm region of interest according to claim 1, characterized in that: the formulas for calculating the center of gravity of the foreground palm in the foreground binary image in step 4) are:

Centroid_x = (1/N) * Σ_{i=1..N} x_i (3),

Centroid_y = (1/N) * Σ_{i=1..N} y_i (4),

where x_i, y_i are the horizontal and vertical coordinates of the i-th foreground pixel in the foreground binary image; N is the total number of foreground pixels; Centroid denotes the center of gravity of the foreground object, with Centroid_x and Centroid_y its abscissa and ordinate.
6. The intelligent positioning method for the palm region of interest according to claim 5, characterized in that: in step 5), the intersection points of the vertical line through the center of gravity with the two edges of the thinned palm contour are P1 and P2 respectively, distributed on either side of the center, calculated as:

P1_x = Centroid_x (5),

P1_y = Contour_y, if Contour_x = Centroid_x && Contour_y < Centroid_y (6),

P2_x = Centroid_x (7),

P2_y = Contour_y, if Contour_x = Centroid_x && Contour_y > Centroid_y (8),

where P1_x, P1_y are the abscissa and ordinate of point P1, P2_x, P2_y are those of point P2, Contour denotes the contour of the foreground object, and Contour_x, Contour_y denote the abscissas and ordinates of points on the contour.
7. The intelligent positioning method for a palm region of interest according to claim 6, characterized in that: the center and radius of the initial palm center circle in step 6) are calculated as:
cenLoc_x = Centroid_x (9),
cenLoc_y = (P1_y + P2_y)/2 (10),
Radius_0 = |P1_y − P2_y|/2 (11),
in the formulas, cenLoc_x and cenLoc_y respectively represent the abscissa and ordinate of the initial circle center, and Radius_0 represents the initial radius of the circle.
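By way of illustration (not part of the claimed method), the initial circle of claim 7 is centered at the midpoint of P1 and P2. The radius formula was garbled in the source text; half the vertical distance between P1 and P2 is the reading consistent with the midpoint center, and is what this sketch assumes.

```python
def initial_circle(p1, p2):
    """Initial palm-center circle from the two intersection points:
    center at the midpoint of P1 and P2 (formulas (9)-(10)), radius
    half their vertical distance (reconstructed from context)."""
    cen_x = p1[0]                        # == Centroid_x, shared by P1 and P2
    cen_y = (p1[1] + p2[1]) / 2
    radius0 = abs(p2[1] - p1[1]) / 2
    return (cen_x, cen_y), radius0
```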
8. The intelligent positioning method for a palm region of interest according to claim 7, characterized in that: the palm center circle iteration in step 7) comprises the following specific steps:
7.1) setting the maximum iteration number k_max and the iteration ending condition, with the initial iteration number k = 0;
7.2) judging whether the current circle meets the iteration ending condition; if so, ending the iteration, and if not, starting the iteration;
7.3) letting k = k + 1, and calculating the distance and angle between each background pixel inside the circle and the circle center:
d_ij = sqrt((cenLoc_x − i)² + (cenLoc_y − j)²) (12),
θ_s0 = arctan((cenLoc_x − i)/(cenLoc_y − j)) (13),
in the formulas, i and j respectively represent the horizontal and vertical coordinates in the foreground binary image, d_ij denotes the distance from position (i, j) to the circle center, and θ_s0 represents the angle of the vector from the current position (i, j) to the circle center;
7.4) storing the in-circle background coordinate Hole_s0, and calculating the diagonal position Hole_m0 of that coordinate:
Hole_s0 = {i, j}, if d_ij ≤ Radius_0 && BW_ij = 0 (14),
Hole_m0 = {cenLoc_x + sin(θ_s0)·Radius_0, cenLoc_y + cos(θ_s0)·Radius_0} or
Hole_m0 = {cenLoc_x − sin(θ_s0)·Radius_0, cenLoc_y − cos(θ_s0)·Radius_0} (15),
the sign being chosen so that Hole_m0 lies on the opposite side of the circle center from Hole_s0; in the formulas, BW_ij is the value of the foreground binary image at (i, j), Hole_s0 represents the stored in-circle background coordinate, a one-dimensional array of length 2, and Hole_m0 represents the diagonal coordinate position of the background coordinate in Hole_s0, also a one-dimensional array of length 2;
7.5) if the diagonal position is background, the radius is updated as:
Radius_k = (1/2)·sqrt((Hole_m0(1,1) − Hole_s0(1,1))² + (Hole_m0(1,2) − Hole_s0(1,2))²) (16),
in the formula, Radius_k represents half the distance between Hole_m0 and Hole_s0, the subscript k denoting the k-th iteration;
if the diagonal position is foreground, the circle center is updated as:
cenLoc_k = {(Hole_s0(1,1) + Hole_m0(1,1))/2, (Hole_s0(1,2) + Hole_m0(1,2))/2} (17),
in the formula, cenLoc_k represents the coordinates of the center point between Hole_m0 and Hole_s0;
7.6) judging whether the current circle meets the ending condition or the maximum iteration number has been reached; if neither condition is met, returning to step 7.3) for the next iteration, and if either condition is met, ending the iteration.
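By way of illustration (not part of the claimed method), steps 7.1)–7.6) could be sketched as follows. The claim leaves open which in-circle background pixel is stored as Hole_s0; this sketch takes the one closest to the current center. The two signed forms of Hole_m0 in formula (15) are resolved here with arctan2, which handles the quadrant of the pixel-to-center vector directly; the function name and k_max default are illustrative.

```python
import numpy as np

def refine_circle(bw, cen, radius, k_max=100):
    """Sketch of the palm-center-circle iteration of steps 7.1)-7.6).
    bw is the foreground binary image (1 = palm), cen = (x, y)."""
    cx, cy = float(cen[0]), float(cen[1])
    h, w = bw.shape
    for _ in range(k_max):
        ys, xs = np.nonzero(bw == 0)           # all background pixels
        if xs.size == 0:
            break
        d = np.hypot(cx - xs, cy - ys)         # formula (12)
        if d.min() > radius:
            break                              # ending condition: no background in circle
        k = int(np.argmin(d))
        i, j = int(xs[k]), int(ys[k])          # stored background pixel Hole_s0
        theta = np.arctan2(cx - i, cy - j)     # formula (13), quadrant-aware
        # Hole_m0: diagonal position on the far side of the center (formula (15))
        mi = int(round(cx + np.sin(theta) * radius))
        mj = int(round(cy + np.cos(theta) * radius))
        if 0 <= mj < h and 0 <= mi < w and bw[mj, mi] != 0:
            # diagonal position is foreground: move the center to the midpoint
            cx, cy = (i + mi) / 2, (j + mj) / 2
        else:
            # diagonal position is background (or outside): shrink to half the chord
            radius = np.hypot(mi - i, mj - j) / 2
    return (cx, cy), radius
```

On a clean palm mask the loop walks the circle away from between-finger gaps and shrinks it until it sits entirely on foreground, approximating a maximal inscribed circle.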
9. The intelligent positioning method for a palm region of interest according to claim 8, characterized in that: the iteration ending condition in steps 7.1) and 7.2) is that no background exists in the current circle, i.e. no pixel inside the circle has the value 0.
10. The intelligent positioning method for a palm region of interest according to claim 8, characterized in that: calculating the circumscribed rectangle of the final palm center circle in step 8) comprises calculating the coordinates of the upper left corner of the rectangle and calculating the side length of the rectangle;
the coordinates of the upper left corner of the rectangle are calculated as:
rectLoc = {cenLoc_k(1,1) − Radius_k, cenLoc_k(1,2) − Radius_k} (18),
in the formula, rectLoc represents the coordinates of the upper left corner of the rectangle;
the side length of the rectangle is calculated as:
sLength = 2·Radius_k (19),
in the formula, sLength represents the side length of the circumscribed rectangle.
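By way of illustration (not part of the claimed method), formulas (18) and (19) reduce to a two-line computation; the function name is illustrative:

```python
def bounding_square(cen, radius):
    """Axis-aligned square circumscribing the final palm-center circle:
    top-left corner (formula (18)) and side length (formula (19))."""
    rect_loc = (cen[0] - radius, cen[1] - radius)
    s_length = 2 * radius
    return rect_loc, s_length
```

The square cut from the image at rect_loc with side sLength is the palm region of interest.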
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010416921.1A CN111639562B (en) | 2020-05-15 | 2020-05-15 | Intelligent positioning method for palm region of interest |
Publications (2)
Publication Number | Publication Date |
---|---|
CN111639562A true CN111639562A (en) | 2020-09-08 |
CN111639562B CN111639562B (en) | 2023-06-20 |
Family
ID=72329028
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010416921.1A Active CN111639562B (en) | 2020-05-15 | 2020-05-15 | Intelligent positioning method for palm region of interest |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111639562B (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113609953A (en) * | 2021-07-30 | 2021-11-05 | 浙江一掌通数字科技有限公司 | Non-contact palm vein area identification method, system and storage medium |
CN115359249A (en) * | 2022-10-21 | 2022-11-18 | 山东圣点世纪科技有限公司 | Palm image ROI region extraction method and system |
CN115376167A (en) * | 2022-10-26 | 2022-11-22 | 山东圣点世纪科技有限公司 | Palm detection method and system under complex background |
Citations (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP0800145A2 (en) * | 1996-04-01 | 1997-10-08 | Siemens Aktiengesellschaft | Method for recognition by computer of at least one finger-shaped object in a hand-shaped first object |
US5862245A (en) * | 1995-06-16 | 1999-01-19 | Alcatel Alsthom Compagnie Generale D'electricite | Method of extracting contours using a combined active contour and starter/guide approach |
US20100045788A1 (en) * | 2008-08-19 | 2010-02-25 | The Hong Kong Polytechnic University | Method and Apparatus for Personal Identification Using Palmprint and Palm Vein |
JP2010113530A (en) * | 2008-11-06 | 2010-05-20 | Nippon Hoso Kyokai <Nhk> | Image recognition device and program |
CN102163282A (en) * | 2011-05-05 | 2011-08-24 | 汉王科技股份有限公司 | Method and device for acquiring interested area in palm print image |
CN102521567A (en) * | 2011-11-29 | 2012-06-27 | Tcl集团股份有限公司 | Human-computer interaction fingertip detection method, device and television |
CN103455803A (en) * | 2013-09-04 | 2013-12-18 | 哈尔滨工业大学 | Non-contact type palm print recognition method based on iteration random sampling unification algorithm |
CN104063059A (en) * | 2014-07-13 | 2014-09-24 | 华东理工大学 | Real-time gesture recognition method based on finger division |
CN104809446A (en) * | 2015-05-07 | 2015-07-29 | 西安电子科技大学 | Palm direction correction-based method for quickly extracting region of interest in palmprint |
CN106651879A (en) * | 2016-12-23 | 2017-05-10 | 深圳市拟合科技有限公司 | Method and system for extracting nail image |
CN107609499A (en) * | 2017-09-04 | 2018-01-19 | 南京航空航天大学 | Contactless palmmprint region of interest extracting method under a kind of complex environment |
US20180181803A1 (en) * | 2016-12-27 | 2018-06-28 | Shenzhen University | Pedestrian head identification method and system |
CN108416338A (en) * | 2018-04-28 | 2018-08-17 | 深圳信息职业技术学院 | A kind of non-contact palm print identity authentication method |
US20190347767A1 (en) * | 2018-05-11 | 2019-11-14 | Boe Technology Group Co., Ltd. | Image processing method and device |
Non-Patent Citations
Title |
---|
KUBICEK, J ET AL: "Extraction on myocardial fibrosis using interative active shape method", INTELLIGENT INFORMATION AND DATABASE SYSTEMS, vol. 9621, 31 December 2016 (2016-12-31), pages 698-707 * |
LIU, G ET AL: "Research on extraction algorithm of palm ROI based on maximum intrinsic circle", PARALLEL ARCHITECTURE ALGORITHUM AND PROGRAMMING, vol. 729, 6 October 2017 (2017-10-06), pages 258-267 * |
WU Wei et al.: "Research on selection and localization of regions of interest in palm vein recognition", Journal of Optoelectronics·Laser, no. 01, 15 January 2013 (2013-01-15), pages 152-160 * |
DAI Lei: "Unconstrained palm image acquisition system and corresponding feature localization algorithm", Journal of Data Acquisition and Processing, no. 02, pages 183-187 * |
WANG Yanxia: "Research on key technologies and algorithms of palmprint recognition", China Doctoral Dissertations Full-text Database, Information Science and Technology, no. 08, 15 August 2009 (2009-08-15) * |
Also Published As
Publication number | Publication date |
---|---|
CN111639562B (en) | 2023-06-20 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Qin et al. | Deep representation-based feature extraction and recovering for finger-vein verification | |
CN106529468B (en) | A kind of finger vein identification method and system based on convolutional neural networks | |
CN111126240B (en) | Three-channel feature fusion face recognition method | |
CN101131728A (en) | Face shape matching method based on Shape Context | |
CN111639562B (en) | Intelligent positioning method for palm region of interest | |
CN108009472A (en) | A kind of finger back arthrosis line recognition methods based on convolutional neural networks and Bayes classifier | |
CN101533466B (en) | Image processing method for positioning eyes | |
CN101551854B (en) | A processing system of unbalanced medical image and processing method thereof | |
WO2013075295A1 (en) | Clothing identification method and system for low-resolution video | |
Gonzalez et al. | Head tracking and hand segmentation during hand over face occlusion in sign language | |
Du et al. | Wavelet domain local binary pattern features for writer identification | |
Kwaśniewska et al. | Face detection in image sequences using a portable thermal camera | |
CN110458064B (en) | Low-altitude target detection and identification method combining data driving type and knowledge driving type | |
CN112651323A (en) | Chinese handwriting recognition method and system based on text line detection | |
CN109523484B (en) | Fractal feature-based finger vein network repair method | |
CN112801066B (en) | Identity recognition method and device based on multi-posture facial veins | |
Fritz et al. | Object recognition using local information content | |
Ravidas et al. | Deep learning for pose-invariant face detection in unconstrained environment | |
Ray et al. | Palm print recognition using hough transforms | |
Houtinezhad et al. | Off-line signature verification system using features linear mapping in the candidate points | |
Anai et al. | Personal identification using lip print furrows | |
CN116823940A (en) | Three-dimensional scene moving object detection method | |
Zhou et al. | ROI-HOG and LBP based human detection via shape part-templates matching | |
CN112270287A (en) | Palm vein identification method based on rotation invariance | |
Yu et al. | Research on video face detection based on AdaBoost algorithm training classifier |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||