CN112036317A - Face image intercepting method and device and computer equipment - Google Patents
- Publication number: CN112036317A (application CN202010900665.3A)
- Authority
- CN
- China
- Prior art keywords
- face
- picture
- image
- new
- included angle
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
Abstract
The invention relates to the technical field of face recognition and discloses a face image intercepting method, device and computer equipment. The method first performs rotation correction on a face picture according to the size of the included angle between the picture ordinate axis or picture abscissa axis and the bilateral symmetry center line of a target face, so that the imaging area of the target face is aligned in the new face picture. It then determines, from the coordinate positions of the face feature points in the new face picture, the minimum circumscribed rectangle covering all face peripheral contour feature points among the face feature points, and finally intercepts the face image located within that minimum circumscribed rectangle from the new face picture. For an obliquely imaged face, this avoids both the problem of the intercepted face image missing part of the face and the introduction of excessive background information, thereby improving the accuracy of face region detection and face recognition in face detection. Tests show that the accuracy of face region detection in face detection can be improved by more than 3%.
Description
Technical Field
The invention belongs to the technical field of face recognition, and particularly relates to a face image intercepting method, a face image intercepting device and computer equipment.
Background
Current face recognition technology comprises specific technical means such as face detection, face correction and face recognition. A mainstream face detection algorithm detects the position of a face using a face rectangular frame, whose parameters can be represented by x, y, w and h: x is the abscissa of the upper left corner of the face rectangular frame, y is the ordinate of the upper left corner, w is the width of the frame, and h is its height. After a face rectangular frame is obtained by an existing face detection algorithm, face correction and face recognition are performed in sequence on the image inside the frame to obtain a face recognition result.
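The (x, y, w, h) face-rectangle convention above can be illustrated with a minimal helper. This is a sketch for clarity only; the function name is not taken from the patent.

```python
# Hypothetical helper illustrating the (x, y, w, h) face-rectangle
# convention: x, y give the top-left corner of the frame, w and h its
# width and height.
def rect_to_corners(x, y, w, h):
    """Return the four corner coordinates of a face rectangle."""
    return [(x, y), (x + w, y), (x, y + h), (x + w, y + h)]

corners = rect_to_corners(10, 20, 100, 120)
```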
However, when an existing face detection algorithm generates a face rectangular frame for an obliquely imaged face, part of the face region is often left outside the frame, so part of the face image obtained after face correction is missing. As shown in fig. 1, part of the forehead of the face image obtained by face correction is cut off, which affects the accuracy of face recognition. If the face rectangular frame is enlarged to ensure that no part of the face is missed, more background area is introduced, which also affects the accuracy of face recognition.
Disclosure of Invention
The invention aims to solve the problem in the existing face recognition process that, for an obliquely imaged face, the intercepted face image is prone to missing part of the face region, which affects the accuracy of face recognition.
In a first aspect, the present invention provides a face image capturing method, including:
acquiring a face picture, wherein the face picture comprises a target face;
marking the face characteristic points of the target face in the face picture;
performing rotation correction on the face picture according to the size of the included angle between a reference line and the bilateral symmetry center line of the target face to obtain a new face picture in which the included angle is zero, wherein the reference line is the picture ordinate axis or the picture abscissa axis;
determining a minimum circumscribed rectangle capable of covering all human face peripheral contour feature points in the human face feature points according to the coordinate positions of the human face feature points in the new human face picture;
and intercepting the face image located within the minimum circumscribed rectangle from the new face picture.
Based on the above, the face picture can be rotation-corrected according to the size of the included angle between the picture ordinate axis or the picture abscissa axis and the bilateral symmetry center line of the target face, so that the imaging area of the target face is aligned in the new face picture. The minimum circumscribed rectangle covering all face peripheral contour feature points is then determined from the coordinate positions of the face feature points in the new face picture, and the face image inside that rectangle is intercepted from the new face picture. For an obliquely imaged face, this avoids both partial loss of the intercepted face image and the introduction of excessive background information, and so improves the accuracy of face region detection and face recognition in face detection. Tests show that face region detection accuracy in face detection improves by more than 3% and face recognition accuracy by more than 0.2%, and the greater the inclination of the imaged face, the more pronounced the improvement.
In one possible design, a face picture is obtained, including:
marking a face region rectangular frame of a target face in a picture containing at least one face;
amplifying the face area rectangular frame according to a preset amplification ratio to obtain an amplified face area rectangular frame;
and intercepting the picture in the amplified face region rectangular frame from the picture.
Through this possible design, the initially marked face region rectangular frame can be enlarged and expanded to obtain a face picture of suitable size, ensuring that the face imaging region later locked by the minimum circumscribed rectangle is not missed, while appropriately reducing the computing resources required for the rotation processing.
In a possible design, performing rotation correction on the face picture according to the size of an included angle between a reference line and a bilateral symmetry center line of the target face to obtain a new face picture with the included angle being zero degree, including:
selecting, from the face feature points, any two feature points that lie on the same vertical line or the same horizontal line in the frontal view of the face;
determining the size of an included angle between the reference line and the bilateral symmetry center line of the target human face according to the included angle between the connecting line of any two feature points and the reference line;
and performing rotation correction on the face picture to obtain a new face picture with the included angle being zero.
Through the possible design, the included angle between the picture ordinate axis or the picture abscissa axis and the bilateral symmetry center line of the target face can be automatically obtained according to the marking result of the face characteristic point, so that the face imaging area can be accurately aligned, and the accuracy of face area detection and face identification in face detection is further improved.
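The angle-determination steps above can be sketched as follows. This is a minimal illustration, not the patent's implementation: it assumes the two selected feature points lie on the same horizontal line in the frontal view (e.g., the two outer eye corners), so the angle between their connecting line and the picture abscissa axis equals the angle between the picture ordinate axis and the face's bilateral symmetry center line.

```python
import math

# Sketch: estimate the tilt angle of an imaged face from two feature
# points that are horizontally level in the frontal view. The angle of
# their connecting line relative to the picture abscissa axis gives the
# included angle to be corrected by rotation.
def tilt_angle_degrees(p_left, p_right):
    dx = p_right[0] - p_left[0]
    dy = p_right[1] - p_left[1]
    return math.degrees(math.atan2(dy, dx))

# Example: the right eye corner sits 20 px lower than the left one.
angle = tilt_angle_degrees((100, 100), (180, 120))
```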
In a possible design, performing rotation correction on the face picture according to the size of an included angle between a reference line and a bilateral symmetry center line of the target face to obtain a new face picture with the included angle being zero degree, including:
obtaining an affine transformation matrix for picture rotation transformation according to the size of the included angle;
calculating, according to the affine transformation matrix, the new coordinate position of each pixel point of the face picture in the new face picture;
and obtaining the new face picture with the included angle of zero according to the new coordinate positions of all the pixel points in the new face picture.
Through the possible design, rotation correction can be carried out by utilizing an affine transformation matrix according to the size of an included angle between a picture ordinate axis or a picture abscissa axis and a bilateral symmetry center line of the target face, and automatic accurate rotation is realized.
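The affine rotation step can be sketched by building the 2x3 rotation matrix by hand and applying it to pixel coordinates. This is a minimal illustration of the matrix form (in practice a library routine such as OpenCV's getRotationMatrix2D/warpAffine would be used); rotating about the picture centre is an assumption, not stated in the patent.

```python
import math

# Build a 2x3 affine matrix that rotates coordinates by angle_deg
# about a given centre, and apply it to a point.
def rotation_matrix(angle_deg, center):
    a = math.radians(angle_deg)
    cos_a, sin_a = math.cos(a), math.sin(a)
    cx, cy = center
    # Translation terms keep the centre point fixed under the rotation.
    tx = cx - cos_a * cx - sin_a * cy
    ty = cy + sin_a * cx - cos_a * cy
    return [[cos_a, sin_a, tx], [-sin_a, cos_a, ty]]

def apply_affine(m, point):
    x, y = point
    return (m[0][0] * x + m[0][1] * y + m[0][2],
            m[1][0] * x + m[1][1] * y + m[1][2])
```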
In one possible design, determining, according to the coordinate positions of the face feature points in the new face picture, a minimum bounding rectangle capable of encompassing all face peripheral contour feature points in the face feature points includes:
acquiring an abscissa set and an ordinate set of the face characteristic points in the new face image;
and determining the minimum circumscribed rectangle according to the abscissa minimum value and the abscissa maximum value in the abscissa set and the ordinate minimum value and the ordinate maximum value in the ordinate set.
Through the possible design, the four corner points of the minimum external rectangle can be accurately positioned, and the accuracy of face region detection and face identification in face detection is further improved.
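The coordinate-extrema construction above can be sketched directly: once the face is rotation-corrected, the minimum circumscribed rectangle is axis-aligned, so its corners come straight from the minimum and maximum coordinates of the feature points.

```python
# Sketch of the minimum-circumscribed-rectangle step: the tightest
# axis-aligned rectangle around the (already rotation-corrected)
# peripheral contour points is given by the coordinate extrema.
def min_bounding_rect(points):
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    x_min, x_max = min(xs), max(xs)
    y_min, y_max = min(ys), max(ys)
    # Returned as (x, y, w, h) in the face-rectangle convention.
    return (x_min, y_min, x_max - x_min, y_max - y_min)
```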
In one possible design, after the face image located within the minimum circumscribed rectangle is intercepted from the new face picture, the method further includes:
and importing the face image into a face feature extraction network model to obtain face feature information.
And determining personnel identity information corresponding to the face feature information according to the matching comparison result of the face feature information and each piece of pre-stored face feature information in a face feature library, wherein the pre-stored face feature information and the corresponding personnel identity information are bound and stored in the face feature library.
Through the possible design, the face image is ensured to have no face area loss through the first aspect, and excessive background information is removed, so that the face feature extraction result and the face feature recognition result are more accurate, and the face recognition accuracy can be improved by more than 0.2% through testing.
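The matching-comparison step can be sketched as a nearest-match lookup over a feature library. The cosine-similarity metric and the 0.5 threshold below are illustrative assumptions, not taken from the patent, which does not specify the comparison method.

```python
import math

# Sketch: compare an extracted face feature vector against each
# pre-stored vector in a feature library and return the identity of
# the best match above a threshold (assumed metric and threshold).
def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def identify(feature, library, threshold=0.5):
    best_id, best_score = None, threshold
    for person_id, stored in library.items():
        score = cosine_similarity(feature, stored)
        if score > best_score:
            best_id, best_score = person_id, score
    return best_id
```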
In a second aspect, the present invention provides a face image capturing device, which includes an original image acquiring unit, a feature marking unit, a new image acquiring unit, a rectangle determining unit and an image capturing unit;
the original image acquisition unit is used for acquiring a face image, wherein the face image comprises a target face;
the feature marking unit is in communication connection with the original image acquisition unit and is used for marking the face feature points of the target face in the face image;
the new image acquisition unit is in communication connection with the original image acquisition unit and is used for performing rotation correction on the face image according to the included angle between a reference line and the bilateral symmetry center line of the target face to obtain a new face image with the included angle being zero, wherein the reference line is an image ordinate axis or an image abscissa axis;
the rectangle determining unit is respectively in communication connection with the feature marking unit and the new image acquiring unit and is used for determining a minimum circumscribed rectangle capable of covering all face peripheral contour feature points in the face feature points according to the coordinate positions of the face feature points in the new face image;
the image intercepting unit is respectively in communication connection with the new image acquisition unit and the rectangle determining unit and is used for intercepting the face image located within the minimum circumscribed rectangle from the new face picture.
In one possible design, the original image acquisition unit comprises a face frame marking subunit, a face frame amplifying subunit and a face screenshot subunit which are sequentially connected in a communication manner;
the face frame marking subunit is used for marking a face area rectangular frame of a target face in a picture containing at least one face;
the face frame amplifying subunit is configured to amplify the face region rectangular frame according to a preset amplification ratio to obtain an amplified face region rectangular frame;
and the face screenshot subunit is used for intercepting the picture in the amplified face area rectangular frame from the picture.
In one possible design, the new image acquisition unit comprises a feature point selection subunit, an included angle size determination subunit and a rotation correction subunit which are sequentially in communication connection;
the feature point selection subunit is used for selecting, from the face feature points, any two feature points that lie on the same vertical line or the same horizontal line in the frontal view of the face;
the included angle size determining subunit is configured to determine, according to an included angle between the connection line of any two feature points and the reference line, a size of an included angle between the reference line and a bilateral symmetry center line of the target face;
and the rotation correction subunit is used for performing rotation correction on the face picture to obtain a new face picture with the included angle being zero.
In one possible design, the new image acquisition unit comprises a matrix determination subunit, a position transformation subunit and a new image acquisition subunit which are sequentially in communication connection;
the matrix determining subunit is used for obtaining an affine transformation matrix for picture rotation transformation according to the size of the included angle;
the position transformation subunit is used for calculating, according to the affine transformation matrix, the new coordinate position of each pixel point of the face picture in the new face picture;
and the new image acquisition subunit is used for obtaining the new face picture with the included angle of zero according to the new coordinate positions of all the pixel points in the new face picture.
In one possible design, the rectangle determining unit comprises a coordinate set acquiring subunit and a circumscribed rectangle determining subunit which are connected in a communication manner;
the coordinate set acquisition subunit is configured to acquire an abscissa set and an ordinate set of the face feature point in the new face image;
and the circumscribed rectangle determining subunit is configured to determine the minimum circumscribed rectangle according to the abscissa minimum value and the abscissa maximum value in the abscissa set and the ordinate minimum value and the ordinate maximum value in the ordinate set.
In one possible design, the system further comprises a feature extraction unit and an identity determination unit;
the feature extraction unit is in communication connection with the image interception unit and is used for leading the face image into a face feature extraction network model to obtain face feature information;
the identity determining unit is in communication connection with the feature extracting unit and is used for determining the personnel identity information corresponding to the face feature information according to the matching comparison result of the face feature information and each piece of pre-stored face feature information in a face feature library, wherein the pre-stored face feature information and the corresponding personnel identity information are stored in the face feature library in a binding mode.
In a third aspect, the present invention provides a computer device, including a memory and a processor, which are communicatively connected, where the memory is used to store a computer program, and the processor is used to read the computer program and execute the face image capture method according to the first aspect or any one of the possible designs of the first aspect.
In a fourth aspect, the present invention provides a computer-readable storage medium, having stored thereon instructions, which, when executed on a computer, perform the face image capture method according to the first aspect or any one of the possible designs of the first aspect.
In a fifth aspect, the present invention provides a computer program product comprising instructions which, when run on a computer, cause the computer to perform the face image capture method as described above in the first aspect or any one of the possible designs of the first aspect.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present invention, and those skilled in the art can obtain other drawings from these drawings without creative effort.
Fig. 1 is a diagram illustrating an example of face image interception in a face detection process in the prior art.
Fig. 2 is a schematic flow chart of a face image interception method provided by the invention.
Fig. 3 is an exemplary diagram of the feature points of the face peripheral contour provided by the present invention.
Fig. 4 is an exemplary diagram of a face image interception result provided by the present invention.
Fig. 5 is a schematic structural diagram of a face image intercepting device provided by the invention.
Fig. 6 is a schematic structural diagram of a computer device provided by the present invention.
Detailed Description
The invention is further described with reference to the following figures and specific embodiments. It should be noted that the description of the embodiments is provided to help understanding of the present invention, but the present invention is not limited thereto. Specific structural and functional details disclosed herein are merely illustrative of example embodiments of the invention. This invention may, however, be embodied in many alternate forms and should not be construed as limited to the embodiments set forth herein.
It will be understood that, although the terms first, second, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first element could be termed a second element, and, similarly, a second element could be termed a first element, without departing from the scope of example embodiments of the present invention.
It should be understood that the term "and/or" as used herein merely describes an association relationship between associated objects, meaning that three relationships may exist; e.g., A and/or B may mean: A exists alone, B exists alone, or both A and B exist. The term "/and" as may appear herein describes another association relationship, meaning that two relationships may exist; e.g., A/and B may mean: A exists alone, or both A and B exist. In addition, the character "/" herein generally indicates that the former and latter associated objects are in an "or" relationship.
It will be understood that when an element is referred to herein as being "connected" or "coupled" to another element, it can be directly connected or coupled to the other element, or intervening elements may be present. Conversely, if an element is referred to herein as being "directly connected" or "directly coupled" to another element, no intervening elements are present. Other words used to describe the relationship between elements should be interpreted in a similar manner (e.g., "between" versus "directly between", "adjacent" versus "directly adjacent", etc.).
It is to be understood that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments of the invention. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises," "comprising," "includes" and/or "including," when used herein, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, numbers, steps, operations, elements, components, and/or groups thereof.
It should also be noted that, in some alternative designs, the functions/acts noted may occur out of the order noted in the figures. For example, two figures shown in succession may, in fact, be executed substantially concurrently, or the figures may sometimes be executed in the reverse order, depending upon the functionality/acts involved.
It should be understood that specific details are provided in the following description to facilitate a thorough understanding of example embodiments. However, it will be understood by those of ordinary skill in the art that the example embodiments may be practiced without these specific details. For example, systems may be shown in block diagrams in order not to obscure the examples in unnecessary detail. In other instances, well-known processes, structures and techniques may be shown without unnecessary detail in order to avoid obscuring example embodiments.
As shown in fig. 2 to 4, the face image capturing method provided in the first aspect of this embodiment may be, but is not limited to, suitable for performing face recognition tasks in places such as shops, airports, exhibition halls, and elevators. The face image intercepting method may include, but is not limited to, the following steps S101 to S105.
S101, obtaining a face picture, wherein the face picture comprises a target face.
In step S101, the face picture may be a live picture taken by an image capturing device, or may be a picture obtained after further image processing (e.g., by an initial cropping process, a scaling process, a color matching process, etc.) based on the live picture.
And S102, marking the face characteristic points of the target face in the face picture.
In step S102, the face feature points are marked by an existing method; for example, the face picture is fed into a face detection algorithm model such as a Multi-task Cascaded Convolutional Network (MTCNN) model, which detects the face feature points (including face peripheral contour feature points, face middle contour feature points, and the like) and the face region (i.e., marked by a face rectangular frame) at the same time, and the face feature points can then be marked based on the detection result. As shown in fig. 3, among the 68 face feature points detectable by the existing face detection algorithm model, the points numbered 0 to 16 and 17 to 26 are face peripheral contour feature points, and the points numbered 27 to 67 are face middle contour feature points, used for tracing the eye contours, nose contour, mouth contour, etc. in the middle area of the face.
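The 68-point numbering described above (a dlib-style scheme) can be illustrated by a simple index split. The variable names are illustrative; only the index ranges come from the text.

```python
# Split a 68-point landmark list per the numbering above: points 0-16
# (jawline) and 17-26 (eyebrows) form the peripheral contour; points
# 27-67 cover the nose, eyes and mouth in the middle of the face.
def split_landmarks(landmarks):
    assert len(landmarks) == 68
    peripheral = landmarks[0:27]   # indices 0-16 and 17-26
    middle = landmarks[27:68]      # indices 27-67
    return peripheral, middle
```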
S103, performing rotation correction on the face picture according to the included angle between a reference line and the bilateral symmetry center line of the target face to obtain a new face picture with the included angle being zero, wherein the reference line is a picture ordinate axis or a picture abscissa axis.
In step S103, when the reference line is the picture ordinate axis, the corresponding included angle directly reflects the inclination of the imaged face in the face picture (i.e., the greater the angle, the more severe the inclination). Rotation-correcting the face picture according to this angle therefore straightens the imaged face into a vertical orientation in the new face picture, i.e., yields a new face picture with an included angle of zero (corresponding to an untilted imaged face), which helps ensure that no part of the face is missed during the subsequent screenshot. When the reference line is the picture abscissa axis, the corresponding included angle cannot directly reflect the inclination of the imaged face; however, after the face picture is rotation-corrected to a new face picture with an included angle of zero, the picture ordinate axis is perpendicular to the bilateral symmetry center line of the target face, so the imaged face is aligned in a horizontal orientation in the new face picture, which likewise avoids missing part of the face during the subsequent screenshot.
S104, determining a minimum circumscribed rectangle capable of covering all face peripheral contour feature points in the face feature points according to the coordinate positions of the face feature points in the new face picture.
In step S104, since the face feature points were marked in the face picture, they rotate along with the picture, and their coordinate positions in the new face picture are obtained directly without re-detection and re-marking, which reduces the demand on computing resources. Since the imaging area of the target face was corrected in step S103, the minimum circumscribed rectangle can be defined as an upright rectangle with no inclination angle; it can therefore be determined from the coordinate positions of the face feature points in the new face picture so as to lock the imaging area of the target face. As shown in fig. 4 (more face feature points are detected than in fig. 3 because a different detection model is used), compared with the prior art (the face imaging region locked by the gray-line frame), the face image locked by the minimum circumscribed rectangle (the black-line frame) removes the background region to the maximum extent without missing any part of the face region.
And S105, intercepting the face image located within the minimum circumscribed rectangle from the new face picture.
Therefore, according to the face screenshot scheme detailed in steps S101 to S105, the face picture is rotation-corrected according to the size of the included angle between the picture ordinate axis or picture abscissa axis and the bilateral symmetry center line of the target face, so that the imaging area of the target face is aligned in the new face picture. The minimum circumscribed rectangle covering all face peripheral contour feature points is then determined from the coordinate positions of the face feature points in the new face picture, and the face image inside that rectangle is intercepted from the new face picture. This avoids partial loss of the intercepted face image, avoids introducing too much background information, and improves the accuracy of face region detection and face recognition in face detection.
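Steps S101 to S105 can be combined into one end-to-end sketch operating on landmark coordinates only (the pixel warp itself is omitted): measure the tilt from two symmetric points, rotate all landmarks about the picture centre so the tilt becomes zero, then take the coordinate extrema as the crop rectangle. The eye-corner point choice and the rotation centre are assumptions for illustration.

```python
import math

# Sketch of S103-S105 on landmark coordinates: level the face by
# rotating about the picture centre through minus the measured tilt,
# then return the (x, y, w, h) minimum circumscribed rectangle.
def crop_rect_for_tilted_face(landmarks, left_eye, right_eye, center):
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    angle = math.atan2(dy, dx)          # included angle to correct
    cos_a, sin_a = math.cos(angle), math.sin(angle)
    cx, cy = center

    def rotate(p):                      # rotate p about the centre by -angle
        x, y = p[0] - cx, p[1] - cy
        return (cos_a * x + sin_a * y + cx, -sin_a * x + cos_a * y + cy)

    pts = [rotate(p) for p in landmarks]
    xs = [p[0] for p in pts]
    ys = [p[1] for p in pts]
    return (min(xs), min(ys), max(xs) - min(xs), max(ys) - min(ys))
```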
Based on the technical solution of the first aspect, this embodiment further proposes possible design one, concerning how to obtain the face picture, that is, obtaining an original image of the face, including but not limited to the following steps S1011 to S1013.
S1011, marking a face area rectangular frame of a target face in a picture containing at least one face.
In the above step S1011, the picture may be, but is not limited to, a photograph captured on site by an image acquisition device. The face region rectangular frame of the target face can be marked by an existing method: for example, the picture is fed into a face detection algorithm model such as Multi-task Cascaded Convolutional Networks (MTCNN), which detects the face feature points (including the face peripheral contour feature points, the face middle contour feature points, and the like) and the face region (marked by the face rectangular frame) at the same time, thereby yielding the face region rectangular frame.
S1012, amplifying the face region rectangular frame according to a preset amplification ratio to obtain an amplified face region rectangular frame.
In the above step S1012, since the face imaging region is accurately re-determined in the subsequent step S104, the face region rectangular frame may be appropriately enlarged when obtaining the face picture (for example, expanded outward on all sides by a preset enlargement ratio, such as 50%), which ensures that the face imaging region finally locked by the minimum circumscribed rectangle misses nothing. In addition, the preset enlargement ratio may be a default value, or may be given by a preset function that increases linearly or nonlinearly with the size of the included angle (the included angle between the picture ordinate axis and the bilateral symmetry center line of the target face directly reflects how tilted the imaged face is in the face picture). The latter adapts to imaged faces with different degrees of tilt, so that the face region finally locked by the minimum circumscribed rectangle misses nothing while the computing resources required for the rotation processing are minimized.
And S1013, intercepting the picture in the amplified face region rectangular frame from the picture.
Therefore, with possible design one described in the above steps S1011 to S1013, a face picture of suitable size is obtained by enlarging the face region rectangular frame marked first, which ensures that the face imaging region finally locked by the minimum circumscribed rectangle misses nothing while appropriately reducing the computing resources required for the rotation processing.
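As an illustration of the enlargement in step S1012, the following sketch expands a face region rectangular frame outward on all sides by a preset enlargement ratio and clips the result to the picture boundary; the function name and the (x, y, w, h) box convention are assumptions chosen for illustration, not specified by the patent:

```python
def enlarge_face_box(x, y, w, h, img_w, img_h, ratio=0.5):
    """Expand a face box outward on all sides by `ratio`, clipped to the picture.

    (x, y) is the top-left corner of the box; returns the enlarged (x, y, w, h).
    The 50% default mirrors the example ratio mentioned in step S1012.
    """
    dx = int(w * ratio / 2)   # extra width added on each side
    dy = int(h * ratio / 2)   # extra height added on each side
    nx = max(0, x - dx)                       # clip left/top to the picture
    ny = max(0, y - dy)
    nw = min(img_w, x + w + dx) - nx          # clip right/bottom to the picture
    nh = min(img_h, y + h + dy) - ny
    return nx, ny, nw, nh
```

A box of 100×100 at (100, 100) inside a 1000×1000 picture becomes 150×150 at (75, 75); a box touching the picture edge is clipped rather than extended outside.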
In this embodiment, on the basis of the first aspect or possible design one, possible design two is further proposed, concerning how to perform rotation correction on the face picture, that is, performing rotation correction on the face picture according to the size of the included angle between a reference line and the bilateral symmetry center line of the target face to obtain a new face picture with the included angle being zero degrees, including but not limited to the following steps S1021 to S1023.
S1021, selecting any two characteristic points which can be positioned on the same vertical line or the same horizontal line under the main visual angle of the face from the face characteristic points;
In the above step S1021, as shown in fig. 3: for example, the face peripheral contour feature points numbered 7 and 9 lie on the same horizontal line in the main view angle (i.e., the frontal view) of the face, so they can be taken as the two selected feature points; likewise, the face peripheral contour feature points numbered 0 and 1 lie on the same vertical line in the frontal view and can also serve as the two selected feature points. Any other pair of face peripheral contour feature points that necessarily lie on the same vertical line or the same horizontal line in the frontal view may be used as well (the criterion being determined in advance from the actual positions of the corresponding face peripheral contour feature points on the face). Similarly, the two face middle contour feature points numbered 39 and 42, or any two of the face middle contour feature points numbered 27, 28, 29, 30, 33, 51, 62, 66, 57, etc., can be taken as the two selected feature points, as can any other pair of face middle contour feature points that necessarily lie on the same vertical line or the same horizontal line in the frontal view (again with the criterion determined in advance from their actual positions on the face). For example, the inner left-eye corner feature point (numbered 39 in fig. 3) and the inner right-eye corner feature point (numbered 42 in fig. 3) necessarily lie on the same horizontal line in the frontal view of the face.
And S1022, determining the size of an included angle between the reference line and the bilateral symmetry center line of the target human face according to the included angle between the connecting line of any two feature points and the reference line.
In the above step S1022, since the two feature points lie on the same vertical line or the same horizontal line in the frontal view of the face, the size of the included angle between the reference line and the bilateral symmetry center line of the target face can be determined from the included angle between their connecting line and the reference line by routine geometric analysis. For example, when the two feature points are the inner left-eye corner feature point (numbered 39 in fig. 3) and the inner right-eye corner feature point (numbered 42 in fig. 3), and their connecting line makes an acute angle of 5 degrees with the picture abscissa axis (and hence an acute angle of 85 degrees with the picture ordinate axis), geometric analysis gives an included angle of 5 degrees when the reference line is the picture ordinate axis, and of 85 degrees when the reference line is the picture abscissa axis.
And S1023, performing rotation correction on the face picture to obtain a new face picture with the included angle being zero.
Therefore, with possible design two described in the above steps S1021 to S1023, the size of the included angle between the picture ordinate axis (or picture abscissa axis) and the bilateral symmetry center line of the target face can be obtained automatically from the feature-point marking result, so that the face imaging region can be accurately brought upright, further improving the accuracy of face region detection and face recognition in face detection.
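The geometric reasoning in steps S1021 to S1022 can be sketched as follows. The sketch assumes the two selected points lie on the same horizontal line in the frontal view (e.g., the inner eye corners), so the symmetry center line is perpendicular to their connecting line; the function name and point format are illustrative assumptions:

```python
import math

def inclination_angle(p1, p2, reference="ordinate"):
    """Acute angle (degrees) between the face's bilateral symmetry center line
    and a reference axis ("ordinate" or "abscissa").

    p1, p2 are two feature points that lie on the same horizontal line in the
    frontal view (e.g. the two inner eye corners). The symmetry center line is
    perpendicular to their connecting line, so its angle to the picture
    ordinate axis equals the connecting line's angle to the abscissa axis.
    """
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    line_to_abscissa = abs(math.degrees(math.atan2(dy, dx)))
    if line_to_abscissa > 90:                 # reduce to the acute angle
        line_to_abscissa = 180 - line_to_abscissa
    if reference == "ordinate":
        return line_to_abscissa               # center line vs vertical axis
    return 90 - line_to_abscissa              # center line vs horizontal axis
```

With an eye line tilted 5 degrees from the horizontal, the function returns 5 for the ordinate axis and 85 for the abscissa axis, matching the worked example in step S1022.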
In this embodiment, on the basis of the first aspect or either of possible designs one and two, possible design three is further proposed, concerning how to perform rotation correction on the face picture, that is, performing rotation correction on the face picture according to the size of the included angle between a reference line and the bilateral symmetry center line of the target face to obtain a new face picture with the included angle being zero, including but not limited to the following steps S1024 to S1026.
And S1024, obtaining an affine transformation matrix for picture rotation transformation according to the size of the included angle.
In the above step S1024, the affine transformation matrix is the algorithm matrix commonly used for picture rotation transformation, and its specific form may be, but is not limited to, the following 2×3 matrix:

M(θ) = | a(θ)  b(θ)  c(θ) |
       | d(θ)  e(θ)  f(θ) |

where θ denotes the rotation target angle (i.e., the size of the included angle), and a(θ), b(θ), c(θ), d(θ), e(θ) and f(θ) denote preset function parameters related to the rotation target angle. For a pure rotation by θ, a typical choice (sign conventions may vary) is a(θ) = e(θ) = cos θ and b(θ) = −d(θ) = −sin θ, with c(θ) and f(θ) being translation terms that keep the rotation centered on the picture.
And S1025, calculating to obtain new coordinate positions of all pixel points in the face picture in the face new picture according to the affine transformation matrix.
In the above step S1025, the specific calculation formula may be, but is not limited to, the following:

x' = a(θ)·x + b(θ)·y + c(θ)
y' = d(θ)·x + e(θ)·y + f(θ)

where (x, y) denotes the coordinate position of a pixel point in the face picture, and (x', y') denotes its new coordinate position in the new face picture.
And S1026, obtaining the new face picture with the included angle being zero according to the new coordinate positions of all the pixel points in the new face picture.
Therefore, with possible design three described in the above steps S1024 to S1026, rotation correction can be performed with an affine transformation matrix according to the size of the included angle between the picture ordinate axis (or picture abscissa axis) and the bilateral symmetry center line of the target face, achieving automatic and accurate rotation.
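Steps S1024 to S1025 can be sketched as follows, assuming a counter-clockwise-positive rotation about the picture center (the sign convention and function names are assumptions; the patent leaves the exact parameterization of a(θ) through f(θ) open):

```python
import math

def rotation_affine(theta_deg, cx, cy):
    """2x3 affine matrix [[a, b, c], [d, e, f]] rotating by theta_deg
    about the picture center (cx, cy), as in step S1024."""
    t = math.radians(theta_deg)
    a, b = math.cos(t), math.sin(t)
    # c and f are the translation terms that keep (cx, cy) fixed
    return [[a, -b, (1 - a) * cx + b * cy],
            [b,  a, -b * cx + (1 - a) * cy]]

def apply_affine(m, x, y):
    """New coordinate position (x', y') of a pixel or feature point (step S1025)."""
    return (m[0][0] * x + m[0][1] * y + m[0][2],
            m[1][0] * x + m[1][1] * y + m[1][2])
```

Because the translation terms are derived from the rotation center, applying the matrix to (cx, cy) returns (cx, cy) unchanged; every other pixel and marked feature point is mapped to its position in the new face picture.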
On the basis of the first aspect or any one of possible designs one to three, this embodiment further proposes possible design four, concerning how to determine the minimum circumscribed rectangle, that is, determining, according to the coordinate positions of the face feature points in the new face picture, a minimum circumscribed rectangle capable of enclosing all face peripheral contour feature points among the face feature points, including but not limited to the following steps S1031 to S1032.
And S1031, acquiring an abscissa set and an ordinate set of the human face feature points in the new human face image.
S1032, determining the minimum circumscribed rectangle according to the abscissa minimum value and the abscissa maximum value in the abscissa set and the ordinate minimum value and the ordinate maximum value in the ordinate set.
In step S1032, a specific manner of determining the minimum bounding rectangle may include, but is not limited to: and taking the abscissa minimum value in the abscissa set and the ordinate minimum value in the ordinate set as the coordinate position of a first corner point of the minimum circumscribed rectangle, and taking the abscissa maximum value in the abscissa set and the ordinate maximum value in the ordinate set as the coordinate position of a second corner point of the minimum circumscribed rectangle, wherein the first corner point and the second corner point are a pair of paired corner points.
Therefore, by the possible design four described in the steps S1031 to S1032, the four corner points of the minimum circumscribed rectangle can be accurately positioned, and the accuracy of face region detection and face recognition in face detection is further improved.
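Steps S1031 to S1032 reduce to taking coordinate extrema, which can be sketched as follows (the function name and point format are illustrative assumptions):

```python
def min_bounding_rect(points):
    """Upright minimum circumscribed rectangle of the given contour points.

    Returns the pair of paired corner points (x_min, y_min) and (x_max, y_max)
    described in step S1032; the other two corners follow directly from them.
    """
    xs = [p[0] for p in points]   # abscissa set of step S1031
    ys = [p[1] for p in points]   # ordinate set of step S1031
    return (min(xs), min(ys)), (max(xs), max(ys))
```

Because the face picture has already been rotated upright in step S103, this axis-aligned rectangle is genuinely the minimum rectangle enclosing all the peripheral contour feature points.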
On the basis of the first aspect or any one of possible designs one to four, this embodiment further proposes possible design five, concerning how to further extract and recognize facial features, that is, after the face image located within the minimum circumscribed rectangle is intercepted from the new face picture, the method further includes, but is not limited to, the following steps S106 to S107.
And S106, importing the face image into a face feature extraction network model to obtain face feature information.
In the above step S106, the face feature extraction network model is an existing, commonly used model, for example one trained on the basis of a deep convolutional network such as FaceNet (an embedding method commonly used for face recognition and clustering, whose key characteristic is end-to-end learning of the whole system with a deep convolutional network). The extracted face feature information can then be used for feature-information enrollment before person recognition or for feature-information comparison during person recognition.
And S107, determining personnel identity information corresponding to the face feature information according to the matching comparison result of the face feature information and each piece of pre-stored face feature information in a face feature library, wherein the pre-stored face feature information and the corresponding personnel identity information are stored in the face feature library in a binding manner.
In the above step S107, specifically, if some piece of pre-stored face feature information matches the face feature information, the person identity information stored in binding with that pre-stored face feature information is taken as the person identity information corresponding to the face feature information, thereby recognizing the person identity corresponding to the face image. If no pre-stored face feature information matches the face feature information, the person identity corresponding to the face image cannot be recognized.
Therefore, with possible design five described in the above steps S106 to S107, since steps S101 to S105 ensure that the face image misses no face region while excessive background information is removed, the face feature extraction and recognition results are more accurate; in testing, face recognition accuracy improved by more than 0.2%.
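The matching comparison of step S107 can be sketched as a nearest-neighbor search over the face feature library. The cosine-similarity metric and the 0.8 threshold below are illustrative assumptions; the patent does not fix a particular distance measure:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def identify(query_feature, feature_library, threshold=0.8):
    """Return the person identity bound to the best-matching pre-stored
    feature, or None if no pre-stored feature clears the threshold
    (the unrecognized case described in step S107).

    feature_library maps person identity -> pre-stored feature vector.
    """
    best_id, best_score = None, threshold
    for person_id, stored in feature_library.items():
        score = cosine_similarity(query_feature, stored)
        if score >= best_score:
            best_id, best_score = person_id, score
    return best_id
```

In practice the feature vectors would be embeddings produced by the extraction network of step S106; here short toy vectors suffice to show the binding between pre-stored features and person identities.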
As shown in fig. 5, a second aspect of this embodiment provides a virtual device for implementing the face image capturing method according to any one of the first aspect or the first aspect, where the virtual device includes an original image obtaining unit, a feature marking unit, a new image obtaining unit, a rectangle determining unit, and an image capturing unit;
the original image acquisition unit is used for acquiring a face image, wherein the face image comprises a target face;
the feature marking unit is in communication connection with the original image acquisition unit and is used for marking the face feature points of the target face in the face image;
the new image acquisition unit is in communication connection with the original image acquisition unit and is used for performing rotation correction on the face image according to the included angle between a reference line and the bilateral symmetry center line of the target face to obtain a new face image with the included angle being zero, wherein the reference line is an image ordinate axis or an image abscissa axis;
the rectangle determining unit is respectively in communication connection with the feature marking unit and the new image acquiring unit and is used for determining a minimum circumscribed rectangle capable of covering all face peripheral contour feature points in the face feature points according to the coordinate positions of the face feature points in the new face image;
the image intercepting unit is in communication connection with the new image acquisition unit and the rectangle determining unit respectively, and is configured to intercept the face image located within the minimum circumscribed rectangle from the new face image.
In one possible design, the original image acquisition unit comprises a face frame marking subunit, a face frame amplifying subunit and a face screenshot subunit which are sequentially connected in a communication manner;
the face frame marking subunit is used for marking a face area rectangular frame of a target face in a picture containing at least one face;
the face frame amplifying subunit is configured to amplify the face region rectangular frame according to a preset amplification ratio to obtain an amplified face region rectangular frame;
and the face screenshot subunit is used for intercepting the picture in the amplified face area rectangular frame from the picture.
In one possible design, the new image acquisition unit comprises a feature point selection subunit, an included angle size determination subunit and a rotation correction subunit which are sequentially in communication connection;
the feature point selecting subunit is used for selecting any two feature points which can be positioned on the same vertical line or the same horizontal line under the main visual angle of the face from the face feature points;
the included angle size determining subunit is configured to determine, according to an included angle between the connection line of any two feature points and the reference line, a size of an included angle between the reference line and a bilateral symmetry center line of the target face;
and the rotation correction subunit is used for performing rotation correction on the face picture to obtain a new face picture with the included angle being zero.
In one possible design, the new image acquisition unit comprises a matrix determining subunit, a position transformation subunit and a new image acquisition subunit which are sequentially connected in communication;
the matrix determining subunit is used for obtaining an affine transformation matrix for picture rotation transformation according to the size of the included angle;
the position transformation subunit is configured to calculate, according to the affine transformation matrix, the new coordinate position of each pixel point of the face picture in the new face picture;
and the new image acquisition subunit is used for acquiring a new face image with the included angle of zero according to the new coordinate positions of all the pixel points in the new face image.
In one possible design, the rectangle determining unit comprises a coordinate set acquiring subunit and a circumscribed rectangle determining subunit which are connected in a communication manner;
the coordinate set acquisition subunit is configured to acquire an abscissa set and an ordinate set of the face feature point in the new face image;
and the circumscribed rectangle determining subunit is configured to determine the minimum circumscribed rectangle according to the abscissa minimum value and the abscissa maximum value in the abscissa set and the ordinate minimum value and the ordinate maximum value in the ordinate set.
In one possible design, the system further comprises a feature extraction unit and an identity determination unit;
the feature extraction unit is in communication connection with the image interception unit and is used for leading the face image into a face feature extraction network model to obtain face feature information;
the identity determining unit is in communication connection with the feature extracting unit and is used for determining the personnel identity information corresponding to the face feature information according to the matching comparison result of the face feature information and each piece of pre-stored face feature information in a face feature library, wherein the pre-stored face feature information and the corresponding personnel identity information are stored in the face feature library in a binding mode.
The working process, working details and technical effects of the apparatus provided in the second aspect of this embodiment may refer to the face image capturing method in the first aspect or any one of the possible designs in the first aspect, which is not described herein again.
As shown in fig. 6, a third aspect of the present embodiment provides a computer device for executing the face image intercepting method according to the first aspect or any one of its possible designs. The computer device includes a memory and a processor connected in communication with each other, where the memory is used for storing a computer program, and the processor is used for reading the computer program and executing the face image intercepting method according to the first aspect or any one of its possible designs. For example, the memory may include, but is not limited to, a Random-Access Memory (RAM), a Read-Only Memory (ROM), a Flash Memory, a First-In First-Out (FIFO) memory, and/or a First-In Last-Out (FILO) memory; the processor may be, but is not limited to, a microprocessor of the STM32F105 family. In addition, the computer device may also include, but is not limited to, a power module, a display screen, and other necessary components.
For the working process, working details, and technical effects of the foregoing computer device provided in the third aspect of this embodiment, reference may be made to the first aspect or any one of the possible designs of the face image capture method in the first aspect, which is not described herein again.
A fourth aspect of the present embodiment provides a computer-readable storage medium storing instructions for implementing the facial image capturing method according to any one of the first aspect and the possible designs of the first aspect, that is, the computer-readable storage medium stores instructions thereon, which, when executed on a computer, perform the facial image capturing method according to any one of the first aspect and the possible designs of the first aspect. The computer-readable storage medium refers to a carrier for storing data, and may include, but is not limited to, floppy disks, optical disks, hard disks, flash memories, flash disks and/or Memory sticks (Memory sticks), etc., and the computer may be a general-purpose computer, a special-purpose computer, a computer network, or other programmable devices.
The working process, working details and technical effects of the foregoing computer-readable storage medium provided in the fourth aspect of this embodiment may refer to the first aspect or any one of the first aspect that may be designed for the face image capture method, which is not described herein again.
A fifth aspect of the present embodiment provides a computer program product containing instructions, which when run on a computer, cause the computer to execute the face image capture method according to the first aspect or any one of the possible designs of the first aspect. The computer may be a general purpose computer, a special purpose computer, a network of computers, or other programmable devices.
The embodiments described above are merely illustrative. Units described as separate components may or may not be physically separate, and components displayed as units may or may not be physical units; they may be located in one place or distributed over a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment. Those of ordinary skill in the art can understand and implement this without inventive effort.
The above examples are intended only to illustrate the technical solution of the present invention, not to limit it. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that modifications may be made to the embodiments described above, or equivalents may be substituted for some of their features, and that such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.
Finally, it should be noted that the present invention is not limited to the above optional embodiments, and anyone may derive various other forms of products in light of the present invention. The above detailed description should not be construed as limiting the scope of the present invention, which is defined by the claims; the description may be used to interpret the claims.
Claims (10)
1. A face image intercepting method is characterized by comprising the following steps:
acquiring a face picture, wherein the face picture comprises a target face;
marking the face characteristic points of the target face in the face picture;
performing rotation correction on the face picture according to the size of an included angle between a reference line and a left-right symmetric center line of the target face to obtain a new face picture with the included angle being zero, wherein the reference line is a picture ordinate axis or a picture abscissa axis;
determining a minimum circumscribed rectangle capable of covering all human face peripheral contour feature points in the human face feature points according to the coordinate positions of the human face feature points in the new human face picture;
and intercepting the face image located within the minimum circumscribed rectangle from the new face picture.
2. The method of claim 1, wherein obtaining a picture of a human face comprises:
marking a face region rectangular frame of a target face in a picture containing at least one face;
amplifying the face area rectangular frame according to a preset amplification ratio to obtain an amplified face area rectangular frame;
and intercepting the picture in the amplified face region rectangular frame from the picture.
3. The method of claim 1, wherein performing rotation correction on the face picture according to an included angle between a reference line and a bilateral symmetry center line of the target face to obtain a new face picture with the included angle being zero degree, comprises:
selecting any two characteristic points which are positioned on the same vertical line or the same horizontal line under the main visual angle of the human face from the human face characteristic points;
determining the size of an included angle between the reference line and the bilateral symmetry center line of the target human face according to the included angle between the connecting line of any two feature points and the reference line;
and performing rotation correction on the face picture to obtain a new face picture with the included angle being zero.
4. The method of claim 1, wherein performing rotation correction on the face picture according to an included angle between a reference line and a bilateral symmetry center line of the target face to obtain a new face picture with the included angle being zero degree, comprises:
obtaining an affine transformation matrix for picture rotation transformation according to the size of the included angle;
calculating to obtain new coordinate positions of all pixel points in the face image in the new face image according to the affine transformation matrix;
and obtaining the new face picture with the included angle of zero according to the new coordinate positions of all the pixel points in the new face picture.
5. The method of claim 1, wherein determining a minimum bounding rectangle capable of enclosing all face peripheral contour feature points in the face feature points according to the coordinate positions of the face feature points in the new face picture comprises:
acquiring an abscissa set and an ordinate set of the face characteristic points in the new face image;
and determining the minimum circumscribed rectangle according to the abscissa minimum value and the abscissa maximum value in the abscissa set and the ordinate minimum value and the ordinate maximum value in the ordinate set.
6. The method of claim 1, wherein after the face image within the minimum bounding rectangle is cut from the new picture of the face, the method further comprises:
importing the face image into a face feature extraction network model to obtain face feature information; and
determining person identity information corresponding to the face feature information according to a matching comparison result between the face feature information and each piece of pre-stored face feature information in a face feature library, wherein the pre-stored face feature information and the corresponding person identity information are stored in binding in the face feature library.
7. A human face image intercepting device is characterized by comprising an original image acquiring unit, a feature marking unit, a new image acquiring unit, a rectangle determining unit and an image intercepting unit;
the original image acquisition unit is used for acquiring a face image, wherein the face image comprises a target face;
the feature marking unit is in communication connection with the original image acquisition unit and is used for marking the face feature points of the target face in the face image;
the new image acquisition unit is in communication connection with the original image acquisition unit and is used for performing rotation correction on the face image according to the included angle between a reference line and the bilateral symmetry center line of the target face to obtain a new face image with the included angle being zero, wherein the reference line is an image ordinate axis or an image abscissa axis;
the rectangle determining unit is respectively in communication connection with the feature marking unit and the new image acquiring unit and is used for determining a minimum circumscribed rectangle capable of covering all face peripheral contour feature points in the face feature points according to the coordinate positions of the face feature points in the new face image;
the image intercepting unit is in communication connection with the new image acquisition unit and the rectangle determining unit respectively, and is configured to intercept the face image located within the minimum circumscribed rectangle from the new face image.
8. The apparatus of claim 7, wherein the original image acquiring unit comprises a face frame marking subunit, a face frame enlarging subunit and a face screenshot subunit, which are sequentially connected in communication;
the face frame marking subunit is used for marking a face area rectangular frame of a target face in a picture containing at least one face;
the face frame amplifying subunit is configured to amplify the face region rectangular frame according to a preset amplification ratio to obtain an amplified face region rectangular frame;
and the face screenshot subunit is used for intercepting the picture in the amplified face area rectangular frame from the picture.
9. A computer device comprising a memory and a processor which are communicatively connected, wherein the memory is used for storing a computer program, and the processor is used for reading the computer program and executing the face image intercepting method according to any one of claims 1 to 7.
10. A computer-readable storage medium having stored thereon instructions for performing the method of intercepting a facial image according to any one of claims 1-7 when the instructions are run on a computer.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010900665.3A CN112036317A (en) | 2020-08-31 | 2020-08-31 | Face image intercepting method and device and computer equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112036317A true CN112036317A (en) | 2020-12-04 |
Family
ID=73587218
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202010900665.3A Pending CN112036317A (en) | 2020-08-31 | 2020-08-31 | Face image intercepting method and device and computer equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112036317A (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130070973A1 (en) * | 2011-09-15 | 2013-03-21 | Hiroo SAITO | Face recognizing apparatus and face recognizing method |
CN103605965A (en) * | 2013-11-25 | 2014-02-26 | 苏州大学 | Multi-pose face recognition method and device |
CN109948397A (en) * | 2017-12-20 | 2019-06-28 | Tcl集团股份有限公司 | A kind of face image correcting method, system and terminal device |
CN110210393A (en) * | 2019-05-31 | 2019-09-06 | 百度在线网络技术(北京)有限公司 | The detection method and device of facial image |
- 2020-08-31: CN application CN202010900665.3A filed; published as CN112036317A (status: Pending)
Non-Patent Citations (2)
Title |
---|
ANGEL199408: "Single-Face Detection at Arbitrary In-Plane Rotation Angles" (平面内任意旋转角度的单人脸检测), HTTPS://WWW.DOCIN.COM/P-475547242.HTML * |
YANG Lujing et al.: "Intelligent Image Processing and Applications" (《智能图像处理及应用》), 31 March 2019 * |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113469914A (en) * | 2021-07-08 | 2021-10-01 | 网易(杭州)网络有限公司 | Animal face beautifying method and device, storage medium and electronic equipment |
CN113469914B (en) * | 2021-07-08 | 2024-03-19 | 网易(杭州)网络有限公司 | Animal face beautifying method and device, storage medium and electronic equipment |
CN113920557A (en) * | 2021-09-01 | 2022-01-11 | 广州云硕科技发展有限公司 | Visual sense-based credible identity recognition method and system |
CN113920557B (en) * | 2021-09-01 | 2022-09-13 | 广州云硕科技发展有限公司 | Visual sense-based credible identity recognition method and system |
CN113837195A (en) * | 2021-10-19 | 2021-12-24 | 维沃移动通信有限公司 | Image processing method, device, equipment and storage medium |
CN114333030A (en) * | 2021-12-31 | 2022-04-12 | 科大讯飞股份有限公司 | Image processing method, device, equipment and storage medium |
CN114067231A (en) * | 2022-01-14 | 2022-02-18 | 成都飞机工业(集团)有限责任公司 | Part machining feature identification method based on machine vision learning identification |
CN115953823A (en) * | 2023-03-13 | 2023-04-11 | 成都运荔枝科技有限公司 | Face recognition method based on big data |
CN115953823B (en) * | 2023-03-13 | 2023-05-16 | 成都运荔枝科技有限公司 | Face recognition method based on big data |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN112036317A (en) | Face image intercepting method and device and computer equipment | |
US10210415B2 (en) | Method and system for recognizing information on a card | |
EP3358298B1 (en) | Building height calculation method and apparatus, and storage medium | |
CN110569699A (en) | Method and device for carrying out target sampling on picture | |
WO2022105415A1 (en) | Method, apparatus and system for acquiring key frame image, and three-dimensional reconstruction method | |
CN109190617B (en) | Image rectangle detection method and device and storage medium | |
CN111860489A (en) | Certificate image correction method, device, equipment and storage medium | |
US8526684B2 (en) | Flexible image comparison and face matching application | |
CN110647882A (en) | Image correction method, device, equipment and storage medium | |
CN107545223B (en) | Image recognition method and electronic equipment | |
JP5468332B2 (en) | Image feature point extraction method | |
CN110458855B (en) | Image extraction method and related product | |
EP3550467A1 (en) | Image matching method, device and system, and storage medium | |
CN109948420B (en) | Face comparison method and device and terminal equipment | |
CN113449639B (en) | Non-contact data acquisition method of gateway of Internet of things for instrument | |
TW202314634A (en) | Image processing method, electronic device and computer readable storage medium | |
CN111222432A (en) | Face living body detection method, system, equipment and readable storage medium | |
CN108875556A (en) | Method, apparatus, system and the computer storage medium veritified for the testimony of a witness | |
US9286543B2 (en) | Characteristic point coordination system, characteristic point coordination method, and recording medium | |
CN111798374B (en) | Image stitching method, device, equipment and medium | |
CN110110697B (en) | Multi-fingerprint segmentation extraction method, system, device and medium based on direction correction | |
CN115221910A (en) | Two-dimensional code identification method, device and equipment and computer readable storage medium | |
CN106910196B (en) | Image detection method and device | |
CN112907206A (en) | Service auditing method, device and equipment based on video object identification | |
CN113159037B (en) | Picture correction method, device, computer equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20201204 |