CN109492601A - Face comparison method and device, computer-readable medium and electronic equipment - Google Patents
Face comparison method and device, computer-readable medium and electronic equipment
- Publication number
- CN109492601A (application numbers CN201811393300.5A, CN201811393300A)
- Authority
- CN
- China
- Prior art keywords
- image
- facial image
- face
- facial
- comparison method
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Data Mining & Analysis (AREA)
- Multimedia (AREA)
- Human Computer Interaction (AREA)
- Oral & Maxillofacial Surgery (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- Bioinformatics & Computational Biology (AREA)
- General Engineering & Computer Science (AREA)
- Evolutionary Computation (AREA)
- Evolutionary Biology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The invention discloses a face comparison method and apparatus, a computer-readable medium, and an electronic device, relating to the technical field of image processing. The face comparison method comprises: determining a first face image and a second face image; stitching the first face image and the second face image into a target image; inputting the target image into a trained classification model; and comparing the output result of the classification model with a classification threshold, and if the output result is greater than or equal to the classification threshold, determining that the first face image and the second face image correspond to the same user. The invention enables faces to be compared accurately.
Description
Technical field
The present invention relates to the technical field of image processing, and in particular to a face comparison method, a face comparison apparatus, a computer-readable medium, and an electronic device.
Background art
With the rapid development of information technology and the urgent need for automatic identity verification in a variety of scenarios, biometric identification technology has developed at full speed. Among such technologies, face recognition, as a contactless biometric identification technology, is widely used in comparison scenarios such as attendance checking, ID-versus-person verification, self-service hotel check-in, conference sign-in, and the handling of banking business.
At present, the face recognition and comparison process can be implemented with face recognition methods based on template matching, methods based on the Karhunen-Loeve (KL) transform, hidden Markov model methods, methods based on facial texture features, and so on. These methods can satisfy the needs of face comparison to a certain extent. However, they have no capability for autonomous learning, and their adaptability and practicality are poor; for example, a slight change in illumination may lead to inaccurate recognition.
It should be noted that the information disclosed in the Background section above is provided only to enhance understanding of the background of the invention, and may therefore include information that does not constitute prior art already known to a person of ordinary skill in the art.
Summary of the invention
The purpose of the present invention is to provide a face comparison method, a face comparison apparatus, a computer-readable medium, and an electronic device, so as to overcome, at least to a certain extent, the poor adaptability and practicality of the face comparison process caused by the limitations and defects of the related art.
According to an aspect of the present invention, a face comparison method is provided, comprising: determining a first face image and a second face image; stitching the first face image and the second face image into a target image; inputting the target image into a trained classification model; and comparing the output result of the classification model with a classification threshold, and if the output result is greater than or equal to the classification threshold, determining that the first face image and the second face image correspond to the same user.
Optionally, determining the first face image and the second face image comprises: acquiring a first original image and a second original image; and performing face detection on the first original image and the second original image respectively, to determine a first face image corresponding to the first original image and a second face image corresponding to the second original image.
Optionally, before the first face image and the second face image are stitched into the target image, the face comparison method further comprises: performing illumination normalization on the first face image and the second face image respectively.
Optionally, performing illumination normalization on the first face image comprises: performing a gamma transformation on the first face image; performing difference-of-Gaussians (DoG) filtering on the gamma-transformed image; and performing histogram equalization on the DoG-filtered image.
Optionally, stitching the first face image and the second face image into the target image comprises: performing bilinear interpolation on the first face image and the second face image respectively, so as to scale the first face image and the second face image to the same size; and stitching the scaled first face image and second face image into the target image.
Optionally, the classification model is a convolutional neural network, wherein the convolutional neural network comprises 6 convolutional pooling blocks and 3 fully connected layers.
Optionally, each of the 6 convolutional pooling blocks comprises 2 convolutional layers and 1 max pooling layer, wherein each of the 2 convolutional layers comprises a 3 × 3 convolution kernel with a stride of 2.
According to an aspect of the present invention, a face comparison apparatus is provided, which may include an image determining module, an image stitching module, an image input module, and a result comparison module.
Specifically, the image determining module may be used to determine a first face image and a second face image; the image stitching module may be used to stitch the first face image and the second face image into a target image; the image input module may be used to input the target image into a trained classification model; and the result comparison module may be used to compare the output result of the classification model with a classification threshold, and if the output result is greater than or equal to the classification threshold, to determine that the first face image and the second face image correspond to the same user.
Optionally, the image determining module may include an original image acquiring unit and a face detection unit. Specifically, the original image acquiring unit may be used to acquire a first original image and a second original image; the face detection unit may be used to perform face detection on the first original image and the second original image respectively, to determine a first face image corresponding to the first original image and a second face image corresponding to the second original image.
Optionally, the face comparison apparatus may further include an illumination processing module. Specifically, the illumination processing module may be used to perform illumination normalization on the first face image and the second face image respectively.
Optionally, the illumination processing module may include a first processing unit, a second processing unit, and a third processing unit. Specifically, the first processing unit may be used to perform a gamma transformation on the first face image; the second processing unit may be used to perform difference-of-Gaussians filtering on the gamma-transformed image; and the third processing unit may be used to perform histogram equalization on the DoG-filtered image.
Optionally, the image stitching module may include an image compression unit and an image stitching unit. Specifically, the image compression unit may be used to perform bilinear interpolation on the first face image and the second face image respectively, so as to scale the first face image and the second face image to the same size; the image stitching unit may be used to stitch the scaled first face image and second face image into the target image.
Optionally, the classification model is a convolutional neural network, wherein the convolutional neural network comprises 6 convolutional pooling blocks and 3 fully connected layers. Optionally, each of the 6 convolutional pooling blocks comprises 2 convolutional layers and 1 max pooling layer, wherein each of the 2 convolutional layers comprises a 3 × 3 convolution kernel with a stride of 2.
According to an aspect of the present invention, a computer-readable medium is provided, on which a computer program is stored, and when the program is executed by a processor, the face comparison method of any one of the above is implemented.
According to an aspect of the present invention, an electronic device is provided, comprising: one or more processors; and a storage device for storing one or more programs, wherein when the one or more programs are executed by the one or more processors, the one or more processors implement the face comparison method of any one of the above.
In the technical solutions provided by some embodiments of the present invention, the first face image and the second face image are stitched into a target image, the target image is input into a trained classification model, and whether the first face image and the second face image correspond to the same user is determined according to the output result of the classification model. The present invention performs face comparison using the idea of a classification model: the images to be compared are stitched into one image, which is then classified. This is fast and accurate, and avoids the judgment process of feature extraction and feature-similarity computation performed on images by face recognition methods in the related art.
It should be understood that the above general description and the following detailed description are exemplary and explanatory only, and do not limit the disclosure.
Description of the drawings
The accompanying drawings herein are incorporated into and form part of this specification; they show embodiments consistent with the invention and, together with the specification, serve to explain the principles of the invention. Obviously, the drawings in the following description are only some embodiments of the present invention, and other drawings can be obtained from them by a person of ordinary skill in the art without creative effort. In the drawings:
Fig. 1 is a schematic diagram of an exemplary system architecture to which the face comparison method or the face comparison apparatus of an embodiment of the present invention can be applied;
Fig. 2 shows a schematic structural diagram of a computer system of an electronic device suitable for implementing an embodiment of the present invention;
Fig. 3 schematically shows a flowchart of a face comparison method according to an exemplary embodiment of the present invention;
Fig. 4 schematically shows a structural diagram of a convolutional neural network according to an exemplary embodiment of the present invention;
Fig. 5 schematically shows an overall flowchart of the steps involved in a face comparison method according to an exemplary embodiment of the present invention;
Fig. 6 shows a schematic diagram of a first original image and a second original image;
Fig. 7 shows a schematic diagram of the first face image and the second face image obtained after performing face detection on the first original image and the second original image shown in Fig. 6, according to an exemplary embodiment of the present invention;
Fig. 8 shows a schematic diagram of the images obtained after performing illumination normalization on the first face image and the second face image shown in Fig. 7, according to an exemplary embodiment of the present invention;
Fig. 9 shows a schematic diagram of scaling the illumination-normalized images shown in Fig. 8 and stitching them into one image, according to an exemplary embodiment of the present invention;
Fig. 10 schematically shows a block diagram of a face comparison apparatus according to an exemplary embodiment of the present invention;
Fig. 11 schematically shows a block diagram of an image determining module according to an exemplary embodiment of the present invention;
Fig. 12 schematically shows a block diagram of a face comparison apparatus according to another exemplary embodiment of the present invention;
Fig. 13 schematically shows a block diagram of an illumination processing module according to an exemplary embodiment of the present invention;
Fig. 14 schematically shows a block diagram of an image stitching module according to an exemplary embodiment of the present invention.
Detailed description
Example embodiments will now be described more fully with reference to the accompanying drawings. However, the example embodiments can be implemented in many forms and should not be construed as limited to the examples set forth here; rather, these embodiments are provided so that the present invention will be more thorough and complete, and will fully convey the ideas of the example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, many specific details are provided to give a full understanding of embodiments of the present invention. However, those skilled in the art will appreciate that the technical solutions of the present invention may be practiced while omitting one or more of the specific details, or that other methods, components, devices, steps, and so on may be used. In other cases, well-known solutions are not shown or described in detail, to avoid obscuring aspects of the present invention.
In addition, the drawings are only schematic illustrations of the invention and are not necessarily drawn to scale. The same reference numerals in the figures denote the same or similar parts, and their repeated description will be omitted. Some of the block diagrams shown in the drawings are functional entities and do not necessarily correspond to physically or logically independent entities. These functional entities may be implemented in software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
The flowcharts shown in the drawings are merely illustrative and need not include all of the steps. For example, some steps may be decomposed, while others may be merged wholly or partly, so the order actually executed may change according to the actual situation.
Fig. 1 shows a schematic diagram of an exemplary system architecture to which the face comparison method or the face comparison apparatus of an embodiment of the present invention can be applied.
As shown in Fig. 1, the system architecture 100 may include one or more of terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 is the medium providing a communication link between the terminal devices 101, 102, 103 and the server 105. The network 104 may include various connection types, such as wired or wireless communication links, fiber-optic cables, and so on.
It should be understood that the numbers of terminal devices, networks, and servers in Fig. 1 are merely schematic. Depending on implementation needs, there can be any number of terminal devices, networks, and servers. For example, the server 105 may be a server cluster composed of multiple servers, or the like.
A user may use the terminal devices 101, 102, 103 to interact with the server 105 through the network 104, so as to receive or send messages and the like. The terminal devices 101, 102, 103 may be various electronic devices with a display screen, including but not limited to smartphones, tablet computers, portable computers, desktop computers, and so on.
The server 105 may be a server providing various services. For example, the server 105 may acquire a first original image and a second original image, and perform face detection on the first original image and the second original image respectively to determine the corresponding first face image and second face image; perform illumination normalization on the first face image and the second face image; stitch the illumination-normalized first face image and second face image to form a target image; input the target image into a trained classification model, where the classification model may be built on a convolutional neural network; and compare the output result of the classification model with a classification threshold. If the output result is greater than or equal to the classification threshold, it can be determined that the first original image and the second original image correspond to the same user.
In this case, the face comparison apparatus of the present invention is generally located in the server 105.
It should be understood, however, that the face comparison method provided by the present invention may also be executed directly by the terminal devices 101, 102, 103; that is, the terminal devices 101, 102, 103 may directly use the method described below to compare face images. In this case, the present invention need not rely on a server. Correspondingly, the face comparison apparatus may also be arranged in the terminal devices 101, 102, 103.
Fig. 2 shows a schematic structural diagram of a computer system of an electronic device suitable for implementing an embodiment of the present invention.
It should be noted that the computer system 200 of the electronic device shown in Fig. 2 is only an example, and should not impose any limitation on the functions and scope of use of embodiments of the present invention.
As shown in Fig. 2, the computer system 200 includes a central processing unit (CPU) 201, which can perform various appropriate actions and processing according to a program stored in a read-only memory (ROM) 202 or a program loaded from a storage section 208 into a random access memory (RAM) 203. Various programs and data needed for system operation are also stored in the RAM 203. The CPU 201, the ROM 202, and the RAM 203 are connected to each other through a bus 204. An input/output (I/O) interface 205 is also connected to the bus 204.
The following components are connected to the I/O interface 205: an input section 206 including a keyboard, a mouse, and the like; an output section 207 including, for example, a cathode ray tube (CRT), a liquid crystal display (LCD), a speaker, and the like; a storage section 208 including a hard disk and the like; and a communication section 209 including a network interface card such as a LAN card or a modem. The communication section 209 performs communication processing via a network such as the Internet. A drive 210 is also connected to the I/O interface 205 as needed. A removable medium 211, such as a magnetic disk, an optical disc, a magneto-optical disk, or a semiconductor memory, is mounted on the drive 210 as needed, so that a computer program read from it can be installed into the storage section 208 as needed.
In particular, according to an embodiment of the present invention, the processes described below with reference to the flowcharts may be implemented as computer software programs. For example, an embodiment of the present invention includes a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for executing the methods shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network through the communication section 209, and/or installed from the removable medium 211. When the computer program is executed by the central processing unit (CPU) 201, the various functions defined in the system of the present application are executed.
It should be noted that the computer-readable medium shown in the present invention may be a computer-readable signal medium, a computer-readable storage medium, or any combination of the two. A computer-readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of computer-readable storage media may include, but are not limited to: an electrical connection with one or more conductors, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present invention, a computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in connection with an instruction execution system, apparatus, or device. In the present invention, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal can take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium; such a medium can send, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device. The program code contained on a computer-readable medium may be transmitted with any suitable medium, including but not limited to wireless, wire, optical cable, RF, or any suitable combination of the above.
The flowcharts and block diagrams in the drawings illustrate the possible architectures, functions, and operations of systems, methods, and computer program products according to various embodiments of the present invention. In this regard, each box in a flowchart or block diagram may represent a module, a program segment, or a part of code, and the module, program segment, or part of code contains one or more executable instructions for implementing the specified logical function. It should also be noted that in some alternative implementations, the functions marked in the boxes may occur in an order different from that marked in the drawings. For example, two boxes shown in succession may in fact be executed substantially in parallel, or sometimes in the opposite order, depending on the functions involved. It should also be noted that each box in a block diagram or flowchart, and combinations of boxes in block diagrams or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units described in the embodiments of the present invention may be implemented in software or in hardware, and the described units may also be provided in a processor. The names of these units do not, in some cases, constitute a limitation on the units themselves.
As another aspect, the present invention also provides a computer-readable medium, which may be included in the electronic device described in the above embodiments, or may exist separately without being assembled into the electronic device. The above computer-readable medium carries one or more programs, and when the one or more programs are executed by one such electronic device, the electronic device implements the methods described in the following embodiments.
Driven by the need for identity verification, biometric identification technology has developed continuously. Among such technologies, face recognition is a contactless technology with intuitive, visual characteristics that match human habits of thought; on this basis, face recognition has been widely used in fields such as commerce and security. With the development of face recognition technology, related technologies have been widely applied to many aspects of people's production and daily life, not only bringing more convenience to people's lives but also greatly improving the protection of personal and property safety. It can be seen that face recognition technology has broad prospects.
Regional feature analysis methods are widely used in face recognition technology: portrait feature points are extracted from video using computer image-processing technology, and a mathematical model is established by analysis based on the principles of biostatistics. Early face recognition technology was based on the structural features of face geometry; with the appearance of high-precision, high-performance computers, face recognition technology developed rapidly, producing, for example, face recognition methods based on template matching, methods based on the KL transform, hidden Markov model methods, neural network recognition methods, methods based on facial texture features, and so on. These methods perform well on small and medium-scale frontal face databases when image acquisition conditions are ideal and user cooperation is high. However, they have no capability for autonomous learning, and under the influence of ambient light their overall adaptability and practicality are poor.
In recent years, with the rise of deep learning methods, and especially the deepening of research on deep convolutional neural networks, a large number of convolutional neural network models have been applied to image understanding, image recognition, and the like. In the face comparison method and apparatus described below, the present invention determines whether face images correspond to the same user on the basis of a convolutional neural network model, with a view to solving the problems in the related art described above.
Fig. 3 schematically shows a flowchart of a face comparison method according to an exemplary embodiment of the present invention. With reference to Fig. 3, the face comparison method may comprise the following steps:
S32. Determine a first face image and a second face image.
In an exemplary embodiment of the present invention, a face image may be a picture containing a face. Taking a server that implements the face comparison method of the present invention as an example, the first face image and the second face image may be stored in a storage space, and the server may retrieve the first face image and the second face image from this storage space based on, for example, an identifier of the first face image and an identifier of the second face image.
At least one of the first face image and the second face image may be a photo taken by the user through a terminal such as a mobile phone, or a photo taken by a camera in an application scenario, for example a surveillance camera on a street. In addition, at least one of the first face image and the second face image may be a picture obtained from a user's certificate photo (for example, an ID card, an employee badge, and so on). The present invention does not specifically limit the sources of the first face image and the second face image, nor does it specifically limit their color, size, or the amount of storage space they occupy.
The above description takes a picture as an example of a face image. However, those skilled in the art will readily appreciate that at least one of the first face image and the second face image may be an image generated by capturing a frame from a video; for example, a frame containing a face may be captured from a video shot on a user's mobile phone and used as a face image of the present invention.
It should be understood that the terms "first" and "second" are used merely to distinguish different face images, and such terms should not be construed as limiting the invention.
According to some embodiments of the present invention, an image may contain many features unrelated to the face; for example, in a scenic picture containing a person, the sea, the sky, and the beach are features unrelated to the face. In this case, taking a server executing the method of the invention as an example, step S32 may further include the following:
First, the server may acquire a first original image (denoted I_input1) and a second original image (denoted I_input2). The first original image and the second original image may be the images the user wishes to compare; for example, they may be two photos taken with a mobile phone.
Next, the server may perform face detection on the first original image and the second original image respectively, to determine a first face image corresponding to the first original image (denoted I_face1) and a second face image corresponding to the second original image (denoted I_face2).
For the face detection process of the present invention, a Viola-Jones detector may be used to perform face detection on the original images. Specifically, the Haar-like features of the image may first be extracted. Haar-like features reflect the gray-level changes in an image, for example: the eyes are darker than the cheeks, the two sides of the nose bridge are darker than the nose bridge itself, and the mouth is darker than its surroundings. Then, the face detection rate can be obtained in combination with a cascaded AdaBoost algorithm. AdaBoost is an iterative algorithm that trains different classifiers on the same training set and then combines these classifiers to construct a final classifier; the face detection rate can be determined through this final classifier. Next, the server may display in real time, crop, and save the face images whose detection rate meets the requirements. In this way, the first face image I_face1 and the second face image I_face2 can be determined from the first original image I_input1 and the second original image I_input2, respectively.
It should be understood, however, that the present invention may also use detection methods or devices other than the Viola-Jones detector to realize the conversion from original images to face images; these detection methods may, for example, also include detection methods based on an SSD neural network, and so on.
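As a non-limiting illustration of this detection step, the following Python sketch uses the Viola-Jones style Haar cascade detector bundled with OpenCV; the cascade file name, the choice to keep the largest detection, and the variable names are assumptions of this sketch rather than details from the patent.

```python
import cv2

def detect_face(original_image_path):
    """Crop a face image I_face out of an original image I_input
    using a Haar-feature cascade (Viola-Jones with AdaBoost stages)."""
    image = cv2.imread(original_image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda r: r[2] * r[3])  # keep the largest face
    return gray[y:y + h, x:x + w]

face1 = detect_face("input1.jpg")  # I_face1 from I_input1
face2 = detect_face("input2.jpg")  # I_face2 from I_input2
```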
S34. Stitch the first face image and the second face image into a target image.
According to some embodiments of the present invention, after the first face image and the second face image are determined in step S32, illumination normalization may be performed on the first face image and the second face image respectively, so as to remove the influence of illumination on face comparison.
For the illumination normalization process, specifically, a gamma transformation may first be performed on the first face image I_face1 and the second face image I_face2 respectively, so as to correct images that are too bright (for example, due to camera overexposure) or too dark (for example, due to camera underexposure). Specifically, the gamma transformation can be realized with formula 1:

I_face1-gamma = (I_face1)^γ (formula 1)
Formula 1 illustrates only the gamma transformation of the first face image I_face1; those skilled in the art can readily determine the case of the second face image I_face2, which the present invention does not repeat. In addition, γ in formula 1 denotes the gamma coefficient. In an exemplary embodiment of the present invention, the value of the gamma coefficient γ may range from 0.15 to 0.3; preferably, the gamma coefficient γ may take the value 0.2.
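A minimal sketch of formula 1, assuming pixel values are first scaled to [0, 1]; γ = 0.2 follows the preferred value given above.

```python
import numpy as np

def gamma_transform(face, gamma=0.2):
    """Formula 1: I_gamma = I ** gamma, applied to a grayscale face image."""
    normalized = face.astype(np.float32) / 255.0  # map pixel values into [0, 1]
    return np.power(normalized, gamma)
```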
Next, difference-of-Gaussians (DoG) filtering may be performed on the gamma-transformed image to reduce the blurriness of the image. Specifically, the DoG filtering can be realized with formula 2:

I_face1-dog = G_σh * I_face1-gamma − G_σl * I_face1-gamma (formula 2)

where * denotes convolution and G_σ denotes a two-dimensional Gaussian kernel with standard deviation σ.
Similarly, formula 2 illustrates only the DoG-filtered result for the first face image I_face1; those skilled in the art can readily determine the case of the second face image I_face2, which the present invention does not repeat. In addition, σl and σh in formula 2 denote the low-frequency coefficient and the high-frequency coefficient, respectively. In an exemplary embodiment of the present invention, the value of the low-frequency coefficient σl may range from 1 to 3, preferably 2; the value of the high-frequency coefficient σh may range from 0.3 to 0.6, preferably 0.5.
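A sketch of the DoG filtering of formula 2, assuming the low- and high-frequency coefficients are the standard deviations of two Gaussian blurs and using the preferred values σl = 2 and σh = 0.5.

```python
import cv2

def dog_filter(img, sigma_high=0.5, sigma_low=2.0):
    """Formula 2: subtract a strongly blurred copy (large sigma, low
    frequencies) from a weakly blurred copy (small sigma, high frequencies)."""
    blur_high = cv2.GaussianBlur(img, (0, 0), sigma_high)
    blur_low = cv2.GaussianBlur(img, (0, 0), sigma_low)
    return blur_high - blur_low
```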
Then, histogram equalization may be performed on the DoG-filtered image to enhance the contrast of the image. Specifically, the histogram equalization can be realized with formula 3:

I_face1-li = F_EQ(I_face1-dog) (formula 3)
Similarly, formula 3 illustrates only the result of histogram equalization for the first face image I_face1; those skilled in the art can readily determine the case of the second face image I_face2, which the present invention does not repeat. In formula 3, F_EQ is the histogram equalization mapping. In addition, the pixel mapping of histogram equalization can be realized with formula 4:

s_k = (L − 1) × Σ_{j=0..k} (n_j / n) (formula 4)

where n is the total number of pixels in the image, n_j is the number of pixels at the current gray level, and L is the total number of possible gray levels in the image.
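A sketch of formulas 3 and 4; rescaling the DoG output back to 256 gray levels before equalizing is an assumption of this sketch.

```python
import cv2
import numpy as np

def equalize(img):
    """Formulas 3 and 4: I_li = F_EQ(I_dog), where gray level k maps to
    (L - 1) * (cumulative pixel count up to k) / n."""
    rescaled = cv2.normalize(img, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    hist = np.bincount(rescaled.ravel(), minlength=256)  # n_j per gray level
    cdf = hist.cumsum() / rescaled.size                  # sum(n_j) / n
    mapping = np.round(255 * cdf).astype(np.uint8)       # (L - 1) * cdf, L = 256
    return mapping[rescaled]
```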
Through the gamma transformation, DoG filtering, and histogram equalization described above, illumination normalization can be performed on the first face image and the second face image, so as to remove the influence of illumination on face comparison.
It should be understood that the illumination normalization process above is only an exemplary embodiment of the present invention for removing the influence of illumination on face comparison. In scenarios with no illumination influence or only weak illumination influence, the first face image and the second face image may be stitched directly without illumination normalization.
In step S34, the server may stitch the illumination-normalized first face image and second face image to generate the target image.
First, bilinear interpolation may be performed on the first face image and the second face image respectively, so as to scale the first face image and the second face image to the same size.
Specifically, the formula corresponding to bilinear interpolation is shown as formula 5:

f(x, y) ≈ [f(Q11)(x2 − x)(y2 − y) + f(Q21)(x − x1)(y2 − y) + f(Q12)(x2 − x)(y − y1) + f(Q22)(x − x1)(y − y1)] / [(x2 − x1)(y2 − y1)] (formula 5)

where f(Q_mn) denotes the pixel value at the point (x_m, y_n).
Through the bilinear interpolation shown in formula 5, the illumination-normalized first face image I_face1-li and second face image I_face2-li can be uniformly scaled to a size of w*h, where w refers to the width of the image and h refers to the height of the image.
Next, the scaled first face image and second face image can be stitched into the target image. According to some embodiments of the invention, the scaled images may be stitched horizontally, generating a target image of size (2*w)*h, in which case the two images are arranged side by side. According to other embodiments, the scaled images may also be stitched vertically, generating a target image of size w*(2*h), in which case the two images are arranged one above the other.
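A sketch of the scaling-and-stitching step; the working size w = h = 128 is an assumed value, `cv2.INTER_LINEAR` performs the bilinear interpolation of formula 5, and `np.hstack`/`np.vstack` produce the (2*w)*h and w*(2*h) layouts respectively.

```python
import cv2
import numpy as np

W, H = 128, 128  # assumed common size w*h; the patent does not fix a value

def stitch(face1, face2, horizontal=True):
    """Scale both face images to w*h with bilinear interpolation,
    then stitch them into one target image."""
    a = cv2.resize(face1, (W, H), interpolation=cv2.INTER_LINEAR)
    b = cv2.resize(face2, (W, H), interpolation=cv2.INTER_LINEAR)
    return np.hstack([a, b]) if horizontal else np.vstack([a, b])
```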
S36. Input the target image into a trained classification model.
In an exemplary embodiment of the present invention, the classification model may be a convolutional neural network. The structure of the convolutional neural network of the present invention is described below with reference to Fig. 4.
With reference to Fig. 4, the convolutional neural network of the present invention may include 6 convolutional pooling blocks and 3 fully connected layers. Specifically, each of convolutional pooling blocks 1 through 6 may include 2 convolutional layers and 1 max pooling layer, where each convolutional layer includes a 3 × 3 convolution kernel with a stride of 2.
In addition, fully connected layer 1 may be a 2048-dimensional vector followed by an activation function layer (that is, a ReLU layer); fully connected layer 2 may be a 1024-dimensional vector, also followed by an activation function layer; and fully connected layer 3 may be a 256-dimensional vector. Fully connected layer 3 may be connected to a softmax layer, whose output is the classification result. The activation function is the ReLU function f(x) = max(0, x), and the output function is the softmax function:

softmax(z_i) = e^{z_i} / Σ_j e^{z_j}
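The following PyTorch sketch shows one plausible reading of Fig. 4. Two points are assumptions rather than patent text: the channel widths of the blocks, and the placement of the stride of 2 on the max pooling layers with stride-1 convolutions (twelve stride-2 convolutions plus six pooling layers would shrink the input far below one pixel). A final 256-to-2 projection is also added so that the softmax yields the two classes described below.

```python
import torch
import torch.nn as nn

class FaceCompareNet(nn.Module):
    """Six conv-pool blocks (2 convs + 1 max pool each), then fully
    connected layers of 2048, 1024, and 256 units, then softmax."""

    def __init__(self, in_channels=1, num_classes=2):
        super().__init__()
        layers, channels = [], in_channels
        for width in (32, 64, 128, 256, 256, 256):  # assumed channel widths
            layers += [
                nn.Conv2d(channels, width, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(width, width, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                nn.MaxPool2d(kernel_size=2, stride=2),  # stride-2 downsampling
            ]
            channels = width
        self.features = nn.Sequential(*layers)
        # A 128x256 stitched input is downsampled 2^6 = 64x to 2x4.
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(256 * 2 * 4, 2048), nn.ReLU(inplace=True),
            nn.Linear(2048, 1024), nn.ReLU(inplace=True),
            nn.Linear(1024, 256),
            nn.Linear(256, num_classes),  # assumed projection to the softmax layer
        )

    def forward(self, x):
        return torch.softmax(self.classifier(self.features(x)), dim=1)
```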
The face comparison method of the present invention may further include a scheme for training the convolutional neural network shown in Fig. 4. Specifically, a training set may be formed in the same manner as described above for determining face images, performing illumination normalization on the images, and stitching the images, and the convolutional neural network may be trained with this training set to optimize the weights of each convolution kernel.
After the trained convolutional neural network has been determined, the target image formed by stitching in step S34 is input into the convolutional neural network.
It should be understood that the convolutional neural network shown in Fig. 4 is merely exemplary, and the present invention may also use network structures of other forms to form the above classification model.
S38. Compare the output result of the classification model with a classification threshold, and if the output result is greater than or equal to the classification threshold, determine that the first face image and the second face image correspond to the same user.
In an exemplary embodiment of the present invention, the classification result output by the classification model (that is, the convolutional neural network described above) is a floating-point number between 0 and 1 that represents the similarity of the classification. The value of the classification threshold may range from 0.6 to 0.8.
In some embodiments of the invention, the classification threshold may be set to 0.8. If the classification result output by the classification model is greater than or equal to 0.8, it can be determined that the first face image and the second face image correspond to the same user, that is, that the first original image and the second original image correspond to the same user. If the classification result output by the classification model is less than 0.8, it can be determined that the two images correspond to different users.
In addition, when training the above classification model, if the stitched images belong to the same user, the sample corresponding to the stitched image may be labeled as class 1; if the stitched images do not belong to the same user, the sample corresponding to the stitched image may be labeled as class 0. In this case, still taking a classification threshold of 0.8 as an example, if the classification result of the target image is greater than or equal to 0.8, it is class 1; if the classification result of the target image is less than 0.8, it is class 0. It should be understood that class "1" and class "0" here are merely exemplary, and different classes may also be characterized by other identifiers; this exemplary embodiment places no particular limitation on this.
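A sketch of the threshold comparison of step S38, reusing the hypothetical `FaceCompareNet` sketched above on a single-channel stitched target image; the probabilities in the comment come from the Fig. 6 to Fig. 9 example described later.

```python
import torch

THRESHOLD = 0.8  # classification threshold used in this embodiment

def same_user(model, target_image):
    """Return True when the class-1 ('same user') probability
    is greater than or equal to the classification threshold."""
    # single-channel H x W array -> 1 x 1 x H x W float tensor in [0, 1]
    x = torch.from_numpy(target_image).float().unsqueeze(0).unsqueeze(0) / 255.0
    with torch.no_grad():
        probs = model(x)[0]  # e.g. tensor([0.1474, 0.8526]) for classes 0 and 1
    return probs[1].item() >= THRESHOLD
```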
The face comparison method of the present invention is described below with reference to Fig. 5. With reference to Fig. 5, in step S512 the server may perform face detection on two original images to determine two face images; in step S514 the server may perform illumination normalization on the two face images to remove the influence of illumination; in step S516 the two illumination-normalized face images may be stitched to obtain a target image; in step S518 the target image may be input into the trained convolutional neural network, so as to classify the target image; and in step S520 the classification result output by the convolutional neural network may be determined. If the classification result is class 1, it can be determined in step S522 that the two original images correspond to the same user; if the classification result is class 0, it can be determined in step S524 that the two original images correspond to different users.
In addition, the face comparison method of the present invention may also include the process of training the convolutional neural network. Still referring to Fig. 5, in step S502 the server may perform face detection on many groups of original images to determine each group of face images; in step S504 illumination normalization may be performed on each group of images; in step S506 each group of images may be stitched to construct training samples and thus build a training set; in step S508 the convolutional neural network is constructed and trained with the training set built in step S506; and in step S510 the trained convolutional neural network can be determined.
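A compressed sketch of the training stage of steps S502 to S510, assuming a data loader of stitched target images labeled 1 (same user) or 0 (different users); the optimizer, learning rate, and the trick of bypassing the softmax so that `CrossEntropyLoss` sees raw logits are all assumptions of this sketch.

```python
import torch
import torch.nn as nn

def train(model, loader, epochs=10, lr=1e-3):
    """Optimize the convolution kernel weights on labeled stitched pairs."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()  # expects raw logits, hence no softmax here
    model.train()
    for _ in range(epochs):
        for images, labels in loader:  # images: stitched targets; labels: 0 or 1
            logits = model.classifier(model.features(images))
            loss = loss_fn(logits, labels)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```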
It should be understood that the above description covers only the case in which two images are stitched. However, those skilled in the art can use the ideas of the present invention to realize the stitching and classification of three or more images, and thus a face comparison process for three or more images. Such a scheme also belongs to the protection scope of the present invention.
The face comparison process of the present invention is described below with reference to Fig. 6 to Fig. 9.
Fig. 6 shows a schematic diagram of a first original image and a second original image; Fig. 7 shows the images obtained after performing face detection on the two original images in Fig. 6, denoted the first face image and the second face image respectively; Fig. 8 shows the images obtained after performing illumination normalization on the face images in Fig. 7; and Fig. 9 shows the image obtained by scaling and stitching the images in Fig. 8, this image being an example of the target image mentioned above.
In the example shown in Fig. 6 to Fig. 9, the classification results of the classification model are as follows: the result for class 0 is 0.1474, and the result for class 1 is 0.8526. With the classification threshold set to 0.8, it can be determined that the two original images in Fig. 6 correspond to the same user.
In the face comparison method of the present invention, on the one hand, face comparison is performed using the idea of a classification model: the images to be compared are stitched into one image, which is then classified. This is fast and accurate, and avoids the judgment process of feature extraction and feature-similarity computation performed on images by face recognition methods in the related art. On the other hand, illumination normalization is performed on the images in advance, greatly reducing the influence of illumination on face recognition. In yet another aspect, the stitched image is classified using a convolutional neural network, an approach with high recognition accuracy and fast processing speed.
It should be noted that although the steps of the method of the present invention are depicted in the drawings in a particular order, this does not require or imply that these steps must be executed in that particular order, or that all of the steps shown must be executed to achieve the desired result. Additionally or alternatively, certain steps may be omitted, multiple steps may be merged into one step for execution, and/or one step may be decomposed into multiple steps for execution, and so on.
Further, this exemplary embodiment also provides a face comparison apparatus.
Figure 10 schematically shows a block diagram of a face comparison apparatus according to an exemplary embodiment of the present invention. With reference to Figure 10, the face comparison apparatus 10 according to an exemplary embodiment of the present invention may include an image determining module 101, an image stitching module 103, an image input module 105, and a result comparison module 107.
Specifically, the image determining module 101 may be used to determine a first face image and a second face image; the image stitching module 103 may be used to stitch the first face image and the second face image into a target image; the image input module 105 may be used to input the target image into a trained classification model; and the result comparison module 107 may be used to compare the output result of the classification model with a classification threshold, and if the output result is greater than or equal to the classification threshold, to determine that the first face image and the second face image correspond to the same user.
The face comparison apparatus of the present invention performs face comparison using the idea of a classification model: the images to be compared are stitched into one image, which is then classified. This is fast and accurate, and avoids the judgment process of feature extraction and feature-similarity computation performed on images by face recognition methods in the related art.
According to an exemplary embodiment of the present invention, with reference to Figure 11, the image determining module 101 may include an original image acquiring unit 1101 and a face detection unit 1103. Specifically, the original image acquiring unit 1101 may be used to acquire a first original image and a second original image; the face detection unit 1103 may be used to perform face detection on the first original image and the second original image respectively, to determine a first face image corresponding to the first original image and a second face image corresponding to the second original image.
In the present exemplary embodiment, detecting faces can greatly narrow the scope of face comparison and reduce the amount of computation.
According to an exemplary embodiment of the present invention, with reference to Figure 12, compared with the face comparison apparatus 10, the face comparison apparatus 12 may include, in addition to the image determining module 101, the image stitching module 103, the image input module 105, and the result comparison module 107, an illumination processing module 121. Specifically, the illumination processing module 121 may be used to perform illumination normalization on the first face image and the second face image respectively.
According to an exemplary embodiment of the present invention, with reference to Figure 13, the illumination processing module 121 may include a first processing unit 1301, a second processing unit 1303, and a third processing unit 1305. Specifically, the first processing unit 1301 may be used to perform a gamma transformation on the first face image; the second processing unit 1303 may be used to perform difference-of-Gaussians filtering on the gamma-transformed image; and the third processing unit 1305 may be used to perform histogram equalization on the DoG-filtered image.
By performing illumination normalization on the images in advance, the influence of illumination on face recognition is greatly reduced.
According to an exemplary embodiment of the present invention, with reference to Figure 14, the image stitching module 103 may include an image compression unit 1401 and an image stitching unit 1403. Specifically, the image compression unit 1401 may be used to perform bilinear interpolation on the first face image and the second face image respectively, so as to scale the first face image and the second face image to the same size; the image stitching unit 1403 may be used to stitch the scaled first face image and second face image into the target image.
According to an exemplary embodiment of the present invention, the classification model is a convolutional neural network, wherein the convolutional neural network includes 6 convolutional pooling blocks and 3 fully connected layers. According to an exemplary embodiment of the present invention, each of the 6 convolutional pooling blocks includes 2 convolutional layers and 1 max pooling layer, wherein each of the 2 convolutional layers includes a 3 × 3 convolution kernel with a stride of 2.
Classifying the stitched image with a convolutional neural network yields both high recognition accuracy and fast processing speed.
Since each functional module of the face comparison apparatus of the embodiment of the present invention is the same as in the method embodiment of the invention described above, details are not repeated here.
In addition, the above drawings are only schematic illustrations of the processing included in the methods according to exemplary embodiments of the present invention, and are not intended as limitations. It is easy to understand that the processing shown in the above drawings does not indicate or limit the temporal order of these processes. It is also easy to understand that these processes may be executed, for example, synchronously or asynchronously in multiple modules.
It should be noted that although several modules or units of the device for executing actions are mentioned in the above detailed description, this division is not mandatory. In fact, according to embodiments of the present invention, the features and functions of two or more modules or units described above may be embodied in one module or unit. Conversely, the features and functions of one module or unit described above may be further divided and embodied by multiple modules or units.
Those skilled in the art, after considering the specification and practicing the invention disclosed here, will readily think of other embodiments of the present invention. The present application is intended to cover any variations, uses, or adaptations of the invention that follow the general principles of the invention and include common knowledge or conventional techniques in the art not disclosed by the present invention. The specification and examples are to be considered exemplary only, and the true scope and spirit of the invention are indicated by the claims.
It should be understood that the present invention is not limited to the precise structures described above and shown in the accompanying drawings, and that various modifications and changes may be made without departing from its scope. The scope of the present invention is limited only by the appended claims.
Claims (10)
1. A face comparison method, characterized by comprising:
determining a first face image and a second face image;
stitching the first face image and the second face image into a target image;
inputting the target image into a trained classification model; and
comparing an output result of the classification model with a classification threshold, and if the output result is greater than or equal to the classification threshold, determining that the first face image and the second face image correspond to the same user.
2. The face comparison method according to claim 1, characterized in that determining the first face image and the second face image comprises:
acquiring a first original image and a second original image; and
performing face detection on the first original image and the second original image respectively, to determine a first face image corresponding to the first original image and a second face image corresponding to the second original image.
3. The face comparison method according to claim 1, characterized in that before the first face image and the second face image are stitched into the target image, the face comparison method further comprises:
performing illumination normalization on the first face image and the second face image respectively.
4. The face comparison method according to claim 3, characterized in that performing illumination normalization on the first face image comprises:
performing a gamma transformation on the first face image;
performing difference-of-Gaussians filtering on the gamma-transformed image; and
performing histogram equalization on the DoG-filtered image.
5. The face comparison method according to claim 1, characterized in that stitching the first face image and the second face image into the target image comprises:
performing bilinear interpolation on the first face image and the second face image respectively, so as to scale the first face image and the second face image to the same size; and
stitching the scaled first face image and second face image into the target image.
6. The face comparison method according to claim 1, characterized in that the classification model is a convolutional neural network, wherein the convolutional neural network comprises 6 convolutional pooling blocks and 3 fully connected layers.
7. The face comparison method according to claim 6, characterized in that each of the 6 convolutional pooling blocks comprises 2 convolutional layers and 1 max pooling layer, wherein each of the 2 convolutional layers comprises a 3 × 3 convolution kernel with a stride of 2.
8. A face comparison apparatus, characterized by comprising:
an image determining module for determining a first face image and a second face image;
an image stitching module for stitching the first face image and the second face image into a target image;
an image input module for inputting the target image into a trained classification model; and
a result comparison module for comparing an output result of the classification model with a classification threshold, and if the output result is greater than or equal to the classification threshold, determining that the first face image and the second face image correspond to the same user.
9. A computer-readable medium having a computer program stored thereon, characterized in that the program, when executed by a processor, implements the face comparison method according to any one of claims 1 to 7.
10. An electronic device, characterized by comprising:
one or more processors; and
a storage device for storing one or more programs, wherein, when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the face comparison method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811393300.5A CN109492601A (en) | 2018-11-21 | 2018-11-21 | Face comparison method and device, computer-readable medium and electronic equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109492601A (en) | 2019-03-19 |
Family
ID=65697273
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811393300.5A Pending CN109492601A (en) | 2018-11-21 | 2018-11-21 | Face comparison method and device, computer-readable medium and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109492601A (en) |
Patent Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20110002504A1 (en) * | 2006-05-05 | 2011-01-06 | New Jersey Institute Of Technology | System and/or method for image tamper detection |
CN101089874A (en) * | 2006-06-12 | 2007-12-19 | 华为技术有限公司 | Identify recognising method for remote human face image |
CN102779273A (en) * | 2012-06-29 | 2012-11-14 | 重庆邮电大学 | Human-face identification method based on local contrast pattern |
CN104281572A (en) * | 2013-07-01 | 2015-01-14 | 中国科学院计算技术研究所 | Target matching method and system based on mutual information |
CN205028305U (en) * | 2015-07-14 | 2016-02-10 | 赵忠义 | Parcel bar code intelligent recognition device |
US20170127045A1 (en) * | 2015-10-28 | 2017-05-04 | Toppano Co., Ltd. | Image calibrating, stitching and depth rebuilding method of a panoramic fish-eye camera and a system thereof |
CN107122744A (en) * | 2017-04-28 | 2017-09-01 | 武汉神目信息技术有限公司 | A kind of In vivo detection system and method based on recognition of face |
CN107103308A (en) * | 2017-05-24 | 2017-08-29 | 武汉大学 | A kind of pedestrian's recognition methods again learnt based on depth dimension from coarse to fine |
CN107273872A (en) * | 2017-07-13 | 2017-10-20 | 北京大学深圳研究生院 | The depth discrimination net model methodology recognized again for pedestrian in image or video |
CN108304788A (en) * | 2018-01-18 | 2018-07-20 | 陕西炬云信息科技有限公司 | Face identification method based on deep neural network |
CN108288252A (en) * | 2018-02-13 | 2018-07-17 | 北京旷视科技有限公司 | Image batch processing method, device and electronic equipment |
CN108509862A (en) * | 2018-03-09 | 2018-09-07 | 华南理工大学 | Anti- angle and the fast human face recognition for blocking interference |
Non-Patent Citations (2)
Title |
---|
XIONG, Mingfu, et al.: "Person re-identification via multiple coarse-to-fine deep metrics", Proceedings of the Twenty-Second European Conference on Artificial Intelligence (ECAI '16) * |
CHEN, Huiyan: "Intelligent Vehicle Theory and Application" (《智能车辆理论与应用》), 31 July 2018 * |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110163169A (en) * | 2019-05-27 | 2019-08-23 | 北京达佳互联信息技术有限公司 | Face identification method, device, electronic equipment and storage medium |
CN110246133A (en) * | 2019-06-24 | 2019-09-17 | 中国农业科学院农业信息研究所 | A kind of corn kernel classification method, device, medium and equipment |
CN110246133B (en) * | 2019-06-24 | 2021-05-07 | 中国农业科学院农业信息研究所 | Corn kernel classification method, device, medium and equipment |
CN111798376A (en) * | 2020-07-08 | 2020-10-20 | 泰康保险集团股份有限公司 | Image recognition method and device, electronic equipment and storage medium |
CN111798376B (en) * | 2020-07-08 | 2023-10-17 | 泰康保险集团股份有限公司 | Image recognition method, device, electronic equipment and storage medium |
CN114170412A (en) * | 2020-08-20 | 2022-03-11 | 京东科技控股股份有限公司 | Certificate authenticity identification method, device and equipment |
CN112015966A (en) * | 2020-10-19 | 2020-12-01 | 北京神州泰岳智能数据技术有限公司 | Image searching method and device, electronic equipment and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Khan et al. | Deep unified model for face recognition based on convolution neural network and edge computing | |
CN109492601A (en) | Face comparison method and device, computer-readable medium and electronic equipment | |
CA2934514C (en) | System and method for identifying faces in unconstrained media | |
CN112954450B (en) | Video processing method and device, electronic equipment and storage medium | |
CN109635627A (en) | Pictorial information extracting method, device, computer equipment and storage medium | |
CN109214343A (en) | Method and apparatus for generating face critical point detection model | |
CN108388878A (en) | The method and apparatus of face for identification | |
CN110210276A (en) | A kind of motion track acquisition methods and its equipment, storage medium, terminal | |
CN107578017A (en) | Method and apparatus for generating image | |
CN109543714A (en) | Acquisition methods, device, electronic equipment and the storage medium of data characteristics | |
CN106874826A (en) | Face key point-tracking method and device | |
CN105488519B (en) | A kind of video classification methods based on video size information | |
CN111832568A (en) | License plate recognition method, and training method and device of license plate recognition model | |
CN108509892A (en) | Method and apparatus for generating near-infrared image | |
CN108694719A (en) | image output method and device | |
CN108280413A (en) | Face identification method and device | |
CN107341464A (en) | A kind of method, equipment and system for being used to provide friend-making object | |
CN114078275A (en) | Expression recognition method and system and computer equipment | |
US20230036338A1 (en) | Method and apparatus for generating image restoration model, medium and program product | |
CN107025444A (en) | Piecemeal collaboration represents that embedded nuclear sparse expression blocks face identification method and device | |
Li et al. | Data-driven affective filtering for images and videos | |
CN109920016A (en) | Image generating method and device, electronic equipment and storage medium | |
CN110427915A (en) | Method and apparatus for output information | |
CN108446658A (en) | The method and apparatus of facial image for identification | |
CN108596070A (en) | Character recognition method, device, storage medium, program product and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20190319 |