CN107578034A - information generating method and device - Google Patents
- Publication number
- CN107578034A CN107578034A CN201710910131.7A CN201710910131A CN107578034A CN 107578034 A CN107578034 A CN 107578034A CN 201710910131 A CN201710910131 A CN 201710910131A CN 107578034 A CN107578034 A CN 107578034A
- Authority
- CN
- China
- Prior art keywords
- image
- probability
- face region
- human face
- mentioned
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Image Processing (AREA)
- Image Analysis (AREA)
Abstract
The embodiments of the present application disclose an information generating method and device. One embodiment of the method includes: acquiring a to-be-detected image and face region information obtained by performing face detection on the to-be-detected image in advance, the face region information indicating the face region in the to-be-detected image; extracting a face image from the to-be-detected image based on the face region information; inputting the face image into a pre-trained convolutional neural network to obtain image feature information, the convolutional neural network being used to extract image features; parsing the image feature information to determine the probability that the face image is clear; and determining, based on the probability, whether the face image is clear, and generating a detection result. This embodiment improves image detection efficiency.
Description
Technical field
The present application relates to the field of computer technology, specifically to the field of Internet technology, and more particularly to an information generating method and device.
Background
At present, face recognition has many application scenarios, such as face-based payment and identity verification. During face recognition, an insufficiently clear face image may lead to recognition errors or to abnormal conditions such as system crashes. It is therefore particularly important to detect whether a face image is clear.
Summary of the invention
The purpose of the embodiments of the present application is to propose an information generating method and device.
In a first aspect, an embodiment of the present application provides an information generating method, including: acquiring a to-be-detected image and face region information obtained by performing face detection on the to-be-detected image in advance, the face region information indicating the face region in the to-be-detected image; extracting a face image from the to-be-detected image based on the face region information; inputting the face image into a pre-trained convolutional neural network to obtain image feature information, the convolutional neural network being used to extract image features; parsing the image feature information to determine the probability that the face image is clear; and determining, based on the probability, whether the face image is clear, and generating a detection result.
In some embodiments, determining, based on the probability, whether the face image is clear includes: determining whether the probability is lower than a probability threshold and, if so, determining that the face image is not clear.
In some embodiments, parsing the image feature information to determine the probability that the face image is clear includes: inputting the image feature information into a pre-trained probability calculation model to obtain the probability that the face image is clear, the probability calculation model characterizing the correspondence between the image feature information of an image containing a face and the probability that the image is clear.
In some embodiments, the convolutional neural network and the probability calculation model are obtained through the following training steps: extracting a preset training sample including a sample image showing a face and an annotation of the sample image, the annotation including an identifier characterizing whether the sample image is clear, the identifier being either a first identifier characterizing that the sample image is clear or a second identifier characterizing that the sample image is not clear; and using a machine learning method to train the convolutional neural network and the probability calculation model based on the sample image, the annotation, a preset classification loss function and a back-propagation algorithm, the classification loss function characterizing the degree of difference between the probability output by the probability calculation model and the identifier included in the annotation.
In some embodiments, the convolutional neural network includes five convolutional layers and five pooling layers, the pooling layers performing max pooling on the input information with a preset window size and a preset window sliding stride.
In some embodiments, extracting a face image from the to-be-detected image based on the face region information includes: enlarging the range of the face region indicated by the face region information to obtain a first face region; and cropping the first face region to obtain the face image.
In some embodiments, the face region is a rectangular region, and enlarging the range of the face region indicated by the face region information includes: enlarging the height and width of the face region indicated by the face region information by a preset multiple, or increasing them by a preset value.
In a second aspect, an embodiment of the present application provides an information generation device, including: an acquiring unit configured to acquire a to-be-detected image and face region information obtained by performing face detection on the to-be-detected image in advance, the face region information indicating the face region in the to-be-detected image; an extraction unit configured to extract a face image from the to-be-detected image based on the face region information; an input unit configured to input the face image into a pre-trained convolutional neural network to obtain image feature information, the convolutional neural network being used to extract image features; a determining unit configured to parse the image feature information to determine the probability that the face image is clear; and a generation unit configured to determine, based on the probability, whether the face image is clear, and to generate a detection result.
In some embodiments, the generation unit includes a determination subunit configured to determine whether the probability is lower than a probability threshold and, if so, to determine that the face image is not clear.
In some embodiments, the determining unit includes an input subunit configured to input the image feature information into a pre-trained probability calculation model to obtain the probability that the face image is clear, the probability calculation model characterizing the correspondence between the image feature information of an image containing a face and the probability that the image is clear.
In some embodiments, the convolutional neural network and the probability calculation model are obtained through the following training steps: extracting a preset training sample including a sample image showing a face and an annotation of the sample image, the annotation including an identifier characterizing whether the sample image is clear, the identifier being either a first identifier characterizing that the sample image is clear or a second identifier characterizing that the sample image is not clear; and using a machine learning method to train the convolutional neural network and the probability calculation model based on the sample image, the annotation, a preset classification loss function and a back-propagation algorithm, the classification loss function characterizing the degree of difference between the probability output by the probability calculation model and the identifier included in the annotation.
In some embodiments, the convolutional neural network includes five convolutional layers and five pooling layers, the pooling layers performing max pooling on the input information with a preset window size and a preset window sliding stride.
In some embodiments, the extraction unit includes: an enlarging subunit configured to enlarge the range of the face region indicated by the face region information to obtain a first face region; and a cropping subunit configured to crop the first face region to obtain the face image.
In some embodiments, the face region is a rectangular region, and the enlarging subunit is further configured to enlarge the height and width of the face region indicated by the face region information by a preset multiple, or to increase them by a preset value.
In a third aspect, an embodiment of the present application provides an electronic device including one or more processors and a storage device for storing one or more programs; when the one or more programs are executed by the one or more processors, the one or more processors implement the method described in any implementation of the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, the method described in any implementation of the first aspect is implemented.
According to the information generating method and device provided by the embodiments of the present application, a to-be-detected image is acquired together with face region information obtained by performing face detection on the image in advance and indicating the face region in the image, so that a face image can be extracted from the to-be-detected image based on the face region information. The face image is then input into a pre-trained convolutional neural network to obtain image feature information, which is parsed to determine the probability that the face image is clear. Finally, whether the face image is clear is determined based on this probability, and a detection result is generated. The extraction of the face image is thus used effectively to narrow the image detection range, which can improve image detection efficiency.
Moreover, a convolutional neural network can generally extract multi-dimensional image feature information from an image. Determining the probability that the face image is clear based on the image feature information extracted by the pre-trained convolutional neural network can improve the accuracy of this probability, and in turn the accuracy of the judgment of whether the face image is clear.
Brief description of the drawings
Other features, objects and advantages of the present application will become more apparent from the following detailed description of non-limiting embodiments, made with reference to the accompanying drawings:
Fig. 1 is an exemplary system architecture diagram to which the present application can be applied;
Fig. 2 is a flowchart of one embodiment of the information generating method according to the present application;
Fig. 3 is a schematic diagram of an application scenario of the information generating method according to the present application;
Fig. 4 is a flowchart of another embodiment of the information generating method according to the present application;
Fig. 5 is a structural schematic diagram of one embodiment of the information generation device according to the present application;
Fig. 6 is a structural schematic diagram of a computer system suitable for implementing the electronic device of the embodiments of the present application.
Detailed description
The present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are used only to explain the related invention and do not limit it. It should also be noted that, for ease of description, only the parts related to the invention are shown in the accompanying drawings.
It should be noted that, where there is no conflict, the embodiments of the present application and the features in the embodiments may be combined with each other. The present application is described in detail below with reference to the accompanying drawings and in conjunction with the embodiments.
Fig. 1 shows an exemplary system architecture 100 to which embodiments of the information generating method or information generation device of the present application can be applied.
As shown in Fig. 1, the system architecture 100 may include a data storage server 101, a network 102 and an image processing server 103. The network 102 serves as a medium providing a communication link between the data storage server 101 and the image processing server 103. The network 102 may include various connection types, such as wired or wireless communication links, or fiber-optic cables.
The data storage server 101 may be a server providing various services, for example a server for storing images containing faces together with the face region information indicating the face regions in those images. Optionally, the data storage server 101 may also have a face detection function, and the face region information may be information generated after the data storage server 101 performs face detection on the image.
The image processing server 103 may be a server providing various services, for example one that acquires, from the data storage server 101, a to-be-detected image and the face region information indicating the face region in that image, performs a corresponding detection operation based on the to-be-detected image and the face region information, and generates a detection result.
It should be noted that the information generating method provided by the embodiments of the present application is generally performed by the image processing server 103; correspondingly, the information generation device is generally arranged in the image processing server 103.
It should be pointed out that, if the to-be-detected image to be acquired by the image processing server 103 and the face region information indicating the face region in that image are stored locally in the image processing server 103 in advance, the system architecture 100 may not include the data storage server 101.
It should be understood that the numbers of data storage servers, networks and image processing servers in Fig. 1 are merely illustrative. There may be any number of data storage servers, networks and image processing servers, depending on implementation needs.
With continued reference to Fig. 2, a flow 200 of one embodiment of the information generating method according to the present application is shown. The flow 200 of the information generating method includes the following steps:
Step 201: acquire a to-be-detected image and face region information obtained by performing face detection on the to-be-detected image in advance, the face region information indicating the face region in the to-be-detected image.
In this embodiment, the electronic device on which the information generating method runs (for example, the image processing server 103 shown in Fig. 1) may acquire, from a connected data storage server (for example, the data storage server 101 shown in Fig. 1) through a wired or wireless connection, the to-be-detected image and the face region information obtained by performing face detection on the image in advance and indicating the face region in the image. Of course, if the to-be-detected image and the face region information are stored locally in the electronic device in advance, the electronic device may acquire them locally.
It should be noted that the face region may have an arbitrary shape (for example, circular or rectangular). Here, when the face region in the to-be-detected image is a circular region, the face region information may include, for example, the coordinates of the center point of the face region and the radius of the face region. When the face region in the to-be-detected image is a rectangular region, the face region information may include, for example, the coordinates of at least one vertex of the face region together with its height and width.
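The two variants of face region information just described can be sketched as simple containers. This is a minimal illustration; the field names are assumptions for this sketch and are not taken from the patent.

```python
from dataclasses import dataclass

@dataclass
class CircularFaceRegion:
    # Center coordinates and radius, for a circular face region.
    center_x: float
    center_y: float
    radius: float

@dataclass
class RectFaceRegion:
    # One vertex (here the top-left) plus height and width,
    # for a rectangular face region.
    left: int
    top: int
    width: int
    height: int
```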
It should be pointed out that the to-be-detected image and the face region information may be actively fetched by the electronic device or passively received by it (for example, sent by the data storage server to the electronic device); this embodiment places no restriction on this.
In some optional implementations of this embodiment, the electronic device may also acquire the to-be-detected image and the face region information from a connected terminal device. It should be noted that this embodiment places no restriction on the source of the to-be-detected image and the face region information.
Step 202: extract a face image from the to-be-detected image based on the face region information.
In this embodiment, after acquiring the to-be-detected image and the face region information, the electronic device may extract a face image from the to-be-detected image based on the face region information. As an example, the electronic device may crop, from the to-be-detected image, the face region indicated by the face region information to obtain the face image.
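For a rectangular face region, the cropping step above amounts to slicing the image array. A minimal sketch, assuming the image is an H x W x C array and the region is given by a vertex plus height and width:

```python
import numpy as np

def crop_face(image: np.ndarray, left: int, top: int,
              width: int, height: int) -> np.ndarray:
    # Crop the rectangular face region out of an H x W x C image array.
    return image[top:top + height, left:left + width]
```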
Step 203: input the face image into a pre-trained convolutional neural network to obtain image feature information.
In this embodiment, after obtaining the face image, the electronic device may input it into a pre-trained convolutional neural network to obtain image feature information. The convolutional neural network is used to extract image features. Here, image feature information is information characterizing the features of an image, and the features of an image may be its various basic elements (such as color, lines and texture). In practice, a convolutional neural network (CNN) is a feed-forward neural network whose artificial neurons respond to surrounding units within part of the coverage area; it performs outstandingly for image processing. A convolutional neural network can therefore be used to extract image feature information.
It should be noted that the convolutional neural network may be obtained by supervised training of an existing deep convolutional neural network (such as DenseBox, VGGNet, ResNet or SegNet) using a machine learning method and training samples. The convolutional neural network may include at least one convolutional layer and at least one pooling layer, where the convolutional layers are used to extract image features and the pooling layers are used to downsample the input information. In addition, the convolutional neural network may use various nonlinear activation functions (such as the ReLU (Rectified Linear Units) function or the Sigmoid function) to perform nonlinear computation on the information.
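The two building blocks just mentioned, a nonlinear activation and a downsampling pooling layer, can be sketched directly in NumPy. This is a toy single-channel illustration of the operations, not the patent's network:

```python
import numpy as np

def relu(x: np.ndarray) -> np.ndarray:
    # ReLU activation: negative responses are clipped to zero.
    return np.maximum(x, 0.0)

def max_pool_2x2(feature_map: np.ndarray) -> np.ndarray:
    # 2x2 max pooling with stride 2 over a single-channel feature map,
    # halving each spatial dimension (assumes even height and width).
    h, w = feature_map.shape
    return feature_map.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))
```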
Step 204: parse the image feature information to determine the probability that the face image is clear.
In this embodiment, after obtaining the image feature information of the face image, the electronic device may parse it to determine the probability that the face image is clear. As an example, a correspondence table may be stored in advance locally in the electronic device or on a server in remote communication with the electronic device; the correspondence table may contain a large amount of image feature information together with, for each entry, the probability that the corresponding face image is clear. The electronic device may look up, in the correspondence table, the target image feature information matching the image feature information of the extracted face image, and determine the probability corresponding to that target image feature information as the probability that the extracted face image is clear.
Optionally, the probability that the face image is clear, as determined by the electronic device, may be a value in the interval [0, 1]; the larger the probability, the clearer the face image.
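The correspondence-table variant described above can be sketched as a lookup. The stored feature tuples and probabilities here are entirely hypothetical, and a real system would need approximate (e.g. nearest-neighbor) matching of feature vectors rather than an exact dictionary lookup:

```python
CLARITY_TABLE = {
    (0.12, 0.88, 0.30): 0.92,  # hypothetical feature info of a clear face image
    (0.75, 0.20, 0.64): 0.15,  # hypothetical feature info of a blurred face image
}

def lookup_clarity(features, table=CLARITY_TABLE):
    # Return the clarity probability stored for matching feature information,
    # or None when no entry matches.
    return table.get(tuple(features))
```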
Step 205: determine, based on the probability that the face image is clear, whether the face image is clear, and generate a detection result.
In this embodiment, after determining the probability that the extracted face image is clear, the electronic device may determine, based on this probability, whether the face image is clear, and generate a detection result. The detection result may include, for example, a prompt message indicating that the face image extracted by the electronic device is clear or not clear. The electronic device may compare the probability with a probability threshold; if the probability is not lower than the threshold, the electronic device may determine that the face image is clear. The probability threshold may be, for example, 0.5; it can be modified according to actual needs, and this embodiment places no restriction on it.
In some optional implementations of this embodiment, if the probability is lower than the probability threshold, the electronic device may determine that the face image is not clear, i.e., that it is a blurred face image.
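The threshold comparison described above reduces to a one-line decision. A minimal sketch, using the example threshold of 0.5 from the description:

```python
PROBABILITY_THRESHOLD = 0.5  # example value from the description; adjustable

def is_clear(probability: float,
             threshold: float = PROBABILITY_THRESHOLD) -> bool:
    # A face image is judged clear when its clarity probability is not
    # lower than the threshold, and not clear (blurred) otherwise.
    return probability >= threshold
```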
In some optional implementations of this embodiment, the to-be-detected image may have a corresponding image identifier, and the detection result generated by the electronic device may include, for example, the image identifier and one of the following: an identifier indicating that the extracted face image is not clear, or an identifier indicating that the extracted face image is clear. These identifiers may include, for example, digits or letters. Taking digits as an example, the identifier indicating that the extracted face image is not clear may be represented by the digit "0", and the identifier indicating that the extracted face image is clear by the digit "1". In addition, the detection result may also include the face region information.
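Such a detection result can be sketched as a small record. The dict layout here is an illustrative assumption; the patent only specifies the constituents (image identifier, a "1"/"0" clear flag, and optionally the face region information):

```python
def make_detection_result(image_id, probability,
                          region_info=None, threshold=0.5):
    # Assemble a detection result: the image identifier plus a "1" (clear)
    # or "0" (not clear) flag, optionally with the face region information.
    result = {"image_id": image_id,
              "clear": "1" if probability >= threshold else "0"}
    if region_info is not None:
        result["face_region"] = region_info
    return result
```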
With continued reference to Fig. 3, Fig. 3 is a schematic diagram of an application scenario of the information generating method according to this embodiment. In the application scenario of Fig. 3, the image processing server 301 may acquire, from the connected data storage server 302, a to-be-detected image 303 and face region information 304 obtained by performing face detection on the image 303 in advance and indicating the face region in the image 303. Then, the image processing server 301 may crop, from the to-be-detected image 303, the face region indicated by the face region information 304 to obtain a face image 305. Afterwards, the image processing server 301 may input the face image 305 into a pre-trained convolutional neural network to obtain image feature information 306. The image processing server 301 may then parse the image feature information 306 to determine the probability 307 that the face image 305 is clear. Finally, the image processing server 301 may compare the probability 307 with a probability threshold to obtain a judgment of whether the face image 305 is clear, and may generate a detection result 308, where the detection result 308 may include a prompt message indicating that the face image 305 is clear or blurred.
The method provided by the above embodiment of the present application effectively uses the extraction of the face image to narrow the image detection range, which can improve image detection efficiency.
Moreover, a convolutional neural network can generally extract multi-dimensional image feature information from an image. Determining the probability that the face image is clear based on the image feature information extracted by the pre-trained convolutional neural network can improve the accuracy of that probability, and in turn the accuracy of the judgment of whether the face image is clear.
With further reference to Fig. 4, a flow 400 of another embodiment of the information generating method is shown. The flow 400 of the information generating method includes the following steps:
Step 401: acquire a to-be-detected image and face region information obtained by performing face detection on the to-be-detected image in advance, the face region information indicating the face region in the to-be-detected image.
In this embodiment, the electronic device on which the information generating method runs (for example, the image processing server 103 shown in Fig. 1) may acquire, from a connected data storage server (for example, the data storage server 101 shown in Fig. 1) through a wired or wireless connection, the to-be-detected image and the face region information obtained by performing face detection on the image in advance and indicating the face region in the image. Of course, if the to-be-detected image and the face region information are stored locally in the electronic device in advance, the electronic device may acquire them locally. It should be pointed out that the face region may be a rectangular region.
Step 402: enlarge the range of the face region indicated by the face region information to obtain a first face region, and crop the first face region to obtain the face image.
In this embodiment, after acquiring the to-be-detected image and the face region information, the electronic device may enlarge the range of the face region indicated by the face region information to obtain a first face region, and may then crop the first face region to obtain the face image.
In this embodiment, the electronic device may enlarge the height and width of the face region indicated by the face region information by a preset multiple, or increase them by a preset value, and take the enlarged face region as the first face region. Here, the preset multiple may be, for example, 1. Moreover, the height and the width may correspond to the same preset value or to different preset values; for example, the preset value corresponding to the height may be equal to the height itself, and the preset value corresponding to the width may likewise be equal to the height. The preset multiple and the preset value can be modified according to actual needs, and this embodiment places no restriction on them.
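The enlargement by a preset multiple can be sketched as follows. Whether the enlarged region stays centered on the original is an assumption of this sketch (the patent does not say), and in practice the result would also need clamping to the image bounds:

```python
def expand_region(left, top, width, height, multiple=1.0):
    # Enlarge the height and width by a preset multiple; a multiple of 1
    # doubles each side, matching the example in the description.
    # Assumption: the enlarged region is kept centered on the original.
    dw, dh = width * multiple, height * multiple
    return (left - dw / 2, top - dh / 2, width + dw, height + dh)
```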
Step 403: input the face image into a pre-trained convolutional neural network to obtain image feature information.
In this embodiment, after obtaining the face image, the electronic device may input it into a pre-trained convolutional neural network to obtain image feature information. The convolutional neural network is used to extract image features. Here, image feature information is information characterizing the features of an image, and the features of an image may be its various basic elements (such as color, lines and texture).
Here, the convolutional neural network may be, for example, a fully convolutional network (FCN). The convolutional neural network may include, for example, five convolutional layers and five pooling layers. The size of the convolution kernels of the convolutional layers may be, for example, 3 × 3. The pooling layers may perform max pooling on the input information with a preset window size and a preset window sliding stride, where the preset window size may be, for example, 2 × 2 and the window sliding stride may be, for example, 2. It should be noted that the convolutional neural network may use a nonlinear activation function (such as the ReLU function) to perform nonlinear computation on the information.
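A network of that shape, five 3 × 3 convolutional layers each followed by ReLU and 2 × 2 max pooling with stride 2, can be sketched in PyTorch. The channel counts are illustrative assumptions; the patent does not specify them:

```python
import torch
import torch.nn as nn

class FaceFeatureNet(nn.Module):
    # Five conv+ReLU+maxpool stages: each 2x2/stride-2 pooling halves the
    # spatial size, so a 64x64 input yields 2x2 feature maps.
    def __init__(self):
        super().__init__()
        layers, in_ch = [], 3
        for out_ch in (16, 32, 64, 128, 128):  # assumed channel widths
            layers += [nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
                       nn.ReLU(inplace=True),
                       nn.MaxPool2d(kernel_size=2, stride=2)]
            in_ch = out_ch
        self.features = nn.Sequential(*layers)

    def forward(self, x):
        return self.features(x)
```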
Step 404: input the image feature information into a pre-trained probability calculation model to obtain the probability that the face image is clear.
In this embodiment, after obtaining the image feature information of the face image, the electronic device may input it into a pre-trained probability calculation model to obtain the probability that the face image is clear. The probability calculation model characterizes the correspondence between the image feature information of an image containing a face and the probability that the image is clear. In practice, the probability calculation model may output the probability that the image containing a face is clear, and may predict this probability with a classification function (such as the Softmax classification function).
It should be noted that above-mentioned probability calculation model can be a full articulamentum (Fully in neutral net
Connected Layers, FC).Above-mentioned full convolutional network and the full articulamentum may be constructed a deep neural network (Deep
Neural Networks, DNN) entirety.Above-mentioned electronic equipment can be trained to the deep neural network simultaneously, i.e., same
When above-mentioned full convolutional network and above-mentioned full articulamentum are trained.
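A minimal sketch of such a probability calculation model — a single fully connected layer followed by a Softmax classification function mapping a feature vector to a [clear, not-clear] distribution — might read as follows (the shapes, parameter names, and the convention that index 0 means "clear" are assumptions, not the embodiment's actual model):

```python
import numpy as np

def softmax(z):
    z = z - z.max()          # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def clarity_probability(features, W, b):
    """One fully connected layer plus Softmax: map an image feature vector
    to a two-class distribution and return the 'clear' probability.
    W and b are learned parameters; shapes here are illustrative."""
    logits = features @ W + b
    return softmax(logits)[0]
```

With learned weights that push the "clear" logit up for sharp-feature inputs, the returned value is the probability compared against the threshold in step 405.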
Specifically, the electronic device may obtain the convolutional neural network and the probability calculation model through the following training steps:
First, the electronic device extracts preset training samples, each including a sample image showing a face and a label of that sample image. The label indicates whether the sample image is clear, and may be either a first label indicating that the sample image is clear or a second label indicating that the sample image is not clear. Here, the first label may be a numeral, e.g., "1", and the second label may also be a numeral, e.g., "0".
Then, using machine learning, the electronic device trains the convolutional neural network and the probability calculation model based on the sample images, the labels, a preset classification loss function, and a back-propagation algorithm. The classification loss function is used to characterize the difference between the probability output by the probability calculation model and the label; it may be any of various loss functions used for classification (e.g., a Hinge loss function or a Softmax loss function). During training, the classification loss function constrains the way and direction in which the convolution kernels are modified, and the goal of training is to minimize its value. Thus, the parameters of the fully convolutional network and the fully connected layer obtained after training are those corresponding to the minimum value of the classification loss function.
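The training step above — minimizing a classification loss by gradient descent, with the gradient obtained through back-propagation — can be sketched for the fully connected head alone (a toy illustration under assumed shapes; in the embodiment the same gradient would also flow back into the convolution kernels):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))  # numerically stable
    return e / e.sum(axis=1, keepdims=True)

def train_step(W, b, X, y, lr=0.5):
    """One gradient-descent update of a Softmax classifier, minimizing the
    cross-entropy classification loss between the predicted clarity
    probabilities and the labels y (1 = clear, 0 = not clear).
    Shapes and the learning rate are illustrative."""
    n = X.shape[0]
    p = softmax(X @ W + b)                         # forward propagation
    loss = -np.log(p[np.arange(n), y]).mean()      # classification loss
    grad = (p - np.eye(W.shape[1])[y]) / n         # d(loss)/d(logits)
    W -= lr * X.T @ grad                           # back-propagated update
    b -= lr * grad.sum(axis=0)
    return loss
```

Repeated calls drive the loss toward its minimum, which is exactly the stated training target.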
It should be noted that the back-propagation algorithm (Back Propagation Algorithm, BP algorithm) is also called the error back-propagation (Error Back Propagation, BP) algorithm. Its learning process consists of two phases: forward propagation of the signal and backward propagation of the error. In a feed-forward network, the input signal enters through the input layer, is computed through the hidden layers, and is output by the output layer; the output value is compared with the label value, and if there is an error, the error is propagated backward from the output layer toward the input layer. In this process, a gradient-descent algorithm may be used to adjust the neuron weights (e.g., the convolution-kernel parameters of the convolutional layers). Here, the classification loss function may be used to characterize the error between the output value and the label value.
In some optional implementations of the present embodiment, the sample images include both clear images and blurry images. A training sample containing a blurry image may be called a negative sample. The image in at least one negative sample may be obtained by applying blurring operations such as Gaussian blur, motion blur, and/or salt-and-pepper noise to a clear sample image.
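Synthesizing such negative samples from clear images might be sketched as follows (illustrative only: a simple box blur stands in here for the Gaussian or motion blur named above, for which one would normally use an image-processing library such as OpenCV):

```python
import numpy as np

def salt_pepper(img, ratio=0.05, seed=0):
    """Corrupt a fraction `ratio` of pixels with salt-and-pepper noise to
    turn a clear sample into a degraded negative sample. Seeded for
    reproducibility; all parameter values are illustrative."""
    rng = np.random.default_rng(seed)
    out = img.copy()
    mask = rng.random(img.shape) < ratio
    out[mask] = rng.choice([0, 255], size=mask.sum())
    return out

def box_blur(img, k=3):
    """A k x k box blur (each pixel becomes the mean of its neighborhood,
    with edge padding) — a crude stand-in for Gaussian/motion blur."""
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return (out / (k * k)).astype(img.dtype)
```

Pairing each blurred or noised image with the second ("not clear") label yields the negative half of the training set.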
Step 405: determine, based on the probability that the face image is clear, whether the face image is clear, and generate a detection result.
In the present embodiment, after determining the probability that the extracted face image is clear, the electronic device may determine, based on this probability, whether the face image is clear, and generate a detection result. The detection result may, for example, include a prompt message indicating that the extracted face image is clear or not clear. Specifically, the electronic device may compare the probability with a probability threshold: if the probability is not less than the threshold, the electronic device may determine that the face image is clear; otherwise, it may determine that the face image is not clear.
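The threshold comparison in this step amounts to a one-line rule (the threshold value 0.5 is an assumption; note that a probability exactly equal to the threshold counts as clear, per "not less than"):

```python
def detection_result(prob, threshold=0.5):
    """Generate the prompt-style detection result by comparing the clarity
    probability against a probability threshold (0.5 is illustrative)."""
    if prob >= threshold:          # "not less than" the threshold -> clear
        return "face image is clear"
    return "face image is not clear"
```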
As can be seen from Fig. 4, compared with the embodiment corresponding to Fig. 2, the flow 400 of the information generating method in the present embodiment highlights the step of enlarging the scope of the face region in the image to be detected and the step of parsing the image feature information with a probability calculation model. The scheme described in the present embodiment can thus enlarge the area covered by the face image by expanding the scope of the face region, thereby improving the accuracy of the determined probability that the face image is clear, and in turn improving the accuracy of the judgment of whether the face image is clear.
With further reference to Fig. 5, as an implementation of the methods shown in the above figures, the present application provides an embodiment of an information generation apparatus. The apparatus embodiment corresponds to the method embodiment shown in Fig. 2, and the apparatus may be applied to various electronic devices.
As shown in Fig. 5, the information generation apparatus 500 of the present embodiment includes: an acquiring unit 501, an extraction unit 502, an input unit 503, a determining unit 504, and a generation unit 505. The acquiring unit 501 is configured to obtain an image to be detected and face region information, obtained by performing face detection on the image to be detected in advance, for indicating the face region in the image to be detected; the extraction unit 502 is configured to extract a face image from the image to be detected based on the face region information; the input unit 503 is configured to input the face image into a pre-trained convolutional neural network to obtain image feature information, where the convolutional neural network is used to extract image features; the determining unit 504 is configured to parse the image feature information to determine the probability that the face image is clear; and the generation unit 505 is configured to determine, based on the probability, whether the face image is clear, and generate a detection result.
In the present embodiment, for the specific processing of the acquiring unit 501, the extraction unit 502, the input unit 503, the determining unit 504, and the generation unit 505 of the information generation apparatus 500, and the technical effects produced thereby, reference may be made to the descriptions of steps 201, 202, 203, 204, and 205 in the embodiment corresponding to Fig. 2; they are not repeated here.
In some optional implementations of the present embodiment, the generation unit 505 may include a determination subunit (not shown) configured to determine whether the probability is less than a probability threshold and, if so, determine that the face image is not clear.
In some optional implementations of the present embodiment, the determining unit 504 may include an input subunit (not shown) configured to input the image feature information into a pre-trained probability calculation model to obtain the probability that the face image is clear, where the probability calculation model may be used to characterize the correspondence between the image feature information of an image containing a face and the probability that the image is clear.
In some optional implementations of the present embodiment, the convolutional neural network and the probability calculation model may be obtained through the following training steps: extracting preset training samples, each including a sample image showing a face and a label of that sample image, where the label indicates whether the sample image is clear and may include a first label indicating that the sample image is clear and a second label indicating that the sample image is not clear; and, using machine learning, training the convolutional neural network and the probability calculation model based on the sample images, the labels, a preset classification loss function, and a back-propagation algorithm, where the classification loss function may be used to characterize the difference between the probability output by the probability calculation model and the label.
In some optional implementations of the present embodiment, the convolutional neural network may include 5 convolutional layers and 5 pooling layers, and the pooling layers may be used to perform a max pooling operation on the input information with a preset window size and a preset window sliding step.
In some optional implementations of the present embodiment, the extraction unit 502 may include: an expansion subunit (not shown) configured to expand the scope of the face region indicated by the face region information to obtain a first face region; and an interception subunit (not shown) configured to crop the first face region to obtain the face image.
In some optional implementations of the present embodiment, the face region may be a rectangular area, and the expansion subunit is further configured to expand the height and width of the face region indicated by the face region information by a preset multiple or increase them by a default value.
The apparatus provided by the above embodiment of the application makes effective use of face image extraction, which narrows the scope of image detection and can thus improve image detection efficiency.
Moreover, a convolutional neural network can generally extract multi-dimensional image feature information from an image. Determining the probability that the face image is clear based on the image feature information extracted by the pre-trained convolutional neural network can improve the accuracy of the determined probability, and in turn the accuracy of the judgment of whether the face image is clear.
Referring now to Fig. 6, it shows a structural schematic of a computer system 600 suitable for implementing the electronic device of the embodiments of the present application. The electronic device shown in Fig. 6 is only an example and should not impose any limitation on the functions or scope of use of the embodiments of the present application.
As shown in Fig. 6, the computer system 600 includes a central processing unit (CPU) 601, which can perform various appropriate actions and processing according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage portion 608 into a random access memory (RAM) 603. The RAM 603 also stores various programs and data required for the operation of the system 600. The CPU 601, the ROM 602, and the RAM 603 are connected to each other through a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
The following components are connected to the I/O interface 605: an input portion 606 including a keyboard, a mouse, etc.; an output portion 607 including a cathode ray tube (CRT), a liquid crystal display (LCD), a loudspeaker, etc.; a storage portion 608 including a hard disk, etc.; and a communication portion 609 including a network interface card such as a LAN card or a modem. The communication portion 609 performs communication processing via a network such as the Internet. A driver 610 is also connected to the I/O interface 605 as needed. A removable medium 611, such as a magnetic disk, an optical disc, a magneto-optical disc, or a semiconductor memory, is mounted on the driver 610 as needed, so that a computer program read therefrom is installed into the storage portion 608 as needed.
In particular, according to embodiments of the present disclosure, the process described above with reference to the flowchart may be implemented as a computer software program. For example, an embodiment of the disclosure includes a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for executing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication portion 609, and/or installed from the removable medium 611. When the computer program is executed by the central processing unit (CPU) 601, the above-described functions defined in the system of the present application are performed.
It should be noted that the computer-readable medium shown in the present application may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two. A computer-readable storage medium may be, for example, but is not limited to, an electric, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any appropriate combination of the above. In this application, a computer-readable storage medium may be any tangible medium that contains or stores a program, which may be used by or in connection with an instruction execution system, apparatus, or device. In this application, a computer-readable signal medium may include a data signal propagated in a baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal may take various forms, including but not limited to an electromagnetic signal, an optical signal, or any appropriate combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium, which can send, propagate, or transmit a program used by or in connection with an instruction execution system, apparatus, or device. The program code contained on a computer-readable medium may be transmitted with any appropriate medium, including but not limited to wireless, wire, optical cable, RF, etc., or any appropriate combination of the above.
The flowcharts and block diagrams in the accompanying drawings illustrate the possible architectures, functions, and operations of the systems, methods, and computer program products according to various embodiments of the present application. In this regard, each block in a flowchart or block diagram may represent a module, a program segment, or part of code, which contains one or more executable instructions for implementing the specified logic function. It should also be noted that in some alternative implementations, the functions noted in the blocks may occur in an order different from that noted in the drawings. For example, two blocks shown in succession may in fact be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved. It should also be noted that each block of a block diagram or flowchart, and combinations of blocks therein, may be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units described in the embodiments of the present application may be implemented in software or in hardware. The described units may also be disposed in a processor; for example, it may be described as: a processor including an acquiring unit, an extraction unit, an input unit, a determining unit, and a generation unit. The names of these units do not, in some cases, limit the units themselves; for example, the acquiring unit may also be described as "a unit for obtaining an image to be detected and face region information, obtained by performing face detection on the image to be detected in advance, for indicating the face region in the image to be detected".
As another aspect, the present application also provides a computer-readable medium, which may be included in the electronic device described in the above embodiments, or may exist separately without being assembled into the electronic device. The computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: obtain an image to be detected and face region information, obtained by performing face detection on the image to be detected in advance, for indicating the face region in the image to be detected; extract a face image from the image to be detected based on the face region information; input the face image into a pre-trained convolutional neural network to obtain image feature information, where the convolutional neural network is used to extract image features; parse the image feature information to determine the probability that the face image is clear; and determine, based on the probability, whether the face image is clear, and generate a detection result.
The above description is only a preferred embodiment of the present application and an explanation of the applied technical principles. Those skilled in the art should appreciate that the scope of the invention involved in the present application is not limited to technical schemes formed by the particular combination of the above technical features, and should also cover other technical schemes formed by any combination of the above technical features or their equivalent features without departing from the inventive concept — for example, technical schemes in which the above features are replaced with (but not limited to) technical features with similar functions disclosed in the present application.
Claims (16)
1. An information generating method, characterized in that the method includes:
obtaining an image to be detected and face region information, obtained by performing face detection on the image to be detected in advance, for indicating the face region in the image to be detected;
extracting a face image from the image to be detected based on the face region information;
inputting the face image into a pre-trained convolutional neural network to obtain image feature information, wherein the convolutional neural network is used to extract image features;
parsing the image feature information to determine the probability that the face image is clear;
determining, based on the probability, whether the face image is clear, and generating a detection result.
2. The method according to claim 1, characterized in that determining, based on the probability, whether the face image is clear includes:
determining whether the probability is less than a probability threshold and, if so, determining that the face image is not clear.
3. The method according to claim 1, characterized in that parsing the image feature information to determine the probability that the face image is clear includes:
inputting the image feature information into a pre-trained probability calculation model to obtain the probability that the face image is clear,
wherein the probability calculation model is used to characterize the correspondence between the image feature information of an image containing a face and the probability that the image is clear.
4. The method according to claim 3, characterized in that the convolutional neural network and the probability calculation model are obtained through the following training steps:
extracting preset training samples, each including a sample image showing a face and a label of the sample image, wherein the label indicates whether the sample image is clear, and includes a first label indicating that the sample image is clear and a second label indicating that the sample image is not clear;
using machine learning, training the convolutional neural network and the probability calculation model based on the sample images, the labels, a preset classification loss function, and a back-propagation algorithm, wherein the classification loss function is used to characterize the difference between the probability output by the probability calculation model and the label.
5. The method according to claim 1, characterized in that the convolutional neural network includes 5 convolutional layers and 5 pooling layers, and the pooling layers are used to perform a max pooling operation on the input information with a preset window size and a preset window sliding step.
6. The method according to claim 1, characterized in that extracting a face image from the image to be detected based on the face region information includes:
expanding the scope of the face region indicated by the face region information to obtain a first face region;
cropping the first face region to obtain the face image.
7. The method according to claim 6, characterized in that the face region is a rectangular area; and
expanding the scope of the face region indicated by the face region information includes:
expanding the height and width of the face region indicated by the face region information by a preset multiple or increasing them by a default value.
8. An information generation apparatus, characterized in that the apparatus includes:
an acquiring unit configured to obtain an image to be detected and face region information, obtained by performing face detection on the image to be detected in advance, for indicating the face region in the image to be detected;
an extraction unit configured to extract a face image from the image to be detected based on the face region information;
an input unit configured to input the face image into a pre-trained convolutional neural network to obtain image feature information, wherein the convolutional neural network is used to extract image features;
a determining unit configured to parse the image feature information to determine the probability that the face image is clear;
a generation unit configured to determine, based on the probability, whether the face image is clear, and generate a detection result.
9. The apparatus according to claim 8, characterized in that the generation unit includes:
a determination subunit configured to determine whether the probability is less than a probability threshold and, if so, determine that the face image is not clear.
10. The apparatus according to claim 8, characterized in that the determining unit includes:
an input subunit configured to input the image feature information into a pre-trained probability calculation model to obtain the probability that the face image is clear, wherein the probability calculation model is used to characterize the correspondence between the image feature information of an image containing a face and the probability that the image is clear.
11. The apparatus according to claim 10, characterized in that the convolutional neural network and the probability calculation model are obtained through the following training steps:
extracting preset training samples, each including a sample image showing a face and a label of the sample image, wherein the label indicates whether the sample image is clear, and includes a first label indicating that the sample image is clear and a second label indicating that the sample image is not clear;
using machine learning, training the convolutional neural network and the probability calculation model based on the sample images, the labels, a preset classification loss function, and a back-propagation algorithm, wherein the classification loss function is used to characterize the difference between the probability output by the probability calculation model and the label.
12. The apparatus according to claim 8, characterized in that the convolutional neural network includes 5 convolutional layers and 5 pooling layers, and the pooling layers are used to perform a max pooling operation on the input information with a preset window size and a preset window sliding step.
13. The apparatus according to claim 8, characterized in that the extraction unit includes:
an expansion subunit configured to expand the scope of the face region indicated by the face region information to obtain a first face region;
an interception subunit configured to crop the first face region to obtain the face image.
14. The apparatus according to claim 13, characterized in that the face region is a rectangular area; and
the expansion subunit is further configured to:
expand the height and width of the face region indicated by the face region information by a preset multiple or increase them by a default value.
15. An electronic device, characterized by including:
one or more processors; and
a storage device for storing one or more programs,
which, when executed by the one or more processors, cause the one or more processors to implement the method according to any one of claims 1-7.
16. A computer-readable storage medium on which a computer program is stored, characterized in that the program, when executed by a processor, implements the method according to any one of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710910131.7A CN107578034A (en) | 2017-09-29 | 2017-09-29 | information generating method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107578034A true CN107578034A (en) | 2018-01-12 |
Family
ID=61040300
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710910131.7A Pending CN107578034A (en) | 2017-09-29 | 2017-09-29 | information generating method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107578034A (en) |
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108460364A (en) * | 2018-03-27 | 2018-08-28 | 百度在线网络技术(北京)有限公司 | Method and apparatus for generating information |
CN109035243A (en) * | 2018-08-10 | 2018-12-18 | 北京百度网讯科技有限公司 | Method and apparatus for exporting battery pole piece burr information |
CN109117736A (en) * | 2018-07-19 | 2019-01-01 | 厦门美图之家科技有限公司 | A kind of method and calculating equipment of judgement face visibility of a point |
CN109344908A (en) * | 2018-10-30 | 2019-02-15 | 北京字节跳动网络技术有限公司 | Method and apparatus for generating model |
CN109376267A (en) * | 2018-10-30 | 2019-02-22 | 北京字节跳动网络技术有限公司 | Method and apparatus for generating model |
CN110738080A (en) * | 2018-07-19 | 2020-01-31 | 杭州海康威视数字技术股份有限公司 | method, device and electronic equipment for identifying modified motor vehicle |
CN110909688A (en) * | 2019-11-26 | 2020-03-24 | 南京甄视智能科技有限公司 | Face detection small model optimization training method, face detection method and computer system |
CN111259698A (en) * | 2018-11-30 | 2020-06-09 | 百度在线网络技术(北京)有限公司 | Method and device for acquiring image |
CN111259695A (en) * | 2018-11-30 | 2020-06-09 | 百度在线网络技术(北京)有限公司 | Method and device for acquiring information |
CN111783644A (en) * | 2020-06-30 | 2020-10-16 | 百度在线网络技术(北京)有限公司 | Detection method, device, equipment and computer storage medium |
CN109145828B (en) * | 2018-08-24 | 2020-12-25 | 北京字节跳动网络技术有限公司 | Method and apparatus for generating video category detection model |
CN113614700A (en) * | 2020-03-03 | 2021-11-05 | 华为技术有限公司 | Image display monitoring method, device and equipment |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105631439A (en) * | 2016-02-18 | 2016-06-01 | 北京旷视科技有限公司 | Human face image collection method and device |
CN106650575A (en) * | 2016-09-19 | 2017-05-10 | 北京小米移动软件有限公司 | Face detection method and device |
CN107133948A (en) * | 2017-05-09 | 2017-09-05 | 电子科技大学 | Image blurring and noise evaluating method based on multitask convolutional neural networks |
- 2017-09-29: CN CN201710910131.7A (published as CN107578034A), status: active, Pending
Cited By (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108460364A (en) * | 2018-03-27 | 2018-08-28 | 百度在线网络技术(北京)有限公司 | Method and apparatus for generating information |
CN110738080A (en) * | 2018-07-19 | 2020-01-31 | 杭州海康威视数字技术股份有限公司 | Method, device and electronic equipment for identifying a modified motor vehicle |
CN109117736A (en) * | 2018-07-19 | 2019-01-01 | 厦门美图之家科技有限公司 | Method and computing device for judging the visibility of a face point |
CN109035243A (en) * | 2018-08-10 | 2018-12-18 | 北京百度网讯科技有限公司 | Method and apparatus for outputting battery pole piece burr information |
CN109145828B (en) * | 2018-08-24 | 2020-12-25 | 北京字节跳动网络技术有限公司 | Method and apparatus for generating video category detection model |
CN109376267B (en) * | 2018-10-30 | 2020-11-13 | 北京字节跳动网络技术有限公司 | Method and apparatus for generating a model |
CN109344908B (en) * | 2018-10-30 | 2020-04-28 | 北京字节跳动网络技术有限公司 | Method and apparatus for generating a model |
CN109376267A (en) * | 2018-10-30 | 2019-02-22 | 北京字节跳动网络技术有限公司 | Method and apparatus for generating model |
CN109344908A (en) * | 2018-10-30 | 2019-02-15 | 北京字节跳动网络技术有限公司 | Method and apparatus for generating model |
CN111259698A (en) * | 2018-11-30 | 2020-06-09 | 百度在线网络技术(北京)有限公司 | Method and device for acquiring image |
CN111259695A (en) * | 2018-11-30 | 2020-06-09 | 百度在线网络技术(北京)有限公司 | Method and device for acquiring information |
CN111259695B (en) * | 2018-11-30 | 2023-08-29 | 百度在线网络技术(北京)有限公司 | Method and device for acquiring information |
CN111259698B (en) * | 2018-11-30 | 2023-10-13 | 百度在线网络技术(北京)有限公司 | Method and device for acquiring image |
CN110909688A (en) * | 2019-11-26 | 2020-03-24 | 南京甄视智能科技有限公司 | Optimized training method for a small face detection model, face detection method and computer system |
CN110909688B (en) * | 2019-11-26 | 2020-07-28 | 南京甄视智能科技有限公司 | Optimized training method for a small face detection model, face detection method and computer system |
CN113614700A (en) * | 2020-03-03 | 2021-11-05 | 华为技术有限公司 | Image display monitoring method, device and equipment |
CN111783644A (en) * | 2020-06-30 | 2020-10-16 | 百度在线网络技术(北京)有限公司 | Detection method, device, equipment and computer storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107578034A (en) | Information generating method and device | |
CN108038469B (en) | Method and apparatus for detecting human body | |
CN107590807A (en) | Method and apparatus for detecting image quality | |
CN107609536A (en) | Information generating method and device | |
CN114913565B (en) | Face image detection method, model training method, device and storage medium | |
CN107590482A (en) | Information generating method and device | |
CN107644209A (en) | Method and device for detecting human faces | |
CN108229575A (en) | Method and apparatus for detecting a target | |
CN107633218A (en) | Method and apparatus for generating image | |
CN107545241A (en) | Neural network model training and liveness detection method, device and storage medium | |
CN107622252A (en) | Information generating method and device | |
CN107908789A (en) | Method and apparatus for generating information | |
CN108446651A (en) | Face identification method and device | |
CN109472264A (en) | Method and apparatus for generating object detection model | |
CN108509916A (en) | Method and apparatus for generating image | |
CN109063587A (en) | Data processing method, storage medium and electronic equipment | |
CN115861462B (en) | Training method and device for image generation model, electronic equipment and storage medium | |
CN108427941A (en) | Method for generating a face detection model, face detection method, and device | |
CN108509921A (en) | Method and apparatus for generating information | |
CN107220652A (en) | Method and apparatus for handling picture | |
CN109871791A (en) | Image processing method and device | |
CN107093164A (en) | Method and apparatus for generating image | |
CN108470179A (en) | Method and apparatus for detecting object | |
CN110070076A (en) | Method and apparatus for selecting training samples | |
CN113240430B (en) | Mobile payment verification method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||