CN109753934A - Method and identification device for identifying image authenticity - Google Patents
Method and identification device for identifying image authenticity
- Publication number: CN109753934A
- Application number: CN201910019689.5A
- Authority: CN (China)
- Prior art keywords: image, output value, false, cnn, true
- Legal status: Pending (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Landscapes
- Image Analysis (AREA)
Abstract
Embodiments of the present application disclose a method for identifying whether an image is genuine. The method may include: obtaining each frame of an image to be identified, where each frame includes a first image and a second image acquired in different ways; processing the face image in the first image with a trained first convolutional neural sub-network (CNN) to obtain a first output value, and processing the face image in the second image with a trained second CNN to obtain a second output value, where the first output value indicates the degree of authenticity of the face image in the first image and the second output value indicates the degree of authenticity of the face image in the second image; and calculating an average value from the first output value and the second output value to determine whether the image to be identified is genuine. Embodiments of the present application also provide a corresponding identification device. The technical solution of the present application can improve the accuracy of image anti-spoofing.
Description
Technical field
This application relates to the field of image recognition, and in particular to a method and identification device for identifying whether an image is genuine.
Background art
With the development of science and technology, more and more products are used for attendance and identity verification. Face recognition is a technique that uses a video camera or camera to capture images or a video stream containing a face, automatically detects the face in the picture, and then recognizes the detected face; that is, it performs face comparison on a user in order to judge whether the user's face is genuine or fake.
In the prior art, two identification methods are generally used to identify whether a face is genuine. The first method identifies the authenticity of the face using a single visible-light image; the second method identifies the authenticity of the face using a single near-infrared image.
However, because both of the above methods perform face recognition using only a single visible-light image or a single near-infrared image, the accuracy of identifying whether the face is genuine tends to be low.
Summary of the invention
Embodiments of the present application provide a method and identification device for identifying image authenticity, which can improve the accuracy of image anti-spoofing.
In view of this, the embodiments of the present application provide the following solutions:
A first aspect of the present application provides a method for identifying image authenticity. The method may include: obtaining each frame of an image to be identified, where each frame includes a first image and a second image acquired in different ways; processing the face image in the first image with a trained first convolutional neural sub-network (CNN) to obtain a first output value, and processing the face image in the second image with a trained second CNN to obtain a second output value, where the first output value indicates the degree of authenticity of the face image in the first image and the second output value indicates the degree of authenticity of the face image in the second image; and calculating an average value from the first output value and the second output value to determine whether the image to be identified is genuine.
Optionally, with reference to the first aspect above, in a first possible implementation, processing the face image in the first image with the trained first CNN to obtain the first output value and processing the face image in the second image with the trained second CNN to obtain the second output value may include: extracting, by the first CNN, a first feature map corresponding to the face image in the first image, and extracting, by the second CNN, a second feature map corresponding to the face image in the second image; and predicting on the first feature map by the first CNN to obtain the first output value, and predicting on the second feature map by the second CNN to obtain the second output value.
Optionally, with reference to the first aspect above or the first possible implementation of the first aspect, in a second possible implementation, calculating the average value from the first output value and the second output value may include: computing a weighted average of the first output value with its corresponding weight and the second output value with its corresponding weight to obtain the average value. Correspondingly, determining whether the image to be identified is genuine may include: judging whether the average value is greater than a preset threshold; if it is greater, determining that the image to be identified is genuine; if it is less than or equal to the threshold, determining that the image to be identified is fake.
Optionally, with reference to the first aspect above or the first possible implementation of the first aspect, in a third possible implementation, after obtaining each frame of the image to be identified, the method may further include: detecting the face image in the first image and the face image in the second image.
Optionally, with reference to the first aspect above, in a fourth possible implementation, before processing the face image in the first image with the trained first CNN to obtain the first output value and processing the face image in the second image with the trained second CNN to obtain the second output value, the method may further include: calculating a first loss value and a second loss value, where the first loss value is calculated from the first output value and a first label value corresponding to the first output value, the second loss value is calculated from the second output value and a second label value corresponding to the second output value, the first label value indicates the target value of the face image in the first image, and the second label value indicates the target value of the face image in the second image; and computing a weighted sum of the first loss value with its corresponding weight and the second loss value with its corresponding weight, so that the value after the weighted sum indicates the degree of optimization of the trained first CNN and the trained second CNN.
A second aspect of the present application provides an identification device. The identification device may include: an obtaining unit, configured to obtain each frame of an image to be identified, where each frame includes a first image and a second image acquired in different ways; a training unit, configured to process, with a trained first convolutional neural sub-network (CNN), the face image in the first image obtained by the obtaining unit to obtain a first output value, and to process, with a trained second CNN, the face image in the second image obtained by the obtaining unit to obtain a second output value, where the first output value indicates the authenticity result of the face image in the first image and the second output value indicates the authenticity result of the face image in the second image; and a processing unit, configured to calculate an average value from the first output value and the second output value obtained by the training unit, so as to determine whether the image to be identified is genuine.
Optionally, in combination with the second aspect above, in a first possible implementation, the processing unit may include: a computing module, configured to compute a weighted average of the first output value with its corresponding weight and the second output value with its corresponding weight to obtain the average value. After the computing module obtains the average value, the processing unit may further include: a judging module, configured to judge whether the average value calculated by the computing module is greater than a preset threshold; a first determining module, configured to determine that the image to be identified is genuine when the judging module judges that the average value is greater than the preset threshold; and a second determining module, configured to determine that the image to be identified is fake when the judging module judges that the average value is less than or equal to the preset threshold.
Optionally, in combination with the second aspect above, in a second possible implementation, the identification device may further include a detection unit, configured to detect the face image in the first image and the face image in the second image after the obtaining unit obtains each frame of the image to be identified.
A third aspect of the present application provides a computer device. The computer device may include a processor and a memory. The memory is configured to store program instructions, and when the computer device runs, the processor executes the program instructions stored in the memory, so that the computer device performs the method for identifying image authenticity according to the first aspect or any possible implementation of the first aspect.
A fourth aspect of the present application provides a computer-readable storage medium storing instructions that, when run on a computer, cause the computer to perform the method for identifying image authenticity according to the first aspect or any possible implementation of the first aspect.
For the technical effects brought by the second aspect, the third aspect, the fourth aspect, or any implementation thereof, reference may be made to the technical effects brought by the first aspect; details are not repeated here.
As can be seen from the above technical solutions, the embodiments of the present application have the following advantage: since each frame of the image to be identified includes both a first image and a second image acquired in different ways, two classes of output values can be obtained by processing the face images in the first image and the second image with the trained first CNN and the trained second CNN respectively, and an average value is then calculated from these two classes of output values to determine whether the image to be identified is genuine. This can effectively improve the accuracy of image anti-spoofing.
Brief description of the drawings
Fig. 1 is a schematic diagram of an embodiment of the method for identifying image authenticity provided by the embodiments of the present application;
Fig. 2 is a schematic diagram of another embodiment of the method for identifying image authenticity provided by the embodiments of the present application;
Fig. 3 is a schematic diagram of an embodiment of the identification device provided by the embodiments of the present application;
Fig. 4 is a schematic diagram of another embodiment of the identification device provided by the embodiments of the present application;
Fig. 5 is a schematic diagram of another embodiment of the identification device provided by the embodiments of the present application;
Fig. 6 is a schematic diagram of the hardware structure of the communication device in the embodiments of the present application.
Specific embodiment
The embodiments of the present application provide a method and identification device for identifying image authenticity, which can effectively improve the accuracy of image anti-spoofing.
The technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only a part, rather than all, of the embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application without creative effort shall fall within the protection scope of the present application.
The terms "first", "second", "third", "fourth", and so on (if present) in the description, claims, and drawings of the present application are used to distinguish similar objects and are not necessarily used to describe a particular order or sequence. It should be understood that data so used are interchangeable under appropriate circumstances, so that the embodiments of the present application described herein can also be implemented in orders other than those illustrated or described. In addition, the terms "include" and "have" and any variants thereof are intended to cover non-exclusive inclusion. A person of ordinary skill in the art will appreciate that, with the evolution of computational frameworks and the emergence of new application scenarios, the technical solutions provided in the embodiments of the present application are equally applicable to similar technical problems.
Fig. 1 is a schematic diagram of an embodiment of the method for identifying image authenticity provided by the embodiments of the present application.
As shown in Fig. 1, an embodiment of the method for identifying image authenticity provided by the embodiments of the present application may include:
101. Obtain each frame of the image to be identified.
In this embodiment, a target user is photographed by a binocular camera or a pair of cameras to obtain each frame of the image to be identified. Each frame includes a first image and a second image acquired in different ways; that is, two or more different types of images can be obtained simultaneously in each frame. For example, the first image may be a visible-light image collected under visible light, and the second image may be a near-infrared image collected under near-infrared light. In practical applications there are many other similar images that can be used for anti-spoofing identification, which is not specifically limited here.
102. Process the face image in the first image with a first convolutional neural sub-network (CNN) to obtain a first output value, and process the face image in the second image with a second CNN to obtain a second output value.
In the embodiments of the present application, the first output value may indicate the degree of authenticity of the face image in the first image, and the second output value may indicate the degree of authenticity of the face image in the second image; that is, an output value can be used to indicate how genuine a face image is. Therefore, by processing the face image in the first image with the first CNN and the face image in the second image with the second CNN, the corresponding first output value and second output value can be obtained respectively. It should be noted that the first CNN and the second CNN are CNNs that have already been trained. The convolutional neural network (CNN) mentioned here is a feed-forward neural network whose artificial neurons respond to units within a local receptive field. It performs outstandingly on large-scale image processing and has therefore been widely deployed in image recognition tasks. Its main role here is to extract features from the image to be identified, and it basically comprises convolutional layers, rectified linear (ReLU) layers, pooling layers, and a loss function layer. Each convolutional layer consists of several convolution units; the purpose of the convolution operation is to extract different features of the input, and convolution units in the same layer can share weight parameters. The loss function layer is used during training to measure the difference between the predicted result and the true result.
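As a non-limiting illustration of the convolution operation and weight sharing described above, the following sketch slides one shared kernel over a small image; the image and kernel values are invented for illustration and are not part of the claimed networks:

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Slide one shared kernel over the image ('valid' padding); every
    output position reuses the same weights, which is the weight sharing
    that convolution units in the same layer rely on."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

img = np.arange(25, dtype=float).reshape(5, 5)
edge_kernel = np.array([[1.0, 0.0, -1.0]] * 3)  # crude vertical-edge detector
feature_map = conv2d_valid(img, edge_kernel)
print(feature_map.shape)  # (3, 3)
```

In a real CNN this operation is stacked with ReLU and pooling layers; the point of the sketch is only that a single 3x3 weight set produces the whole feature map.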
103. Calculate an average value to determine whether the image to be identified is genuine.
In the embodiments of the present application, after the first output value and the second output value are obtained, a weighted average of the two needs to be computed, yielding the average value for the image to be identified, from which the authenticity of the image to be identified can be determined. If the average value is greater than a preset threshold, the image to be identified can be determined to be genuine; conversely, if the average value is less than or equal to the preset threshold, the image to be identified can be determined to be fake. The preset threshold mentioned here can be set according to the practical application; it serves as a boundary scale for measuring whether the image to be identified is genuine.
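The fusion-and-threshold step above can be sketched as follows; the function name, the equal default weights, and the 0.5 threshold are illustrative assumptions rather than values fixed by the embodiments:

```python
def fuse_and_decide(score1, score2, w1=0.5, w2=0.5, threshold=0.5):
    """Weighted average of the two CNN output values; an average above the
    preset threshold means the image to be identified is judged genuine."""
    average = (w1 * score1 + w2 * score2) / (w1 + w2)
    return average, average > threshold

avg, genuine = fuse_and_decide(0.9, 0.7)  # both CNNs lean toward genuine
print(round(avg, 2), genuine)  # 0.8 True
```

In practice the weights w1 and w2 would be chosen per application, for example to trust the near-infrared branch more in dim lighting.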
Since each frame of the image to be identified includes both a first image and a second image acquired in different ways, two classes of output values can be obtained by processing the face images in the first image and the second image with the trained first CNN and the trained second CNN respectively; an average value is then calculated from these two classes of output values to determine whether the image to be identified is genuine, which can effectively improve the accuracy of image anti-spoofing.
Fig. 2 is a schematic diagram of another embodiment of the method for identifying image authenticity provided by the embodiments of the present application.
As shown in Fig. 2, another embodiment of the method for identifying image authenticity provided by the embodiments of the present application may include:
201. Obtain each frame of the image to be identified.
In this embodiment, a target user is photographed by a binocular camera or a pair of cameras to obtain each frame of the image to be identified. Each frame includes a first image and a second image acquired in different ways; that is, two or more different types of images can be obtained simultaneously in each frame. For example, the first image may be a visible-light image collected under visible light, and the second image may be a near-infrared image collected under near-infrared light. In practical applications there are many other similar images that can be used for anti-spoofing identification, which is not specifically limited here.
It should be noted that, since an image is made up of individual pixels and each pixel has three channels representing the R, G, and B colors respectively, when each frame is acquired by the binocular camera or cameras, if the first image or the second image acquired in different ways is not a 3-channel image (that is, some images are single-channel images), it needs to be converted into a 3-channel image with R, G, and B channels. This is because the face images used by the convolutional neural sub-network CNN must all have 3 channels. One way of converting to a 3-channel image is to set the gray value of each of the R, G, and B channels according to the gray value of the original channel.
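One simple instance of the conversion just described is to replicate the original channel's gray value into all three channels; this sketch is an illustrative assumption, as the embodiment leaves the exact mapping open:

```python
import numpy as np

def to_three_channels(image):
    """Turn a single-channel gray image (H, W) into a 3-channel image
    (H, W, 3) by copying the original gray value into R, G, and B."""
    if image.ndim == 2:                        # single-channel input
        return np.stack([image] * 3, axis=-1)  # (H, W) -> (H, W, 3)
    return image                               # already multi-channel

gray = np.full((4, 4), 128, dtype=np.uint8)    # a flat gray test patch
rgb = to_three_channels(gray)
print(rgb.shape)  # (4, 4, 3)
```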
202. Detect the face image in the first image and the face image in the second image.
In this embodiment, after the first image and the second image are obtained from each frame of the image to be identified, both images need to be detected in order to find out whether either of them lacks a face image. If no face image is detected in the two images, the subsequent output values cannot be obtained, and the authenticity of the image to be identified therefore cannot be judged. So, when no face image is detected, an error message can be output as a notification, and each frame of the image to be identified is reacquired.
It should be noted that, if face images can be detected in the first image and the second image, the face regions can be cropped out, and the cropped face regions can be normalized in size (for example, by adjusting the number of pixels). In this way, a number of genuine and fake samples can be obtained.
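The crop-and-normalize step can be sketched as below; the (top, left, height, width) box format and the 112x112 target size are illustrative assumptions, since the embodiment does not fix them:

```python
import numpy as np

def crop_and_normalize(image, box, size=(112, 112)):
    """Crop the detected face region given by box = (top, left, height,
    width) and rescale it to a fixed size with nearest-neighbor sampling,
    so every sample fed to the CNNs has the same pixel dimensions."""
    top, left, h, w = box
    face = image[top:top + h, left:left + w]
    rows = np.arange(size[0]) * h // size[0]   # source row per output row
    cols = np.arange(size[1]) * w // size[1]   # source col per output col
    return face[rows][:, cols]

img = np.random.rand(200, 200)
patch = crop_and_normalize(img, (50, 60, 80, 80))
print(patch.shape)  # (112, 112)
```

A production system would more likely use a library resampler with interpolation; nearest-neighbor keeps the sketch dependency-free.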
203. Extract, by the first CNN, the first feature map corresponding to the face image in the first image, and extract, by the second CNN, the second feature map corresponding to the face image in the second image.
In the embodiments of the present application, a face image has different features, and the degree of authenticity of the corresponding face image can be obtained by analyzing those different features with the model's prediction. Therefore, the first feature map corresponding to the face image in the first image can be extracted by the first CNN, and the second feature map corresponding to the face image in the second image can be extracted by the second CNN.
204. Predict on the first feature map by the first CNN to obtain the first output value, and predict on the second feature map by the second CNN to obtain the second output value.
In this embodiment, after the feature maps of the face images are extracted, they must be analyzed by the model to obtain a predicted value, which can be used to indicate the degree of authenticity of the face image; that is, the first feature map is predicted by the first CNN to obtain the first output value, and the second feature map is predicted by the second CNN to obtain the second output value. The first output value and the second output value mentioned here are real numbers between 0 and 1.
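The embodiment does not specify the prediction head that maps a feature map to a value between 0 and 1; as one hedged stand-in, a pooled feature response squashed by a sigmoid has exactly that range:

```python
import math

def predict_authenticity(feature_map):
    """Illustrative stand-in for a CNN prediction head: average-pool the
    feature map to a single response, then squash it with a sigmoid so
    the output value is a real number strictly between 0 and 1."""
    n = len(feature_map) * len(feature_map[0])
    pooled = sum(map(sum, feature_map)) / n
    return 1.0 / (1.0 + math.exp(-pooled))

score = predict_authenticity([[2.0, 1.0], [3.0, 2.0]])  # pooled mean = 2.0
print(0.0 < score < 1.0)  # True
```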
It should be noted that, before processing the face image in the first image with the trained first CNN to obtain the first output value and processing the face image in the second image with the trained second CNN to obtain the second output value, the first CNN and the second CNN also need to be trained to a certain degree of optimization. The training process is realized by minimizing an objective function over the output values and label values of the face images; that is, it is embodied by the loss values, which are back-propagated to the first CNN and the second CNN while the parameters (for example, weights and biases) are updated. The smaller the loss values, the better optimized the first CNN or the second CNN is, and the more accurate the first output value and the second output value obtained by processing the face images in the first image and the second image with the trained first CNN and second CNN will be, which in turn improves the accuracy of determining whether the image to be identified is genuine. The minimized objective function mentioned above can be written as:
min_W (1/N) Σ_{i=1}^{N} Σ_{j=1}^{M} L(F(x_i; W), y_{ij}) + γ·Φ(W)
where M is the number of genuine/fake sample categories, i.e., M = 2; N is the number of genuine and fake samples; L is the loss function, obtained by mapping the nonlinear class prediction function F and the label values of the genuine and fake samples; W is the weight values of the first CNN and the second CNN; γ is a regularization constant greater than 0; and Φ is the regularization function, which penalizes the weight values of the network.
To explain further, the training toward the degree of optimization mentioned above is embodied by the loss values. Therefore, a first loss value and a second loss value can be calculated: the first loss value is calculated from the first output value and the first label value corresponding to the first output value, and the second loss value is calculated from the second output value and the second label value corresponding to the second output value. The first loss value and the second loss value mentioned above can be calculated through the loss function L, namely the cross-entropy loss function CrossEntropyLoss, whose formula is:
L = -Σ_{t=1}^{j} ŷ_{jt} · log(y_{jt}), with j = 2
where y_{jt} is the output value of the class prediction function F and ŷ_{jt} is the corresponding label. The output value of the class prediction function F is in fact the first output value or the second output value mentioned above. The first label value indicates the target value of the face image in the first image: for example, a genuine face takes the value 1 and a fake face takes the value 0. The second label value indicates the target value of the face image in the second image, with the same convention: a genuine face takes the value 1 and a fake face takes the value 0. After that, the first loss value with its corresponding weight and the second loss value with its corresponding weight are summed with weighting, so that the value after the weighted sum indicates the degree of optimization of the trained first CNN and the trained second CNN. This can be expressed by the loss formula loss = a·loss1 + b·loss2, where the weights a, b > 0, loss1 denotes the loss function of the first CNN, and loss2 denotes the loss function of the second CNN. If the calculated loss value loss is small, or when loss keeps shrinking until it no longer declines, it indicates that the first CNN and the second CNN have been trained to an optimum at that point, so this condition can also serve as the criterion for ending training.
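The weighted combined loss loss = a·loss1 + b·loss2 can be sketched directly; the binary form of the cross-entropy used here, and the equal weights a = b = 1, are illustrative choices consistent with the 0/1 label convention above:

```python
import math

def cross_entropy(output_value, label_value, eps=1e-12):
    """Binary cross-entropy between a CNN output value in (0, 1) and its
    label value (1 for a genuine face, 0 for a fake face); eps guards
    against log(0)."""
    return -(label_value * math.log(output_value + eps)
             + (1 - label_value) * math.log(1 - output_value + eps))

def combined_loss(out1, label1, out2, label2, a=1.0, b=1.0):
    """loss = a*loss1 + b*loss2, the weighted sum whose size indicates
    the degree of optimization of the two CNNs (a, b > 0)."""
    return a * cross_entropy(out1, label1) + b * cross_entropy(out2, label2)

low = combined_loss(0.99, 1, 0.98, 1)   # confident, correct predictions
high = combined_loss(0.60, 1, 0.55, 1)  # hesitant predictions, same labels
print(low < high)  # True
```

Training would stop once this combined loss stops declining, matching the end-of-training criterion described above.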
205. Compute a weighted average of the first output value with its corresponding weight and the second output value with its corresponding weight to obtain the average value.
In this embodiment, after the first output value and the second output value are obtained, they need to be weighted and averaged; that is, the first output value with its corresponding weight and the second output value with its corresponding weight are averaged with weighting, from which the average value of the image to be identified can be calculated, so that the authenticity of the image to be identified can be determined from that average value.
206. Judge whether the average value is greater than the preset threshold.
In this embodiment, since the preset threshold serves as a boundary scale for measuring whether the image to be identified is genuine, after the average value is calculated it still needs to be compared with the threshold in order to judge the authenticity of the image to be identified. If the average value is greater than the threshold, step 207 is performed; conversely, step 208 is performed.
207. If the average value is greater than the preset threshold, determine that the image to be identified is genuine.
208. Optionally, if the average value is less than or equal to the preset threshold, determine that the image to be identified is fake.
Since each frame of the image to be identified includes both a first image and a second image acquired in different ways, two classes of output values can be obtained by processing the face images in the first image and the second image with the trained first CNN and the trained second CNN respectively; an average value is then calculated from these two classes of output values to determine whether the image to be identified is genuine, which can effectively improve the accuracy of image anti-spoofing.
The above mainly describes the solutions provided by the embodiments of the present application. It can be understood that, in order to realize the above functions, corresponding hardware structures and/or software modules for performing each function are included. Those skilled in the art should readily appreciate that the modules and algorithm steps described in connection with the embodiments disclosed herein can be implemented by hardware, or by a combination of hardware and computer software. Whether a function is performed by hardware or by computer software driving hardware depends on the particular application and the design constraints of the technical solution. Professionals may use different methods to implement the described functions for each particular application, but such implementations should not be considered beyond the scope of the present application.
The embodiments of the present application may divide the device into functional modules according to the above method examples. For example, each functional module may be divided corresponding to each function, or two or more functions may be integrated into one processing module. The integrated module may be implemented in the form of hardware or in the form of a software functional module. It should be noted that the division of modules in the embodiments of the present application is schematic and is only a logical functional division; there may be other division manners in actual implementation.
Fig. 3 is a schematic diagram of an embodiment of the identification device provided by the embodiments of the present application.
As shown in Fig. 3, the identification device 30 provided by the embodiments of the present application includes an obtaining unit 301, a training unit 302, and a processing unit 303.
The obtaining unit 301 is configured to obtain each frame of an image to be identified, where each frame includes a first image and a second image acquired in different ways.
The training unit 302 is configured to process, with a trained first convolutional neural sub-network (CNN), the face image in the first image obtained by the obtaining unit 301 to obtain a first output value, and to process, with a trained second CNN, the face image in the second image obtained by the obtaining unit 301 to obtain a second output value, where the first output value indicates the authenticity result of the face image in the first image and the second output value indicates the authenticity result of the face image in the second image.
The processing unit 303 is configured to calculate an average value from the first output value and the second output value obtained by the training unit 302, so as to determine whether the image to be identified is genuine.
In the embodiments of the present application, since each frame of the image to be identified includes both a first image and a second image acquired in different ways, two classes of output values can be calculated after the face images in the first image and the second image are processed by the trained first CNN and the trained second CNN respectively; an average value is then calculated from these two classes of output values to determine whether the image to be identified is genuine, which can effectively improve the accuracy of image anti-spoofing.
The identification device in the embodiment of the present application is understood in detail in order to make it easy to understand, please referring to Fig. 4, Fig. 4 is
Another embodiment of identification device 40 provided by the embodiments of the present application, including acquiring unit 401, training unit 402, processing unit
403, it is similar with the function of above-mentioned 301-303;
The processing unit 403 in this embodiment may include:
a computing module 4031, configured to take a weighted average of the first output value with its corresponding weight and the second output value with its corresponding weight, so as to obtain the average value.
After the computing module 4031 obtains the average value, the processing unit 403 further includes:
a judgment module 4032, configured to judge whether the average value calculated by the computing module 4031 is greater than a preset threshold;
a first determining module 4033, configured to determine that the image to be recognized is genuine when the judgment module 4032 judges that the average value is greater than the preset threshold;
a second determining module 4034, configured to determine that the image to be recognized is fake when the judgment module 4032 judges that the average value is less than or equal to the preset threshold.
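A minimal sketch of the computing and judgment modules above, assuming output values in [0, 1]; the equal default weights and the 0.5 threshold are illustrative choices, since the embodiment leaves both unspecified:

```python
def decide(first_output, second_output, w1=0.5, w2=0.5, threshold=0.5):
    """Take the weighted average of the two CNN output values, then compare
    it with a preset threshold: strictly greater means genuine, otherwise fake."""
    average = (w1 * first_output + w2 * second_output) / (w1 + w2)
    return "genuine" if average > threshold else "fake"
```

Note that, as in the embodiment, an average exactly equal to the preset threshold is judged fake.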
In the embodiments of the present application, since each frame image of the image to be recognized includes a first image and a second image acquired in different modes, two classes of output values can be calculated after the face images in the first image and the second image are respectively trained by the trained first CNN and the trained second CNN; an average value is then calculated from these two classes of output values to determine the authenticity of the image to be recognized, which can effectively improve the accuracy of image anti-counterfeiting.
For ease of understanding the identification device in the embodiments of the present application in detail, please refer to Fig. 5. Fig. 5 shows another embodiment of an identification device 50 provided by the embodiments of the present application, which includes an acquiring unit 501, a training unit 502, and a processing unit 503 whose functions are similar to those of the above units 301-303.
The identification device 50 of this embodiment may further include a detection unit 504, configured to detect the face image in the first image and the face image in the second image after the acquiring unit 501 obtains each frame image of the image to be recognized.
The identification device in the embodiments of the present application has been described above from the perspective of modular functional entities; it is described below from the perspective of hardware processing. Fig. 6 is a schematic diagram of the hardware structure of a communication device in the embodiments of the present application. As shown in Fig. 6, the communication device may include at least one processor 601, a communication line 607, a memory 603, and at least one communication interface 604.
The processor 601 may be a general-purpose central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits for controlling the execution of the programs of the solution of the present application.
The communication line 607 may include a path for transmitting information between the above components.
The communication interface 604, using any transceiver-like device, is configured to communicate with other devices or communication networks, such as an Ethernet, a radio access network (RAN), or a wireless local area network (WLAN).
The memory 603 may be a read-only memory (ROM) or another type of static storage device capable of storing static information and instructions, or a random access memory (RAM) or another type of dynamic storage device capable of storing information and instructions. The memory may exist independently and be connected to the processor through the communication line 607, or may be integrated with the processor.
The memory 603 is configured to store the computer-executable instructions for executing the solution of the present application, under the control of the processor 601. The processor 601 is configured to execute the computer-executable instructions stored in the memory 603, so as to implement the method for identifying image authenticity provided by the above embodiments of the present application.
Optionally, the computer-executable instructions in the embodiments of the present application may also be referred to as application program code, which is not specifically limited in the embodiments of the present application.
In a specific implementation, as an embodiment, the communication device may include multiple processors, such as the processor 601 and the processor 602 in Fig. 6. Each of these processors may be a single-core (single-CPU) processor or a multi-core (multi-CPU) processor. A processor here may refer to one or more devices, circuits, and/or processing cores for processing data (such as computer program instructions).
In a specific implementation, as an embodiment, the communication device may further include an output device 605 and an input device 606. The output device 605 communicates with the processor 601 and can display information in multiple ways. The input device 606 communicates with the processor 601 and can receive user input in multiple ways; for example, the input device 606 may be a mouse, a touch-screen device, or a sensing device.
The above communication device may be a general-purpose device or a dedicated device. In a specific implementation, the communication device may be a desktop computer, a portable computer, a network server, a wireless terminal device, an embedded device, or a device with a structure similar to that in Fig. 6. The embodiments of the present application do not limit the type of the communication device.
The above embodiments may be implemented wholly or partly by software, hardware, firmware, or any combination thereof. When implemented by software, they may be implemented wholly or partly in the form of a computer program product.
It is apparent to those skilled in the art that, for convenience and brevity of description, for the specific working processes of the identification device and the units described above, reference may be made to the corresponding processes in the foregoing method embodiments, and details are not repeated here.
In the several embodiments provided in the present application, it should be understood that the disclosed identification device and method may be implemented in other ways. For example, the embodiments of the identification device described above are merely schematic. For example, the division of the units is merely a division by logical function, and there may be other division manners in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, modules, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit. The above integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application in essence, or the part thereof contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The above embodiments are merely intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that they may still modify the technical solutions described in the foregoing embodiments, or make equivalent replacements of some of the technical features therein; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application.
Claims (10)
1. A method for identifying image authenticity, characterized by comprising:
obtaining each frame image of an image to be recognized, each frame image including a first image and a second image acquired in different modes;
training the face image in the first image with a trained first convolutional neural sub-network CNN to obtain a first output value, and training the face image in the second image with a trained second CNN to obtain a second output value, wherein the first output value indicates a degree of authenticity of the face image in the first image, and the second output value indicates a degree of authenticity of the face image in the second image;
calculating an average value from the first output value and the second output value, so as to determine the authenticity of the image to be recognized.
2. The method according to claim 1, wherein training the face image in the first image with the trained first convolutional neural sub-network CNN to obtain the first output value, and training the face image in the second image with the trained second CNN to obtain the second output value, comprises:
extracting, by the first CNN, a first feature map corresponding to the face image in the first image, and extracting, by the second CNN, a second feature map corresponding to the face image in the second image;
predicting the first feature map by the first CNN to obtain the first output value, and predicting the second feature map by the second CNN to obtain the second output value.
3. The method according to claim 1 or 2, wherein calculating the average value from the first output value and the second output value comprises:
taking a weighted average of the first output value with its corresponding weight and the second output value with its corresponding weight, to obtain the average value;
and correspondingly, determining the authenticity of the image to be recognized comprises:
judging whether the average value is greater than a preset threshold;
if it is greater, determining that the image to be recognized is genuine;
if it is less than or equal to the preset threshold, determining that the image to be recognized is fake.
4. The method according to claim 1 or 2, further comprising, after obtaining each frame image of the image to be recognized:
detecting the face image in the first image and the face image in the second image.
5. The method according to claim 1, further comprising, before training the face image in the first image with the trained first convolutional neural sub-network CNN to obtain the first output value and training the face image in the second image with the trained second CNN to obtain the second output value:
calculating a first loss value and a second loss value, wherein the first loss value is calculated from the first output value and a first label value corresponding to the first output value, the second loss value is calculated from the second output value and a second label value corresponding to the second output value, the first label value indicates a target value of the face image in the first image, and the second label value indicates a target value of the face image in the second image;
performing a weighted summation of the first loss value with its corresponding weight and the second loss value with its corresponding weight, so that the value after the weighted summation indicates the degree of optimization of the trained first CNN and the trained second CNN.
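The weighted combination of claim 5's two loss values can be sketched as follows. The claim does not fix the form of each loss, so the squared-error losses and equal weights below are illustrative assumptions:

```python
def combined_loss(first_output, first_label, second_output, second_label,
                  w1=0.5, w2=0.5):
    """Compute the first and second loss values from each output value and its
    corresponding label value (squared error here, by assumption), then combine
    them by weighted summation; the result reflects the degree of optimization
    of the two trained CNNs (lower is better)."""
    first_loss = (first_output - first_label) ** 2
    second_loss = (second_output - second_label) ** 2
    return w1 * first_loss + w2 * second_loss
```

With outputs that exactly match their label values the combined loss is 0; for example, outputs 0.8 and 0.6 against label values of 1.0 give a combined loss of 0.1 with equal weights.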
6. An identification device, characterized by comprising:
an acquiring unit, configured to obtain each frame image of an image to be recognized, each frame image including a first image and a second image acquired in different modes;
a training unit, configured to obtain a first output value by training, with a trained first convolutional neural sub-network CNN, the face image in the first image obtained by the acquiring unit, and to obtain a second output value by training, with a trained second CNN, the face image in the second image obtained by the acquiring unit, wherein the first output value indicates an authenticity result of the face image in the first image, and the second output value indicates an authenticity result of the face image in the second image;
a processing unit, configured to calculate an average value from the first output value and the second output value obtained by the training unit, so as to determine the authenticity of the image to be recognized.
7. The identification device according to claim 6, wherein the processing unit comprises:
a computing module, configured to take a weighted average of the first output value with its corresponding weight and the second output value with its corresponding weight, so as to obtain the average value;
after the computing module obtains the average value, the processing unit further comprises:
a judgment module, configured to judge whether the average value calculated by the computing module is greater than a preset threshold;
a first determining module, configured to determine that the image to be recognized is genuine when the judgment module judges that the average value is greater than the preset threshold;
a second determining module, configured to determine that the image to be recognized is fake when the judgment module judges that the average value is less than or equal to the preset threshold.
8. The identification device according to claim 6, wherein the identification device further comprises a detection unit, configured to detect the face image in the first image and the face image in the second image after the acquiring unit obtains each frame image of the image to be recognized.
9. A computer device, characterized in that the computer device comprises an input/output (I/O) interface, a processor, and a memory, wherein program instructions are stored in the memory; and the processor is configured to execute the program instructions stored in the memory, so as to perform the method according to any one of claims 1 to 5.
10. A computer-readable storage medium comprising instructions, characterized in that, when the instructions are run on a computer device, the computer device is caused to perform the method according to any one of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910019689.5A CN109753934A (en) | 2019-01-09 | 2019-01-09 | A kind of method and identification device identifying image true-false |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109753934A true CN109753934A (en) | 2019-05-14 |
Family
ID=66405305
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910019689.5A Pending CN109753934A (en) | 2019-01-09 | 2019-01-09 | A kind of method and identification device identifying image true-false |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109753934A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114067445A (en) * | 2021-11-26 | 2022-02-18 | 中科海微(北京)科技有限公司 | Data processing method, device and equipment for face authenticity identification and storage medium |
WO2022127480A1 (en) * | 2020-12-15 | 2022-06-23 | 展讯通信(天津)有限公司 | Facial recognition method and related device |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105069448A (en) * | 2015-09-29 | 2015-11-18 | 厦门中控生物识别信息技术有限公司 | True and false face identification method and device |
CN105654028A (en) * | 2015-09-29 | 2016-06-08 | 厦门中控生物识别信息技术有限公司 | True and false face identification method and apparatus thereof |
CN106709477A (en) * | 2017-02-23 | 2017-05-24 | 哈尔滨工业大学深圳研究生院 | Face recognition method and system based on adaptive score fusion and deep learning |
CN107577987A (en) * | 2017-08-01 | 2018-01-12 | 广州广电卓识智能科技有限公司 | Identity authentication method, system and device |
CN108460366A (en) * | 2018-03-27 | 2018-08-28 | 百度在线网络技术(北京)有限公司 | Identity identifying method and device |
US20180357500A1 (en) * | 2017-06-13 | 2018-12-13 | Alibaba Group Holding Limited | Facial recognition method and apparatus and imposter recognition method and apparatus |
CN109034102A (en) * | 2018-08-14 | 2018-12-18 | 腾讯科技(深圳)有限公司 | Human face in-vivo detection method, device, equipment and storage medium |
Non-Patent Citations (3)
Title |
---|
YEMING825723: "Objective function, loss function, cost function, empirical risk, structural risk" (in Chinese), 《HTTPS://BLOG.CSDN.NET/YINGJIAOTUO8368/ARTICLE/DETAILS/79147291》 * |
平原2018: "Overfitting, regularization, and the loss function" (in Chinese), 《HTTPS://BLOG.CSDN.NET/SINAT_30353259/ARTICLE/DETAILS/80991942》 * |
白水BAISHUI: "Empirical risk and structural risk in optimizing the loss function" (in Chinese), 《HTTPS://BLOG.CSDN.NET/BAISHUINIYAONULIA/ARTICLE/DETAILS/80836343》 * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109522793B (en) | Method for detecting and identifying abnormal behaviors of multiple persons based on machine vision | |
CN112153736B (en) | Personnel action identification and position estimation method based on channel state information | |
Gu et al. | Paws: Passive human activity recognition based on wifi ambient signals | |
CN105139040B (en) | A kind of queueing condition information detecting method and its system | |
CN106845487B (en) | End-to-end license plate identification method | |
CN109284733B (en) | Shopping guide negative behavior monitoring method based on yolo and multitask convolutional neural network | |
CN107657249A (en) | Method, apparatus, storage medium and the processor that Analysis On Multi-scale Features pedestrian identifies again | |
CN109598242B (en) | Living body detection method | |
CN110378235A (en) | A kind of fuzzy facial image recognition method, device and terminal device | |
CN105989331B (en) | Face feature extraction element, facial feature extraction method, image processing equipment and image processing method | |
CN106875422A (en) | Face tracking method and device | |
CN109145717A (en) | A kind of face identification method of on-line study | |
JP2017191501A (en) | Information processing apparatus, information processing method, and program | |
CN109840982B (en) | Queuing recommendation method and device and computer readable storage medium | |
CN111950525B (en) | Fine-grained image classification method based on destructive reconstruction learning and GoogLeNet | |
CN105701467A (en) | Many-people abnormal behavior identification method based on human body shape characteristic | |
CN103020589B (en) | A kind of single training image per person method | |
CN107947874B (en) | Indoor map semantic identification method based on WiFi channel state information | |
CN110033487A (en) | Vegetables and fruits collecting method is blocked based on depth association perception algorithm | |
CN110516512B (en) | Training method of pedestrian attribute analysis model, pedestrian attribute identification method and device | |
CN111178276B (en) | Image processing method, image processing apparatus, and computer-readable storage medium | |
CN110399822A (en) | Action identification method of raising one's hand, device and storage medium based on deep learning | |
CN103020655B (en) | A kind of remote identity authentication method based on single training image per person | |
WO2024060978A1 (en) | Key point detection model training method and apparatus and virtual character driving method and apparatus | |
CN107886077A (en) | A kind of crop pests recognition methods and its system based on wechat public number |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
CB02 | Change of applicant information | ||
Address after: No. 26, Pingshan 188 Industrial Road, Tangxia Town, Dongguan, Guangdong, 523710
Applicant after: Entropy Technology Co., Ltd.
Address before: No. 26, Pingshan 188 Industrial Road, Tangxia Town, Dongguan, Guangdong, 523710
Applicant before: ZKTECO Co., Ltd.
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20190514 |