CN111126216A - Risk detection method, device and equipment
- Publication number
- CN111126216A (application number CN201911286075.XA)
- Authority
- CN
- China
- Prior art keywords
- detection
- images
- trained
- user
- detected
- Prior art date
- Legal status: Pending (an assumption by Google Patents, not a legal conclusion)
Classifications
- G06V40/161 — Human faces: Detection; Localisation; Normalisation
- G06T7/80 — Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
- G06V40/45 — Spoof detection: Detection of the body part being alive
- G06T2207/20081 — Training; Learning
- G06T2207/20084 — Artificial neural networks [ANN]
- G06T2207/30196 — Human being; Person
- G06T2207/30201 — Face
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Multimedia (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
- Image Analysis (AREA)
Abstract
Embodiments of this specification provide a risk detection method, apparatus and device. The method includes: acquiring a biometric image combination of a user to be detected, the combination comprising a plurality of images obtained by a multi-view camera photographing a designated body part of the user in a single shot; performing consistency detection on the plurality of images through a pre-trained first detection model to obtain a first detection result; performing living body detection on the plurality of images through a pre-trained second detection model to obtain a second detection result; and determining, according to the first detection result and the second detection result, whether the living body detection of the user to be detected is at risk of attack.
Description
Technical Field
The present disclosure relates to the field of data processing technologies, and in particular, to a risk detection method, apparatus, and device.
Background
As people's demands on security grow, binocular cameras are widely used in security scenarios such as access control and surveillance: liveness attacks are resisted by examining the two images a binocular camera captures in a single shot. Generally, the effective imaging area of a binocular camera is the intersection of the imaging areas of its two cameras, and the area outside the effective imaging area is called the blind area. In the blind area, one camera can capture an image while the other cannot. Because the blind area has this characteristic, a risk of liveness attack remains.
Disclosure of Invention
One or more embodiments of the present disclosure provide a risk detection method, apparatus, and device, which combine consistency detection and liveness detection to determine whether there is an attack risk in liveness detection of a user to be detected, so that not only is the problem of liveness attack in a blind area of a multi-view camera solved, but also security is greatly improved.
To solve the above technical problem, one or more embodiments of the present specification are implemented as follows:
one or more embodiments of the present specification provide a risk detection method, comprising:
acquiring a biological characteristic image combination of a user to be detected, wherein the biological characteristic image combination comprises: a plurality of images obtained by shooting the appointed body part of the user to be detected by the multi-camera at a single time;
carrying out consistency detection on the plurality of images through a pre-trained first detection model to obtain a first detection result; performing living body detection on the plurality of images through a pre-trained second detection model to obtain a second detection result;
and determining whether the living body detection of the user to be detected has the attack risk or not according to the first detection result and the second detection result.
One or more embodiments of the present specification provide a risk detection device, including:
the acquisition module acquires a biological characteristic image combination of a user to be detected, wherein the biological characteristic image combination comprises: a plurality of images obtained by shooting the appointed body part of the user to be detected by the multi-camera at a single time;
the first training module is used for carrying out consistency detection on the plurality of images through a first detection model trained in advance to obtain a first detection result; performing living body detection on the plurality of images through a pre-trained second detection model to obtain a second detection result;
and the determining module is used for determining whether the attack risk exists according to the first detection result and the second detection result.
One or more embodiments of the present specification provide a risk detection device comprising:
a processor; and
a memory arranged to store computer executable instructions that, when executed, cause the processor to:
acquiring a biological characteristic image combination of a user to be detected, wherein the biological characteristic image combination comprises: a plurality of images obtained by shooting the appointed body part of the user to be detected by the multi-camera at a single time;
carrying out consistency detection on the plurality of images through a pre-trained first detection model to obtain a first detection result; performing living body detection on the plurality of images through a pre-trained second detection model to obtain a second detection result;
and determining whether the living body detection of the user to be detected has the attack risk or not according to the first detection result and the second detection result.
One or more embodiments of the present specification provide a storage medium storing computer-executable instructions that, when executed, implement the following:
acquiring a biological characteristic image combination of a user to be detected, wherein the biological characteristic image combination comprises: a plurality of images obtained by shooting the appointed body part of the user to be detected by the multi-camera at a single time;
carrying out consistency detection on the plurality of images through a pre-trained first detection model to obtain a first detection result; performing living body detection on the plurality of images through a pre-trained second detection model to obtain a second detection result;
and determining whether the living body detection of the user to be detected has the attack risk or not according to the first detection result and the second detection result.
One embodiment of the present specification detects a combination of biometric images of a user to be detected based on a first detection model and a second detection model trained in advance; therefore, the consistency detection and the living body detection are combined to determine whether the living body detection of the user to be detected has the risk of being attacked or not; the method and the device have the advantages that the living attack of the effective imaging area of the multi-view camera is detected, the living attack of the blind area of the multi-view camera is also detected, the living attack problem of the blind area of the multi-view camera is solved, and the safety is greatly improved.
Drawings
To illustrate the technical solutions of one or more embodiments of this specification or of the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. The drawings described below are merely some embodiments of this specification; those skilled in the art can derive other drawings from them without inventive effort.
Fig. 1 is a schematic view of a risk detection method provided in one or more embodiments of the present disclosure;
fig. 2 is a first schematic flow chart of a risk detection method according to one or more embodiments of the present disclosure;
fig. 3 is a second schematic flow chart of a risk detection method according to one or more embodiments of the present disclosure;
FIG. 4 is a detailed diagram of step S100-4 provided in one or more embodiments of the present description;
FIG. 5 is a detailed diagram of step S100-6 provided in one or more embodiments of the present description;
FIG. 6 is a detailed diagram of step A2 provided by one or more embodiments of the present description;
FIG. 7 is a schematic block diagram illustrating a risk detection device according to one or more embodiments of the present disclosure;
fig. 8 is a schematic structural diagram of a risk detection device according to one or more embodiments of the present disclosure.
Detailed Description
To help those skilled in the art better understand the technical solutions in one or more embodiments of this specification, these solutions are described below clearly and completely with reference to the accompanying drawings. The described embodiments are only a part, not all, of the embodiments of this specification; all other embodiments derived by those skilled in the art from the described embodiments without inventive effort shall fall within the scope of protection of this document.
Face recognition is familiar to most people. The main risk in face recognition is the liveness attack, in which an attacker uses photos, videos and the like to imitate a real person's face in an attempt to pass recognition. The current mainstream countermeasure is to capture face images with a binocular camera and to perform living body detection separately on the color (RGB) image and the near-infrared (IR) image acquired in a single shot. However, the binocular camera has a shooting blind area in which one camera can capture an image while the other cannot; the blind area therefore becomes a breakthrough point for an attacker to mount a liveness attack, degrading the living body detection of a binocular camera to that of a monocular camera and greatly increasing the attack success rate. In view of this, one or more embodiments of this specification provide a risk detection method, apparatus, and device that can detect whether the living body detection of a user to be detected is at risk of attack. Specifically, a first detection model and a second detection model are trained in advance; the first detection model performs consistency detection on the biometric image combination of the user to be detected to obtain a first detection result; the second detection model performs living body detection on the biometric image combination to obtain a second detection result; whether the living body detection of the user to be detected is at risk of attack is then determined according to the first detection result and the second detection result. The biometric image combination comprises a plurality of images obtained by a multi-view camera photographing a designated body part of the user to be detected in a single shot; the designated body part is, for example, an iris or a face. Combining consistency detection with living body detection in this way solves the problem of liveness attacks in the blind area of the multi-view camera and greatly improves security.
Fig. 1 is a schematic view of an application scenario of a risk detection method provided in one or more embodiments of this specification. As shown in fig. 1, the scenario includes a multi-view camera comprising a plurality of cameras, and a risk detection apparatus; the risk detection apparatus may be a device independent of the multi-view camera or a device deployed in the multi-view camera.
Specifically, the multi-view camera photographs the designated body part of the user to be detected in a single shot to obtain a biometric image combination comprising a plurality of images. The risk detection apparatus acquires the biometric image combination of the user to be detected, performs consistency detection on the plurality of images through a pre-trained first detection model to obtain a first detection result, performs living body detection on the plurality of images through a pre-trained second detection model to obtain a second detection result, and determines from the two results whether the living body detection of the user to be detected is at risk of attack. Combining consistency detection with living body detection in this way solves the problem of liveness attacks in the blind area of the multi-view camera and greatly improves security.
Based on the above application scenario architecture, one or more embodiments of this specification provide a risk detection method. Fig. 2 is a schematic flow diagram of the risk detection method according to one or more embodiments of this specification; the method in fig. 2 can be performed by the risk detection apparatus in fig. 1. As shown in fig. 2, the method includes the following steps:
step S102, obtaining a biological characteristic image combination of a user to be detected, wherein the biological characteristic image combination comprises: a plurality of images obtained by shooting the appointed body part of the user to be detected by the multi-camera at a single time;
wherein, the designated body part can be an iris, a human face and the like.
Step S104, carrying out consistency detection on the obtained multiple images through a pre-trained first detection model to obtain a first detection result; performing living body detection on the acquired images through a pre-trained second detection model to obtain a second detection result;
the first detection model is used for carrying out consistency detection on a plurality of images included in the biological characteristic image combination; when the blind area attacks, the difference between a plurality of images shot by a plurality of cameras in the multi-view camera is large, so that the consistency detection is carried out on the plurality of images included in the biological characteristic image combination through the first detection model, and the blind area attacks can be effectively intercepted; the training process of the first detection model is described in detail later. The second detection model is used for carrying out living body detection on a plurality of images included in the biological characteristic image combination so as to intercept living body attack of the effective imaging area; the second detection model may be the same as or different from the existing in-vivo detection model, and since the training process of the detection model for performing in-vivo detection is a well-known technical means for those skilled in the art, the training process of the second detection model will not be described in detail in the embodiments of the present specification.
And S106, determining whether the living body detection of the user to be detected has the attacked risk or not according to the first detection result and the second detection result.
In one or more embodiments of the present specification, a combination of biometric images of a user to be detected is detected based on a first detection model and a second detection model trained in advance; therefore, the consistency detection and the living body detection are combined to determine whether the living body detection of the user to be detected has the risk of being attacked or not; the method and the device have the advantages that the living attack of the effective imaging area of the multi-view camera is detected, the living attack of the blind area of the multi-view camera is also detected, the living attack problem of the blind area of the multi-view camera is solved, and the safety is greatly improved.
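Before the individual steps are elaborated, the flow of steps S102-S106 can be summarized with a minimal sketch. This is an illustration only, not the patented implementation: the `predict` interface of the two models and the pluggable `fuse` callable are assumptions made for readability (two concrete fusion variants are sketched in the embodiments of step S106 below).

```python
from typing import Callable, Sequence

def detect_risk(images: Sequence, consistency_model, liveness_model,
                fuse: Callable[[float, float], bool]) -> bool:
    """End-to-end flow of steps S102-S106.

    `images` is the biometric image combination: one image per camera,
    captured in a single shot of the multi-view camera (step S102).
    Returns True when the living body detection is at risk of attack.
    """
    first_result = consistency_model.predict(images)   # step S104, first model
    second_result = liveness_model.predict(images)     # step S104, second model
    return fuse(first_result, second_result)           # step S106
```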
To implement consistency detection on the plurality of images included in a biometric image combination, in one or more embodiments of this specification, as shown in fig. 3, the method further includes, before step S102:
s100-2, acquiring a combination of biological characteristic images to be trained, which are acquired by a multi-view camera;
s100-4, performing second preprocessing on the combination of the biological feature images to be trained to obtain a sample set to be trained;
and S100-6, training a first detection model based on the sample set.
To avoid degrading the detection performance of the trained first detection model due to differences among the images captured by different cameras, in one or more embodiments of this specification, before step S100-4 the method further includes:
s100-3, calibrating a plurality of cameras in the multi-view camera to obtain a conversion matrix;
specifically, one camera is randomly selected from a plurality of cameras included in the multi-camera as a reference camera, and the cameras except the reference camera are used as the cameras to be calibrated; respectively solving an internal reference matrix and an external reference matrix between a reference camera and each camera to be calibrated by using a black-and-white grid calibration board; converting two images acquired by a corresponding reference camera and a camera to be calibrated at a single time from a world coordinate system to a camera coordinate system according to the external reference matrix; converting the two images from a camera coordinate system to a pixel coordinate system according to the corresponding internal reference matrix; determining a first coordinate of the same pixel point in the two images in a world coordinate system and a second coordinate in a pixel coordinate system; and determining a conversion matrix for converting the first coordinate into the second coordinate, taking the determined conversion matrix as a conversion matrix between the reference camera and the corresponding camera to be calibrated, and taking each obtained conversion matrix as a conversion matrix of the multi-view camera. The method comprises the steps of solving an internal reference matrix and an external reference matrix between a reference camera and a camera to be calibrated, and determining a conversion matrix between the two cameras according to the internal reference matrix and the external reference matrix, wherein the reference can be made to the calibration process of the existing binocular camera, and further detailed description is not given to the embodiment of the specification.
Further, after the reference camera is selected, the method further includes: recording camera information of a reference camera; and after obtaining the conversion matrix of the multi-view camera, the method further comprises the following steps: saving the conversion matrix; when the consistency detection is carried out on the biological characteristic image combination of the user to be detected, the reference image in the biological characteristic image combination of the user to be detected is determined according to the recorded camera information of the reference camera, and the spatial alignment processing is carried out on the image to be aligned in the biological characteristic image combination of the user to be detected according to the stored conversion matrix.
After the conversion matrix of the multi-view camera is obtained, the second preprocessing can be performed on the images in the combination of biometric images to be trained according to the conversion matrix, so that differences between the images do not degrade the detection performance of the trained first detection model. Specifically, as shown in fig. 4, step S100-4 includes:
s100-4-2, according to the transformation matrix, performing spatial alignment processing on images in the biological feature image combination to be trained;
specifically, for each combination of biological characteristic images to be trained, determining the one-to-one correspondence relationship between a plurality of images and a plurality of cameras; taking an image corresponding to the reference camera in the plurality of images as a reference image, and taking an image except the reference image in the plurality of images as an image to be aligned; and according to the conversion matrix between the camera corresponding to each image to be aligned and the reference camera, carrying out space alignment operation on the corresponding image to be aligned so as to align the corresponding image to be aligned with the reference image space.
Optionally, the image name of each image includes a camera identifier of a corresponding camera; correspondingly, determining the one-to-one correspondence relationship between the plurality of images and the plurality of cameras includes: determining a one-to-one corresponding relation between each image and a camera according to the camera identification included in the image name of each image;
or a plurality of cameras included in the multi-view camera establish transmission channels with the risk detection device in advance respectively, and each camera sends the shot image to the risk detection device through the corresponding transmission channel; correspondingly, determining the one-to-one correspondence relationship between the plurality of images and the plurality of cameras includes: acquiring a determined camera mark from the corresponding relation between the channel mark and the camera mark according to the channel mark of the transmission channel for receiving each image; and determining the camera corresponding to the acquired camera identification as the camera corresponding to the corresponding image.
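The spatial alignment operation itself can be sketched as follows, assuming the per-camera conversion matrices from calibration are available and that camera identifiers have already been resolved by one of the two correspondence schemes above. The dictionary-based interface is an illustrative assumption.

```python
import cv2
import numpy as np
from typing import Dict

def align_to_reference(images: Dict[str, np.ndarray],
                       matrices: Dict[str, np.ndarray],
                       ref_id: str) -> Dict[str, np.ndarray]:
    """Warp every image to be aligned into the reference image's pixel
    coordinate system using the conversion matrix of its camera."""
    ref = images[ref_id]
    height, width = ref.shape[:2]
    aligned = {ref_id: ref}  # the reference image is kept unchanged
    for cam_id, img in images.items():
        if cam_id != ref_id:
            aligned[cam_id] = cv2.warpPerspective(img, matrices[cam_id],
                                                  (width, height))
    return aligned
```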
S100-4-4, combining the biological characteristic images after the spatial alignment processing as samples, and dividing the samples into positive samples and negative samples; wherein, the images in the positive sample are consistent, and the images in the negative sample are inconsistent;
generally, all images in the combination of the biological characteristic images after the spatial alignment processing are consistent, but when the multi-view camera collects the combination of the biological characteristic images, the phenomena of shaking, partial shielding and the like due to the action of external force exist, so that all images in the combination of the biological characteristic images after the spatial alignment processing may have slight differences; based on this, dividing the samples into positive samples and negative samples in step S100-4-4 includes: determining whether a plurality of images included in each sample are consistent, and if so, determining the corresponding sample as a positive sample; and if the samples are not consistent, determining the corresponding sample as a negative sample. Furthermore, when the multi-view camera collects the combination of the biological characteristic images, the phenomena of shaking, partial shielding and the like due to the action of external force belong to accidental phenomena, so that the number of negative samples is possibly insufficient for model training; in this regard, the dividing the samples into the positive samples and the negative samples in step S100-4-4 may further include: carrying out cross combination on different samples to obtain a negative sample; the cross-combined negative sample comprises a plurality of images which are obtained by shooting at different times and correspond to the plurality of cameras one by one.
For example, suppose the multi-view camera is a four-view camera whose cameras are denoted camera 1 to camera 4, the image shot by each camera being correspondingly denoted image 1 to image 4. Image 2 of sample 1 and image 2 of sample 2 can be exchanged to obtain a new sample 1 and a new sample 2, both determined to be negative samples; or image 2 of sample 1 can be replaced with image 2 of sample 2 and image 3 of sample 1 with image 3 of sample 4, yielding a new sample 1 that is determined to be a negative sample. The cross-combination mode can be set as needed in practical applications.
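A sketch of the cross-combination just illustrated with the four-view example; the single-swap strategy and the dictionary sample layout are assumptions of this sketch, and other combination modes can be configured as noted above.

```python
import random
from typing import Dict, List

import numpy as np

def cross_combine(samples: List[Dict[str, np.ndarray]],
                  num_negatives: int) -> List[Dict[str, np.ndarray]]:
    """Build negative (inconsistent) samples by mixing same-camera images
    across samples shot at different times."""
    negatives = []
    cam_ids = list(samples[0].keys())
    while len(negatives) < num_negatives:
        sample_a, sample_b = random.sample(samples, 2)  # two different shots
        swapped_cam = random.choice(cam_ids)
        negative = dict(sample_a)
        negative[swapped_cam] = sample_b[swapped_cam]   # one view replaced
        negatives.append(negative)
    return negatives
```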
And S100-4-6, determining the positive sample and the negative sample as a sample set to be trained.
And carrying out second preprocessing on the combination of the biological characteristic images to be trained, and dividing the combination into a positive sample and a negative sample so as to train the first detection model based on a sample set comprising the positive sample and the negative sample. Specifically, as shown in fig. 5, step S100-6 includes:
s100-6-2, dividing a sample set into a training set and a testing set; wherein the training set and the test set comprise positive samples and negative samples in the same proportion;
specifically, according to a preset proportion of positive samples to negative samples, randomly selecting a first number of positive samples and a second number of negative samples from the positive samples included in the sample set, and determining the selected samples as a training set; randomly selecting a third number of positive samples and a fourth number of negative samples from the positive samples included in the sample set, and determining the selected samples as a test set; wherein, when the first number is the same as the third number, the second number is the same as the fourth number; when the first number is different from the third number, the second number is different from the fourth number.
S100-6-4, performing third preprocessing on the training set and the test set to obtain a target training set and a target test set;
specifically, determining a one-to-one correspondence relationship between each image in a training set and a camera; acquiring images corresponding to the cameras from the training set according to the determined corresponding relation to obtain corresponding training subsets; determining conversion parameters of corresponding cameras according to the images included in each training subset; and according to the conversion parameters, carrying out preset conversion processing on images corresponding to corresponding cameras in the training set and the test set to obtain a target training set and a target test set.
The pixel values of the images in the training set and the test set span a large, uncontrolled range, which is unfavorable for training. Therefore, in the embodiments of this specification, the conversion parameters of each camera are determined from the images in its training subset, and the images corresponding to that camera in the training set and the test set undergo the preset conversion processing according to those parameters, so that the pixels of each image are mapped into a fixed interval, which facilitates training. The conversion parameters and the preset conversion processing can be chosen as needed in practical applications. For example, the conversion parameters include the pixel mean and the pixel variance, and the preset conversion processing subtracts the pixel mean from the value of each pixel of an image and divides the difference by the pixel variance; the image formed by the quotients is the target image, and a target training set and a target test set are thereby obtained.
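Under the example conversion parameters just given (pixel mean and pixel variance), the per-camera fitting and the preset conversion can be sketched as follows. Note the embodiment literally divides by the variance; dividing by the standard deviation is the more common normalization in practice, and either choice fits the scheme.

```python
import numpy as np
from typing import List, Tuple

def fit_conversion_params(training_subset: List[np.ndarray]) -> Tuple[float, float]:
    """Determine one camera's conversion parameters from the images of its
    training subset: here, the pixel mean and the pixel variance."""
    pixels = np.concatenate([img.astype(np.float64).ravel()
                             for img in training_subset])
    return float(pixels.mean()), float(pixels.var())

def preset_conversion(img: np.ndarray, mean: float, variance: float) -> np.ndarray:
    """Subtract the pixel mean from every pixel and divide the result by the
    pixel variance; the image formed by the quotients is the target image."""
    return (img.astype(np.float64) - mean) / variance
```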
Further, after obtaining the conversion parameters of each camera, the method further includes: and storing the obtained conversion parameters so as to perform first preprocessing on the images in the biological characteristic image combination according to the stored conversion parameters when the consistency detection is performed on the biological characteristic image combination of the user to be detected.
And S100-6-6, performing training operation based on the target training set and the target testing set to obtain a first detection model.
Specifically, each image included in the target training set is input to a twin network for two-classification training to obtain an initial detection model; inputting the target test set into the obtained initial detection model to obtain a detection result, and determining the initial detection model as a first detection model if the detection result meets a preset condition; and if the detection result does not meet the preset condition, model training is carried out again based on the target training set to obtain a new initial detection model, and the new initial detection model is detected according to the test set until the first detection model is obtained. The detection result represents a probability that each image in the biological feature image combination is consistent, and the preset condition is that the probability is greater than a preset probability.
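A minimal sketch of the two-class twin-network training for the binocular case, assuming PyTorch; the backbone depth, feature width, optimizer, and learning rate are all illustrative assumptions rather than the patented configuration.

```python
import torch
import torch.nn as nn

class TwinConsistencyNet(nn.Module):
    """Both images pass through the same weight-shared encoder; the
    concatenated features feed a two-class (consistent / inconsistent) head."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head = nn.Linear(64, 2)  # 32 features per branch, concatenated

    def forward(self, img_a: torch.Tensor, img_b: torch.Tensor) -> torch.Tensor:
        features = torch.cat([self.encoder(img_a), self.encoder(img_b)], dim=1)
        return self.head(features)

model = TwinConsistencyNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_step(img_a, img_b, labels):
    """One step of the two-classification training; labels are 1 for
    consistent (positive) samples and 0 for inconsistent (negative) ones."""
    optimizer.zero_grad()
    loss = criterion(model(img_a, img_b), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```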
The first detection model is trained to perform consistency detection on each image included in the biological characteristic image combination when the biological characteristic image combination of the user to be detected is acquired, so that whether the living body detection has the risk of being attacked or not is determined by combining the living body detection result.
As described above, to avoid erroneous detection results caused by differences in the shooting angles of the cameras of the multi-view camera, in one or more embodiments of this specification the plurality of images included in the biometric image combination of the user to be detected are first preprocessed before consistency detection. Specifically, in step S104, performing consistency detection on the plurality of images through the pre-trained first detection model includes:
step A2, performing first preprocessing on a plurality of images to obtain processed images;
specifically, as shown in fig. 6, in one or more embodiments of the present disclosure, step A2 includes:
a2-2, determining the one-to-one correspondence of a plurality of images and a plurality of cameras in a multi-view camera;
the implementation process of step a2-2 can refer to the foregoing related description, and the repetition points are not described herein again.
A2-4, acquiring the conversion matrix of the multi-view camera and the conversion parameter corresponding to each of the plurality of cameras; the conversion matrix is obtained by calibrating the plurality of cameras before the first detection model is trained, and the conversion parameters are obtained by the third preprocessing performed on the sample set to be trained when the first detection model is trained;
specifically, the multi-view camera is marked as an N-view camera, wherein N is greater than or equal to 2; and acquiring the stored N-1 conversion matrixes of the N cameras and the conversion parameters corresponding to each camera in the N cameras.
A2-6, according to the transformation matrix, carrying out space alignment processing on a plurality of images;
specifically, a reference camera is determined according to the stored camera information of the reference camera; taking an image corresponding to the reference camera in the plurality of images as a reference image, and taking an image except the reference image in the plurality of images as an image to be aligned; and according to the obtained conversion matrix between the camera corresponding to each image to be aligned and the reference camera, carrying out space alignment operation on the corresponding image to be aligned so as to make the image to be aligned and the corresponding reference image space aligned.
Step A2-8, according to the conversion parameter, the corresponding image after the space alignment processing is processed with the preset conversion processing;
the implementation process of step a2-8 can refer to the foregoing related description, and the repetition points are not described herein again.
Step A2-10, determining the image after the preset conversion processing as the processed image.
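Putting steps A2-2 to A2-10 together, and reusing the `align_to_reference` and `preset_conversion` helpers sketched earlier (an assumption of this illustration), the first preprocessing at detection time can be sketched as:

```python
from typing import Dict, Tuple
import numpy as np

def first_preprocessing(images: Dict[str, np.ndarray],
                        matrices: Dict[str, np.ndarray],
                        params: Dict[str, Tuple[float, float]],
                        ref_id: str) -> Dict[str, np.ndarray]:
    """First preprocessing at detection time: spatially align the images to
    the reference camera (A2-6), then apply the stored per-camera preset
    conversion (A2-8); the result is the processed image set (A2-10)."""
    aligned = align_to_reference(images, matrices, ref_id)
    return {cam_id: preset_conversion(img, *params[cam_id])
            for cam_id, img in aligned.items()}
```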
Step A4, inputting the processed images to the first detection model to perform consistency detection on the processed images based on the first detection model.
The consistency detection based on the first detection model has better detection capability on the live body attack of the blind zone of the multi-view camera, and the live body detection based on the second detection model has better detection capability on the live body attack of the effective imaging area of the multi-view camera; therefore, in the embodiment of the present specification, it is determined whether the living body test of the user to be tested is at risk of being attacked or not according to the first test result obtained based on the first test model and the second test result obtained based on the second test model. Optionally, in one or more embodiments of the present specification, step S106 includes:
according to a preset weighting coefficient, performing weighting calculation on the first detection result and the second detection result to obtain a calculation result;
if the calculation result is larger than a preset first threshold value, determining that the living body detection of the user to be detected has no attacked risk;
and if the calculation result is not greater than the preset first threshold, determining that the living body detection of the user to be detected has the attacked risk.
The weighting coefficient and the first threshold value can be set automatically according to needs in practical application; therefore, the first detection result and the second detection result are subjected to weighted fusion to determine whether the living body detection of the user to be detected has the risk of being attacked or not, and the safety is improved.
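The weighted fusion just described, as a sketch; it is one concrete choice for the `fuse` callable of the earlier pipeline sketch, and the default weights and threshold shown are assumptions, being configurable in practice as noted above.

```python
def fuse_weighted(first_result: float, second_result: float,
                  w1: float = 0.5, w2: float = 0.5,
                  first_threshold: float = 0.8) -> bool:
    """True (at risk of attack) when the weighted calculation result does
    not exceed the preset first threshold."""
    return w1 * first_result + w2 * second_result <= first_threshold
```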
Alternatively, in one or more embodiments of the present specification, step S106 includes:
and if the first detection result is greater than a preset second threshold value and the second detection result is greater than a preset third threshold value, determining that the living body detection of the user to be detected has no attacked risk.
Specifically, whether the first detection result is greater than a preset second threshold value or not is determined, if not, it is determined that a plurality of images included in the combination of the biological feature images of the user to be detected are inconsistent, and the living body detection of the user to be detected has a risk of being attacked; if the biological characteristic image combination of the user to be detected is larger than the second threshold, determining that a plurality of images included in the biological characteristic image combination of the user to be detected are consistent, determining whether a second detection result is larger than a preset third threshold, if so, determining that the living body detection of the user to be detected does not have the risk of being attacked, and if not, determining that the living body detection of the user to be detected has the risk of being attacked. Or determining whether the second detection result is greater than a preset third threshold, and if not, determining that the living body detection of the user to be detected has the attacked risk; if the first detection result is greater than the third threshold, determining whether the first detection result is greater than a preset second threshold, and if the first detection result is not greater than the second threshold, determining that a plurality of images included in the biological characteristic image combination of the user to be detected are inconsistent, and the living body detection of the user to be detected has the risk of being attacked; and if the value is larger than the second threshold value, determining that the living body detection of the user to be detected has no attacked risk.
Therefore, whether the living body detection of the user to be detected has the attacked risk or not is determined according to the first detection result and the second detection result in sequence, and the safety is improved.
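The sequential variant as a sketch; as described above, checking the two results in either order yields the same decision, so only one order is shown, and the threshold values are assumptions.

```python
def fuse_cascade(first_result: float, second_result: float,
                 second_threshold: float = 0.5,
                 third_threshold: float = 0.5) -> bool:
    """True (at risk) unless the consistency result exceeds the second
    threshold and the liveness result exceeds the third threshold."""
    if first_result <= second_threshold:      # images in the combination inconsistent
        return True
    return second_result <= third_threshold  # liveness check failed
```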
On the basis of any of the above embodiments, optionally, the multi-view camera is a binocular camera.
In one or more embodiments of the present specification, a combination of biometric images of a user to be detected is detected based on a first detection model and a second detection model trained in advance; therefore, the consistency detection and the living body detection are combined to determine whether the living body detection of the user to be detected has the risk of being attacked or not; the method and the device have the advantages that the living attack of the effective imaging area of the multi-view camera is detected, the living attack of the blind area of the multi-view camera is also detected, the living attack problem of the blind area of the multi-view camera is solved, and the safety is greatly improved.
Based on the same technical concept, the risk detection method described with reference to fig. 2 to 6 further provides a risk detection device according to one or more embodiments of the present disclosure. Fig. 7 is a schematic diagram illustrating a module composition of a risk detection apparatus according to one or more embodiments of the present disclosure, where the apparatus is configured to perform the risk detection method described in fig. 2 to 6, and as shown in fig. 7, the apparatus includes:
an obtaining module 201, configured to obtain a biometric image combination of a user to be detected, where the biometric image combination includes: a plurality of images obtained by shooting the appointed body part of the user to be detected by the multi-camera at a single time;
a first training module 202, configured to perform consistency detection on the multiple images through a pre-trained first detection model to obtain a first detection result; performing living body detection on the plurality of images through a pre-trained second detection model to obtain a second detection result;
a determining module 203, which determines whether there is an attack risk according to the first detection result and the second detection result.
The risk detection device provided in one or more embodiments of the present specification detects a combination of biometric images of a user to be detected based on a first detection model and a second detection model trained in advance; therefore, the consistency detection and the living body detection are combined to determine whether the living body detection of the user to be detected has the risk of being attacked or not; the method and the device have the advantages that the living attack of the effective imaging area of the multi-view camera is detected, the living attack of the blind area of the multi-view camera is also detected, the living attack problem of the blind area of the multi-view camera is solved, and the safety is greatly improved.
Optionally, the first training module 202 is configured to perform first preprocessing on the plurality of images to obtain processed images; and
inputting the processed image to the first detection model to perform consistency detection on the processed image based on the first detection model.
Optionally, the first training module 202 determines a one-to-one correspondence relationship between the plurality of images and the plurality of cameras in the multi-view camera; and
acquiring the conversion matrix of the multi-view camera and the conversion parameter corresponding to each camera in the plurality of cameras; the conversion matrix is obtained by calibrating the plurality of cameras before the first detection model is trained, and the conversion parameters are obtained by the third preprocessing performed on the sample set to be trained when the first detection model is trained;
according to the conversion matrix, carrying out spatial alignment processing on the plurality of images;
according to the conversion parameters, carrying out preset conversion processing on the corresponding image after the space alignment processing;
and determining the image after the preset conversion processing as a processed image.
Optionally, the determining module 203 performs weighted calculation on the first detection result and the second detection result according to a preset weighting coefficient to obtain a calculation result;
if the calculation result is larger than a preset first threshold value, determining that the living body detection of the user to be detected has no attacked risk;
and if the calculation result is not greater than a preset first threshold value, determining that the living body detection of the user to be detected has the attacked risk.
Optionally, the determining module 203 determines that the living body detection of the user to be detected is not at risk of attack if it determines that the first detection result is greater than a preset second threshold and the second detection result is greater than a preset third threshold.
Optionally, the apparatus further comprises: a second training module;
the second training module is used for acquiring the combination of biometric images to be trained collected by the multi-view camera; and
performing second preprocessing on the combination of the biological characteristic images to be trained to obtain a sample set to be trained;
training the first detection model based on the sample set.
Optionally, the apparatus further comprises: a calibration module;
the calibration module is used for calibrating a plurality of cameras in the multi-view camera to obtain a conversion matrix;
the second training module is used for performing spatial alignment processing on the images in the combination of biometric images to be trained according to the conversion matrix; and
combining the biological characteristic images after the spatial alignment processing to be used as samples, and dividing the samples into positive samples and negative samples; wherein the images in the positive sample are consistent, and the images in the negative sample are inconsistent;
determining the positive and negative samples as a set of samples to be trained.
Optionally, the second training module divides the sample set into a training set and a test set, wherein the training set and the test set comprise the positive samples and the negative samples in the same proportion; and
performing third preprocessing on the training set and the test set to obtain a target training set and a target test set;
and carrying out training operation based on the target training set and the target testing set to obtain a first detection model.
Optionally, the second training module determines a one-to-one correspondence relationship between each image in the training set and the cameras; and
according to the determined corresponding relation, obtaining images corresponding to the cameras from the training set to obtain corresponding training subsets;
determining conversion parameters of corresponding cameras according to the images included in each training subset;
and according to the conversion parameters, carrying out preset conversion processing on images corresponding to corresponding cameras in the training set and the test set to obtain a target training set and a target test set.
The risk detection device provided in one or more embodiments of the present specification detects a combination of biometric images of a user to be detected based on a first detection model and a second detection model trained in advance; therefore, the consistency detection and the living body detection are combined to determine whether the living body detection of the user to be detected has the risk of being attacked or not; the method and the device have the advantages that the living attack of the effective imaging area of the multi-view camera is detected, the living attack of the blind area of the multi-view camera is also detected, the living attack problem of the blind area of the multi-view camera is solved, and the safety is greatly improved.
It should be noted that the embodiment of the risk detection apparatus in this specification and the embodiment of the risk detection method in this specification are based on the same inventive concept, and therefore, for specific implementation of this embodiment, reference may be made to implementation of the corresponding risk detection method, and repeated details are not described again.
Further, based on the same technical concept, the risk detection method described in correspondence to fig. 2 to 6 above further provides a risk detection device according to one or more embodiments of the present specification, where the risk detection device is configured to perform the risk detection method described above, and fig. 8 is a schematic structural diagram of the risk detection device according to one or more embodiments of the present specification.
As shown in fig. 8, a risk detection device may vary considerably in configuration or performance and may include one or more processors 301 and a memory 302, where the memory 302 may store one or more applications or data. The memory 302 may be transient or persistent storage. An application program stored in the memory 302 may include one or more modules (not shown), each of which may include a series of computer-executable instructions for the risk detection device. Further, the processor 301 may be configured to communicate with the memory 302 to execute the series of computer-executable instructions in the memory 302 on the risk detection device. The risk detection device may also include one or more power supplies 303, one or more wired or wireless network interfaces 304, one or more input-output interfaces 305, one or more keyboards 306, and the like.
In one particular embodiment, a risk detection device includes a memory, and one or more programs, wherein the one or more programs are stored in the memory, and the one or more programs may include one or more modules, and each module may include a series of computer-executable instructions for the risk detection device, and the one or more programs configured to be executed by one or more processors include computer-executable instructions for:
acquiring a biological characteristic image combination of a user to be detected, wherein the biological characteristic image combination comprises: a plurality of images obtained by shooting the appointed body part of the user to be detected by the multi-camera at a single time;
carrying out consistency detection on the plurality of images through a pre-trained first detection model to obtain a first detection result; performing living body detection on the plurality of images through a pre-trained second detection model to obtain a second detection result;
and determining whether the living body detection of the user to be detected has the attack risk or not according to the first detection result and the second detection result.
In one or more embodiments of this specification, the biometric image combination of the user to be detected is detected based on a pre-trained first detection model and a pre-trained second detection model, so that consistency detection and living body detection are combined to determine whether the living body detection of the user to be detected is at risk of attack. Liveness attacks in both the effective imaging area and the blind area of the multi-view camera are detected, solving the blind-area liveness attack problem and greatly improving security.
Optionally, when the computer-executable instructions are executed, performing consistency detection on the plurality of images through the pre-trained first detection model includes:
performing first preprocessing on the plurality of images to obtain processed images;
inputting the processed image to the first detection model to perform consistency detection on the processed image based on the first detection model.
Optionally, when the computer-executable instructions are executed, performing the first preprocessing on the plurality of images to obtain processed images includes:
determining a one-to-one correspondence relationship between the plurality of images and a plurality of cameras in the multi-view camera;
acquiring the conversion matrix of the multi-view camera and the conversion parameter corresponding to each camera in the plurality of cameras; the conversion matrix is obtained by calibrating the plurality of cameras before the first detection model is trained, and the conversion parameters are obtained by the third preprocessing performed on the sample set to be trained when the first detection model is trained;
according to the conversion matrix, carrying out spatial alignment processing on the plurality of images;
according to the conversion parameters, carrying out preset conversion processing on the corresponding image after the space alignment processing;
and determining the image after the preset conversion processing as a processed image.
Optionally, when the computer-executable instructions are executed, determining, according to the first detection result and the second detection result, whether the living body detection of the user to be detected is at risk of attack includes:
according to a preset weighting coefficient, carrying out weighting calculation on the first detection result and the second detection result to obtain a calculation result;
if the calculation result is larger than a preset first threshold value, determining that the living body detection of the user to be detected has no attacked risk;
and if the calculation result is not greater than a preset first threshold value, determining that the living body detection of the user to be detected has the attacked risk.
Optionally, when the computer-executable instructions are executed, determining, according to the first detection result and the second detection result, whether the living body detection of the user to be detected is at risk of attack includes:
and if the first detection result is greater than a preset second threshold value and the second detection result is greater than a preset third threshold value, determining that the living body detection of the user to be detected has no attacked risk.
Optionally, when the computer-executable instructions are executed, before the biometric image combination of the user to be detected is acquired, the following is further performed:
acquiring a combination of the biological characteristic images to be trained, which are acquired by the multi-view camera;
performing second preprocessing on the combination of the biological characteristic images to be trained to obtain a sample set to be trained;
training the first detection model based on the sample set.
Optionally, when the computer-executable instructions are executed, before the second preprocessing is performed on the combination of biometric images to be trained, the following is further performed:
calibrating a plurality of cameras in the multi-view camera to obtain a conversion matrix;
Performing the second preprocessing on the combination of biometric images to be trained to obtain a sample set to be trained includes:
according to the conversion matrix, carrying out spatial alignment processing on the images in the biological feature image combination to be trained;
combining the biological characteristic images after the spatial alignment processing to be used as samples, and dividing the samples into positive samples and negative samples; wherein the images in the positive sample are consistent, and the images in the negative sample are inconsistent;
determining the positive and negative samples as a set of samples to be trained.
Optionally, when the computer-executable instructions are executed, training the first detection model based on the sample set includes:
dividing the sample set into a training set and a test set; wherein the training set and the test set comprise the same proportion of the positive samples to the negative samples;
performing third preprocessing on the training set and the test set to obtain a target training set and a target test set;
and carrying out training operation based on the target training set and the target testing set to obtain a first detection model.
Optionally, when the computer-executable instructions are executed, performing the third preprocessing on the training set and the test set to obtain a target training set and a target test set includes:
determining a one-to-one correspondence relationship between each image in the training set and the cameras;
according to the determined corresponding relation, obtaining images corresponding to the cameras from the training set to obtain corresponding training subsets;
determining conversion parameters of corresponding cameras according to the images included in each training subset;
and according to the conversion parameters, carrying out preset conversion processing on images corresponding to corresponding cameras in the training set and the test set to obtain a target training set and a target test set.
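The specification does not fix the form of the conversion parameters or of the preset conversion processing; a common choice, taken here purely as an assumption, is per-camera, per-channel mean/standard-deviation normalization fitted on each camera's training subset:

```python
import numpy as np

def fit_conversion_params(training_subset):
    """Derive conversion parameters for one camera from its training
    subset of shape-(H, W, C) images."""
    stack = np.stack(training_subset)          # (N, H, W, C)
    mean = stack.mean(axis=(0, 1, 2))          # per-channel mean
    std = stack.std(axis=(0, 1, 2)) + 1e-8     # avoid division by zero
    return mean, std

def apply_conversion(image, params):
    """Apply one camera's conversion parameters to one of its images."""
    mean, std = params
    return (image - mean) / std
```

Fitting the parameters on the training subset only, and then converting the corresponding images in both the training set and the test set, mirrors the order of steps above and avoids leaking test-set statistics into training.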
The risk detection device provided by one or more embodiments of the present specification detects the biological characteristic image combination of a user to be detected based on a pre-trained first detection model and a pre-trained second detection model, thereby combining consistency detection with living body detection to determine whether the living body detection of the user to be detected is at risk of being attacked. In this way, living body attacks in the effective imaging area of the multi-view camera are detected, as are living body attacks in its blind area; this solves the problem of living body attacks mounted in the blind area of the multi-view camera and greatly improves security.
It should be noted that the device embodiment in this specification and the risk detection method embodiment in this specification are based on the same inventive concept; for the specific implementation of this embodiment, reference may be made to the implementation of the corresponding risk detection method, and repeated details are not described again.
Further, corresponding to the risk detection methods shown in fig. 2 to fig. 6 and based on the same technical concept, one or more embodiments of the present specification further provide a storage medium for storing computer-executable instructions. In a specific embodiment, the storage medium may be a USB flash drive, an optical disc, a hard disk, or the like, and the computer-executable instructions stored on it, when executed by a processor, implement the following process:
acquiring a biological characteristic image combination of a user to be detected, wherein the biological characteristic image combination comprises a plurality of images obtained by a multi-view camera shooting a designated body part of the user to be detected in a single shot;
performing consistency detection on the plurality of images through a pre-trained first detection model to obtain a first detection result, and performing living body detection on the plurality of images through a pre-trained second detection model to obtain a second detection result;
and determining, according to the first detection result and the second detection result, whether the living body detection of the user to be detected is at risk of being attacked.
In one or more embodiments of the present specification, the biological characteristic image combination of the user to be detected is detected based on the pre-trained first and second detection models, so that consistency detection and living body detection are combined to determine whether the living body detection of the user to be detected is at risk of being attacked; living body attacks in the effective imaging area of the multi-view camera are detected, as are attacks in its blind area, which solves the blind-area attack problem and greatly improves security.
Optionally, the computer-executable instructions stored on the storage medium, when executed by the processor, perform the consistency detection on the plurality of images through the pre-trained first detection model as follows:
performing first preprocessing on the plurality of images to obtain processed images;
and inputting the processed images into the first detection model, so that consistency detection is performed on the processed images based on the first detection model.
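The specification does not disclose the architecture of the first detection model; purely as an illustrative assumption, it can be pictured as a small binary classifier over the channel-stacked, preprocessed views:

```python
import torch
import torch.nn as nn

class ConsistencyNet(nn.Module):
    """Illustrative stand-in for the first detection model: a small CNN
    that maps channel-stacked multi-view images to a consistency score.
    The architecture is an assumption, not taken from the specification."""
    def __init__(self, in_channels: int = 6):   # 6 = two stacked BGR views
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1))
        self.head = nn.Linear(32, 1)

    def forward(self, x):
        z = self.features(x).flatten(1)
        return torch.sigmoid(self.head(z))      # consistency score in (0, 1)

# Usage: `processed` is a (1, 6, H, W) tensor of aligned, converted views.
# first_result = ConsistencyNet()(processed).item()
```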
Optionally, the computer-executable instructions stored on the storage medium, when executed by the processor, perform the first preprocessing on the plurality of images to obtain processed images as follows:
determining a one-to-one correspondence between the plurality of images and the plurality of cameras in the multi-view camera;
acquiring the conversion matrix of the multi-view camera and the conversion parameter corresponding to each of the plurality of cameras, wherein the conversion matrix is obtained by calibrating the plurality of cameras before the first detection model is trained, and the conversion parameters are obtained by performing the third preprocessing on the sample set to be trained when the first detection model is trained;
performing spatial alignment processing on the plurality of images according to the conversion matrix;
performing preset conversion processing on the corresponding spatially aligned images according to the conversion parameters;
and determining the images after the preset conversion processing as the processed images.
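Putting these steps together, the inference-time first preprocessing might look like the following sketch, which reuses the homography-style conversion matrices and the mean/std conversion parameters assumed in the earlier examples:

```python
import cv2
import numpy as np

def first_preprocessing(images, transform_matrices, conversion_params):
    """Spatially align every non-reference view with its calibration
    matrix, then apply each camera's conversion parameters; returns the
    channel-stacked processed image fed to the first detection model."""
    h, w = images[0].shape[:2]
    processed = []
    for i, image in enumerate(images):
        if i > 0:   # the reference view (index 0) needs no warping
            image = cv2.warpPerspective(
                image, transform_matrices[i - 1], (w, h))
        mean, std = conversion_params[i]
        processed.append((image - mean) / std)
    return np.concatenate(processed, axis=-1)
```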
Optionally, when executed by the processor, the determining, according to the first detection result and the second detection result, whether the living body detection of the user to be detected is at risk of being attacked includes:
performing a weighted calculation on the first detection result and the second detection result according to preset weighting coefficients to obtain a calculation result;
if the calculation result is greater than a preset first threshold, determining that the living body detection of the user to be detected has no risk of being attacked;
and if the calculation result is not greater than the preset first threshold, determining that the living body detection of the user to be detected has the risk of being attacked.
Optionally, when executed by the processor, the determining, according to the first detection result and the second detection result, whether the living body detection of the user to be detected is at risk of being attacked includes:
if the first detection result is greater than a preset second threshold and the second detection result is greater than a preset third threshold, determining that the living body detection of the user to be detected has no risk of being attacked.
Optionally, the computer-executable instructions stored on the storage medium, when executed by the processor, further perform the following before the acquiring of the biological characteristic image combination of the user to be detected:
acquiring combinations of biological characteristic images to be trained, collected by the multi-view camera;
performing second preprocessing on the combinations of biological characteristic images to be trained to obtain a sample set to be trained;
training the first detection model based on the sample set.
Optionally, the computer-executable instructions stored on the storage medium, when executed by the processor, further perform the following before the second preprocessing is applied to the combinations of biological characteristic images to be trained:
calibrating the plurality of cameras in the multi-view camera to obtain a conversion matrix.
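As a sketch of this calibration step, assuming the conversion matrix is modelled as a planar homography estimated from matched calibration-target points seen by two of the cameras (a full stereo calibration, e.g. cv2.stereoCalibrate, could be substituted when scene depth varies strongly):

```python
import cv2
import numpy as np

def calibrate_camera_pair(points_other, points_reference):
    """Estimate a 3x3 conversion matrix mapping one camera's view onto
    the reference camera's view from at least four matched points
    (arrays of shape (N, 2), float32). RANSAC discards mismatches."""
    H, inlier_mask = cv2.findHomography(
        np.asarray(points_other, dtype=np.float32),
        np.asarray(points_reference, dtype=np.float32),
        cv2.RANSAC)
    return H
```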
The performing of the second preprocessing on the combinations of biological characteristic images to be trained to obtain a sample set to be trained then includes:
performing spatial alignment processing on the images in the combinations of biological characteristic images to be trained according to the conversion matrix;
taking the spatially aligned biological characteristic image combinations as samples, and dividing the samples into positive samples and negative samples, wherein the images in a positive sample are consistent with one another and the images in a negative sample are not;
determining the positive samples and the negative samples as the sample set to be trained.
Optionally, the computer-executable instructions stored on the storage medium, when executed by the processor, the training of the first detection model based on the sample set includes:
dividing the sample set into a training set and a test set, wherein the training set and the test set contain the same proportion of positive samples to negative samples;
performing third preprocessing on the training set and the test set to obtain a target training set and a target test set;
and performing a training operation based on the target training set and the target test set to obtain the first detection model.
Optionally, when executed by the processor, the performing of the third preprocessing on the training set and the test set to obtain a target training set and a target test set includes:
determining a one-to-one correspondence between the images in the training set and the cameras;
obtaining, from the training set, the images corresponding to each camera according to the determined correspondence, to obtain a training subset for each camera;
determining the conversion parameters of the corresponding camera from the images included in each training subset;
and performing preset conversion processing, according to the conversion parameters, on the images corresponding to the respective cameras in the training set and the test set, to obtain the target training set and the target test set.
One or more embodiments of the present specification provide a storage medium storing computer-executable instructions that, when executed by a processor, detect the biological characteristic image combination of a user to be detected based on a pre-trained first detection model and a pre-trained second detection model, thereby combining consistency detection with living body detection to determine whether the living body detection of the user to be detected is at risk of being attacked; living body attacks in the effective imaging area of the multi-view camera are detected, as are attacks in its blind area, which solves the blind-area attack problem and greatly improves security.
It should be noted that the storage medium embodiment in this specification and the risk detection method embodiment in this specification are based on the same inventive concept; for the specific implementation of this embodiment, reference may be made to the implementation of the corresponding risk detection method, and repeated details are not described again.
The foregoing description has been directed to specific embodiments of this specification. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or advantageous.
In the 1990s, an improvement in a technology could be clearly distinguished as an improvement in hardware (for example, an improvement in a circuit structure such as a diode, a transistor, or a switch) or an improvement in software (an improvement in a method flow). With the development of technology, however, many of today's improvements in method flows can be regarded as direct improvements in hardware circuit structures. Designers almost always obtain the corresponding hardware circuit structure by programming an improved method flow into a hardware circuit; thus, it cannot be said that an improvement in a method flow cannot be realized by a hardware entity module. For example, a Programmable Logic Device (PLD), such as a Field Programmable Gate Array (FPGA), is an integrated circuit whose logic function is determined by the user's programming of the device. A designer "integrates" a digital system onto a single PLD by programming it, without asking a chip manufacturer to design and fabricate an application-specific integrated circuit chip. Moreover, instead of manually making integrated circuit chips, this programming is nowadays mostly implemented with "logic compiler" software, which is similar to the software compiler used in program development, while the source code to be compiled must be written in a particular programming language called a Hardware Description Language (HDL). There is not just one HDL but many, such as ABEL (Advanced Boolean Expression Language), AHDL (Altera Hardware Description Language), Confluence, CUPL (Cornell University Programming Language), HDCal, JHDL (Java Hardware Description Language), Lava, Lola, MyHDL, PALASM, and RHDL (Ruby Hardware Description Language); VHDL (Very-High-Speed Integrated Circuit Hardware Description Language) and Verilog are the most commonly used at present. It will also be apparent to those skilled in the art that a hardware circuit implementing a logical method flow can easily be obtained merely by slightly logic-programming the method flow in one of the above hardware description languages and programming it into an integrated circuit.
The controller may be implemented in any suitable manner. For example, the controller may take the form of a microprocessor or processor together with a computer-readable medium storing computer-readable program code (e.g., software or firmware) executable by the (micro)processor, logic gates, switches, an Application-Specific Integrated Circuit (ASIC), a programmable logic controller, or an embedded microcontroller. Examples of such controllers include, but are not limited to, the following microcontrollers: ARC 625D, Atmel AT91SAM, Microchip PIC18F26K20, and Silicon Labs C8051F320; a memory controller may also be implemented as part of the control logic of a memory. Those skilled in the art will also appreciate that, besides implementing a controller purely as computer-readable program code, the method steps can be logic-programmed so that the controller achieves the same functions in the form of logic gates, switches, application-specific integrated circuits, programmable logic controllers, embedded microcontrollers, and the like. Such a controller may therefore be regarded as a hardware component, and the means included in it for implementing various functions may also be regarded as structures within the hardware component; or the means for implementing various functions may even be regarded both as software modules implementing the method and as structures within the hardware component.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. One typical implementation device is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a cellular telephone, a camera phone, a smartphone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being divided into various units by function, and are described separately. Of course, the functions of the units may be implemented in the same software and/or hardware or in multiple software and/or hardware when implementing the embodiments of the present description.
One skilled in the art will recognize that one or more embodiments of the present description may be provided as a method, system, or computer program product. Accordingly, one or more embodiments of the present description may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the description may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The description has been presented with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the description. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in the form of a computer-readable medium, such as Random Access Memory (RAM), and/or non-volatile memory, such as Read-Only Memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, Phase-change Memory (PRAM), Static Random Access Memory (SRAM), Dynamic Random Access Memory (DRAM), other types of Random Access Memory (RAM), Read-Only Memory (ROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), flash memory or other memory technologies, Compact Disc Read-Only Memory (CD-ROM), Digital Versatile Discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory computer-readable media, such as modulated data signals and carrier waves.
It should also be noted that the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or device that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of additional identical elements in the process, method, article, or device that comprises the element.
One or more embodiments of the present description may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. One or more embodiments of the specification may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the system embodiment, since it is substantially similar to the method embodiment, the description is simple, and for the relevant points, reference may be made to the partial description of the method embodiment.
The above description is merely an example of this document and is not intended to limit it. Various modifications and changes may occur to those skilled in the art. Any modification, equivalent replacement, improvement, or the like made within the spirit and principles of this document shall be included within the scope of its claims.
Claims (16)
1. A method of risk detection, comprising:
acquiring a biological characteristic image combination of a user to be detected, wherein the biological characteristic image combination comprises a plurality of images obtained by a multi-view camera shooting a designated body part of the user to be detected in a single shot;
performing consistency detection on the plurality of images through a pre-trained first detection model to obtain a first detection result, and performing living body detection on the plurality of images through a pre-trained second detection model to obtain a second detection result;
and determining, according to the first detection result and the second detection result, whether the living body detection of the user to be detected is at risk of being attacked.
2. The method of claim 1, wherein the performing of consistency detection on the plurality of images through the pre-trained first detection model comprises:
performing first preprocessing on the plurality of images to obtain processed images;
and inputting the processed images into the first detection model, so that consistency detection is performed on the processed images based on the first detection model.
3. The method of claim 2, wherein the performing of the first preprocessing on the plurality of images to obtain processed images comprises:
determining a one-to-one correspondence between the plurality of images and the plurality of cameras in the multi-view camera;
acquiring the conversion matrix of the multi-view camera and the conversion parameter corresponding to each of the plurality of cameras, wherein the conversion matrix is obtained by calibrating the plurality of cameras before the first detection model is trained, and the conversion parameters are obtained by performing third preprocessing on a sample set to be trained when the first detection model is trained;
performing spatial alignment processing on the plurality of images according to the conversion matrix;
performing preset conversion processing on the corresponding spatially aligned images according to the conversion parameters;
and determining the images after the preset conversion processing as the processed images.
4. The method of claim 1, wherein the determining, according to the first detection result and the second detection result, whether the living body detection of the user to be detected is at risk of being attacked comprises:
performing a weighted calculation on the first detection result and the second detection result according to preset weighting coefficients to obtain a calculation result;
if the calculation result is greater than a preset first threshold, determining that the living body detection of the user to be detected has no risk of being attacked;
and if the calculation result is not greater than the preset first threshold, determining that the living body detection of the user to be detected has the risk of being attacked.
5. The method of claim 1, wherein the determining, according to the first detection result and the second detection result, whether the living body detection of the user to be detected is at risk of being attacked comprises:
if the first detection result is greater than a preset second threshold and the second detection result is greater than a preset third threshold, determining that the living body detection of the user to be detected has no risk of being attacked.
6. The method of claim 1, wherein the multi-view camera is a binocular camera.
7. The method of any one of claims 1 to 6, further comprising, before the acquiring of the biological characteristic image combination of the user to be detected:
acquiring combinations of biological characteristic images to be trained, collected by the multi-view camera;
performing second preprocessing on the combinations of biological characteristic images to be trained to obtain a sample set to be trained;
training the first detection model based on the sample set.
8. The method of claim 7, further comprising, before the second preprocessing of the combinations of biological characteristic images to be trained:
calibrating the plurality of cameras in the multi-view camera to obtain a conversion matrix;
wherein the performing of the second preprocessing on the combinations of biological characteristic images to be trained to obtain a sample set to be trained comprises:
performing spatial alignment processing on the images in the combinations of biological characteristic images to be trained according to the conversion matrix;
taking the spatially aligned biological characteristic image combinations as samples, and dividing the samples into positive samples and negative samples, wherein the images in a positive sample are consistent with one another and the images in a negative sample are not;
determining the positive samples and the negative samples as the sample set to be trained.
9. The method of claim 8, wherein the training of the first detection model based on the sample set comprises:
dividing the sample set into a training set and a test set, wherein the training set and the test set contain the same proportion of positive samples to negative samples;
performing third preprocessing on the training set and the test set to obtain a target training set and a target test set;
and performing a training operation based on the target training set and the target test set to obtain the first detection model.
10. The method of claim 9, wherein the third preprocessing of the training set and the test set to obtain a target training set and a target test set comprises:
determining a one-to-one correspondence between the images in the training set and the cameras;
obtaining, from the training set, the images corresponding to each camera according to the determined correspondence, to obtain a training subset for each camera;
determining the conversion parameters of the corresponding camera from the images included in each training subset;
and performing preset conversion processing, according to the conversion parameters, on the images corresponding to the respective cameras in the training set and the test set, to obtain the target training set and the target test set.
11. A risk detection device, comprising:
an acquisition module that acquires a biological characteristic image combination of a user to be detected, wherein the biological characteristic image combination comprises a plurality of images obtained by a multi-view camera shooting a designated body part of the user to be detected in a single shot;
a first detection module that performs consistency detection on the plurality of images through a pre-trained first detection model to obtain a first detection result, and performs living body detection on the plurality of images through a pre-trained second detection model to obtain a second detection result;
and a determining module that determines, according to the first detection result and the second detection result, whether the living body detection of the user to be detected is at risk of being attacked.
12. The device of claim 11,
wherein the first detection module performs first preprocessing on the plurality of images to obtain processed images;
and inputs the processed images into the first detection model, so that consistency detection is performed on the processed images based on the first detection model.
13. The device of claim 11 or 12, further comprising a training module,
wherein the training module acquires combinations of biological characteristic images to be trained, collected by the multi-view camera;
performs second preprocessing on the combinations of biological characteristic images to be trained to obtain a sample set to be trained;
and trains the first detection model based on the sample set.
14. The device of claim 13, further comprising a calibration module,
wherein the calibration module calibrates the plurality of cameras in the multi-view camera to obtain a conversion matrix;
and the training module performs spatial alignment processing on the images in the combinations of biological characteristic images to be trained according to the conversion matrix;
takes the spatially aligned biological characteristic image combinations as samples and divides them into positive samples and negative samples, wherein the images in a positive sample are consistent with one another and the images in a negative sample are not;
and determines the positive samples and the negative samples as the sample set to be trained.
15. A risk detection apparatus, comprising:
a processor; and
a memory arranged to store computer-executable instructions that, when executed, cause the processor to:
acquire a biological characteristic image combination of a user to be detected, wherein the biological characteristic image combination comprises a plurality of images obtained by a multi-view camera shooting a designated body part of the user to be detected in a single shot;
perform consistency detection on the plurality of images through a pre-trained first detection model to obtain a first detection result, and perform living body detection on the plurality of images through a pre-trained second detection model to obtain a second detection result;
and determine, according to the first detection result and the second detection result, whether the living body detection of the user to be detected is at risk of being attacked.
16. A storage medium storing computer-executable instructions that, when executed, implement the following process:
acquiring a biological characteristic image combination of a user to be detected, wherein the biological characteristic image combination comprises a plurality of images obtained by a multi-view camera shooting a designated body part of the user to be detected in a single shot;
performing consistency detection on the plurality of images through a pre-trained first detection model to obtain a first detection result, and performing living body detection on the plurality of images through a pre-trained second detection model to obtain a second detection result;
and determining, according to the first detection result and the second detection result, whether the living body detection of the user to be detected is at risk of being attacked.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911286075.XA CN111126216A (en) | 2019-12-13 | 2019-12-13 | Risk detection method, device and equipment |
PCT/CN2020/124141 WO2021114916A1 (en) | 2019-12-13 | 2020-10-27 | Risk detection method, apparatus and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911286075.XA CN111126216A (en) | 2019-12-13 | 2019-12-13 | Risk detection method, device and equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111126216A true CN111126216A (en) | 2020-05-08 |
Family
ID=70498894
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911286075.XA Pending CN111126216A (en) | 2019-12-13 | 2019-12-13 | Risk detection method, device and equipment |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN111126216A (en) |
WO (1) | WO2021114916A1 (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111539490A (en) * | 2020-06-19 | 2020-08-14 | 支付宝(杭州)信息技术有限公司 | Business model training method and device |
CN112084915A (en) * | 2020-08-31 | 2020-12-15 | 支付宝(杭州)信息技术有限公司 | Model training method, living body detection method, device and electronic equipment |
WO2021114916A1 (en) * | 2019-12-13 | 2021-06-17 | 支付宝(杭州)信息技术有限公司 | Risk detection method, apparatus and device |
CN113569708A (en) * | 2021-07-23 | 2021-10-29 | 北京百度网讯科技有限公司 | Living body recognition method, living body recognition device, electronic apparatus, and storage medium |
CN113850214A (en) * | 2021-09-29 | 2021-12-28 | 支付宝(杭州)信息技术有限公司 | Injection attack identification method and device for living body detection |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113569873B (en) * | 2021-08-19 | 2024-03-29 | 支付宝(杭州)信息技术有限公司 | Image processing method, device and equipment |
CN114650186A (en) * | 2022-04-22 | 2022-06-21 | 北京三快在线科技有限公司 | Anomaly detection method and detection device thereof |
CN115567371B (en) * | 2022-11-16 | 2023-03-10 | 支付宝(杭州)信息技术有限公司 | Abnormity detection method, device, equipment and readable storage medium |
CN117975044B (en) * | 2024-02-20 | 2024-09-10 | 蚂蚁云创数字科技(北京)有限公司 | Image processing method and device based on feature space |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050265585A1 (en) * | 2004-06-01 | 2005-12-01 | Lumidigm, Inc. | Multispectral liveness determination |
CN101110102A (en) * | 2006-07-20 | 2008-01-23 | 中国科学院自动化研究所 | Game scene and role control method based on fists of player |
CN101393599A (en) * | 2007-09-19 | 2009-03-25 | 中国科学院自动化研究所 | Game role control method based on human face expression |
CN101866427A (en) * | 2010-07-06 | 2010-10-20 | 西安电子科技大学 | Method for detecting and classifying fabric defects |
CN103077521A (en) * | 2013-01-08 | 2013-05-01 | 天津大学 | Area-of-interest extracting method used for video monitoring |
CN106372601A (en) * | 2016-08-31 | 2017-02-01 | 上海依图网络科技有限公司 | In vivo detection method based on infrared visible binocular image and device |
CN106897675A (en) * | 2017-01-24 | 2017-06-27 | 上海交通大学 | The human face in-vivo detection method that binocular vision depth characteristic is combined with appearance features |
CN108446612A (en) * | 2018-03-07 | 2018-08-24 | 腾讯科技(深圳)有限公司 | vehicle identification method, device and storage medium |
CN109600548A (en) * | 2018-11-30 | 2019-04-09 | Oppo广东移动通信有限公司 | Image processing method and device, electronic equipment, computer readable storage medium |
CN110059644A (en) * | 2019-04-23 | 2019-07-26 | 杭州智趣智能信息技术有限公司 | A kind of biopsy method based on facial image, system and associated component |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9202105B1 (en) * | 2012-01-13 | 2015-12-01 | Amazon Technologies, Inc. | Image analysis for user authentication |
CN110298230A (en) * | 2019-05-06 | 2019-10-01 | 深圳市华付信息技术有限公司 | Silent biopsy method, device, computer equipment and storage medium |
CN110363087B (en) * | 2019-06-12 | 2022-02-25 | 苏宁云计算有限公司 | Long-baseline binocular face in-vivo detection method and system |
CN111126216A (en) * | 2019-12-13 | 2020-05-08 | 支付宝(杭州)信息技术有限公司 | Risk detection method, device and equipment |
- 2019-12-13: CN CN201911286075.XA patent/CN111126216A/en active Pending
- 2020-10-27: WO PCT/CN2020/124141 patent/WO2021114916A1/en active Application Filing
Patent Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050265585A1 (en) * | 2004-06-01 | 2005-12-01 | Lumidigm, Inc. | Multispectral liveness determination |
CN101110102A (en) * | 2006-07-20 | 2008-01-23 | 中国科学院自动化研究所 | Game scene and role control method based on fists of player |
CN101393599A (en) * | 2007-09-19 | 2009-03-25 | 中国科学院自动化研究所 | Game role control method based on human face expression |
CN101866427A (en) * | 2010-07-06 | 2010-10-20 | 西安电子科技大学 | Method for detecting and classifying fabric defects |
CN103077521A (en) * | 2013-01-08 | 2013-05-01 | 天津大学 | Area-of-interest extracting method used for video monitoring |
CN106372601A (en) * | 2016-08-31 | 2017-02-01 | 上海依图网络科技有限公司 | In vivo detection method based on infrared visible binocular image and device |
CN106897675A (en) * | 2017-01-24 | 2017-06-27 | 上海交通大学 | The human face in-vivo detection method that binocular vision depth characteristic is combined with appearance features |
CN108446612A (en) * | 2018-03-07 | 2018-08-24 | 腾讯科技(深圳)有限公司 | vehicle identification method, device and storage medium |
CN109600548A (en) * | 2018-11-30 | 2019-04-09 | Oppo广东移动通信有限公司 | Image processing method and device, electronic equipment, computer readable storage medium |
CN110059644A (en) * | 2019-04-23 | 2019-07-26 | 杭州智趣智能信息技术有限公司 | A kind of biopsy method based on facial image, system and associated component |
Non-Patent Citations (2)
Title |
---|
傅德胜等 (Fu Desheng et al.): 《微机显示技术与应用》 ("Microcomputer Display Technology and Applications"), 31 December 1997, Nanjing: Southeast University Press *
田启川 (Tian Qichuan): 《虹膜识别原理及算法》 ("Principles and Algorithms of Iris Recognition"), 30 June 2010, Beijing: National Defense Industry Press *
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2021114916A1 (en) * | 2019-12-13 | 2021-06-17 | 支付宝(杭州)信息技术有限公司 | Risk detection method, apparatus and device |
CN111539490A (en) * | 2020-06-19 | 2020-08-14 | 支付宝(杭州)信息技术有限公司 | Business model training method and device |
CN111539490B (en) * | 2020-06-19 | 2020-10-16 | 支付宝(杭州)信息技术有限公司 | Business model training method and device |
CN112084915A (en) * | 2020-08-31 | 2020-12-15 | 支付宝(杭州)信息技术有限公司 | Model training method, living body detection method, device and electronic equipment |
CN113569708A (en) * | 2021-07-23 | 2021-10-29 | 北京百度网讯科技有限公司 | Living body recognition method, living body recognition device, electronic apparatus, and storage medium |
CN113850214A (en) * | 2021-09-29 | 2021-12-28 | 支付宝(杭州)信息技术有限公司 | Injection attack identification method and device for living body detection |
Also Published As
Publication number | Publication date |
---|---|
WO2021114916A1 (en) | 2021-06-17 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111126216A (en) | Risk detection method, device and equipment | |
CN113095124B (en) | Face living body detection method and device and electronic equipment | |
WO2019100724A1 (en) | Method and device for training multi-label classification model | |
KR20210047336A (en) | Image processing method and apparatus, electronic device and storage medium | |
US10397498B2 (en) | Compressive sensing capturing device and method | |
CN114424258B (en) | Attribute identification method, attribute identification device, storage medium and electronic equipment | |
Loke et al. | Indian sign language converter system using an android app | |
CN112424795B (en) | Face anti-counterfeiting method, processor chip and electronic equipment | |
US10848746B2 (en) | Apparatus including multiple cameras and image processing method | |
CN109063776B (en) | Image re-recognition network training method and device and image re-recognition method and device | |
CN111340014B (en) | Living body detection method, living body detection device, living body detection apparatus, and storage medium | |
Chaudhary et al. | Depth‐based end‐to‐end deep network for human action recognition | |
CN113505682B (en) | Living body detection method and living body detection device | |
US9323989B2 (en) | Tracking device | |
CN111797971A (en) | Method, device and electronic system for processing data by using convolutional neural network | |
CN111160251B (en) | Living body identification method and device | |
CN107479715A (en) | The method and apparatus that virtual reality interaction is realized using gesture control | |
CN115600157A (en) | Data processing method and device, storage medium and electronic equipment | |
Liu et al. | Two-stream refinement network for RGB-D saliency detection | |
US11954600B2 (en) | Image processing device, image processing method and image processing system | |
KR20220143550A (en) | Method and apparatus for generating point cloud encoder and method and apparatus for generating point cloud data, electronic device and computer storage medium | |
CN117115883A (en) | Training method of biological detection model, biological detection method and related products | |
CN115578796A (en) | Training method, device, equipment and medium for living body detection model | |
CN111291685B (en) | Training method and device for face detection model | |
CN110705575A (en) | Positioning method and device, equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20200508 |
RJ01 | Rejection of invention patent application after publication |