Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
It will be understood that, as used herein, the terms "first," "second," and the like may be used herein to describe various elements, but these elements are not limited by these terms. These terms are only used to distinguish one element from another. For example, a first client may be referred to as a second client, and similarly, a second client may be referred to as a first client, without departing from the scope of the present application. Both the first client and the second client are clients, but they are not the same client.
Fig. 1 is a schematic diagram of an internal structure of an electronic device in one embodiment. As shown in fig. 1, the electronic device includes a processor, a memory, and a network interface connected by a system bus. The processor provides calculation and control capabilities and supports the operation of the entire electronic device. The memory is used for storing data, programs and the like; at least one computer program is stored on the memory and can be executed by the processor to implement the data processing method suitable for the electronic device provided in the embodiments of the application. The memory may include a non-volatile storage medium such as a magnetic disk, an optical disk, or a Read-Only Memory (ROM), or a Random Access Memory (RAM). For example, in one embodiment, the memory includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The computer program can be executed by a processor to implement the data processing method provided in the following embodiments. The internal memory provides a cached execution environment for the operating system and the computer programs in the non-volatile storage medium. The network interface may be an Ethernet card, a wireless network card, or the like, for communicating with an external electronic device. The electronic device may be a mobile phone, a tablet computer, a personal digital assistant, a wearable device, or the like.
FIG. 2 is a flow diagram of a data processing method in one embodiment. As shown in fig. 2, the data processing method includes steps 202 to 206. Wherein:
step 202, a face recognition model stored in a first operating environment is obtained.
In particular, the electronic device may include a processor, and the processor may store, calculate, and transmit data. The processor in the electronic device may operate in different environments; for example, it may operate in a TEE (Trusted Execution Environment) or an REE (Rich Execution Environment). When the processor operates in the TEE, the security of data is higher; when it operates in the REE, the security of data is lower.
The electronic device can allocate the resources of the processor, dividing different resources among the different operating environments. For example, there are generally fewer processes with high security requirements in the electronic device and more ordinary processes, so the electronic device can allocate a small portion of the processor's resources to the higher-security operating environment and a large portion to the lower-security operating environment.
The face recognition model is an algorithm model for performing recognition processing on a face in an image, and is generally stored in file form. It can be understood that, because the algorithm for recognizing a face in an image is relatively complex, the face recognition model occupies a relatively large storage space. After the electronic device divides the processor into different operating environments, the storage space allocated to the first operating environment is larger than the storage space allocated to the second operating environment, so the electronic device can store the face recognition model in the first operating environment to ensure that the second operating environment has enough space to process data.
Step 204, initializing the face recognition model in the first operating environment, and dividing the initialized face recognition model into at least two model data packets.
Before face recognition processing is performed on an image, the face recognition model needs to be initialized. If the face recognition model were stored in the second operating environment, storage space in the second operating environment would be occupied both for storing the face recognition model and for initializing it, so that the resource consumption of the second operating environment would be too large and the efficiency of data processing would be affected.
For example, the face recognition model occupies 20M of memory, an additional 10M of memory is required for initializing the face recognition model, and if the storage and initialization are both performed in the second operating environment, a total of 30M of memory of the second operating environment is required. If the face recognition model is stored in the first operating environment, initialized in the first operating environment and then sent to the second operating environment, only 10M of memory in the second operating environment needs to be occupied, and the resource occupancy rate in the second operating environment is greatly reduced.
The electronic device stores the face recognition model in the first operating environment, initializes it there, and transmits the initialized face recognition model to the second operating environment, which reduces the occupation of storage space in the second operating environment. Further, after the face recognition model is initialized, the initialized face recognition model may be segmented into at least two model data packets, so that the initialized face recognition model is transmitted in segments.
Step 206, sequentially transmitting the model data packets from the first operating environment to a second operating environment, and generating a target face recognition model according to the model data packets in the second operating environment; the storage space of the first operating environment is larger than that of the second operating environment, and the target face recognition model is used for performing face recognition processing on the image.
Specifically, the face recognition model is stored in file form, and after the first operating environment divides the initialized face recognition model into model data packets, the resulting model data packets are sequentially sent to the second operating environment. After the model data packets are transmitted to the second operating environment, they are spliced together to generate the target face recognition model. For example, the face recognition model may be segmented according to its different functional modules, and after the segmented packets are transmitted to the second operating environment, the model data packets corresponding to the functional modules may be spliced to generate the final target face recognition model.
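As a minimal sketch of this module-wise segment-and-splice idea (the module names and the dict-based serialization are illustrative assumptions, not the storage format used by the embodiments):

```python
import json

def split_by_module(model: dict) -> list[bytes]:
    """Segment the initialized model into one data packet per functional module."""
    return [json.dumps({name: params}).encode() for name, params in model.items()]

def splice_modules(packets: list[bytes]) -> dict:
    """In the second operating environment, splice the packets back into the target model."""
    target = {}
    for packet in packets:
        target.update(json.loads(packet.decode()))
    return target

# Hypothetical module contents, for illustration only.
model = {"face_detection": [0.1, 0.2], "face_matching": [0.3], "liveness_detection": [0.4]}
assert splice_modules(split_by_module(model)) == model
```

The round-trip assertion at the end mirrors the requirement that the spliced target model be identical to the initialized model that was segmented.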
In one embodiment, execution of step 202 may begin upon detecting that an initialization condition is satisfied. For example, with the face recognition model stored in the first operating environment, the electronic device may initialize the face recognition model when starting up, when detecting that an application program requiring face recognition processing is opened, or when detecting a face recognition instruction; the initialized face recognition model may then be compressed and the compressed face recognition model transmitted to the second operating environment.
In other embodiments provided by the application, before the face recognition model is initialized in the first operating environment, the remaining storage space in the second operating environment may be obtained; if the remaining storage space is smaller than a space threshold, the face recognition model is initialized in the first operating environment and the initialized face recognition model is divided into at least two model data packets. The space threshold may be set as required, and is generally the sum of the storage space occupied by the face recognition model and the storage space occupied when the face recognition model is initialized.
If the remaining storage space in the second operating environment is large, the face recognition model can be sent directly to the second operating environment, initialized there, and the original face recognition model deleted after initialization is completed, so that data security can be ensured. The data processing method may further include: if the remaining storage space is larger than or equal to the space threshold, dividing the face recognition model into at least two model data packets in the first operating environment and transmitting the model data packets into the second operating environment; generating a target face recognition model from the model data packets in the second operating environment and initializing the target face recognition model; and deleting the target face recognition model from before initialization while keeping the target face recognition model after initialization. After the target face recognition model is generated in the second operating environment, face recognition processing can be performed directly according to the target face recognition model.
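The space-threshold decision above can be sketched as follows; the function and parameter names are illustrative, and the 20M/10M figures echo the earlier memory example:

```python
def plan_initialization(remaining_space: int, model_size: int, init_overhead: int) -> str:
    """Decide where to initialize the model, per the space-threshold rule.

    The space threshold is generally the sum of the model's storage
    footprint and the extra memory needed during initialization.
    """
    space_threshold = model_size + init_overhead
    if remaining_space < space_threshold:
        # Too little room: initialize in the first environment, then transfer.
        return "initialize in first environment, then transfer"
    # Enough room: transfer the raw model and initialize it in the second environment.
    return "transfer raw model, initialize in second environment"

# With the figures from the earlier example: a 20M model plus 10M initialization overhead.
assert plan_initialization(8, 20, 10) == "initialize in first environment, then transfer"
assert plan_initialization(32, 20, 10) == "transfer raw model, initialize in second environment"
```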
It will be appreciated that a face recognition model may generally include a plurality of processing modules, each performing a different process, and that the plurality of processing modules may be independent of each other. For example, a face detection module, a face matching module, and a liveness detection module may be included. Some of the modules may have relatively low security requirements, and some of the modules may have relatively high security requirements. Therefore, the processing module with lower security requirement can be initialized in the first operating environment, and the processing module with higher security requirement can be initialized in the second operating environment.
Specifically, step 204 may include: performing a first initialization on a first module of the face recognition model in the first operating environment, and dividing the face recognition model after the first initialization into at least two model data packets. Step 206 may also be followed by: performing a second initialization on a second module in the target face recognition model, where the second module comprises the modules of the face recognition model other than the first module, and the security requirement of the first module is lower than that of the second module. For example, the first module may be the face detection module, and the second module may be the face matching module and the liveness detection module; the first module has a low security requirement and is initialized in the first operating environment, while the second module has a high security requirement and is initialized in the second operating environment.
The data processing method provided by the above embodiment may store the face recognition model in the first operating environment, initialize the face recognition model in the first operating environment, divide the initialized face recognition model into at least two model data packets, and transmit the data packets to the second operating environment. Because the storage space in the second operating environment is smaller than that in the first operating environment, the face recognition model is initialized in the first operating environment, so that the initialization efficiency of the face recognition model can be improved, the resource occupancy rate in the second operating environment is reduced, and the data processing speed is increased. Meanwhile, the face recognition model is divided into a plurality of data packets for transmission, so that the data transmission efficiency is improved.
Fig. 3 is a flowchart of a data processing method in another embodiment. As shown in fig. 3, the data processing method includes steps 302 to 314. Wherein:
step 302, a face recognition model stored in a first operating environment is obtained.
Generally, before face recognition processing, the face recognition model is trained so that its recognition accuracy is higher. In the process of training the model, a training image set is obtained, images in the training image set are used as the input of the model, and the training parameters of the model are continuously adjusted according to the results obtained during training, so as to obtain the optimal parameters of the model. The more images the training image set includes, the more accurate the trained model, but the training time increases correspondingly.
In one embodiment, the electronic device may be a terminal that interacts with the user, and because terminal resources are limited, the face recognition model may be trained on a server. After the server trains the face recognition model, it sends the trained face recognition model to the terminal. After the terminal receives the trained face recognition model, it stores the model in the first operating environment. Step 302 may also be preceded by: the terminal receives the face recognition model sent by the server and stores the face recognition model in the first operating environment of the terminal.
The terminal may include a first operating environment and a second operating environment. The terminal can perform face recognition processing on the image in the second operating environment; however, because the storage space the terminal allocates to the first operating environment is larger than that allocated to the second operating environment, the terminal stores the received face recognition model in the storage space of the first operating environment. In an embodiment, each time a restart of the terminal is detected, the face recognition model stored in the first operating environment may be loaded into the second operating environment, so that when face recognition processing needs to be performed on an image, the face recognition model loaded in the second operating environment can be called directly. Step 302 may specifically include: when a restart of the terminal is detected, acquiring the face recognition model stored in the first operating environment.
It can be understood that the face recognition model can be updated. When the face recognition model is updated, the server sends the updated face recognition model to the terminal, and after the terminal receives it, the updated face recognition model is stored in the first operating environment, overwriting the original face recognition model. The terminal is then controlled to restart, and after the restart, the updated face recognition model is acquired and initialized.
And 304, initializing the face recognition model in the first operating environment, acquiring the space capacity of a shared buffer, and dividing the face recognition model into at least two model data packets according to the space capacity.
Before the face recognition processing is performed by the face recognition model, the face recognition model needs to be initialized. In the initialization process, parameters, modules and the like in the face recognition model can be set to be in default states. Because the memory is also occupied in the process of initializing the model, the terminal can initialize the face recognition model in the first operating environment and then send the initialized face recognition model to the second operating environment, so that the face recognition processing can be directly carried out in the second operating environment without occupying extra memory to initialize the model.
The face recognition model may be stored in file form or in other forms, which is not limited herein. The face recognition model may generally include a plurality of functional modules, for example, a face detection module, a face matching module, a liveness detection module, and the like. When the face recognition model is segmented, it can be divided into at least two model data packets according to the functional modules, so that the target face recognition model can be conveniently generated by subsequent recombination. In other embodiments, the segmentation may be performed in other manners, which are not limited.
And step 306, assigning each model data packet a corresponding data number, and sequentially transmitting the model data packets from the first operating environment to the second operating environment according to the data numbers.
It is understood that when data is stored, it is written to consecutive memory addresses in the time order of storage. After the face recognition model is segmented, the resulting model data packets can be numbered, and the model data packets can then be sequentially transmitted to the second operating environment for storage according to their data numbers. After the transmission of the model data packets is finished, the model data packets are spliced in order to generate the target face recognition model.
In one embodiment, the data transmission between the first operating environment and the second operating environment may be implemented through a shared buffer (Share Buffer), so when the first operating environment segments the face recognition model, the segmentation may be performed according to the capacity of the shared buffer. Specifically, the space capacity of the shared buffer is obtained, and the face recognition model is divided into at least two model data packets according to the space capacity, where the data volume of each model data packet is less than or equal to the space capacity.
It should be noted that the shared buffer is the channel through which the first operating environment and the second operating environment transmit data, and both operating environments can access the shared buffer. The electronic device can configure the shared buffer, and the space size of the shared buffer can be set as required. For example, the electronic device may set the storage space of the shared buffer to 5M or 10M. When data is transmitted, the face recognition model is segmented according to the capacity of the shared buffer before transmission, so that the shared buffer does not need to be configured with a larger capacity for transmitting data, which reduces the resource occupation of the electronic device.
When the face recognition model is transmitted through the shared buffer, the method specifically includes: sequentially transmitting the model data packets from the first operating environment into the shared buffer, and transmitting the model data packets from the shared buffer into the second operating environment. Step 306 may specifically include: assigning each model data packet a corresponding data number, sequentially transmitting the model data packets from the first operating environment into the shared buffer according to the data numbers, and then transmitting the model data packets from the shared buffer into the second operating environment.
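The numbered, buffer-sized transfer described above might be sketched as follows; the single-slot buffer and the byte-string model are simplifying assumptions used only to show the capacity constraint and the numbering:

```python
from collections import deque

def split_by_capacity(model: bytes, capacity: int):
    """Split the model so each numbered packet fits the shared buffer capacity."""
    return [(number, model[i:i + capacity])
            for number, i in enumerate(range(0, len(model), capacity))]

def transfer(packets, capacity: int):
    """Simulate moving numbered packets, one at a time, through a
    single-slot shared buffer accessible to both environments."""
    shared_buffer = deque(maxlen=1)
    received = []
    for number, data in packets:
        assert len(data) <= capacity              # each packet must fit the buffer
        shared_buffer.append((number, data))      # first environment writes
        received.append(shared_buffer.popleft())  # second environment reads
    return received

def splice(received):
    """Second environment: order packets by data number, then splice them."""
    return b"".join(data for _, data in sorted(received))

model = bytes(range(16))                          # stand-in for a serialized model
assert splice(transfer(split_by_capacity(model, 5), 5)) == model
```

Because each packet's size is bounded by the buffer capacity, the buffer never needs to be enlarged to carry the whole model, matching the resource argument above.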
FIG. 4 is a system diagram illustrating implementation of the data processing method in one embodiment. As shown in fig. 4, the system includes a first operating environment 402, a shared buffer 404, and a second operating environment 406. The first operating environment 402 and the second operating environment 406 may transfer data through the shared buffer 404. The face recognition model is stored in the first operating environment 402; the system may acquire the face recognition model stored in the first operating environment 402, initialize it, segment the initialized face recognition model, transmit the model data packets formed by segmentation into the shared buffer 404, and transmit them through the shared buffer 404 into the second operating environment 406. Finally, the model data packets are spliced into the target face recognition model in the second operating environment 406.
And 308, splicing the model data packets according to the data numbers in the second operating environment to generate a target face recognition model.
Specifically, the data numbers may be used to represent an arrangement order of the model data packets, and after the model data packets are transmitted into the second operating environment, the model data packets are arranged in order according to the data numbers, and then are spliced according to the arrangement order to generate the target face recognition model.
FIG. 5 is a diagram of a segmented face recognition model in one embodiment. As shown in fig. 5, the face recognition model 502 is stored in file form and is divided into 3 model data packets 504, where the model data packets 504 may also be in file form. The data size of each segmented model data packet 504 is smaller than the data size of the face recognition model 502, and the data sizes of the model data packets 504 may be the same or different. For example, if the face recognition model 502 totals 30M, it can be divided equally by data volume, so that each model data packet is 10M.
In step 310, when the face recognition instruction is detected, the security level of the face recognition instruction is determined.
The face recognition model is stored in both the first operating environment and the second operating environment, so the terminal can perform face recognition processing in either environment. Specifically, the terminal may determine whether to perform face recognition processing in the first operating environment or in the second operating environment according to the face recognition instruction that triggers the processing.
The face recognition instruction is initiated by an upper-layer application of the terminal. When the upper-layer application initiates the face recognition instruction, information such as the time at which the instruction was initiated, an application identifier, and an operation identifier can be written into the face recognition instruction. The application identifier may be used to identify the application program that initiated the face recognition instruction, and the operation identifier may be used to identify the application operation to be performed with the face recognition result. For example, application operations such as payment, unlocking, and beautification can be performed according to the face recognition result, and the operation identifier in the face recognition instruction indicates which of these application operations is to be performed.
The security level is used to indicate the security requirement of the application operation: the higher the security level, the higher the requirement of the application operation on security. For example, since the security requirement of a payment operation is high and the security requirement of a beautification operation is low, the security level of the payment operation is higher than that of the beautification operation. The security level can be written directly into the face recognition instruction, and after the terminal detects the face recognition instruction, it directly reads the security level from the instruction. Alternatively, a correspondence between operation identifiers and security levels can be established in advance, and after the face recognition instruction is detected, the corresponding security level is obtained through the operation identifier in the face recognition instruction.
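A minimal sketch of reading the security level, assuming a pre-established correspondence table; the operation identifiers and level values are hypothetical:

```python
# Illustrative correspondence between operation identifiers and security
# levels, established in advance (names and values are assumptions).
SECURITY_LEVELS = {"payment": 3, "unlock": 2, "beauty": 1}

def security_level(instruction: dict) -> int:
    """Return the security level of a face recognition instruction."""
    # Prefer a level written directly into the instruction; otherwise,
    # look it up through the instruction's operation identifier.
    if "security_level" in instruction:
        return instruction["security_level"]
    return SECURITY_LEVELS[instruction["operation_id"]]

assert security_level({"operation_id": "payment"}) == 3
assert security_level({"security_level": 5}) == 5
```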
In step 312, if the security level is lower than the level threshold, a face recognition process is performed according to the face recognition model in the first operating environment.
When the security level is lower than the level threshold value, the security requirement of the application operation initiating the face recognition processing is considered to be low, and the face recognition processing can be directly performed in the first running environment according to the face recognition model. Specifically, the face recognition processing may include, but is not limited to, one or more of face detection, face matching, and living body detection, where the face detection refers to a process of detecting whether a face exists in an image, the face matching refers to a process of matching a detected face with a preset face, and the living body detection refers to a process of detecting whether a face in an image is a living body.
Step 314, if the security level is higher than the level threshold, performing face recognition processing according to the face recognition model in the second operating environment, where the security of the second operating environment is higher than that of the first operating environment.
When the security level is higher than the level threshold, the application operation initiating the face recognition processing is considered to have a high security requirement, and the face recognition processing can be performed according to the face recognition model in the second operating environment. Specifically, the terminal can send the face recognition instruction to the second operating environment, and control the camera module through the second operating environment to collect images. The collected image is first sent to the second operating environment, where the security level of the application operation is judged; if the security level is lower than the level threshold, the collected image is sent to the first operating environment for face recognition processing, and if the security level is higher than the level threshold, face recognition processing is performed on the collected image in the second operating environment.
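The threshold-based routing of steps 312 and 314 can be sketched as follows; the threshold value is an illustrative assumption, and a level equal to the threshold is routed to the more secure environment here as a conservative choice:

```python
LEVEL_THRESHOLD = 2  # illustrative value

def choose_environment(security_level: int) -> str:
    """Route a face recognition request to the environment matching its security need."""
    if security_level < LEVEL_THRESHOLD:
        return "first operating environment"   # lower security need, e.g. beautification
    return "second operating environment"      # higher security need, e.g. payment

assert choose_environment(1) == "first operating environment"
assert choose_environment(3) == "second operating environment"
```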
Specifically, as shown in fig. 6, when performing the face recognition processing in the first operating environment, the method includes:
step 602, controlling the camera module to collect a first target image and a speckle image, and sending the first target image to a first operating environment and sending the speckle image to a second operating environment.
An application installed in the terminal can initiate a face recognition instruction and send it to the second operating environment. When the security level of the face recognition instruction detected in the second operating environment is lower than the level threshold, the camera module can be controlled to collect the first target image and the speckle image. The first target image collected by the camera module can be sent directly to the first operating environment, and the collected speckle image is sent to the second operating environment.
In one embodiment, the first target image may be a visible light image or another type of image, which is not limited herein. When the first target image is a visible light image, the camera module may include an RGB (Red Green Blue) camera, and the first target image is collected by the RGB camera. The camera module may further include a laser lamp and a laser camera; the terminal can control the laser lamp to turn on, and the laser camera then collects the speckle image formed when the laser speckle emitted by the laser lamp illuminates an object.
Specifically, when laser is irradiated on an optically rough surface with average fluctuation larger than the wavelength order, wavelets scattered by randomly distributed surface elements on the surface are mutually superposed to enable a reflected light field to have random spatial light intensity distribution and present a granular structure, namely laser speckle. The laser speckles formed are highly random, and therefore, the laser speckles generated by the laser emitted by different laser emitters are different. When the resulting laser speckle is projected onto objects of different depths and shapes, the resulting speckle images are not identical. The laser speckles formed by different laser lamps have uniqueness, so that the obtained speckle images also have uniqueness.
And step 604, calculating to obtain a depth image according to the speckle image in the second operating environment, and sending the depth image to the first operating environment.
To protect data security, the terminal ensures that speckle images are always processed in a secure environment, and therefore transmits the speckle image to the second operating environment for processing. The depth image is an image representing the depth information of the photographed object, and is calculated from the speckle image. The terminal can control the camera module to collect the first target image and the speckle image simultaneously, so the depth image calculated from the speckle image can represent the depth information of the object in the first target image.
A depth image may be computed from the speckle image and a reference image in the second operating environment. The reference image is an image acquired when the laser speckle is irradiated onto a reference plane, so the reference image carries reference depth information. First, the relative depth can be calculated according to the positional offset of the speckle points in the speckle image relative to the corresponding speckle points in the reference image, where the relative depth represents the depth from the actual photographed object to the reference plane. Then the actual depth information of the object is calculated according to the obtained relative depth and the reference depth. Specifically, the reference image is compared with the speckle image to obtain offset information representing the horizontal offset of each speckle point in the speckle image relative to the corresponding speckle point in the reference image, and the depth image is calculated according to the offset information and the reference depth information.
FIG. 7 is a schematic diagram of computing depth information in one embodiment. As shown in fig. 7, the laser light 702 can generate laser speckles, which are reflected off of an object and then captured by the laser camera 704 to form an image. In the calibration process of the camera, laser speckles emitted by the laser lamp 702 are reflected by the reference plane 708, reflected light is collected by the laser camera 704, and a reference image is obtained by imaging through the imaging plane 710. The reference depth L from the reference plane 708 to the laser lamp 702 is known. In the process of actually calculating the depth information, laser speckles emitted by the laser lamp 702 are reflected by the object 706, reflected light is collected by the laser camera 704, and an actual speckle image is obtained by imaging through the imaging plane 710. The calculation formula for obtaining the actual depth information is as follows:
where L is the distance between the laser lamp 702 and the reference plane 708, f is the focal length of the lens in the laser camera 704, CD is the distance between the laser lamp 702 and the laser camera 704, and AB is the offset distance between the image of the object 706 and the image of the reference plane 708. AB may be taken as the product of the pixel offset n and the actual size p of one pixel. When the distance Dis between the object 706 and the laser lamp 702 is greater than the distance L between the reference plane 708 and the laser lamp 702, AB is negative; when Dis is less than L, AB is positive.
And 606, performing face recognition processing on the first target image and the depth image through a face recognition model in the first running environment.
After the depth image is calculated in the second operating environment, it can be sent to the first operating environment. Face recognition processing is then performed in the first operating environment based on the first target image and the depth image, the first operating environment sends the face recognition result to the upper-layer application, and the upper-layer application can perform the corresponding application operation according to the result.
For example, when beautification is applied to an image, the position and area of the face can be detected from the first target image. Because the first target image corresponds to the depth image, the depth information of the face can be obtained from the corresponding area of the depth image, the three-dimensional features of the face can be constructed from that depth information, and the face can then be beautified according to its three-dimensional features.
In other embodiments provided in the present application, as shown in fig. 8, when performing face recognition processing in the second operating environment, the method specifically includes:
and step 802, controlling the camera module to collect a second target image and a speckle image, and sending the second target image and the speckle image to a second operating environment.
In one embodiment, the second target image may be an infrared image, and the camera module may comprise a floodlight, a laser lamp, and a laser camera. The terminal can control the floodlight to turn on and then use the laser camera to collect the infrared image formed by the floodlight illuminating the object, which serves as the second target image. The terminal can likewise control the laser lamp to turn on and then use the laser camera to collect the speckle image formed by the laser lamp illuminating the object.
The time interval between collecting the second target image and the speckle image should be short, so that the two images remain consistent; this avoids a large discrepancy between them and improves the accuracy of image processing. Specifically, the camera module is controlled to collect the second target image and to collect the speckle image, where the time interval between the first moment at which the second target image is acquired and the second moment at which the speckle image is acquired is less than a first threshold.
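The timestamp check described above can be sketched as follows. The threshold value is illustrative; the text leaves the first threshold unspecified.

```python
FIRST_THRESHOLD_MS = 5.0  # illustrative value; the first threshold is not fixed in the text

def frames_consistent(t_target_ms, t_speckle_ms, threshold_ms=FIRST_THRESHOLD_MS):
    """Return True if the capture timestamps of the second target image
    and the speckle image are close enough to treat them as one scene."""
    return abs(t_speckle_ms - t_target_ms) < threshold_ms
```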
A floodlight controller and a laser lamp controller can be provided separately, each connected through its own Pulse Width Modulation (PWM) channel. When the floodlight or the laser lamp needs to be turned on, a pulse wave is transmitted via PWM to the corresponding controller, and the time interval between acquiring the second target image and the speckle image is controlled by the timing of the pulse waves sent to the two controllers. It is understood that the second target image may be an infrared image or another type of image, which is not limited herein; for example, the second target image may also be a visible light image.
And step 804, calculating to obtain a depth image according to the speckle image in the second operating environment.
It should be noted that when the security level of the face recognition instruction is higher than the level threshold, the application operation that initiated the instruction is considered to have a higher security requirement, so the face recognition processing must be performed in the more secure environment to guarantee the security of data processing. The second target image and the speckle image acquired by the camera module are therefore sent directly to the second operating environment, and the depth image is then calculated from the speckle image in the second operating environment.
Step 806, performing face recognition processing on the second target image and the depth image through the face recognition model in the second operating environment.
In one embodiment, when the face recognition processing is performed in the second operating environment, the face detection may be performed according to the second target image, and whether the second target image includes the target face or not may be detected. And if the second target image contains the target face, matching the detected target face with a preset face. And if the detected target face is matched with the preset face, acquiring target depth information of the target face according to the depth image, and detecting whether the target face is a living body according to the target depth information.
When matching the target face, the face attribute features of the target face can be extracted and matched against the face attribute features of the preset face; if the matching value exceeds the matching threshold, the face match is considered successful. For example, features such as the deflection angle, brightness information, and facial features can be extracted as face attribute features, and if the matching degree between the attribute features of the target face and those of the preset face exceeds 90%, the face match is considered successful.
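A deliberately simplified sketch of the attribute-matching rule above follows. Real attribute features (deflection angle, brightness, facial features) would be compared with a similarity metric rather than exact equality; the dictionary representation and the equality test here are assumptions made purely for illustration.

```python
def match_faces(target_features, enrolled_features, threshold=0.90):
    """Match two sets of face attribute features: the fraction of shared
    attributes that agree must exceed the threshold (90% in the example)."""
    keys = target_features.keys() & enrolled_features.keys()
    if not keys:
        return False  # no comparable attributes: no match
    agree = sum(1 for k in keys
                if target_features[k] == enrolled_features[k])
    return agree / len(keys) > threshold
```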
Generally, during face authentication, a face in a photograph or a sculpture could pass authentication if only the extracted face attribute features were checked. To improve accuracy, liveness detection can be performed using the acquired depth image, ensuring that the acquired face is a living face before authentication succeeds. It can be understood that the acquired second target image represents the detail information of the face while the acquired depth image represents the corresponding depth information, so liveness detection can be performed from the depth image. For example, if the photographed face is a face in a photograph, the depth image will show that the acquired face is not three-dimensional, and the face can be judged to be a non-living face.
Specifically, performing liveness detection according to the depth image includes: searching the depth image for face depth information corresponding to the target face; if such face depth information exists in the depth image and conforms to the face stereo rule, the target face is a living face. The face stereo rule is a rule requiring genuine three-dimensional depth information of a face.
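One crude stand-in for the face stereo rule is to require genuine depth relief across the face region, since a photograph of a face is nearly planar in the depth image. The relief thresholds below are illustrative assumptions, not values from the text.

```python
def is_live_face(face_depth_values, min_relief_mm=5.0, max_relief_mm=100.0):
    """Toy face stereo rule: a live face has three-dimensional relief
    (nose, cheeks, chin), while a flat photo shows almost none.

    face_depth_values -- depth samples (in mm) taken over the target
                         face region of the depth image
    """
    if not face_depth_values:
        return False  # no face depth information found in the depth image
    relief = max(face_depth_values) - min(face_depth_values)
    return min_relief_mm <= relief <= max_relief_mm
```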
In an embodiment, an artificial intelligence model may further be used to recognize the second target image and the depth image, obtain living-body attribute features corresponding to the target face, and judge from these features whether the target face is a living face. The living-body attribute features may include the skin characteristics and the direction, density, and width of texture corresponding to the target face; if these features conform to the living-face rule, the target face is considered biologically active, that is, a living face.
It is to be understood that, when processing such as face detection, face matching, and living body detection is performed, the processing order may be changed as necessary. For example, the human face may be authenticated first, and then whether the human face is a living body may be detected. Or whether the human face is a living body can be detected firstly, and then the human face is authenticated.
In the embodiment provided by the application, to ensure data security, the compressed face recognition model can be encrypted before transmission: the encrypted face recognition model is transmitted from the first operating environment to the second operating environment, decrypted in the second operating environment, and the decrypted face recognition model is then stored.
The first operating environment may be a normal operating environment and the second a secure operating environment, the second being more secure than the first. The first operating environment is generally used to process application operations with lower security requirements, and the second to process those with higher security requirements. For example, operations with low security requirements, such as shooting and gaming, may be performed in the first operating environment, while operations with high security requirements, such as payment and unlocking, may be performed in the second operating environment.
The second operating environment is generally used for performing application operations with high security requirements, and therefore, when the face recognition model is sent to the second operating environment, the security of the face recognition model also needs to be ensured. After the face recognition model is compressed in the first operating environment, the compressed face recognition model may be encrypted, and then the encrypted face recognition model may be sent to the second operating environment through the shared buffer.
After the encrypted face recognition model is transferred from the first operating environment into the shared buffer, it is transferred from the shared buffer into the second operating environment, where the second operating environment decrypts it. The algorithm used to encrypt the face recognition model is not limited in this embodiment; for example, encryption may be performed with an algorithm such as DES (Data Encryption Standard), with a digest algorithm such as MD5 (Message-Digest Algorithm 5) or HAVAL, or with a key-exchange algorithm such as Diffie-Hellman.
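The compress-then-encrypt pipeline can be sketched as follows. The XOR keystream here is a toy placeholder for a real cipher such as DES or AES (the Python standard library ships no block cipher), so this sketch illustrates the data flow only and must not be used as actual encryption.

```python
import hashlib
import zlib

def _keystream(key: bytes):
    """Toy keystream from chained SHA-256 hashing -- a placeholder for a
    real cipher such as DES/AES; not secure, for illustration only."""
    block = key
    while True:
        block = hashlib.sha256(block).digest()
        yield from block

def encrypt_model(model_bytes: bytes, key: bytes) -> bytes:
    """Compress the model in the first environment, then encrypt it."""
    compressed = zlib.compress(model_bytes)
    return bytes(b ^ k for b, k in zip(compressed, _keystream(key)))

def decrypt_model(payload: bytes, key: bytes) -> bytes:
    """Decrypt and decompress the model in the second environment."""
    compressed = bytes(b ^ k for b, k in zip(payload, _keystream(key)))
    return zlib.decompress(compressed)
```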
In one embodiment, after the target face recognition model is generated in the second operating environment, the method may further include: deleting the target face recognition model in the second operating environment when the model is detected not to have been invoked for longer than a duration threshold or when the terminal is detected to shut down. This frees storage space in the second operating environment and saves space on the electronic device.
Furthermore, the running condition of the electronic device can be monitored during operation, and the storage space occupied by the target face recognition model can be released according to that running condition. Specifically, when the electronic device is detected to be in a lagging state and the target face recognition model has not been invoked for longer than the duration threshold, the target face recognition model in the second operating environment is deleted.
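The idle-timeout eviction rule above can be sketched as a small cache. The class shape and timeout value are assumptions for illustration; the text specifies only the rule, not an API.

```python
import time

class ModelCache:
    """Evict the target face recognition model from the secure
    environment once it has gone uninvoked past a duration threshold."""

    def __init__(self, timeout_s=300.0):
        self.timeout_s = timeout_s  # illustrative duration threshold
        self.model = None
        self.last_used = None

    def load(self, model):
        self.model = model
        self.last_used = time.monotonic()

    def invoke(self):
        self.last_used = time.monotonic()  # refresh on every invocation
        return self.model

    def maybe_evict(self, now=None):
        """Delete the model if idle too long; return True if evicted/absent."""
        now = time.monotonic() if now is None else now
        if self.model is not None and now - self.last_used > self.timeout_s:
            self.model = None  # free the secure environment's storage
        return self.model is None
```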
After the target face recognition model has been released, the face recognition model stored in the first operating environment can be retrieved when the electronic device is detected to have returned to a normal running state or when a face recognition instruction is detected. The face recognition model is then initialized in the first operating environment, the initialized model is divided into at least two model data packets, the packets are transmitted in turn from the first operating environment to the second operating environment, and the target face recognition model is generated from the packets in the second operating environment.
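The segmentation, numbered transmission, and splicing steps above can be sketched as follows. Representing the model as raw bytes and the shared buffer as a simple capacity limit are assumptions for illustration.

```python
def split_model(model_bytes: bytes, buffer_capacity: int):
    """Divide the initialized model into numbered packets, each no larger
    than the shared buffer's space capacity."""
    return [(i, model_bytes[off:off + buffer_capacity])
            for i, off in enumerate(range(0, len(model_bytes), buffer_capacity))]

def reassemble_model(packets):
    """In the second environment, splice the packets back together
    according to their data numbers, whatever order they arrived in."""
    return b"".join(chunk for _, chunk in sorted(packets))
```

Numbering each packet lets the second environment reconstruct the model even if packets are delivered out of order through the shared buffer.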
The data processing method provided by the above embodiment may store the face recognition model in the first operating environment, initialize the face recognition model in the first operating environment, divide the initialized face recognition model into at least two model data packets, and transmit the data packets to the second operating environment. Because the storage space in the second operating environment is smaller than that in the first operating environment, the face recognition model is initialized in the first operating environment, so that the initialization efficiency of the face recognition model can be improved, the resource occupancy rate in the second operating environment is reduced, and the data processing speed is increased. Meanwhile, the face recognition model is divided into a plurality of data packets for transmission, so that the data transmission efficiency is improved. In addition, the processing is carried out in the first running environment or the second running environment according to the safety level selection of the face recognition instruction, all the applications are prevented from being processed in the second running environment, and the resource occupancy rate of the second running environment can be reduced.
It should be understood that although the steps in the flowcharts of figs. 2, 3, 6, and 8 are displayed in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated herein, the steps are not restricted to a strict order and may be performed in other orders. Moreover, at least some of the steps in figs. 2, 3, 6, and 8 may include multiple sub-steps or stages, which are not necessarily completed at the same moment but may be performed at different times, and whose order of execution is not necessarily sequential; they may be performed in turn or in alternation with other steps or with at least some of the sub-steps or stages of other steps.
Fig. 9 is a hardware configuration diagram for implementing the data processing method in one embodiment. As shown in fig. 9, the electronic device may include a camera module 910, a Central Processing Unit (CPU) 920, and a Micro Control Unit (MCU) 930. The camera module 910 includes a laser camera 912, a floodlight 914, an RGB camera 916, and a laser lamp 918. The MCU 930 includes a PWM (Pulse Width Modulation) module 932, an SPI/I2C (Serial Peripheral Interface/Inter-Integrated Circuit) module 934, a RAM (Random Access Memory) module 936, and a Depth Engine module 938. The CPU 920 may run in a multi-core mode, and a CPU core in the CPU 920 may operate under a TEE (Trusted Execution Environment) or a REE (Rich Execution Environment), both of which are operating modes of the ARM (Advanced RISC Machines) architecture. The natural operating environment in the CPU 920 may serve as the first operating environment, which has lower security, and the trusted operating environment in the CPU 920 as the second operating environment, which has higher security. It is understood that because the MCU 930 is a processing module independent of the CPU 920, and its inputs and outputs are controlled by the CPU 920 under the trusted operating environment, the MCU 930 is also a high-security processing module and can be considered to be in the secure operating environment, i.e., the MCU 930 also belongs to the second operating environment.
Generally, operations with higher security requirements must be executed in the second operating environment, while other operations can be executed in the first operating environment. In this embodiment, the CPU 920 may send a face recognition instruction from the trusted operating environment to the SPI/I2C module 934 in the MCU 930 over a secure SPI/I2C bus. Upon receiving the face recognition instruction, if the MCU 930 determines that the security level of the instruction is higher than the level threshold, it transmits pulse waves through the PWM module 932 to turn on the floodlight 914 in the camera module 910 to collect an infrared image and to turn on the laser lamp 918 in the camera module 910 to collect a speckle image. The camera module 910 can transmit the collected infrared image and speckle image to the Depth Engine module 938 in the MCU 930, which calculates a depth image from the speckle image and transmits the infrared image and the depth image to the trusted operating environment of the CPU 920. The trusted operating environment of the CPU 920 then performs face recognition processing on the received infrared image and depth image.
If the security level of the face recognition instruction is lower than the level threshold, the PWM module 932 emits pulse waves to turn on the laser lamp 918 in the camera module 910 to collect a speckle image, and the RGB camera 916 collects a visible light image. The camera module 910 sends the collected visible light image directly to the natural operating environment of the CPU 920 and transmits the speckle image to the Depth Engine module 938 in the MCU 930, which calculates the depth image from the speckle image and sends it to the trusted operating environment of the CPU 920. The trusted operating environment then sends the depth image to the natural operating environment, where face recognition processing is performed on the visible light image and the depth image.
FIG. 10 is a block diagram of a data processing apparatus according to an embodiment. As shown in fig. 10, the data processing apparatus 1000 includes a model acquisition module 1002, a model segmentation module 1004, and a model transmission module 1006. Wherein:
a model obtaining module 1002, configured to obtain a face recognition model stored in a first operating environment.
A model segmentation module 1004 configured to initialize the face recognition model in the first operating environment, and segment the initialized face recognition model into at least two model data packets.
A model transmission module 1006, configured to sequentially transmit the model data packet from the first operating environment to a second operating environment, and generate a target face recognition model according to the model data packet in the second operating environment; the storage space of the first operating environment is larger than that of the second operating environment, and the target face recognition model is used for carrying out face recognition processing on the image.
The data processing apparatus provided in the above embodiment may store the face recognition model in the first operating environment, initialize the face recognition model in the first operating environment, divide the initialized face recognition model into at least two model data packets, and transmit the data packets to the second operating environment. Because the storage space in the second operating environment is smaller than that in the first operating environment, the face recognition model is initialized in the first operating environment, so that the initialization efficiency of the face recognition model can be improved, the resource occupancy rate in the second operating environment is reduced, and the data processing speed is increased. Meanwhile, the face recognition model is divided into a plurality of data packets for transmission, so that the data transmission efficiency is improved.
Fig. 11 is a schematic structural diagram of a data processing apparatus according to another embodiment. As shown in fig. 11, the data processing apparatus 1100 includes a model acquisition module 1102, a model segmentation module 1104, a model transmission module 1106, and a face recognition module 1108. Wherein:
a model obtaining module 1102, configured to obtain a face recognition model stored in a first operating environment.
A model segmentation module 1104, configured to initialize the face recognition model in the first operating environment, and segment the initialized face recognition model into at least two model data packets.
A model transmission module 1106, configured to sequentially transmit the model data packet from the first operating environment to a second operating environment, and generate a target face recognition model according to the model data packet in the second operating environment; the storage space of the first operating environment is larger than that of the second operating environment, and the target face recognition model is used for carrying out face recognition processing on the image.
A face recognition module 1108, configured to determine a security level of a face recognition instruction when the face recognition instruction is detected; if the safety level is lower than a level threshold, performing face recognition processing according to the face recognition model in the first operating environment; if the safety level is higher than a level threshold, performing face recognition processing according to the face recognition model in the second operating environment; and the safety of the second operation environment is higher than that of the first operation environment.
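The security-level routing performed by the face recognition module can be sketched as follows. The numeric threshold and the string labels for the two environments are illustrative assumptions; the text defines only the comparison against a level threshold.

```python
LEVEL_THRESHOLD = 5  # illustrative value; the text does not fix the threshold

def choose_environment(security_level: int) -> str:
    """Route face recognition to the operating environment whose security
    matches the instruction's security level."""
    if security_level > LEVEL_THRESHOLD:
        return "second"  # more secure environment: e.g. payment, unlocking
    return "first"       # normal environment: e.g. beautification, games
```

Routing only high-security instructions to the second environment keeps low-security applications from consuming the second environment's limited resources.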
The data processing method provided by the above embodiment may store the face recognition model in the first operating environment, initialize the face recognition model in the first operating environment, divide the initialized face recognition model into at least two model data packets, and transmit the data packets to the second operating environment. Because the storage space in the second operating environment is smaller than that in the first operating environment, the face recognition model is initialized in the first operating environment, so that the initialization efficiency of the face recognition model can be improved, the resource occupancy rate in the second operating environment is reduced, and the data processing speed is increased. Meanwhile, the face recognition model is divided into a plurality of data packets for transmission, so that the data transmission efficiency is improved. In addition, the processing is carried out in the first running environment or the second running environment according to the safety level selection of the face recognition instruction, all the applications are prevented from being processed in the second running environment, and the resource occupancy rate of the second running environment can be reduced.
In one embodiment, the model segmentation module 1104 is further configured to obtain a space capacity of the shared buffer, and segment the face recognition model into at least two model data packets according to the space capacity; wherein the data volume of the model data packet is less than or equal to the space capacity.
In one embodiment, the model transmission module 1106 is further configured to sequentially transfer the model data packet from the first runtime environment to a shared buffer and transfer the model data packet from the shared buffer to a second runtime environment.
In one embodiment, the model transmission module 1106 is further configured to assign a corresponding data number to each model data packet, and sequentially transmit the model data packets from the first operating environment to the second operating environment according to the data numbers; and splicing the model data packets in the second operating environment according to the data numbers to generate a target face recognition model.
In one embodiment, the model transmission module 1106 is further configured to encrypt the model data packet and transmit the encrypted model data packet from the first operating environment to the second operating environment; and decrypting the encrypted model data packet in the second operating environment.
In one embodiment, the face recognition module 1108 is further configured to control the camera module to acquire a first target image and a speckle image, send the first target image to a first operating environment, and send the speckle image to a second operating environment; calculating to obtain a depth image according to the speckle image in the second operating environment, and sending the depth image to the first operating environment; and carrying out face recognition processing on the first target image and the depth image through a face recognition model in the first running environment.
In one embodiment, the face recognition module 1108 is further configured to control the camera module to acquire a second target image and a speckle image, and send the second target image and the speckle image to the second operating environment; calculating to obtain a depth image according to the speckle image in the second operating environment; and carrying out face recognition processing on the second target image and the depth image through the face recognition model in the second running environment.
The division of the modules in the data processing apparatus is only for illustration, and in other embodiments, the data processing apparatus may be divided into different modules as needed to complete all or part of the functions of the data processing apparatus.
The embodiment of the application also provides a computer readable storage medium. One or more non-transitory computer-readable storage media containing computer-executable instructions that, when executed by one or more processors, cause the processors to perform the data processing methods provided by the above-described embodiments.
A computer program product comprising instructions which, when run on a computer, cause the computer to perform the data processing method provided by the above embodiments.
Any reference to memory, storage, a database, or another medium used herein may include non-volatile and/or volatile memory. Suitable non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM), which acts as external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), SyncLink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The above-mentioned embodiments express only several implementations of the present application, and although their description is relatively specific and detailed, it should not be construed as limiting the scope of the application. It should be noted that a person skilled in the art can make several variations and improvements without departing from the concept of the present application, all of which fall within its scope of protection. Therefore, the protection scope of this patent shall be subject to the appended claims.