
CN108985255B - Data processing method, apparatus, computer-readable storage medium and electronic device - Google Patents


Info

Publication number
CN108985255B
Authority
CN
China
Prior art keywords
operating environment
face recognition
model
recognition model
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201810866139.2A
Other languages
Chinese (zh)
Other versions
CN108985255A (en)
Inventor
郭子青
周海涛
欧锦荣
谭筱
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201810866139.2A priority Critical patent/CN108985255B/en
Publication of CN108985255A publication Critical patent/CN108985255A/en
Priority to PCT/CN2019/082696 priority patent/WO2020024619A1/en
Priority to EP19843800.4A priority patent/EP3671551A4/en
Priority to US16/740,374 priority patent/US11373445B2/en
Application granted granted Critical
Publication of CN108985255B publication Critical patent/CN108985255B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16Human faces, e.g. facial parts, sketches or expressions
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/54Interprogram communication

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Processing (AREA)

Abstract


The present application relates to a data processing method, apparatus, computer-readable storage medium, and electronic device. The method includes: acquiring a face recognition model stored in a first operating environment; initializing the face recognition model in the first operating environment, and dividing the initialized face recognition model into at least two model data packets; sequentially transferring the model data packets from the first operating environment to a second operating environment, and generating a target face recognition model from the model data packets in the second operating environment; wherein the storage space of the first operating environment is larger than the storage space of the second operating environment, and the target face recognition model is used to perform face recognition processing on images. The above data processing method, apparatus, computer-readable storage medium, and electronic device can improve the efficiency of data processing.


Description

Data processing method and device, computer readable storage medium and electronic equipment
Technical Field
The present application relates to the field of computer technologies, and in particular, to a data processing method and apparatus, a computer-readable storage medium, and an electronic device.
Background
Face recognition technology is gradually being applied in people's work and life. For example, face images can be captured for payment authentication and unlock authentication, and captured face images can be beautified. Face recognition technology can detect a face in an image and identify whose face it is, so that the identity of a user can be recognized. Because face recognition algorithms are complex, the algorithm model used for face recognition processing also occupies a large amount of storage space.
Disclosure of Invention
The embodiment of the application provides a data processing method and device, a computer readable storage medium and electronic equipment, which can improve the data processing efficiency.
A method of data processing, the method comprising:
acquiring a face recognition model stored in a first operating environment;
initializing the face recognition model in the first operating environment, and dividing the initialized face recognition model into at least two model data packets;
sequentially transmitting the model data packets from the first operating environment to a second operating environment, and generating a target face recognition model according to the model data packets in the second operating environment;
the storage space of the first operating environment is larger than that of the second operating environment, and the target face recognition model is used for carrying out face recognition processing on the image.
A data processing apparatus, the apparatus comprising:
the model acquisition module is used for acquiring a face recognition model stored in a first operating environment;
the model segmentation module is used for initializing the face recognition model in the first operating environment and segmenting the initialized face recognition model into at least two model data packets;
the model transmission module is used for sequentially transmitting the model data packets from the first operating environment to a second operating environment and generating a target face recognition model according to the model data packets in the second operating environment; the storage space of the first operating environment is larger than that of the second operating environment, and the target face recognition model is used for carrying out face recognition processing on the image.
A computer-readable storage medium, on which a computer program is stored which, when executed by a processor, carries out the steps of:
acquiring a face recognition model stored in a first operating environment;
initializing the face recognition model in the first operating environment, and dividing the initialized face recognition model into at least two model data packets;
sequentially transmitting the model data packets from the first operating environment to a second operating environment, and generating a target face recognition model according to the model data packets in the second operating environment;
the storage space of the first operating environment is larger than that of the second operating environment, and the target face recognition model is used for carrying out face recognition processing on the image.
An electronic device comprising a memory and a processor, the memory having stored therein computer-readable instructions that, when executed by the processor, cause the processor to perform the steps of:
acquiring a face recognition model stored in a first operating environment;
initializing the face recognition model in the first operating environment, and dividing the initialized face recognition model into at least two model data packets;
sequentially transmitting the model data packets from the first operating environment to a second operating environment, and generating a target face recognition model according to the model data packets in the second operating environment;
the storage space of the first operating environment is larger than that of the second operating environment, and the target face recognition model is used for carrying out face recognition processing on the image.
The data processing method, the data processing device, the computer readable storage medium and the electronic equipment can store the face recognition model in the first operating environment, initialize the face recognition model in the first operating environment, divide the initialized face recognition model into at least two model data packets, and transmit the data packets to the second operating environment. Because the storage space in the second operating environment is smaller than that in the first operating environment, the face recognition model is initialized in the first operating environment, so that the initialization efficiency of the face recognition model can be improved, the resource occupancy rate in the second operating environment is reduced, and the data processing speed is increased. Meanwhile, the face recognition model is divided into a plurality of data packets for transmission, so that the data transmission efficiency is improved.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art are briefly described below. It is obvious that the drawings in the following description are only some embodiments of the present application, and for those skilled in the art, other drawings can be obtained from these drawings without creative effort.
FIG. 1 is a schematic diagram showing an internal structure of an electronic apparatus according to an embodiment;
FIG. 2 is a flow diagram of a data processing method in one embodiment;
FIG. 3 is a flow chart of a data processing method in another embodiment;
FIG. 4 is a system diagram illustrating a method for implementing data processing in one embodiment;
FIG. 5 is a diagram of a segmented face recognition model in one embodiment;
FIG. 6 is a flowchart of a data processing method in yet another embodiment;
FIG. 7 is a schematic diagram of computing depth information in one embodiment;
FIG. 8 is a flowchart of a data processing method in yet another embodiment;
FIG. 9 is a diagram of a hardware configuration for implementing a data processing method in one embodiment;
FIG. 10 is a schematic diagram showing the structure of a data processing apparatus according to an embodiment;
fig. 11 is a schematic structural diagram of a data processing apparatus according to another embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
It will be understood that, as used herein, the terms "first," "second," and the like may be used herein to describe various elements, but these elements are not limited by these terms. These terms are only used to distinguish one element from another. For example, a first client may be referred to as a second client, and similarly, a second client may be referred to as a first client, without departing from the scope of the present application. Both the first client and the second client are clients, but they are not the same client.
Fig. 1 is a schematic diagram of the internal structure of an electronic device in one embodiment. As shown in fig. 1, the electronic device includes a processor, a memory, and a network interface connected by a system bus. The processor provides computing and control capability and supports the operation of the whole electronic device. The memory is used for storing data, programs, and the like; at least one computer program is stored on the memory and can be executed by the processor to implement the data processing method provided in the embodiments of the application. The memory may include a non-volatile storage medium such as a magnetic disk, an optical disk, or a Read-Only Memory (ROM), as well as a Random Access Memory (RAM). For example, in one embodiment, the memory includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The computer program can be executed by a processor to implement the data processing method provided in the following embodiments. The internal memory provides a cached execution environment for the operating system and computer programs in the non-volatile storage medium. The network interface may be an Ethernet card or a wireless network card, etc., for communicating with an external electronic device. The electronic device may be a mobile phone, a tablet computer, a personal digital assistant, a wearable device, etc.
FIG. 2 is a flow diagram of a data processing method in one embodiment. As shown in fig. 2, the data processing method includes steps 202 to 206. Wherein:
step 202, a face recognition model stored in a first operating environment is obtained.
In particular, the electronic device may include a processor, and the processor may store, calculate, transmit, and the like, data. The processor in the electronic device may operate in different environments, for example, the processor may operate in a TEE (Trusted Execution Environment) or an REE (Rich Execution Environment), where when the processor operates in the TEE, the security of data is higher; when running in REE, the data is less secure.
The electronic device can allocate the resources of the processor, assigning different resources to different operating environments. For example, because processes with high security requirements are generally few in the electronic device while common processes are many, the electronic device can assign a small part of the processor's resources to the higher-security operating environment and a larger part to the less secure operating environment.
The face recognition model is an algorithm model for recognizing faces in images, and is generally stored in file form. It can be understood that, because the algorithm for recognizing a face in an image is relatively complex, the face recognition model occupies a relatively large amount of storage space when stored. After the electronic device divides the processor into different operating environments, the storage space allocated to the first operating environment is larger than the storage space allocated to the second operating environment, so the electronic device can store the face recognition model in the first operating environment to ensure that the second operating environment has enough space to process data.
Step 204, initializing the face recognition model in the first operating environment, and dividing the initialized face recognition model into at least two model data packets.
Before face recognition processing is performed on an image, the face recognition model needs to be initialized. If the face recognition model were stored in the second operating environment, storage space in the second operating environment would be occupied both for storing the face recognition model and for initializing it, so the resource consumption of the second operating environment would be too large, affecting the efficiency of data processing.
For example, the face recognition model occupies 20M of memory, an additional 10M of memory is required for initializing the face recognition model, and if the storage and initialization are both performed in the second operating environment, a total of 30M of memory of the second operating environment is required. If the face recognition model is stored in the first operating environment, initialized in the first operating environment and then sent to the second operating environment, only 10M of memory in the second operating environment needs to be occupied, and the resource occupancy rate in the second operating environment is greatly reduced.
The electronic equipment stores the face recognition model in the first operating environment, initializes the face recognition model in the first operating environment, and transmits the initialized face recognition model to the second operating environment, so that the occupation of the storage space in the second operating environment can be reduced. Further, after the face recognition model is initialized, the initialized face recognition model may be segmented into at least two model data packets, so that the initialized face recognition model is transmitted in segments.
Step 206, sequentially transmitting the model data packets from the first operating environment to a second operating environment, and generating a target face recognition model from the model data packets in the second operating environment; the storage space of the first operating environment is larger than that of the second operating environment, and the target face recognition model is used for performing face recognition processing on the image.
Specifically, the face recognition model is stored in file form. After the initialized face recognition model is divided into model data packets in the first operating environment, the resulting model data packets are sequentially sent to the second operating environment. After the model data packets have been transmitted to the second operating environment, they are spliced together to generate the target face recognition model. For example, the face recognition model may be segmented according to its different functional modules; after the segmented model data packets are transmitted to the second operating environment, the packets corresponding to the functional modules may be spliced together to generate the final target face recognition model.
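The split-and-splice flow just described can be sketched as follows. This is an illustrative sketch only: the function names, packet size, and the byte-string representation of the serialized model are assumptions, not the patent's actual implementation.

```python
def split_model(model_bytes: bytes, packet_size: int) -> list:
    """Divide a serialized face recognition model into model data packets."""
    return [model_bytes[i:i + packet_size]
            for i in range(0, len(model_bytes), packet_size)]


def splice_model(packets: list) -> bytes:
    """Splice the packets together, in order, to rebuild the target model."""
    return b"".join(packets)


model = bytes(range(256)) * 4        # stand-in for an initialized model file
packets = split_model(model, 300)    # yields at least two packets
assert len(packets) >= 2
assert splice_model(packets) == model   # reassembly is lossless
```

Splitting by fixed size is only one option; as the text notes, segmentation per functional module works the same way as long as the packets are spliced back in order.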
In one embodiment, execution of step 202 may begin when an initialization condition is detected to be satisfied. For example, with the face recognition model stored in the first operating environment, the electronic device may initialize the face recognition model at startup, when it detects that an application program requiring face recognition processing is opened, or when it detects a face recognition instruction; it may then compress the initialized face recognition model and transmit the compressed face recognition model to the second operating environment.
In other embodiments provided by the application, before the face recognition model is initialized in the first operating environment, the remaining storage space in the second operating environment may be obtained; if the residual storage space is smaller than the space threshold, initializing the face recognition model in the first operating environment, and dividing the initialized face recognition model into at least two model data packets. The space threshold may be set as required, and is generally the sum of the storage space occupied by the face recognition model and the storage space occupied when the face recognition model is initialized.
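The space-threshold rule above can be sketched as follows; the 20M/10M figures echo the earlier example, and the function name and return values are assumptions for illustration.

```python
def choose_init_environment(remaining_space: int,
                            model_size: int,
                            init_overhead: int) -> str:
    """Pick where to initialize the model, per the space-threshold rule.

    The threshold is the storage the model occupies when stored plus the
    extra storage its initialization consumes.
    """
    space_threshold = model_size + init_overhead
    if remaining_space < space_threshold:
        # Not enough room in the second environment: initialize in the
        # first (larger) environment and transmit the result in packets.
        return "first"
    # Enough room: transmit the raw model and initialize it in place.
    return "second"


assert choose_init_environment(25, model_size=20, init_overhead=10) == "first"
assert choose_init_environment(30, model_size=20, init_overhead=10) == "second"
```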
If the remaining storage space in the second operating environment is large, the face recognition model can be directly sent to the second operating environment, initialization processing is carried out in the second operating environment, and the original face recognition model is deleted after initialization is completed, so that the data security can be ensured. The data processing method may further include: if the residual storage space is larger than or equal to the space threshold, the face recognition model is divided into at least two model data packets in the first operation environment, and the model data packets are transmitted into the second operation environment; generating a target face recognition model from the model data packet in a second operating environment, and initializing the target face recognition model; deleting the target face recognition model before initialization, and keeping the target face recognition model after initialization. After the target face recognition model is generated in the second operating environment, face recognition processing can be directly performed according to the target face recognition model.
It will be appreciated that a face recognition model may generally include a plurality of processing modules, each performing a different process, and that the plurality of processing modules may be independent of each other. For example, a face detection module, a face matching module, and a liveness detection module may be included. Some of the modules may have relatively low security requirements, and some of the modules may have relatively high security requirements. Therefore, the processing module with lower security requirement can be initialized in the first operating environment, and the processing module with higher security requirement can be initialized in the second operating environment.
Specifically, step 204 may include: performing a first initialization on the face recognition model in the first operating environment, and dividing the face recognition model after the first initialization into at least two model data packets. Step 206 may be followed by: performing a second initialization on a second module in the target face recognition model, where the second module comprises the modules of the face recognition model other than the first module, and the security requirement of the first module is lower than that of the second module. For example, the first module may be the face detection module, and the second module may be the face matching module and the liveness detection module; the first module, with its lower security requirement, is initialized in the first operating environment, while the second module, with its higher security requirement, is initialized in the second operating environment.
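A minimal sketch of this security-tiered, two-phase initialization: the module-to-security-level mapping mirrors the example in the text, while the dictionary layout and function names are assumptions.

```python
# Security levels mirror the example above: face detection is the
# lower-security first module; face matching and liveness detection
# form the higher-security second module.
MODULES = {
    "face_detection": "low",
    "face_matching": "high",
    "liveness_detection": "high",
}


def first_initialization(modules: dict) -> list:
    """In the first operating environment, initialize low-security modules."""
    return [name for name, level in modules.items() if level == "low"]


def second_initialization(modules: dict) -> list:
    """In the second operating environment, initialize the remaining modules."""
    return [name for name, level in modules.items() if level == "high"]


assert first_initialization(MODULES) == ["face_detection"]
assert second_initialization(MODULES) == ["face_matching", "liveness_detection"]
```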
The data processing method provided by the above embodiment may store the face recognition model in the first operating environment, initialize the face recognition model in the first operating environment, divide the initialized face recognition model into at least two model data packets, and transmit the data packets to the second operating environment. Because the storage space in the second operating environment is smaller than that in the first operating environment, the face recognition model is initialized in the first operating environment, so that the initialization efficiency of the face recognition model can be improved, the resource occupancy rate in the second operating environment is reduced, and the data processing speed is increased. Meanwhile, the face recognition model is divided into a plurality of data packets for transmission, so that the data transmission efficiency is improved.
Fig. 3 is a flowchart of a data processing method in another embodiment. As shown in fig. 3, the data processing method includes steps 302 to 314. Wherein:
step 302, a face recognition model stored in a first operating environment is obtained.
Generally, before face recognition processing, a face recognition model is trained, so that the recognition accuracy of the face recognition model is higher. In the process of training the model, a training image set is obtained, images in the training image set are used as the input of the model, and the training parameters of the model are continuously adjusted according to the training result obtained in the training process, so that the optimal parameters of the model are obtained. The more images included in the training image set, the more accurate the model obtained by training, but the time consumption is increased correspondingly.
In one embodiment, the electronic device may be a terminal that interacts with the user, and the face recognition model may be trained on the server due to limited terminal resources. And after the face recognition model is trained by the server, the trained face recognition model is sent to the terminal. And after the terminal receives the trained face recognition model, storing the trained face recognition model in a first operating environment. Step 302 may also be preceded by: the terminal receives the face recognition model sent by the server and stores the face recognition model into a first operating environment of the terminal.
The terminal may include a first operating environment and a second operating environment, and the terminal performs face recognition processing on images in the second operating environment. However, because the storage space the terminal allocates to the first operating environment is larger than the storage space allocated to the second operating environment, the terminal stores the received face recognition model in the storage space of the first operating environment. In one embodiment, each time a restart of the terminal is detected, the face recognition model stored in the first operating environment may be loaded into the second operating environment, so that when face recognition processing needs to be performed on an image, the face recognition model loaded in the second operating environment can be called directly. Step 302 may specifically include: when a restart of the terminal is detected, acquiring the face recognition model stored in the first operating environment.
It can be understood that the face recognition model can be updated, when the face recognition model is updated, the server sends the updated face recognition model to the terminal, and after the terminal receives the updated face recognition model, the updated face recognition model is stored in the first operating environment to cover the original face recognition model. And then the terminal is controlled to restart, and after the terminal is restarted, the updated face recognition model is obtained and initialized.
Step 304, initializing the face recognition model in the first operating environment, acquiring the space capacity of the shared buffer, and dividing the face recognition model into at least two model data packets according to the space capacity.
Before the face recognition processing is performed by the face recognition model, the face recognition model needs to be initialized. In the initialization process, parameters, modules and the like in the face recognition model can be set to be in default states. Because the memory is also occupied in the process of initializing the model, the terminal can initialize the face recognition model in the first operating environment and then send the initialized face recognition model to the second operating environment, so that the face recognition processing can be directly carried out in the second operating environment without occupying extra memory to initialize the model.
The face recognition model may be stored in file form, or in other forms, which is not limited herein. The face recognition model may generally include a plurality of functional modules, for example a face detection module, a face matching module, a liveness detection module, and the like. When the face recognition model is segmented, it may be divided into at least two model data packets according to these functional modules, so that the target face recognition model can be conveniently generated by subsequent recombination. In other embodiments, the segmentation may be performed in other manners, which is not limited.
Step 306, assigning each model data packet a corresponding data number, and sequentially transmitting the model data packets from the first operating environment to the second operating environment according to the data numbers.
It is understood that when storing data, the data is written to consecutive memory addresses sequentially according to the time sequence of storage. After the face recognition model is segmented, the segmented model data packets can be numbered, and then the model data packets can be sequentially transmitted to the second operation environment according to the data numbers to be stored. After the transmission of the model data packets is finished, the model data packets are spliced in sequence to generate a target face recognition model.
In one embodiment, data transmission between the first operating environment and the second operating environment may be implemented through a shared buffer (Share Buffer), so when the first operating environment segments the face recognition model, the segmentation may be performed according to the capacity of the shared buffer. Specifically, the space capacity of the shared buffer is acquired, and the face recognition model is divided into at least two model data packets according to that space capacity, with the data volume of each model data packet less than or equal to the space capacity.
It should be noted that the shared buffer is a channel through which the first operating environment and the second operating environment transmit data, and both the first operating environment and the second operating environment can access the shared buffer. The electronic equipment can configure the shared buffer area, and the space size of the shared buffer area can be set according to requirements. For example, the electronic device may set the storage space of the shared buffer to be 5M or 10M. When data is transmitted, the face recognition model is cut according to the capacity of the shared buffer area and then transmitted, so that the shared buffer area does not need to be additionally configured with larger capacity to transmit data, and the resource occupation of the electronic equipment is reduced.
When the face recognition model is transmitted through the shared buffer, the method specifically includes: sequentially transmitting the model data packets from the first operating environment to the shared buffer, and transmitting the model data packets from the shared buffer to the second operating environment. Step 306 may specifically include: assigning each model data packet a corresponding data number, sequentially transmitting the model data packets from the first operating environment to the shared buffer according to the data numbers, and then transmitting the model data packets from the shared buffer to the second operating environment.
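A toy sketch of moving numbered packets through a bounded shared buffer, one packet per trip. The buffer is modeled as a simple variable, and the capacity constant and names are illustrative assumptions (the 5M figure echoes the example above).

```python
BUFFER_CAPACITY = 5 * 1024 * 1024   # e.g. a shared buffer configured at 5M

def transfer_via_shared_buffer(numbered_packets, capacity=BUFFER_CAPACITY):
    """Move (data_number, payload) packets through the bounded shared buffer.

    The packets arrive already in data-number order from the first
    operating environment; each must fit the buffer, since the model
    was segmented according to the buffer's space capacity.
    """
    received = []
    for number, payload in numbered_packets:
        assert len(payload) <= capacity     # sized to fit when segmented
        shared_buffer = (number, payload)   # first environment writes
        received.append(shared_buffer)      # second environment reads
    return received
```

Because packets are sized to the buffer at segmentation time, no larger buffer needs to be configured just for this transfer, which is the resource saving the text describes.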
FIG. 4 is a system diagram illustrating a method for implementing data processing in one embodiment. As shown in fig. 4, the system includes a first runtime environment 402, a shared buffer 404, and a second runtime environment 406. The first runtime environment 402 and the second runtime environment 406 may perform data transfer through the shared buffer 404. The face recognition model is stored in the first operating environment 402, and the system may acquire the face recognition model stored in the first operating environment 402, initialize the acquired face recognition model, segment the initialized face recognition model, transmit a model data packet formed by segmentation into the shared buffer 404, and transmit the model data packet into the second operating environment 406 through the shared buffer 404. Finally, the model data packets are spliced into the target face recognition model in the second operating environment 406.
In step 308, the model data packets are spliced according to the data numbers in the second operating environment to generate a target face recognition model.
Specifically, the data numbers may be used to represent an arrangement order of the model data packets, and after the model data packets are transmitted into the second operating environment, the model data packets are arranged in order according to the data numbers, and then are spliced according to the arrangement order to generate the target face recognition model.
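The splicing step is the inverse of the segmentation: arrange the packets by their data numbers, then concatenate. A minimal sketch, with the name `splice_model` assumed for illustration:

```python
def splice_model(packets):
    """Reassemble (data_number, chunk) packets into the original model bytes.
    Packets may arrive in any order; the data numbers restore the order."""
    return b"".join(chunk for _, chunk in sorted(packets, key=lambda p: p[0]))

# Packets arriving out of order are still spliced correctly
assert splice_model([(1, b"cd"), (0, b"ab"), (2, b"e")]) == b"abcde"
```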
FIG. 5 is a diagram of a segmented face recognition model in one embodiment. As shown in fig. 5, the face recognition model 502 is stored in file form and is divided into 3 model data packets 504, which may also be in file form. The data volume of each segmented model data packet 504 is smaller than that of the face recognition model 502, and the data volumes of the model data packets 504 may be the same or different. For example, if the face recognition model 502 is 30 MB in total, it can be divided equally by data volume into model data packets of 10 MB each.
In step 310, when the face recognition instruction is detected, the security level of the face recognition instruction is determined.
The face recognition models are stored in the first running environment and the second running environment, and the terminal can perform face recognition processing in the first running environment and can also perform face recognition processing in the second running environment. Specifically, the terminal may determine whether to perform face recognition processing in the first operating environment or to perform face recognition processing in the second operating environment according to a face recognition instruction that triggers the face recognition processing.
The face recognition instruction is initiated by an upper-layer application of the terminal, and when the upper-layer application initiates the face recognition instruction, information such as the initiation time, an application identifier, and an operation identifier can be written into the face recognition instruction. The application identifier may identify the application program that initiated the face recognition instruction, and the operation identifier may identify the application operation that requires the face recognition result. For example, application operations such as payment, unlocking, and beautification can be performed based on the face recognition result, and the operation identifier in the face recognition instruction indicates which of these operations is requested.
The security level is used to indicate the security requirement of the application operation: the higher the security level, the higher the requirement of the application operation on security. For example, a payment operation has a high security requirement and a beautification operation has a low one, so the security level of the payment operation is higher than that of the beautification operation. The security level can be written directly into the face recognition instruction, and after the terminal detects the face recognition instruction, it reads the security level from the instruction directly. Alternatively, a correspondence between operation identifiers and security levels can be established in advance, and after the face recognition instruction is detected, the corresponding security level is obtained through the operation identifier in the instruction.
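Both ways of obtaining the security level can be sketched together. The concrete levels, threshold, and operation names below are hypothetical; the patent does not fix any values.

```python
# Hypothetical pre-established correspondence between operation identifiers
# and security levels, plus a hypothetical level threshold.
SECURITY_LEVELS = {"payment": 3, "unlock": 3, "beauty": 1}
LEVEL_THRESHOLD = 2

def security_level(instruction: dict) -> int:
    # Prefer a level written directly into the face recognition instruction;
    # otherwise fall back to the operation-identifier mapping.
    if "security_level" in instruction:
        return instruction["security_level"]
    return SECURITY_LEVELS[instruction["operation"]]

assert security_level({"operation": "payment"}) > LEVEL_THRESHOLD  # second environment
assert security_level({"operation": "beauty"}) < LEVEL_THRESHOLD   # first environment
```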
In step 312, if the security level is lower than the level threshold, a face recognition process is performed according to the face recognition model in the first operating environment.
When the security level is lower than the level threshold value, the security requirement of the application operation initiating the face recognition processing is considered to be low, and the face recognition processing can be directly performed in the first running environment according to the face recognition model. Specifically, the face recognition processing may include, but is not limited to, one or more of face detection, face matching, and living body detection, where the face detection refers to a process of detecting whether a face exists in an image, the face matching refers to a process of matching a detected face with a preset face, and the living body detection refers to a process of detecting whether a face in an image is a living body.
Step 314, if the security level is higher than the level threshold, performing face recognition processing according to the face recognition model in a second operating environment; the safety of the second operation environment is higher than that of the first operation environment.
When the security level is higher than the level threshold, the security requirement of the application operation initiating the face recognition processing is considered high, and the face recognition processing is performed according to the face recognition model in the second operating environment. Specifically, the terminal can send the face recognition instruction to the second operating environment and control the camera module to collect images through the second operating environment. The collected image is first sent to the second operating environment, where the security level of the application operation is judged: if the security level is lower than the level threshold, the collected image is sent to the first operating environment for face recognition processing; if the security level is higher than the level threshold, face recognition processing is performed on the collected image in the second operating environment.
Specifically, as shown in fig. 6, when performing the face recognition processing in the first operating environment, the method includes:
step 602, controlling the camera module to collect a first target image and a speckle image, and sending the first target image to a first operating environment and sending the speckle image to a second operating environment.
An application installed in the terminal can initiate a face recognition instruction and send it to the second operating environment. When the security level of the face recognition instruction detected in the second operating environment is lower than the level threshold, the camera module can be controlled to collect the first target image and the speckle image. The first target image collected by the camera module can be sent directly to the first operating environment, and the collected speckle image is sent to the second operating environment.
In one embodiment, the first target image may be a visible light image or another type of image, which is not limited herein. When the first target image is a visible light image, the camera module may include an RGB (Red Green Blue) camera, and the first target image is collected by the RGB camera. The camera module may further include a laser lamp and a laser camera; the terminal can control the laser lamp to turn on and then collect, through the laser camera, the speckle image formed when the laser speckle emitted by the laser lamp irradiates an object.
Specifically, when laser is irradiated on an optically rough surface with average fluctuation larger than the wavelength order, wavelets scattered by randomly distributed surface elements on the surface are mutually superposed to enable a reflected light field to have random spatial light intensity distribution and present a granular structure, namely laser speckle. The laser speckles formed are highly random, and therefore, the laser speckles generated by the laser emitted by different laser emitters are different. When the resulting laser speckle is projected onto objects of different depths and shapes, the resulting speckle images are not identical. The laser speckles formed by different laser lamps have uniqueness, so that the obtained speckle images also have uniqueness.
And step 604, calculating to obtain a depth image according to the speckle image in the second operating environment, and sending the depth image to the first operating environment.
To protect data security, the terminal can ensure that speckle images are always processed in a secure environment, so it transmits the speckle images to the second operating environment for processing. The depth image is an image representing the depth information of the photographed object, and it is calculated from the speckle image. The terminal can control the camera module to collect the first target image and the speckle image simultaneously, and the depth image calculated from the speckle image can represent the depth information of the object in the first target image.
A depth image may be computed from the speckle image and a reference image in the second operating environment. The reference image is an image acquired when the laser speckle irradiates a reference plane, so the reference image carries reference depth information. First, the relative depth can be calculated according to the positional offset of the speckle points in the speckle image relative to the corresponding speckle points in the reference image; the relative depth represents the depth of the actual photographed object relative to the reference plane. Then the actual depth information of the object is calculated from the relative depth and the reference depth. Specifically, the reference image is compared with the speckle image to obtain offset information representing the horizontal offset of each speckle point in the speckle image relative to the corresponding speckle point in the reference image, and the depth image is calculated from the offset information and the reference depth information.
FIG. 7 is a schematic diagram of computing depth information in one embodiment. As shown in fig. 7, the laser light 702 can generate laser speckles, which are reflected off of an object and then captured by the laser camera 704 to form an image. In the calibration process of the camera, laser speckles emitted by the laser lamp 702 are reflected by the reference plane 708, reflected light is collected by the laser camera 704, and a reference image is obtained by imaging through the imaging plane 710. The reference depth L from the reference plane 708 to the laser lamp 702 is known. In the process of actually calculating the depth information, laser speckles emitted by the laser lamp 702 are reflected by the object 706, reflected light is collected by the laser camera 704, and an actual speckle image is obtained by imaging through the imaging plane 710. The calculation formula for obtaining the actual depth information is as follows:
Dis = (CD × L × f) / (CD × f + AB × L)
where L is the distance between the laser lamp 702 and the reference plane 708, f is the focal length of the lens in the laser camera 704, CD is the distance between the laser lamp 702 and the laser camera 704, and AB is the offset distance between the image of the object 706 and the image of the reference plane 708. AB may be the product of the pixel offset n and the actual distance p between adjacent pixels. When the distance Dis between the object 706 and the laser lamp 702 is greater than the distance L between the reference plane 708 and the laser lamp 702, AB is negative; when Dis is less than L, AB is positive.
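The similar-triangle relation of Fig. 7 can be checked numerically. The sketch below encodes the depth formula Dis = (CD·f·L)/(CD·f + AB·L) with AB = n·p; all numeric values are illustrative, not calibration data from the patent.

```python
def depth_from_offset(L, f, CD, n, p):
    """Actual object depth Dis from the image-plane offset AB = n * p:
    Dis = (CD * f * L) / (CD * f + AB * L)."""
    AB = n * p
    return (CD * f * L) / (CD * f + AB * L)

# Zero offset: the object lies on the reference plane, so Dis == L
assert depth_from_offset(L=100.0, f=3.0, CD=50.0, n=0, p=0.5) == 100.0
# Negative offset (AB < 0): the object is farther away than the reference plane
assert depth_from_offset(L=100.0, f=3.0, CD=50.0, n=-2, p=0.5) == 300.0
```

Note how the sign convention matches the text: a negative AB shrinks the denominator and yields Dis > L, while a positive AB yields Dis < L.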
And 606, performing face recognition processing on the first target image and the depth image through a face recognition model in the first running environment.
After the depth image is obtained through calculation in the second running environment, the depth image obtained through calculation can be sent to the first running environment, then face recognition processing is carried out according to the first target image and the depth image in the first running environment, the first running environment sends a face recognition result to the upper layer application, and the upper layer application can carry out corresponding application operation according to the face recognition result.
For example, when the image is subjected to beautification processing, the position and area of the face can be detected from the first target image. Because the first target image and the depth image correspond to each other, the depth information of the face can be obtained from the corresponding area of the depth image, three-dimensional features of the face can be constructed from that depth information, and the face can then be beautified according to its three-dimensional features.
In other embodiments provided in the present application, as shown in fig. 8, when performing face recognition processing in the second operating environment, the method specifically includes:
and step 802, controlling the camera module to collect a second target image and a speckle image, and sending the second target image and the speckle image to a second operating environment.
In one embodiment, the second target image may be an infrared image. The camera module may include a floodlight, a laser lamp, and a laser camera. The terminal can control the floodlight to turn on and then collect, through the laser camera, the infrared image formed when the floodlight irradiates an object, as the second target image. The terminal can also control the laser lamp to turn on and then collect, through the laser camera, the speckle image formed when the laser lamp irradiates the object.
The time interval between collecting the second target image and the speckle image should be short, which ensures consistency between the two images, avoids a large error between them, and improves the accuracy of image processing. Specifically, the camera module is controlled to collect the second target image and the speckle image such that the time interval between the first time of collecting the second target image and the second time of collecting the speckle image is less than a first threshold.
A floodlight controller and a laser lamp controller may be provided separately and connected through two channels of Pulse Width Modulation (PWM). When the floodlight or the laser lamp needs to be turned on, a pulse wave can be transmitted through PWM to the floodlight controller to turn on the floodlight, or to the laser lamp controller to turn on the laser lamp, and the time interval between collecting the second target image and the speckle image is controlled through the pulse waves transmitted to the two controllers. It is understood that the second target image may be an infrared image or another type of image, which is not limited herein; for example, the second target image may also be a visible light image.
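The consistency check on capture times can be expressed directly. The threshold value below is an assumption for illustration; the patent only requires the interval to be less than "a first threshold".

```python
FIRST_THRESHOLD = 0.005  # seconds; illustrative value, not specified in the text

def images_consistent(first_time: float, second_time: float) -> bool:
    """The second target image (captured at first_time) and the speckle image
    (captured at second_time) are usable together only when the interval
    between their capture times is below the first threshold."""
    return abs(first_time - second_time) < FIRST_THRESHOLD

assert images_consistent(10.000, 10.003)      # 3 ms apart: consistent
assert not images_consistent(10.000, 10.020)  # 20 ms apart: error too large
```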
And step 804, calculating to obtain a depth image according to the speckle image in the second operating environment.
It should be noted that, when the security level of the face recognition instruction is higher than the level threshold, the security requirement of the application operation initiating the instruction is considered higher, and the face recognition processing needs to be performed in an environment with higher security to ensure the security of data processing. The second target image and the speckle image collected by the camera module are sent directly to the second operating environment, and the depth image is then calculated from the speckle image in the second operating environment.
Step 806, performing face recognition processing on the second target image and the depth image through the face recognition model in the second operating environment.
In one embodiment, when the face recognition processing is performed in the second operating environment, the face detection may be performed according to the second target image, and whether the second target image includes the target face or not may be detected. And if the second target image contains the target face, matching the detected target face with a preset face. And if the detected target face is matched with the preset face, acquiring target depth information of the target face according to the depth image, and detecting whether the target face is a living body according to the target depth information.
When the target face is matched, the face attribute features of the target face can be extracted, the extracted face attribute features are matched with the face attribute features of the preset face, and if the matching value exceeds the matching threshold value, the face matching is considered to be successful. For example, the characteristics of the human face, such as the deflection angle, the brightness information, the facial features and the like, can be extracted as the human face attribute characteristics, and if the matching degree of the human face attribute characteristics of the target human face and the human face attribute characteristics of the preset human face exceeds 90%, the human face matching is considered to be successful.
Generally, in the process of face authentication, if a face in a photograph or sculpture is captured, the extracted face attribute features may still pass authentication. To improve accuracy, living body detection can be performed according to the acquired depth image, ensuring that the captured face is a living face before authentication succeeds. It can be understood that the collected second target image can represent the detail information of the face, the collected depth image can represent the corresponding depth information, and living body detection can be performed according to the depth image. For example, if the photographed face is a face in a photograph, it can be determined from the depth image that the captured face is not three-dimensional, so the captured face can be considered a non-living face.
Specifically, performing living body detection according to the depth image includes: searching the depth image for face depth information corresponding to the target face; if face depth information corresponding to the target face exists in the depth image and conforms to the face stereo rule, the target face is a living face. The face stereo rule is a rule containing three-dimensional depth information of a face.
In an embodiment, an artificial intelligence model may be further used to perform artificial intelligence recognition on the second target image and the depth image, acquire a living body attribute feature corresponding to the target face, and determine whether the target face is a living body face image according to the acquired living body attribute feature. The living body attribute features may include skin characteristics, a direction of a texture, a density of the texture, a width of the texture, and the like corresponding to the target face, and if the living body attribute features conform to a living body rule of the face, the target face is considered to have biological activity, that is, the target face is the living body face.
It is to be understood that, when processing such as face detection, face matching, and living body detection is performed, the processing order may be changed as necessary. For example, the human face may be authenticated first, and then whether the human face is a living body may be detected. Or whether the human face is a living body can be detected firstly, and then the human face is authenticated.
In the embodiment provided by the application, in order to ensure the safety of data, when the face recognition model is transmitted, the compressed face recognition model can be encrypted, and the encrypted face recognition model is transmitted from the first operating environment to the second operating environment; and decrypting the face recognition model after the encryption processing in a second running environment, and storing the face recognition model after the decryption processing.
The first operating environment may be a normal operating environment, the second operating environment may be a safe operating environment, and the second operating environment may be safer than the first operating environment. The first execution environment is generally configured to process application operations with lower security, and the second execution environment is generally configured to process application operations with higher security. For example, operations with low security requirements, such as shooting and gaming, may be performed in a first operating environment, and operations with high security requirements, such as payment and unlocking, may be performed in a second operating environment.
The second operating environment is generally used for performing application operations with high security requirements, and therefore, when the face recognition model is sent to the second operating environment, the security of the face recognition model also needs to be ensured. After the face recognition model is compressed in the first operating environment, the compressed face recognition model may be encrypted, and then the encrypted face recognition model may be sent to the second operating environment through the shared buffer.
After the encrypted face recognition model is transmitted from the first operating environment into the shared buffer, it is transmitted from the shared buffer into the second operating environment, where the received encrypted face recognition model is decrypted. The algorithm for encrypting the face recognition model is not limited in this embodiment; for example, the encryption processing may be performed according to an algorithm such as DES (Data Encryption Standard), MD5 (Message-Digest Algorithm 5), HAVAL, or Diffie-Hellman key exchange.
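The encrypt-transfer-decrypt round trip can be sketched without committing to a specific cipher. The XOR "cipher" below is a toy stand-in for DES or another real algorithm, used only to show that what crosses the shared buffer is ciphertext and that the second environment recovers the model with the shared key.

```python
def xor_cipher(data: bytes, key: bytes) -> bytes:
    # Toy symmetric cipher: XOR with a repeating key. Applying it twice
    # with the same key recovers the original data. Not secure; a real
    # implementation would use an actual encryption algorithm such as DES.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

model = b"compressed face recognition model"
encrypted = xor_cipher(model, b"shared-key")          # first environment: encrypt
assert encrypted != model                             # buffer carries ciphertext
assert xor_cipher(encrypted, b"shared-key") == model  # second environment: decrypt
```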
In one embodiment, after the target face recognition model is generated in the second operating environment, the method may further include: deleting the target face recognition model in the second operating environment when it is detected that the duration for which the target face recognition model has not been called exceeds a duration threshold, or that the terminal is turned off. This frees storage space in the second operating environment and saves space on the electronic device.
Furthermore, the operation condition can be detected in the operation process of the electronic equipment, and the storage space occupied by the target face recognition model is released according to the operation condition of the electronic equipment. Specifically, when it is detected that the electronic device is in a stuck state and the time length of the non-invoked target face recognition model exceeds a time length threshold, the target face recognition model in the second operating environment is deleted.
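The idle-timeout release can be sketched as a small cache wrapper. The class name, the 300-second threshold, and the method names are assumptions for illustration; the patent only requires that an uncalled model be deleted after a duration threshold.

```python
import time

DURATION_THRESHOLD = 300.0  # seconds; illustrative value

class ModelCache:
    """Holds the target face recognition model in the second environment and
    releases its storage when it has gone uncalled past the threshold."""

    def __init__(self, model):
        self.model = model
        self.last_called = time.monotonic()

    def call(self):
        self.last_called = time.monotonic()
        return self.model

    def maybe_release(self, now=None):
        """Return True if the model was (or already is) released."""
        now = time.monotonic() if now is None else now
        if self.model is not None and now - self.last_called > DURATION_THRESHOLD:
            self.model = None  # free the second environment's storage space
        return self.model is None

cache = ModelCache(model=object())
assert not cache.maybe_release()                           # recently used: kept
assert cache.maybe_release(now=cache.last_called + 301.0)  # idle too long: freed
```

After release, the model would be re-transferred from the first environment on the next face recognition instruction, as the surrounding text describes.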
After the target face recognition model is released, the face recognition model stored in the first operating environment can be acquired when the electronic equipment is detected to recover to a normal operating state or a face recognition instruction is detected; then initializing the face recognition model in a first running environment, and dividing the initialized face recognition model into at least two model data packets; and sequentially transmitting the model data packets from the first operating environment to a second operating environment, and generating a target face recognition model according to the model data packets in the second operating environment.
The data processing method provided by the above embodiment may store the face recognition model in the first operating environment, initialize the face recognition model in the first operating environment, divide the initialized face recognition model into at least two model data packets, and transmit the data packets to the second operating environment. Because the storage space in the second operating environment is smaller than that in the first operating environment, the face recognition model is initialized in the first operating environment, so that the initialization efficiency of the face recognition model can be improved, the resource occupancy rate in the second operating environment is reduced, and the data processing speed is increased. Meanwhile, the face recognition model is divided into a plurality of data packets for transmission, so that the data transmission efficiency is improved. In addition, the processing is carried out in the first running environment or the second running environment according to the safety level selection of the face recognition instruction, all the applications are prevented from being processed in the second running environment, and the resource occupancy rate of the second running environment can be reduced.
It should be understood that, although the steps in the flowcharts of fig. 2, 3, 6, and 8 are shown in the order indicated by the arrows, they are not necessarily performed in that order. Unless explicitly stated otherwise herein, these steps are not performed in a strictly limited order and may be performed in other orders. Moreover, at least some of the steps in fig. 2, 3, 6, and 8 may include multiple sub-steps or stages, which are not necessarily performed at the same moment but may be performed at different moments, and which are not necessarily performed sequentially but may be performed in turn or alternately with other steps or with at least some of the sub-steps or stages of other steps.
Fig. 9 is a hardware configuration diagram for implementing the data processing method in one embodiment. As shown in fig. 9, the electronic device may include a camera module 910, a Central Processing Unit (CPU) 920 and a Micro Control Unit (MCU) 930, where the camera module 910 includes a laser camera 912, a floodlight 914, an RGB camera 916 and a laser light 918. The mcu 930 includes a PWM (Pulse Width Modulation) module 932, an SPI/I2C (Serial Peripheral Interface/Inter-Integrated Circuit) module 934, a RAM (Random Access Memory) module 936, and a Depth Engine module 938. The central processing unit 920 may be in a multi-core operation mode, and a CPU core in the central processing unit 920 may operate under a TEE or a REE. Both the TEE and the REE are running modes of an ARM module (Advanced RISC Machines). The natural operating environment in the cpu 920 may be the first operating environment, and the security is low. The trusted operating environment in the central processing unit 920 is the second operating environment, and the security is high. It is understood that, since the mcu 930 is a processing module independent from the cpu 920 and the input and output of the mcu 930 are controlled by the cpu 920 under the trusted operating environment, the mcu 930 is also a processing module with higher security, and it can be considered that the mcu 930 is also under the secure operating environment, i.e. the mcu 930 is also under the second operating environment.
Generally, the operation behavior with higher security requirement needs to be executed in the second operation environment, and other operation behaviors can be executed in the first operation environment. In this embodiment, the central processing unit 920 may send a face recognition instruction to the SPI/I2C module 934 in the micro control unit 930 through the trusted operating environment control SECURE SPI/I2C. After receiving the face recognition instruction, if the safety level of the face recognition instruction is determined to be higher than the level threshold, the micro control unit 930 transmits a pulse wave through the PWM module 932 to control the opening of the floodlight 914 in the camera module 910 to collect an infrared image, and controls the opening of the laser light 918 in the camera module 910 to collect a speckle image. The camera module 910 can transmit the collected infrared image and speckle image to a Depth Engine module 938 in the micro-control unit 930, and the Depth Engine module 938 can calculate a Depth image according to the speckle image and transmit the infrared image and the Depth image to a trusted operating environment of the central processor 920. The trusted operating environment of the cpu 920 performs face recognition processing according to the received infrared image and depth image.
If the security level of the face recognition instruction is lower than the level threshold, the PWM module 932 emits pulse waves to control the laser lamp 918 in the camera module 910 to turn on and collect a speckle image, and the RGB camera 916 collects a visible light image. The camera module 910 sends the collected visible light image directly to the natural operating environment of the central processor 920 and transmits the speckle image to the Depth Engine module 938 in the micro control unit 930, which calculates the depth image from the speckle image and sends it to the trusted operating environment of the central processor 920. The trusted operating environment then sends the depth image to the natural operating environment, where face recognition processing is performed according to the visible light image and the depth image.
FIG. 10 is a block diagram of a data processing apparatus according to an embodiment. As shown in fig. 10, the data processing apparatus 1000 includes a model acquisition module 1002, a model segmentation module 1004, and a model transmission module 1006. Wherein:
a model obtaining module 1002, configured to obtain a face recognition model stored in a first operating environment.
A model segmentation module 1004 configured to initialize the face recognition model in the first operating environment, and segment the initialized face recognition model into at least two model data packets.
A model transmission module 1006, configured to sequentially transmit the model data packet from the first operating environment to a second operating environment, and generate a target face recognition model according to the model data packet in the second operating environment; the storage space of the first operating environment is larger than that of the second operating environment, and the target face recognition model is used for carrying out face recognition processing on the image.
The data processing apparatus provided in the above embodiment may store the face recognition model in the first operating environment, initialize the face recognition model in the first operating environment, divide the initialized face recognition model into at least two model data packets, and transmit the data packets to the second operating environment. Because the storage space in the second operating environment is smaller than that in the first operating environment, the face recognition model is initialized in the first operating environment, so that the initialization efficiency of the face recognition model can be improved, the resource occupancy rate in the second operating environment is reduced, and the data processing speed is increased. Meanwhile, the face recognition model is divided into a plurality of data packets for transmission, so that the data transmission efficiency is improved.
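The split-and-splice flow summarized above can be sketched in a few lines of Python. The packet size, the byte string standing in for an initialized model, and the function names below are illustrative assumptions, not details taken from the patent:

```python
# Illustrative sketch of the split/transmit/splice flow; all names and
# sizes are hypothetical, not taken from the patent.

def split_model(model_bytes: bytes, packet_size: int) -> list[bytes]:
    """Divide the initialized model into at least two model data packets."""
    return [model_bytes[i:i + packet_size]
            for i in range(0, len(model_bytes), packet_size)]

def splice_model(packets: list[bytes]) -> bytes:
    """Reassemble the packets in the second operating environment."""
    return b"".join(packets)

# A stand-in for an initialized face recognition model.
model = bytes(range(256)) * 4          # 1024 "model" bytes
packets = split_model(model, 300)      # packet size bounded by the transfer channel
assert len(packets) >= 2               # "at least two model data packets"
assert splice_model(packets) == model  # the spliced target model matches the source
```

The point of the round trip is only that splitting followed by in-order splicing is lossless; how each packet physically crosses the environment boundary is a separate concern.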
FIG. 11 is a schematic structural diagram of a data processing apparatus according to another embodiment. As shown in FIG. 11, the data processing apparatus 1100 includes a model acquisition module 1102, a model segmentation module 1104, a model transmission module 1106, and a face recognition module 1108. Wherein:
a model obtaining module 1102, configured to obtain a face recognition model stored in a first operating environment.
A model segmentation module 1104, configured to initialize the face recognition model in the first operating environment, and segment the initialized face recognition model into at least two model data packets.
A model transmission module 1106, configured to sequentially transmit the model data packets from the first operating environment to a second operating environment, and generate a target face recognition model from the model data packets in the second operating environment; the storage space of the first operating environment is larger than that of the second operating environment, and the target face recognition model is used for performing face recognition processing on an image.
A face recognition module 1108, configured to determine a security level of a face recognition instruction when the face recognition instruction is detected; if the security level is lower than a level threshold, perform face recognition processing according to the face recognition model in the first operating environment; if the security level is higher than the level threshold, perform face recognition processing according to the face recognition model in the second operating environment; wherein the security of the second operating environment is higher than that of the first operating environment.
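The level-threshold dispatch can be expressed as a small routing function. The threshold value and the environment labels below are assumptions for illustration; the patent does not specify how the boundary case of an exactly equal level is handled:

```python
# Hypothetical sketch of routing a face recognition instruction by its
# security level; the threshold and the labels are illustrative only.

LEVEL_THRESHOLD = 5  # assumed value, not specified in the patent

def select_environment(security_level: int) -> str:
    """Pick the environment whose face recognition model should run.

    Below the threshold, the larger but less secure first environment is
    used; above it, the smaller, trusted second environment. The equal
    case is not specified in the text; this sketch sends it to the
    second environment.
    """
    if security_level < LEVEL_THRESHOLD:
        return "first_operating_environment"   # e.g. a rich OS / REE
    return "second_operating_environment"      # e.g. a TEE

assert select_environment(2) == "first_operating_environment"
assert select_environment(8) == "second_operating_environment"
```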
The data processing apparatus provided in the above embodiment may store the face recognition model in the first operating environment, initialize the face recognition model in the first operating environment, divide the initialized face recognition model into at least two model data packets, and transmit the data packets to the second operating environment. Because the storage space of the second operating environment is smaller than that of the first operating environment, initializing the face recognition model in the first operating environment improves initialization efficiency, reduces the resource occupancy of the second operating environment, and increases the data processing speed. Meanwhile, dividing the face recognition model into multiple data packets for transmission improves data transmission efficiency. In addition, processing is performed in either the first or the second operating environment according to the security level of the face recognition instruction, which avoids processing all applications in the second operating environment and can reduce the resource occupancy of the second operating environment.
In one embodiment, the model segmentation module 1104 is further configured to obtain a space capacity of the shared buffer, and segment the face recognition model into at least two model data packets according to the space capacity; wherein the data volume of each model data packet is less than or equal to the space capacity.
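Sizing each packet by the shared buffer's capacity might look like the following sketch; the 256-byte capacity and the function name are assumptions for illustration:

```python
def split_by_capacity(model_bytes: bytes, buffer_capacity: int) -> list[bytes]:
    """Split the model so that no packet exceeds the shared buffer capacity."""
    if buffer_capacity <= 0:
        raise ValueError("buffer capacity must be positive")
    packets = [model_bytes[i:i + buffer_capacity]
               for i in range(0, len(model_bytes), buffer_capacity)]
    # Every packet's data volume is less than or equal to the capacity.
    assert all(len(p) <= buffer_capacity for p in packets)
    return packets

packets = split_by_capacity(b"\x00" * 1000, 256)  # hypothetical 256-byte buffer
assert len(packets) == 4                          # 256 + 256 + 256 + 232 bytes
```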
In one embodiment, the model transmission module 1106 is further configured to sequentially transfer the model data packets from the first operating environment to a shared buffer, and transfer the model data packets from the shared buffer to the second operating environment.
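One way to picture the packet-at-a-time relay through the shared buffer is a single-slot queue between two threads. This is purely an illustration of the transfer order; the actual inter-environment mechanism (e.g. shared memory between a rich OS and a trusted execution environment) is not specified at this level:

```python
import queue
import threading

# A single-slot queue stands in for the shared buffer between the two
# environments; names and the sentinel protocol are illustrative only.
shared_buffer: "queue.Queue[bytes | None]" = queue.Queue(maxsize=1)
received: list[bytes] = []

def first_environment(packets: list[bytes]) -> None:
    for packet in packets:       # pass the packets in sequentially
        shared_buffer.put(packet)
    shared_buffer.put(None)      # sentinel: transmission finished

def second_environment() -> None:
    while (packet := shared_buffer.get()) is not None:
        received.append(packet)  # read each packet out of the buffer

packets = [b"head", b"body", b"tail"]
receiver = threading.Thread(target=second_environment)
receiver.start()
first_environment(packets)
receiver.join()
assert received == packets       # sequential transfer preserves order
```

Because the queue holds at most one packet, the sender blocks until the receiver has drained the buffer, mirroring a shared buffer that is smaller than the whole model.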
In one embodiment, the model transmission module 1106 is further configured to assign a corresponding data number to each model data packet, and sequentially transmit the model data packets from the first operating environment to the second operating environment according to the data numbers; and splicing the model data packets in the second operating environment according to the data numbers to generate a target face recognition model.
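The numbering-and-splicing step can be sketched as follows; the short byte strings are placeholders for model fragments, and the shuffle simulates packets whose arrival order differs from their send order:

```python
import random

def number_packets(packets: list[bytes]) -> list[tuple[int, bytes]]:
    """Assign each model data packet a corresponding data number."""
    return list(enumerate(packets))

def splice_by_number(numbered: list[tuple[int, bytes]]) -> bytes:
    """Splice the packets in the second environment according to their numbers."""
    return b"".join(data for _, data in sorted(numbered))

packets = [b"AA", b"BB", b"CC", b"DD"]
numbered = number_packets(packets)
random.shuffle(numbered)        # arrival order no longer matters
assert splice_by_number(numbered) == b"AABBCCDD"
```

Sorting by the attached number is what makes the splice independent of delivery order, which is the point of numbering the packets before transmission.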
In one embodiment, the model transmission module 1106 is further configured to encrypt the model data packet and transmit the encrypted model data packet from the first operating environment to the second operating environment; and decrypting the encrypted model data packet in the second operating environment.
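The patent does not name a cipher, so the sketch below uses a toy XOR keystream derived from SHA-256 purely as a stand-in for the encrypt/decrypt step; a real implementation would use an authenticated cipher (e.g. AES-GCM) with keys provisioned to both environments:

```python
import hashlib

def _keystream(key: bytes, length: int) -> bytes:
    """Toy keystream (NOT cryptographically secure): hash of key || counter."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt_packet(packet: bytes, key: bytes) -> bytes:
    """Encrypt a model data packet before it leaves the first environment."""
    return bytes(a ^ b for a, b in zip(packet, _keystream(key, len(packet))))

# XOR is its own inverse, so the second environment decrypts the same way.
decrypt_packet = encrypt_packet

key = b"shared-secret"          # hypothetical pre-shared key
packet = b"model fragment 0"
sealed = encrypt_packet(packet, key)
assert decrypt_packet(sealed, key) == packet
```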
In one embodiment, the face recognition module 1108 is further configured to control the camera module to acquire a first target image and a speckle image, send the first target image to the first operating environment, and send the speckle image to the second operating environment; calculate a depth image from the speckle image in the second operating environment, and send the depth image to the first operating environment; and perform face recognition processing on the first target image and the depth image through the face recognition model in the first operating environment.

In one embodiment, the face recognition module 1108 is further configured to control the camera module to acquire a second target image and a speckle image, and send the second target image and the speckle image to the second operating environment; calculate a depth image from the speckle image in the second operating environment; and perform face recognition processing on the second target image and the depth image through the face recognition model in the second operating environment.
The division of the modules in the data processing apparatus is only for illustration, and in other embodiments, the data processing apparatus may be divided into different modules as needed to complete all or part of the functions of the data processing apparatus.
The embodiments of the application also provide a computer-readable storage medium: one or more non-transitory computer-readable storage media containing computer-executable instructions that, when executed by one or more processors, cause the processors to perform the data processing method provided by the above embodiments.
The embodiments of the application also provide a computer program product comprising instructions which, when run on a computer, cause the computer to perform the data processing method provided by the above embodiments.
Any reference to memory, storage, database, or other medium used herein may include non-volatile and/or volatile memory. Suitable non-volatile memory can include read-only memory (ROM), Programmable ROM (PROM), Electrically Programmable ROM (EPROM), Electrically Erasable Programmable ROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM), which acts as external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchronous Link DRAM (SLDRAM), Rambus Direct RAM (RDRAM), Direct Rambus Dynamic RAM (DRDRAM), and Rambus Dynamic RAM (RDRAM).
The above-mentioned embodiments express only several implementations of the present application, and their description is specific and detailed, but should not therefore be construed as limiting the scope of the present application. It should be noted that those skilled in the art can make several variations and modifications without departing from the concept of the present application, all of which fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (10)

1. A data processing method, comprising: acquiring a face recognition model stored in a first operating environment; initializing the face recognition model in the first operating environment, and dividing the initialized face recognition model into at least two model data packets; assigning a corresponding data number to each model data packet, and sequentially transmitting the model data packets from the first operating environment to a second operating environment according to the data numbers; and splicing the model data packets in the second operating environment according to the data numbers to generate a target face recognition model; wherein the storage space of the first operating environment is larger than that of the second operating environment, and the target face recognition model is used for performing face recognition processing on an image.

2. The method according to claim 1, wherein sequentially transmitting the model data packets from the first operating environment to the second operating environment comprises: sequentially transferring the model data packets from the first operating environment to a shared buffer, and transferring the model data packets from the shared buffer to the second operating environment.

3. The method according to claim 2, wherein dividing the initialized face recognition model into at least two model data packets comprises: acquiring a space capacity of the shared buffer, and dividing the face recognition model into at least two model data packets according to the space capacity; wherein the data volume of each model data packet is less than or equal to the space capacity.

4. The method according to any one of claims 1 to 3, wherein sequentially transmitting the model data packets from the first operating environment to the second operating environment comprises: encrypting the model data packets, and transmitting the encrypted model data packets from the first operating environment to the second operating environment; and decrypting the encrypted model data packets in the second operating environment.

5. The method according to claim 4, further comprising, after splicing the model data packets in the second operating environment according to the data numbers to generate the target face recognition model: when a face recognition instruction is detected, determining a security level of the face recognition instruction; if the security level is lower than a level threshold, performing face recognition processing according to the face recognition model in the first operating environment; and if the security level is higher than the level threshold, performing face recognition processing according to the face recognition model in the second operating environment; wherein the security of the second operating environment is higher than that of the first operating environment.

6. The method according to claim 5, wherein performing face recognition processing according to the face recognition model in the first operating environment comprises: controlling a camera module to collect a first target image and a speckle image, sending the first target image to the first operating environment, and sending the speckle image to the second operating environment; calculating a depth image according to the speckle image in the second operating environment, and sending the depth image to the first operating environment; and performing face recognition processing on the first target image and the depth image through the face recognition model in the first operating environment; and wherein performing face recognition processing according to the face recognition model in the second operating environment comprises: controlling the camera module to collect a second target image and a speckle image, and sending the second target image and the speckle image to the second operating environment; calculating a depth image according to the speckle image in the second operating environment; and performing face recognition processing on the second target image and the depth image through the face recognition model in the second operating environment.

7. A data processing apparatus, comprising: a model acquisition module, configured to acquire a face recognition model stored in a first operating environment; a model segmentation module, configured to initialize the face recognition model in the first operating environment, and divide the initialized face recognition model into at least two model data packets; and a model transmission module, configured to assign a corresponding data number to each model data packet, sequentially transmit the model data packets from the first operating environment to a second operating environment according to the data numbers, and splice the model data packets in the second operating environment according to the data numbers to generate a target face recognition model; wherein the storage space of the first operating environment is larger than that of the second operating environment, and the target face recognition model is used for performing face recognition processing on an image.

8. The apparatus according to claim 7, wherein the model transmission module is further configured to sequentially transfer the model data packets from the first operating environment to a shared buffer, and transfer the model data packets from the shared buffer to the second operating environment.

9. A computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the method according to any one of claims 1 to 6.

10. An electronic device, comprising a memory and a processor, wherein the memory stores computer-readable instructions which, when executed by the processor, cause the processor to perform the method according to any one of claims 1 to 6.
CN201810866139.2A 2018-08-01 2018-08-01 Data processing method, apparatus, computer-readable storage medium and electronic device Expired - Fee Related CN108985255B (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
CN201810866139.2A CN108985255B (en) 2018-08-01 2018-08-01 Data processing method, apparatus, computer-readable storage medium and electronic device
PCT/CN2019/082696 WO2020024619A1 (en) 2018-08-01 2019-04-15 Data processing method and apparatus, computer-readable storage medium and electronic device
EP19843800.4A EP3671551A4 (en) 2018-08-01 2019-04-15 DATA PROCESSING METHOD AND DEVICE, COMPUTER-READABLE STORAGE MEDIUM AND ELECTRONIC DEVICE
US16/740,374 US11373445B2 (en) 2018-08-01 2020-01-10 Method and apparatus for processing data, and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810866139.2A CN108985255B (en) 2018-08-01 2018-08-01 Data processing method, apparatus, computer-readable storage medium and electronic device

Publications (2)

Publication Number Publication Date
CN108985255A CN108985255A (en) 2018-12-11
CN108985255B true CN108985255B (en) 2021-05-18

Family

ID=64554593

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810866139.2A Expired - Fee Related CN108985255B (en) 2018-08-01 2018-08-01 Data processing method, apparatus, computer-readable storage medium and electronic device

Country Status (1)

Country Link
CN (1) CN108985255B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020024619A1 (en) * 2018-08-01 2020-02-06 Oppo广东移动通信有限公司 Data processing method and apparatus, computer-readable storage medium and electronic device
CN111339513B (en) * 2020-01-23 2023-05-09 华为技术有限公司 Method and device for data sharing
CN111291416B (en) * 2020-05-09 2020-07-31 支付宝(杭州)信息技术有限公司 Method and device for preprocessing data of business model based on privacy protection
CN119544535A (en) * 2020-12-21 2025-02-28 北京小米移动软件有限公司 Model transmission method, model transmission device and storage medium
CN113849565B (en) * 2021-09-26 2024-05-14 支付宝(杭州)信息技术有限公司 Method and terminal equipment for trusted uplink

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105446713A (en) * 2014-08-13 2016-03-30 阿里巴巴集团控股有限公司 Safe storage method and equipment
CN107169343A (en) * 2017-04-25 2017-09-15 深圳市金立通信设备有限公司 A kind of method and terminal of control application program
CN107992729A (en) * 2016-10-26 2018-05-04 中国移动通信有限公司研究院 A kind of control method, terminal and subscriber identification module card

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8850535B2 (en) * 2011-08-05 2014-09-30 Safefaces LLC Methods and systems for identity verification in a social network using ratings

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105446713A (en) * 2014-08-13 2016-03-30 阿里巴巴集团控股有限公司 Safe storage method and equipment
CN107992729A (en) * 2016-10-26 2018-05-04 中国移动通信有限公司研究院 A kind of control method, terminal and subscriber identification module card
CN107169343A (en) * 2017-04-25 2017-09-15 深圳市金立通信设备有限公司 A kind of method and terminal of control application program

Also Published As

Publication number Publication date
CN108985255A (en) 2018-12-11

Similar Documents

Publication Publication Date Title
CN108985255B (en) Data processing method, apparatus, computer-readable storage medium and electronic device
TWI736883B (en) Method for image processing and electronic device
CN108804895B (en) Image processing method, apparatus, computer-readable storage medium and electronic device
CN111126146B (en) Image processing methods, devices, computer-readable storage media and electronic equipment
CN109213610B (en) Data processing method and device, computer readable storage medium and electronic equipment
CN109145653B (en) Data processing method and apparatus, electronic device, computer-readable storage medium
CN108805024B (en) Image processing method, apparatus, computer-readable storage medium and electronic device
US11275927B2 (en) Method and device for processing image, computer readable storage medium and electronic device
US11256903B2 (en) Image processing method, image processing device, computer readable storage medium and electronic device
CN108596061A (en) Face recognition method, device, mobile terminal, and storage medium
CN108668078A (en) Image processing method, device, computer-readable storage medium, and electronic device
TW201944290A (en) Face recognition method and apparatus, and mobile terminal and storage medium
CN108764053A (en) Image processing method, device, computer-readable storage medium, and electronic device
CN108711054A (en) Image processing method, device, computer-readable storage medium, and electronic device
CN108830141A (en) Image processing method, device, computer-readable storage medium, and electronic device
WO2020024619A1 (en) Data processing method and apparatus, computer-readable storage medium and electronic device
US20200151428A1 (en) Data Processing Method, Electronic Device and Computer-Readable Storage Medium
CN108564032A (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN108846310B (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN109145772B (en) Data processing method and device, computer readable storage medium and electronic equipment
CN108564033A (en) Safety verification method and device based on structured light and terminal equipment
US11308636B2 (en) Method, apparatus, and computer-readable storage medium for obtaining a target image
HK40029104A (en) Data processing method and apparatus, computer-readable storage medium and electronic device
CN108881712A (en) Image processing method, image processing device, computer-readable storage medium and electronic equipment
HK40025202A (en) Image processing method and apparatus, and electronic device and computer-readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20210518